\section{Introduction} At the frontier of computational statistics there is growing interest in parallel implementation of Monte Carlo algorithms using multi-processor and distributed architectures. However, the resampling step of sequential Monte Carlo (SMC) methods \citep{gordon1993novel} (see \citep{kunsch2013particle} for a recent overview), which involves a degree of interaction between simulated ``particles'', hinders their parallelization. So, whilst multi-processor implementation offers some speed-up for SMC, the potential benefits of distributed computing are not fully realized \citep{lee2010utility}. Performing resampling only occasionally, a technique originally suggested for the somewhat different reason of variance reduction \citep{liu1995blind}, alleviates this problem to some extent, but the collective nature of the resampling operation remains the computational bottleneck. On the other hand, crude attempts to do away with the resampling step entirely may result in unstable or even non-convergent algorithms. With these issues in mind, we seek a better understanding of the relationship between the interaction structure of SMC algorithms and the theoretical properties of the approximations they deliver. Our overall aim is to address the following question: \smallskip{} \emph{To what extent can the degree of interaction between particles be reduced, whilst ensuring provable stability of the algorithm?} \smallskip{} Our strategy is to introduce and study an unusually general type of SMC algorithm featuring a parameterized resampling mechanism. This provides a flexible framework in which we are ultimately able to attach meaning to \emph{degree of interaction} in terms of graph-theoretic quantities. 
To address the matter of \emph{provable stability}, we seek conditions under which the algorithm yields time-uniformly convergent approximations of prediction filters, and approximations of marginal likelihoods whose relative variance can be controlled at a linear-in-time cost. The general algorithm we study is defined in terms of a family of Markov transition matrices, $\alpha$, and we refer to the algorithm itself as $\alpha$SMC. We shall see that through particular choices of $\alpha$ one obtains, as instances of $\alpha$SMC, well known algorithms including sequential importance sampling (SIS), the bootstrap particle filter (BPF) and the adaptive resampling particle filter (ARPF) in which resampling is triggered by monitoring some functional criterion, such as the Effective Sample Size (ESS) \citep{liu1995blind}. Although the ESS does not necessarily appear in the definition of the general $\alpha$SMC algorithm, we find that it does appear quite naturally from the inverse quadratic variation of certain martingale sequences in its analysis. This allows us to make precise a sense in which algorithmic control of the ESS can guarantee stability of the algorithm. Our results apply immediately to the ARPF, but our study has wider-reaching methodological consequences: in our framework it becomes clear that the standard adaptive resampling strategy is just one of many possible ways of algorithmically controlling the ESS, and we can immediately suggest new, alternative algorithms which are provably stable, but designed to avoid the type of complete interaction which is inherent to the ARPF and which hinders its parallelization. The structure of this paper and our main contributions are as follows. Section~\ref{sec:aSMC} introduces the general algorithm, $\alpha$SMC. We explain how it accommodates several standard algorithms as particular cases and comment on some other existing SMC methods. 
Section~\ref{sec:Martingale-approximations-and} presents Theorem \ref{thm:convergence}, a general convergence result for $\alpha$SMC. We give conditions which ensure unbiased approximation of marginal likelihoods and we elucidate connections between certain invariance properties of the matrices $\alpha$ and the negligibility of increments in a martingale error decomposition, thus formulating simple sufficient conditions for weak and strong laws of large numbers. We also discuss some related existing results. Section~\ref{sec:stability} presents our second main result, Theorem \ref{thm:L_R_mix}. We show, subject to regularity conditions on the hidden Markov model (HMM) under consideration, that enforcement of a strictly positive lower bound on a certain coefficient associated with ESS of $\alpha$SMC is sufficient to guarantee non-asymptotic, time-uniform bounds on: 1) the exponentially normalized relative second moment of error in approximation of marginal likelihoods, and 2) the $L_{p}$ norm of error in approximation of prediction filters. The former implies a linear-in-time variance bound and the latter implies time-uniform convergence. These results apply immediately to the ARPF. Section~\ref{sec:Discussion} houses discussion and application of our results. We point out the pitfalls of some naive approaches to parallelization of SMC and discuss what can go wrong if the conditions of Theorem~\ref{thm:convergence} are not met. Three new algorithms, which adapt the degree of interaction in order to control the ESS and which are therefore provably stable, are then introduced. We discuss computational complexity and through numerical experiments examine the degree of interaction involved in these algorithms and the quality of the approximations they deliver compared to the ARPF. 
\section{$\alpha$SMC\label{sec:aSMC}} A hidden Markov model (HMM) with measurable state space $\left(\mathsf{X},\mathcal{X}\right)$ and observation space $\left(\mathsf{Y},\mathcal{Y}\right)$ is a process $\left\{ \left(X_{n},Y_{n}\right);n\geq0\right\} $ where $\left\{ X_{n};n\geq0\right\} $ is a Markov chain on $\mathsf{X}$, and each observation $Y_{n}$, valued in $\mathsf{Y}$, is conditionally independent of the rest of the process given $X_{n}$. Let $\mu_{0}$ and $f$ be respectively a probability distribution and a Markov kernel on $\left(\mathsf{X},\mathcal{X}\right)$, and let $g$ be a Markov kernel acting from $\left(\mathsf{X},\mathcal{X}\right)$ to $\left(\mathsf{Y},\mathcal{Y}\right)$, with $g(x,\cdot)$ admitting a density, denoted similarly by $g(x,y)$, with respect to some dominating $\sigma$-finite measure. The HMM specified by $\mu_{0}$, $f$ and $g$, is \begin{eqnarray} & & X_{0}\sim\mu_{0}(\cdot),\quad\left.X_{n}\right|\{X_{n-1}=x_{n-1}\}\sim f(x_{n-1},\cdot),\quad n\geq1,\label{eq:HMM}\\ & & \;\quad\quad\quad\quad\hspace{1.1em}\quad\left.Y_{n}\right|\left\{ X_{n}=x_{n}\right\} \sim g(x_{n},\cdot),\quad\quad\quad\quad n\geq0.\nonumber \end{eqnarray} We shall assume throughout that we are presented with a fixed observation sequence $\left\{ y_{n};n\geq0\right\} $ and write \[ g_{n}(x):=g(x,y_{n}),\quad n\geq0. \] The following assumption imposes some mild regularity which ensures that various objects appearing below are well defined. It shall be assumed to hold throughout without further comment. \begin{assumption*} $\mathbf{\mathbf{(A1)}}$ For each $n\geq0$, $\sup_{x}g_{n}(x)<+\infty$ and $g_{n}(x)>0$ for all $x\in\mathsf{X}$. 
\end{assumption*} We take as a recursive definition of the \emph{prediction filters}, the sequence of distributions $\left\{ \pi_{n};n\geq0\right\} $ given by \begin{eqnarray} & & \pi_{0}:=\mu_{0},\nonumber \\ & & \pi_{n}\left(A\right):=\frac{\int_{\mathsf{X}}\pi_{n-1}\left(dx\right)g_{n-1}(x)f(x,A)}{\int_{\mathsf{X}}\pi_{n-1}\left(dx\right)g_{n-1}(x)},\quad A\in\mathcal{X},\quad n\geq1,\label{eq:filtering_recursion} \end{eqnarray} and let $\left\{ Z_{n};n\geq0\right\} $ be defined by \begin{equation} Z_{0}:=1,\quad\quad Z_{n}:=Z_{n-1}\int_{\mathsf{X}}\pi_{n-1}\left(dx\right)g_{n-1}\left(x\right),\quad n\geq1.\label{eq:Z_recusion} \end{equation} Due to the conditional independence structure of the HMM, $\pi_{n}$ is the conditional distribution of $X_{n}$ given $Y_{0:n-1}=y_{0:n-1}$; and $Z_{n}$ is the marginal likelihood of the first $n$ observations, evaluated at the point $y_{0:n-1}$. Our main computational objectives are to approximate $\left\{ \pi_{n};n\geq0\right\} $ and $\left\{ Z_{n};n\geq0\right\} $. \subsection{The general algorithm} With population size $N\geq1$, we write $[N]:=\{1,\ldots,N\}$. \emph{To simplify presentation, whenever a summation sign appears without the summation set made explicit, the summation set is taken to be $[N]$, for example we write $\Sigma_{i}$ to mean $\Sigma_{i=1}^{N}$. } The $\alpha$SMC algorithm involves simulating a sequence $\left\{ \zeta_{n};n\geq0\right\} $ with each $\zeta_{n}=\left\{ \zeta_{n}^{1},\ldots,\zeta_{n}^{N}\right\} $ valued in $\mathsf{X}^{N}$. Denoting $\mathbb{X}:=\left(\mathsf{X}^{N}\right)^{\mathbb{N}}$, $\mathcal{F}^{\mathbb{X}}:=\left(\mathcal{X}^{\otimes N}\right)^{\otimes\mathbb{N}}$, we shall view $\left\{ \zeta_{n};n\geq0\right\} $ as the canonical coordinate process on the measurable space $\left(\mathbb{X},\mathcal{F}^{\mathbb{X}}\right)$, and write $\mathcal{F}_{n}$ for the $\sigma$-algebra generated by $\left\{ \zeta_{0},\ldots,\zeta_{n}\right\} $. 
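When $\mathsf{X}$ is a finite set, the recursions (\ref{eq:filtering_recursion}) and (\ref{eq:Z_recusion}) can be computed exactly by matrix--vector operations, which may help fix intuition before the particle approximations are introduced. The following is a minimal numerical sketch; the two-state model, transition matrix and observation vectors are illustrative assumptions, not part of the development above.

```python
import numpy as np

# Illustrative two-state HMM (all numbers are assumptions of this sketch):
# mu0 is the initial law, f the transition matrix, g[n] the vector g_n(.).
mu0 = np.array([0.6, 0.4])
f = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = [np.array([0.7, 0.3]), np.array([0.4, 0.6]), np.array([0.5, 0.5])]

def predict_filters(mu0, f, g):
    """Exact recursions for the prediction filters pi_n and the
    marginal likelihoods Z_n on a finite state space."""
    pi, Z = [mu0], [1.0]
    for gn in g:
        c = pi[-1] @ gn                   # pi_{n-1}(g_{n-1})
        pi.append((pi[-1] * gn) @ f / c)  # reweight by g_{n-1}, move by f
        Z.append(Z[-1] * c)               # Z_n = Z_{n-1} pi_{n-1}(g_{n-1})
    return pi, Z

pi, Z = predict_filters(mu0, f, g)
```

Here each `pi[n]` is a probability vector and `Z[n]` the corresponding marginal likelihood; the particle quantities $\pi_{n}^{N}$ and $Z_{n}^{N}$ studied below are Monte Carlo counterparts of these.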
By convention, we let $\mathcal{F}_{-1}:=\{\mathbb{X},\emptyset\}$ be the trivial $\sigma$-algebra. The sampling steps of the $\alpha$SMC algorithm, described below, amount to specifying a probability measure, say $\mathbb{P}$, on $\left(\mathbb{X},\mathcal{F}^{\mathbb{X}}\right)$. Expectation w.r.t.~$\mathbb{P}$ shall be denoted by $\mathbb{E}$. Let $\mathbb{A}_{N}$ be a non-empty set of Markov transition matrices, each of size $N\times N$. For $n\geq0$ let $\alpha_{n}:\mathbb{X}\rightarrow\mathbb{A}_{N}$ be a matrix-valued map, and write $\alpha_{n}^{ij}$ for the $i$th row, $j$th column entry so that for each $i$ we have $\sum_{j}\alpha_{n}^{ij}=1$ (with dependence on the $\mathbb{X}$-valued argument suppressed). The following assumption places a restriction on the relationship between $\alpha$ and the particle system $\left\{ \zeta_{n};n\geq0\right\} $. \begin{assumption*} \textbf{\emph{(A2)}} For each $n\geq0$, the entries of $\alpha_{n}$ are all measurable with respect to $\mathcal{F}_{n}$ \end{assumption*} Intuitively, the members of $\mathbb{A}_{N}$ will specify different possible interaction structures for the particle algorithm and under \textbf{(A2)}, each $\alpha_{n}$ is a random matrix chosen from $\mathbb{A}_{N}$ according to some deterministic function of $\left\{ \zeta_{0},\ldots,\zeta_{n}\right\} $. Examples are given below. We shall write $\mathbf{1}_{1/N}$ for the $N\times N$ matrix which has $1/N$ as every entry and write $Id$ for the identity matrix of size apparent from the context in which this notation appears. We shall occasionally use $Id$ also to denote identity operators in certain function space settings. Let $\mathcal{M}$, $\mathcal{P}$ and $\mathcal{L}$ be respectively the collections of measures, probability measures and real-valued, bounded, $\mathcal{X}$-measurable functions on $\mathsf{X}$. 
We write \[ \left\Vert \varphi\right\Vert :=\sup_{x}\left|\varphi(x)\right|,\quad\quad\text{osc}(\varphi):=\sup_{x,y}\left|\varphi(x)-\varphi(y)\right|, \] and \begin{equation} \mu(\varphi):=\int_{\mathsf{X}}\varphi(x)\mu(dx),\quad\text{for any}\quad\varphi\in\mathcal{L},\;\mu\in\mathcal{M}.\label{eq:mu(phi)_notation} \end{equation} \begin{rem*} Note that $\mathbb{X}$, $\mathcal{F}^{\mathbb{X}}$, $\mathcal{F}_{n}$, $\mathbb{P}$, $\alpha$ and various other objects depend on $N$, but this dependence is suppressed from the notation. Unless specified otherwise, any conditions which we impose on such objects should be understood as holding for all $N\geq1$. \end{rem*} Let $\left\{ W_{n}^{i};i\in[N],n\geq0\right\} $ be defined by the following recursion: \begin{equation} W_{0}^{i}:=1,\quad\quad W_{n}^{i}:=\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j}),\quad i\in[N],n\geq1.\label{eq:W_n_defn} \end{equation} The following algorithm implicitly specifies the law $\mathbb{P}$ of the $\alpha$SMC particle system. For each $n\geq1$, the ``Sample'' step should be understood as meaning that the variables $\zeta_{n}=\left\{ \zeta_{n}^{i}\right\} _{i\in[N]}$ are conditionally independent given $\left\{ \zeta_{0},\ldots,\zeta_{n-1}\right\} $. The line of Algorithm~\ref{alg:aSMC} marked $(\star)$ is intentionally generic: it amounts to a practical, if imprecise, restatement of \textbf{(A2)}. In the sequel we shall examine instances of $\alpha$SMC which arise when we consider specific $\mathbb{A}_{N}$ and impose more structure at line $(\star)$. 
\begin{algorithm}[H] \begin{raggedright} \qquad{}For $n=0$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}For $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Set\quad{} $W_{0}^{i}=1$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Sample\quad{} $\left\{ \zeta_{0}^{i}\right\} _{i\in[N]}\iid\mu_{0}$ \par\end{raggedright} \begin{raggedright} \qquad{}For $n\geq1$, \par\end{raggedright} \begin{raggedright} $(\star)$\qquad{}\enskip{}\hspace{0.25em}Select $\alpha_{n-1}$ from $\mathbb{A}_{N}$ according to some functional of $\left\{ \zeta_{0},\ldots,\zeta_{n-1}\right\} $ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}For $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Set\quad{} $W_{n}^{i}=\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Sample\quad{} $\zeta_{n}^{i}|\mathcal{F}_{n-1}\;\sim\;\dfrac{\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})f(\zeta_{n-1}^{j},\cdot)}{W_{n}^{i}}$ \par\end{raggedright} \protect\caption{$\alpha$SMC\label{alg:aSMC}} \end{algorithm} We shall study the objects \begin{equation} \pi_{n}^{N}:=\frac{\sum_{i}W_{n}^{i}\;\delta_{\zeta_{n}^{i}}}{\sum_{i}W_{n}^{i}},\quad\quad\quad Z_{n}^{N}:=\frac{1}{N}\sum_{i}W_{n}^{i},\quad n\geq0,\label{eq:pi^N_andZ^N} \end{equation} which, as the notation suggests, are to be regarded as approximations of $\pi_{n}$ and $Z_{n}$, respectively. 
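To make the ``Set'' and ``Sample'' steps of Algorithm~\ref{alg:aSMC} concrete, one time step can be sketched as follows. The model ingredients (`g`, `f_sample`) and the particular choice $\alpha=\mathbf{1}_{1/N}$ used in the example are placeholder assumptions of this sketch; in general, line $(\star)$ supplies $\alpha_{n-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_smc_step(zeta, W, alpha, g, f_sample, rng):
    """One step of alpha-SMC: the weight update (eq:W_n_defn) followed by
    conditionally independent sampling from the mixture in Algorithm 1.

    g and f_sample are placeholder model ingredients: g evaluates g_{n-1}
    pointwise and f_sample draws from f(x, .)."""
    N = len(zeta)
    v = W * g(zeta)                        # W_{n-1}^j g_{n-1}(zeta_{n-1}^j)
    W_new = alpha @ v                      # W_n^i = sum_j alpha_{n-1}^{ij} v^j
    zeta_new = np.empty_like(zeta)
    for i in range(N):
        probs = alpha[i] * v / W_new[i]    # mixture weights over ancestors j
        j = rng.choice(N, p=probs)         # pick an ancestor from the mixture
        zeta_new[i] = f_sample(zeta[j], rng)  # then propagate with f
    return zeta_new, W_new

# Illustrative ingredients: Gaussian potential, AR(1) proposal, and
# alpha = 1_{1/N} (the bootstrap choice); all of these are assumptions.
g = lambda x: np.exp(-0.5 * x ** 2)
f_sample = lambda x, rng: 0.9 * x + rng.normal()
N = 8
zeta, W = rng.normal(size=N), np.ones(N)
zeta, W = alpha_smc_step(zeta, W, np.full((N, N), 1.0 / N), g, f_sample, rng)
Z1 = W.mean()                              # Z_n^N = N^{-1} sum_i W_n^i
```

Note that with $\alpha=\mathbf{1}_{1/N}$ every row of `alpha` is identical, so all the $W_{n}^{i}$ coincide after the update, in line with the bootstrap case discussed below.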
We shall also be centrally concerned with the following coefficient, which is closely related to the ESS, \begin{equation} \mathcal{E}_{n}^{N}:=\frac{\left(N^{-1}\sum_{i}W_{n}^{i}\right)^{2}}{N^{-1}\sum_{i}\left(W_{n}^{i}\right)^{2}}=\frac{\left(N^{-1}\sum_{i}\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})\right)^{2}}{N^{-1}\sum_{i}\left(\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})\right)^{2}},\quad n\geq1,\label{eq:ESS_defn_front} \end{equation} and by convention $\mathcal{E}_{0}^{N}:=1$. The second equality in (\ref{eq:ESS_defn_front}) is immediate from the definition of $W_{n}^{i}$, see (\ref{eq:W_n_defn}). Note that $\mathcal{E}_{n}^{N}$ is always valued in $[0,1]$, and if we write \begin{equation} N_{n}^{\text{eff}}:=N\mathcal{E}_{n}^{N},\label{eq:N_eff} \end{equation} we obtain the ESS of \citet{liu1995blind}, although of course in a generalized form, since $\mathcal{E}_{n}^{N}$ is defined in terms of the generic ingredients of $\alpha$SMC. A few comments on generality are in order. Firstly, for ease of presentation, we have chosen to work with a particularly simple version of $\alpha$SMC, in which new samples are proposed using the HMM Markov kernel $f$. The algorithm is easily generalized to accommodate other proposal kernels. Secondly, whilst we focus on the application of SMC methods to HMMs, our results and methodological ideas are immediately transferable to other contexts, for example via the framework of \citep{smc:meth:DDJ06}. \subsection{Instances of $\alpha$SMC\label{sub:Instances-of-SMC}} We now show how $\alpha$SMC admits SIS, the BPF and the ARPF as special cases, through particular choices of $\mathbb{A}_{N}$. Our presentation is intended to illustrate the structural generality of $\alpha$SMC, thus setting the scene for the developments which follow. The following lemma facilitates exposition by ``unwinding'' the quantities $\left\{ W_{n}^{i}\right\} _{i\in[N]}$ defined recursively in (\ref{eq:W_n_defn}). 
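In words, $\mathcal{E}_{n}^{N}$ in (\ref{eq:ESS_defn_front}) is just the squared mean of the weights divided by their mean square, so that (\ref{eq:N_eff}) recovers the usual ESS. A two-line sketch (the example weight vectors are illustrative):

```python
import numpy as np

def ess_coefficient(W):
    """E_n^N of (eq:ESS_defn_front): squared mean over mean square."""
    W = np.asarray(W, dtype=float)
    return W.mean() ** 2 / np.mean(W ** 2)

ess_coefficient([1.0, 1.0, 1.0, 1.0])  # equal weights: E = 1, so N_eff = N
ess_coefficient([1.0, 0.0, 0.0, 0.0])  # one surviving weight: E = 1/N, N_eff = 1
```

The two extreme cases illustrate why $\mathcal{E}_{n}^{N}$ is valued in $[0,1]$: it equals $1$ when the weights are balanced and $1/N$ when all mass sits on a single particle.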
It is used throughout the remainder of the paper. \begin{lem} \label{lem:W_n_representation}For $n\geq1$, $0\leq p<n$ and $i_{n}\in[N]$, \begin{equation} W_{n}^{i_{n}}=\sum_{\left(i_{p},\ldots,i_{n-1}\right)\in[N]^{n-p}}W_{p}^{i_{p}}\prod_{q=p}^{n-1}g_{q}(\zeta_{q}^{i_{q}})\alpha_{q}^{i_{q+1}i_{q}},\label{eq:unwind} \end{equation} and in particular \begin{equation} W_{n}^{i_{n}}=\sum_{\left(i_{0},\ldots,i_{n-1}\right)\in[N]^{n}}\prod_{p=0}^{n-1}g_{p}(\zeta_{p}^{i_{p}})\alpha_{p}^{i_{p+1}i_{p}}.\label{eq:unwind2} \end{equation} \end{lem} The proof of (\ref{eq:unwind})--(\ref{eq:unwind2}) is a simple induction and is therefore omitted. From (\ref{eq:unwind2}) and definitions above we immediately observe: \begin{cor} \label{cor:measurability}If $\mathbf{(A2)}$ holds, then $W_{n}^{i}$ must be measurable w.r.t.~$\mathcal{F}_{n-1}$ for every $n\geq0$ and $i\in[N]$. \end{cor} \subsubsection*{Sequential importance sampling: $\mathbb{A}_{N}=\{Id\}$} Since in this case $\mathbb{A}_{N}$ consists of only a single element, $\alpha$ is actually a deterministic sequence, \textbf{(A2)} is trivially satisfied and at line $(\star)$ of Algorithm~\ref{alg:aSMC} we have $\alpha_{n}=Id$ fixed for all $n\geq0$. In this situation Lemma \ref{lem:W_n_representation} gives $W_{n}^{i}=\prod_{p=0}^{n-1}g_{p}(\zeta_{p}^{i})$ for $n\geq1$, so in turn \[ \pi_{n}^{N}=\frac{\sum_{i}\delta_{\zeta_{n}^{i}}\prod_{p=0}^{n-1}g_{p}(\zeta_{p}^{i})}{\sum_{i}\prod_{p=0}^{n-1}g_{p}(\zeta_{p}^{i})},\quad\quad\quad Z_{n}^{N}=\frac{1}{N}\sum_{i}\prod_{p=0}^{n-1}g_{p}(\zeta_{p}^{i}),\quad n\geq1, \] and $\alpha$SMC reduces to: \begin{algorithm}[H] \begin{raggedright} \qquad{}For $n=0$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}For $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Set\quad{} $W_{0}^{i}=1$. 
\par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Sample\quad{} $\zeta_{0}^{i}\sim\mu_{0}$, \par\end{raggedright} \begin{raggedright} \qquad{}For $n\geq1$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}For $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Set\quad{} $W_{n}^{i}=W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})$. \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Sample\quad{} $\zeta_{n}^{i}|\mathcal{F}_{n-1}\;\sim\; f(\zeta_{n-1}^{i},\cdot)$. \par\end{raggedright} \protect\caption{Sequential importance sampling\label{alg:SIS}} \end{algorithm} \subsubsection*{Bootstrap particle filter: $\mathbb{A}_{N}=\{\mathbf{1}_{1/N}\}$} In this case $\alpha$ is again a deterministic sequence and \textbf{(A2)} is trivially satisfied. At line $(\star)$ we have $\alpha_{n}=\mathbf{1}_{1/N}$ fixed for all $n\geq0$. Lemma~\ref{lem:W_n_representation} gives, for all $i_{n}\in[N]$, \begin{equation} W_{n}^{i_{n}}=\sum_{\left(i_{0},\ldots,i_{n-1}\right)\in[N]^{n}}\prod_{p=0}^{n-1}\frac{g_{p}(\zeta_{p}^{i_{p}})}{N}=\prod_{p=0}^{n-1}\left(\frac{1}{N}\sum_{i_{p}}g_{p}(\zeta_{p}^{i_{p}})\right),\quad n\geq1.\label{eq:bootstrap_W_n^i} \end{equation} Note that then $W_{n}^{i}=W_{n}^{j}$ for all $i,j$, so $NW_{n}^{i}=\sum_{j}W_{n}^{j}$ and we obtain, according to (\ref{eq:pi^N_andZ^N}), \begin{equation} \pi_{n}^{N}=\frac{1}{N}\sum_{i}\delta_{\zeta_{n}^{i}},\quad\quad\quad Z_{n}^{N}=\prod_{p=0}^{n-1}\left(\frac{1}{N}\sum_{i_{p}}g_{p}(\zeta_{p}^{i_{p}})\right),\quad n\geq1,\label{eq:bootstrap_Z_n^N} \end{equation} and the $\alpha$SMC algorithm reduces to the BPF. Since $W_{n}^{i}=W_{n}^{j}$ for all $i,j$, we write by convention the weight update steps only for $W_{n}^{1}$. \begin{algorithm}[H] \begin{raggedright} \qquad{}For $n=0$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}Set\quad{} $W_{0}^{1}=1$. 
\par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}For $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Sample\quad{} $\zeta_{0}^{i}\sim\mu_{0}$, \par\end{raggedright} \begin{raggedright} \qquad{}For $n\geq1$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}Set\quad{} $W_{n}^{1}=W_{n-1}^{1}\cdot\dfrac{1}{N}\sum_{i}g_{n-1}(\zeta_{n-1}^{i})$. \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}For $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}Sample\quad{} $\zeta_{n}^{i}|\mathcal{F}_{n-1}\;\sim\;\dfrac{\sum_{j}g_{n-1}(\zeta_{n-1}^{j})f(\zeta_{n-1}^{j},\cdot)}{\sum_{j}g_{n-1}(\zeta_{n-1}^{j})}$. \par\end{raggedright} \protect\caption{Bootstrap particle filter\label{alg:boot_pf}} \end{algorithm} \subsubsection*{Adaptive resampling particle filter: $\mathbb{A}_{N}=\{Id,\mathbf{1}_{1/N}\}$} In this case each $\alpha_{n}$ is allowed to take only the value $Id$ or $\mathbf{1}_{1/N}$, with the latter corresponding to resampling and the former to no resampling. The choice between $Id$ and $\mathbf{1}_{1/N}$ is made by comparing some functional of the particle system to a threshold value. We consider the case of the popular ESS-based resampling rule \citep{liu1995blind}, partly for simplicity, but also because monitoring of the ESS is especially pertinent to the discussions which follow. This ARPF arises as an instance of $\alpha$SMC if we take as line $(\star)$ of Algorithm~\ref{alg:aSMC} the rule: \begin{equation} \alpha_{n-1}:=\begin{cases} \mathbf{1}_{1/N}, & \quad\text{ if }\quad\frac{\left(N^{-1}\sum_{i}W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})\right)^{2}}{N^{-1}\sum_{i}\left(W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})\right)^{2}}<\tau,\\ Id, & \quad\text{ otherwise}, \end{cases}\label{eq:alpha_ARPF} \end{equation} where $\tau\in(0,1]$ is a threshold value. Lemma~\ref{lem:ARPF_A2} in the appendix shows by an inductive argument that the adaptation rule (\ref{eq:alpha_ARPF}) satisfies $\mathbf{(A2)}$. 
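The rule (\ref{eq:alpha_ARPF}) is easily sketched in code: compute the ESS coefficient of the updated weights $W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})$ and return $\mathbf{1}_{1/N}$ or $Id$ accordingly. The particular weight vectors below are illustrative assumptions.

```python
import numpy as np

def arpf_select_alpha(W, g_vals, tau):
    """Line (star) of the ARPF, rule (eq:alpha_ARPF): resample, i.e. return
    alpha = 1_{1/N}, iff the ESS coefficient of the updated weights
    W_{n-1}^i g_{n-1}(zeta_{n-1}^i) falls below the threshold tau."""
    v = np.asarray(W, dtype=float) * np.asarray(g_vals, dtype=float)
    ess = v.mean() ** 2 / np.mean(v ** 2)
    N = len(v)
    return np.full((N, N), 1.0 / N) if ess < tau else np.eye(N)

# Illustrative inputs: degenerate weights trigger resampling (alpha = 1_{1/N}),
# balanced weights do not (alpha = Id).
A = arpf_select_alpha([1.0, 1.0, 1.0, 1.0], [1.0, 0.01, 0.01, 0.01], tau=0.5)
B = arpf_select_alpha([1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], tau=0.5)
```

Viewed this way, the ARPF is simply one deterministic functional of $\{\zeta_{0},\ldots,\zeta_{n-1}\}$ at line $(\star)$, consistent with \textbf{(A2)}.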
The ARPF is traditionally expressed in terms of the random times at which resampling occurs. For completeness, the appendix contains derivations of expressions for $\pi_{n}^{N}$ and $Z_{n}^{N}$ in terms of such times, and similar manipulations can be used to write out the form of $\alpha$SMC in this case. Looking back to the expression for $\mathcal{E}_{n}^{N}$ in (\ref{eq:ESS_defn_front}), we find: \begin{eqnarray} \alpha_{n-1}=\mathbf{1}_{1/N}\quad & \Rightarrow & \quad\mathcal{E}_{n}^{N}=1,\label{eq:alpha_ARPF_1}\\ \alpha_{n-1}=Id\;\;\;\quad & \Rightarrow & \quad\mathcal{E}_{n}^{N}=\frac{\left(N^{-1}\sum_{i}W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})\right)^{2}}{N^{-1}\sum_{i}\left(W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})\right)^{2}}.\label{eq:alpha_ARPF_2} \end{eqnarray} We then adopt the point of view that according to (\ref{eq:alpha_ARPF})--(\ref{eq:alpha_ARPF_2}), the ARPF \emph{enforces} the condition: $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau>0$, or equivalently \[ \inf_{n\geq0}N_{n}^{\text{eff}}\geq N\tau>0, \] by construction. This seemingly trivial observation turns out to be crucial when we address time-uniform convergence of the ARPF in Section~\ref{sec:stability}, and the condition $\inf_{n\geq0}\mathcal{E}_{n}^{N}>0$ will appear repeatedly in discussions which lead to the formulation of new, provably stable algorithms in Section~\ref{sec:Discussion}. \subsubsection*{Comments on other algorithms} In the engineering literature, a variety of algorithmic procedures involving distributed computing have been suggested \citep{bolic2005resampling}. ``Local'' particle approximations of Rao--Blackwellized filters have been devised in \citep{chen2011decentralized} and \citep{johansen2012exact}. 
\citet{Verge_island_particle} have recently suggested an ``island'' particle algorithm, designed for parallel implementation, in which there are two levels of resampling and the total population size $N=N_{1}N_{2}$ is defined in terms of the number of particles per island, $N_{1}$, and the number of islands, $N_{2}$. Interaction at both levels occurs by resampling, at the island level this means entire blocks of particles are replicated and/or discarded. They investigate the trade-off between $N_{1}$ and $N_{2}$ and provide asymptotic results which validate their algorithms. In the present work, we provide some asymptotic results in Section~\ref{sec:Martingale-approximations-and} but it is really the non-asymptotic results in Section~\ref{sec:stability} which lead us to suggest specific novel instances of $\alpha$SMC in Section~\ref{sec:Discussion}. Moreover, in general $\alpha$SMC is distinct from all these algorithms and, other than in some uninteresting special cases, none of them coincide with the adaptive procedures we suggest in Section~\ref{sub:Algorithms-with-adaptive}. \section{Convergence\label{sec:Martingale-approximations-and}} In this section our main objective is to investigate, for general $\alpha$SMC (Algorithm~\ref{alg:aSMC}), conditions for convergence \begin{equation} Z_{n}^{N}-Z_{n}\rightarrow0\quad\text{and}\quad\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\rightarrow0,\label{eq:as_convergence_intro} \end{equation} at least in probability, as $N\rightarrow\infty$. In the case of SIS, i.e.~$\mathbb{A}_{N}=\{Id\}$, it is easy to establish (\ref{eq:as_convergence_intro}), since the processes $\left\{ \zeta_{n}^{i};n\geq0\right\} _{i\in[N]}$ are independent Markov chains, of identical law. 
On the other hand, for the bootstrap filter, i.e.~$\mathbb{A}_{N}=\{\mathbf{1}_{1/N}\}$, the convergence $\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\rightarrow0$ can be proved under very mild conditions, by decomposing $\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)$ in terms of ``local'' sampling errors; see, amongst others, \citep{smc:theory:Dm04,smc:the:DM08} for this type of approach. For instance, for $A\in\mathcal{X}$ we may write \begin{eqnarray} \pi_{1}^{N}(A)-\pi_{1}(A) & = & \frac{1}{N}\sum_{i}\delta_{\zeta_{1}^{i}}(A)-\frac{\sum_{i}g_{0}(\zeta_{0}^{i})f(\zeta_{0}^{i},A)}{\sum_{i}g_{0}(\zeta_{0}^{i})}\label{eq:intro_boot_decomp1}\\ & + & \frac{\sum_{i}g_{0}(\zeta_{0}^{i})f(\zeta_{0}^{i},A)}{\sum_{i}g_{0}(\zeta_{0}^{i})}-\pi_{1}(A).\label{eq:intro_boot_decomp2} \end{eqnarray} Heuristically, the term on the r.h.s.~of (\ref{eq:intro_boot_decomp1}) converges to zero because given $\mathcal{F}_{0}$, the samples $\left\{ \zeta_{1}^{i}\right\} _{i\in[N]}$ are conditionally i.i.d.~according to $\frac{\sum_{i}g_{0}(\zeta_{0}^{i})f(\zeta_{0}^{i},\cdot)}{\sum_{i}g_{0}(\zeta_{0}^{i})}$, and the term in (\ref{eq:intro_boot_decomp2}) converges to zero because the samples $\left\{ \zeta_{0}^{i}\right\} _{i\in[N]}$ are i.i.d.~according to $\mu_{0}$. A similar argument ensures that $\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\rightarrow0$, for any $n\geq0$ and therefore by the continuous mapping theorem $Z_{n}^{N}-Z_{n}\rightarrow0$, since \[ Z_{n}=\prod_{p=0}^{n-1}\pi_{p}(g_{p}),\quad\text{and}\quad Z_{n}^{N}=\prod_{p=0}^{n-1}\pi_{p}^{N}(g_{p}). \] In the case of $\alpha$SMC, $\left\{ \zeta_{n}^{i}\right\} _{i\in[N]}$ are conditionally independent given $\mathcal{F}_{n-1}$, but we do not necessarily have either the unconditional independence structure of SIS, or the conditionally i.i.d.~structure of the BPF to work with. \citet{smc:the:DM08} have established a CLT for the ARPF using an inductive approach w.r.t.~deterministic time periods. 
\citet{arnaud2009smc} have obtained a CLT for the ARPF based on an alternative multiplicative functional representation of the algorithm. Convergence of the ARPF was studied in \citep{del2012adaptive} by coupling the adaptive algorithm to a reference particle system, for which resampling occurs at deterministic times. One of the benefits of their approach is that existing asymptotic results for non-adaptive algorithms, such as central limit theorems (CLT), can then be transferred to the adaptive algorithm with little further work. Their analysis involves a technical assumption \citep[Section 5.2]{del2012adaptive} to deal with the situation where the threshold parameters coincide with the adaptive criteria. Our analysis of $\alpha$SMC does not rest on any such technical assumption, and in some ways is more direct, but we do not obtain concentration estimates or a CLT. Some more detailed remarks on this matter are given after the statement of Theorem~\ref{thm:convergence}. \citet{crisan2012particle} studied convergence and obtained a CLT for an adaptive resampling particle filter in continuous time under conditions which they verify for the case of ESS-triggered resampling, without needing the type of technical assumption of \citep{del2012adaptive}. Their study focuses, in part, on the random times at which resampling occurs and dealing with the subtleties of the convergence in continuous time. Our asymptotic $N\rightarrow\infty$ analysis is in some ways less refined, but in comparison to this and the other existing works, we analyze a more general algorithm, and it is this generality which allows us to suggest new adaptive algorithms in Section~\ref{sec:Discussion}, informed by the time-uniform non-asymptotic error bounds in our Theorem~\ref{thm:L_R_mix}. To proceed, we need some further notation involving $\alpha$. 
Let us define the matrices: $\alpha_{n,n}:=Id$ for $n\geq0$, and recursively \begin{equation} \alpha_{p,n}^{ij}:=\sum_{k}\alpha_{p+1,n}^{ik}\alpha_{p}^{kj},\quad\quad(i,j)\in[N]^{2},\;0\leq p<n,\label{eq:a_pn_defn} \end{equation} and the vectors: \begin{equation} \beta_{n,n}^{i}:=N^{-1},\quad\quad n\geq0,\; i\in[N],\label{eq:beta_n_n_defn} \end{equation} and recursively \begin{equation} \beta_{p,n}^{i}:=\sum_{j}\beta_{p+1,n}^{j}\alpha_{p}^{ji},\quad\quad i\in[N],\;0\leq p<n.\label{eq:beta_defn} \end{equation} Note that since each $\alpha_{n}$ is a random Markov transition matrix, so is each $\alpha_{p,n}$, and each $\left\{ \beta_{p,n}^{i}\right\} _{i\in[N]}$ defines a random probability distribution on $[N]$. Moreover, from these definitions we immediately have the identity \begin{equation} \beta_{p,n}^{i}=N^{-1}\sum_{j}\alpha_{p,n}^{ji},\quad\quad i\in[N],\;0\leq p\leq n.\label{eq:beta_in_terms_of_alpha} \end{equation} \begin{assumption*} $\;$ $\mathbf{(B)}$: for all $0\leq p\leq n$ and $i\in[N]$, $\beta_{p,n}^{i}$ is measurable w.r.t.~the trivial $\sigma$-algebra $\mathcal{F}_{-1}$. $\;$ $\mathbf{(B^{+})}$: assumption $\mathbf{(B)}$ holds and, for all $0\leq p\leq n$, $\lim_{N\rightarrow\infty}\max_{i\in[N]}\beta_{p,n}^{i}=0$. $\;$ $\mathbf{(B^{++})}$: every member of $\mathbb{A}_{N}$ admits the uniform distribution on $[N]$ as an invariant distribution. \end{assumption*} We note the following: \begin{itemize} \item Intuitively, $\mathbf{(B)}$ ensures that even though $\alpha$ is a sequence of random Markov transition matrices, the elements of the probability vector $\left\{ \beta_{p,n}^{i}\right\} _{i\in[N]}$ are all constants. $\mathbf{(B)}$ holds, trivially, when every element of every $\alpha_{n}$ is measurable w.r.t.~$\mathcal{F}_{-1}$, i.e.~the sequence $\alpha$ is completely pre-determined. This is true, for example, when the set $\mathbb{A}_{N}$ consists of only a single element, as is the case for SIS and the BPF. 
\item The $\lim_{N\rightarrow\infty}\max_{i\in[N]}\beta_{p,n}^{i}=0$ part of $\mathbf{(B^{+})}$ is an asymptotic negligibility condition. In Section~\ref{sub:Ensuring-convergence} we describe what can go wrong when this assumption does not hold. \item $\mathbf{(B^{++})}$ does not require the members of $\mathbb{A}_{N}$ to be irreducible, for example it is satisfied with $\mathbb{A}_{N}=\{Id\}$ \item $\mathbf{(B^{++})}$\textbf{$\Rightarrow$}$\mathbf{(B^{+})}$. To see this, observe that when $\mathbf{(B^{++})}$ holds, every random matrix $\alpha_{p,n}$, defined in (\ref{eq:a_pn_defn}), also admits the uniform distribution on $[N]$ as invariant, then using (\ref{eq:beta_in_terms_of_alpha}) we have $\beta_{p,n}^{i}=N^{-1}\sum_{j}\alpha_{p,n}^{ji}=N^{-1}$ for all $i\in[N]$. The reverse implication is clearly not true in general. \item $\mathbf{(B^{++})}$ holds when every member of $\mathbb{A}_{N}$ is doubly-stochastic, because such matrices always leave the uniform distribution invariant. $\mathbf{(B^{++})}$\textbf{ }therefore holds for the ARPF, which has $\mathbb{A}_{N}=\{Id,\mathbf{1}_{1/N}\}$. \end{itemize} The main result of this section is \begin{thm} \label{thm:convergence} $\;$Assume $\mathbf{(A2)}$. 
For any $n\geq0$, $\varphi\in\mathcal{L}$ and $r\geq1$, $\;$ 1) if $\mathbf{(B)}$ holds, then $\mathbb{E}[Z_{n}^{N}]=Z_{n}$ for any $N\geq1$, $\;$ 2) if $\mathbf{(B^{+})}$ holds, then \begin{eqnarray} & & \lim_{N\rightarrow\infty}\mathbb{E}\left[\left|Z_{n}^{N}-Z_{n}\right|^{r}\right]=0,\label{eq:convergence_L_r_statement_gam_weak}\\ & & \lim_{N\rightarrow\infty}\mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]=0,\label{eq:convergence_L_r_statement_pi_weak} \end{eqnarray} \quad{}and therefore $Z_{n}^{N}\rightarrow Z_{n}$ and $\pi_{n}^{N}(\varphi)\rightarrow\pi_{n}(\varphi)$ in probability as $N\rightarrow\infty$, $\;$ 3) if $\mathbf{(B^{++})}$\textbf{ }holds\textbf{, }then \begin{eqnarray} & & \sup_{N\geq1}\sqrt{N}\mathbb{E}\left[\left|Z_{n}^{N}-Z_{n}\right|^{r}\right]^{1/r}<+\infty,\label{eq:convergence_L_r_statement_gamm}\\ & & \sup_{N\geq1}\sqrt{N}\mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}<+\infty,\label{eq:convergence_L_r_statement_pi} \end{eqnarray} \quad{}and therefore $Z_{n}^{N}\rightarrow Z_{n}$ and $\pi_{n}^{N}(\varphi)\rightarrow\pi_{n}(\varphi)$ almost surely, as $N\rightarrow\infty$.\end{thm} \begin{rem} The lack-of-bias property $\mathbb{E}[Z_{n}^{N}]=Z_{n}$ is desirable since it could be used to validate the use of $\alpha$SMC within composite SMC/MCMC algorithms such as those of \citep{andrieu2010particle}. \end{rem} \begin{rem} \label{rem_alpha_conv}Theorem~\ref{thm:convergence} holds without any sort of requirement that the entries of each $\alpha_{n}$ converge as $N\rightarrow\infty$. For example, $\mathbf{(B^{++})}$ holds if for $N$ odd we choose $\mathbb{A}_{N}=\{Id\}$ and for $N$ even we choose $\mathbb{A}_{N}=\{\mathbf{1}_{1/N}\}$. 
As a reflection of this, and as is apparent upon inspection of the proof, without further assumption we cannot in general replace $\sup_{N\geq1}$ in (\ref{eq:convergence_L_r_statement_gamm})--(\ref{eq:convergence_L_r_statement_pi}) with $\lim_{N\rightarrow\infty}$, because such limits may not exist. \end{rem} \begin{rem} The proof of Theorem~\ref{thm:convergence} hinges on a martingale decomposition of errors associated with $Z_{n}^{N}$ and $\pi_{n}^{N}(\varphi)$; see Proposition~\ref{prop:martingale}. Our overall approach is inspired by some of the ideas of \citep[Chapters 7 and 9]{smc:theory:Dm04}, but the path we take and the details are necessarily different since the analysis of \citep{smc:theory:Dm04} does not apply to $\alpha$SMC in general. \end{rem} The following notation is used throughout the remainder of the paper. Introduce the non-negative kernels \begin{equation} Q_{n}:\mathsf{X}\times\mathcal{X}\rightarrow\mathbb{R}_{+},\quad\quad Q_{n}(x,dx^{\prime}):=g_{n-1}(x)f(x,dx^{\prime}),\quad n\geq1,\label{eq:Q_GM_defn} \end{equation} the corresponding operators on functions and measures: \begin{eqnarray} Q_{n}(\varphi)(x) & := & \int_{\mathsf{X}}Q_{n}(x,dx^{\prime})\varphi(x^{\prime}),\quad\varphi\in\mathcal{L},\label{eq:Q_op_defn}\\ \mu Q_{n}(\cdot) & := & \int_{\mathsf{X}}\mu(dx)Q_{n}(x,\cdot),\quad\mu\in\mathcal{M},\label{eq:Q_op_defn-1} \end{eqnarray} and for $n\geq1$ and $0\leq p<n$, \begin{equation} Q_{p,p}:=Id,\quad\quad Q_{p,n}:=Q_{p+1}\cdots Q_{n}.\label{eq:Q_semigroup} \end{equation} We shall also consider the following scaled versions of these operators: \begin{equation} \overline{Q}_{n}:=\frac{Q_{n}}{\pi_{n-1}(g_{n-1})},\quad\quad\overline{Q}_{p,p}:=Id,\quad\quad\overline{Q}_{p,n}:=\overline{Q}_{p+1}\cdots\overline{Q}_{n}.\label{eq:Q_bar_defn} \end{equation} Then define the non-negative measures \[ \gamma_{n}:=\mu_{0}Q_{0,n}(\cdot),\quad n\geq0. \] Under $\mathbf{(A1)}$ we are assured that $\gamma_{n}(1)>0$.
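Purely as an illustrative aside, when $\mathsf{X}$ is a finite set the kernels just defined act as matrices and $\gamma_{n}=\mu_{0}Q_{0,n}$ can be computed by matrix products. The following minimal numpy sketch does exactly this; the kernel \texttt{f}, the potentials \texttt{g} and the initial law \texttt{mu0} are arbitrary placeholders, not objects from the paper.

```python
import numpy as np

# Finite-state sketch of the kernels Q_n and the measures gamma_n defined above.
# The kernel f, the potentials g_n and the initial law mu0 are illustrative
# placeholders chosen at random.
rng = np.random.default_rng(0)
d = 3                                           # toy state space of d points

f = rng.random((d, d))
f /= f.sum(axis=1, keepdims=True)               # row-stochastic Markov kernel f(x, dx')
g = [rng.random(d) + 0.1 for _ in range(4)]     # strictly positive potentials g_0,...,g_3
mu0 = np.full(d, 1.0 / d)                       # initial distribution

def Q(n):
    """Matrix of Q_n(x, dx') = g_{n-1}(x) f(x, dx'), for n >= 1."""
    return g[n - 1][:, None] * f

def gamma(n):
    """gamma_n = mu0 Q_1 ... Q_n as a row vector; gamma_0 = mu0."""
    out = mu0.copy()
    for p in range(1, n + 1):
        out = out @ Q(p)
    return out

Z3 = gamma(3).sum()         # gamma_n(1), positive here since g_n > 0 everywhere
pi3 = gamma(3) / Z3         # normalizing gamma_n gives a probability vector
assert Z3 > 0 and np.isclose(pi3.sum(), 1.0)
```

With strictly positive potentials, $\gamma_{n}(1)>0$ holds by construction in this toy setting, mirroring the guarantee under $\mathbf{(A1)}$.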
Due to the conditional independence structure of the HMM, it can easily be checked that \[ \pi_{n}=\frac{\gamma_{n}}{\gamma_{n}(1)},\quad\quad Z_{n}=\gamma_{n}(1),\quad n\geq0, \] and \[ \overline{Q}_{p,n}=\frac{Q_{p,n}}{\pi_{p}Q_{p,n}(1)}. \] For $i\in[N]$ and $0\leq p\leq n$, introduce the random measures \begin{equation} \Gamma_{p,n}^{N}:=\sum_{i}\beta_{p,n}^{i}W_{p}^{i}\delta_{\zeta_{p}^{i}},\quad\quad\overline{\Gamma}_{p,n}^{N}:=\frac{\Gamma_{p,n}^{N}}{\gamma_{p}(1)},\label{eq:Gamma_defn} \end{equation} where $W_{p}^{i}$ is as in (\ref{eq:W_n_defn}). For simplicity of notation, we shall write $\Gamma_{n}^{N}:=\Gamma_{n,n}^{N},\;\overline{\Gamma}_{n}^{N}:=\overline{\Gamma}_{n,n}^{N}$. If we define \begin{equation} \overline{W}_{n}^{i}:=\frac{W_{n}^{i}}{\gamma_{n}(1)},\quad n\geq0,\label{eq:W_bar_defn} \end{equation} then we have from (\ref{eq:Gamma_defn}), \[ \overline{\Gamma}_{p,n}^{N}=\sum_{i}\beta_{p,n}^{i}\overline{W}_{p}^{i}\delta_{\zeta_{p}^{i}}. \] Finally, we observe from (\ref{eq:beta_n_n_defn}) that \[ \Gamma_{n}^{N}=\sum_{i}\beta_{n,n}^{i}W_{n}^{i}\delta_{\zeta_{n}^{i}}=N^{-1}\sum_{i}W_{n}^{i}\delta_{\zeta_{n}^{i}}. \] \subsection{Error decomposition} Throughout this section let $\varphi\in\mathcal{L}$, $n\geq0$ and $N\geq1$ be arbitrarily chosen, but then fixed. Define, for $1\leq p\leq n$ and $i\in[N]$, \[ \Delta_{p,n}^{i}:=\overline{Q}_{p,n}(\varphi)(\zeta_{p}^{i})-\frac{\sum_{j}\alpha_{p-1}^{ij}W_{p-1}^{j}\overline{Q}_{p-1,n}(\varphi)(\zeta_{p-1}^{j})}{\sum_{j}\alpha_{p-1}^{ij}W_{p-1}^{j}\overline{Q}_{p}(1)(\zeta_{p-1}^{j})}, \] and $\Delta_{0,n}^{i}:=\overline{Q}_{0,n}(\varphi)(\zeta_{0}^{i})-\mu_{0}\overline{Q}_{0,n}(\varphi)$, so that $\mathbb{E}\left[\left.\Delta_{p,n}^{i}\right|\mathcal{F}_{p-1}\right]=0$ for any $i\in[N]$ and $0\leq p\leq n$.
Then for $0\leq p\leq n$ and $i\in[N]$ set $k:=pN+i$, and \[ \xi_{k}^{N}:=\sqrt{N}\beta_{p,n}^{i}\overline{W}_{p}^{i}\Delta_{p,n}^{i}, \] so as to define a sequence $\left\{ \xi_{k}^{N};k=1,\ldots,(n+1)N\right\} $. For $k=1,\ldots,(n+1)N$, let $\mathcal{F}^{(k)}$ be the $\sigma$-algebra generated by $\left\{ \zeta_{p}^{i};\; pN+i\leq k,\; i\in[N],0\leq p\leq n\right\} $. Set $\mathcal{F}^{(-1)}:=\{\mathbb{X},\emptyset\}$. The following proposition is the main result underlying Theorem~\ref{thm:convergence}. The proof is given in the appendix. \begin{prop} \label{prop:martingale} Assume $\mathbf{(A2)}$ and $\mathbf{(B)}$. We have the decomposition \begin{equation} \sqrt{N}\left[\overline{\Gamma}_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right]=\sum_{k=1}^{(n+1)N}\xi_{k}^{N},\label{eq:Gamma_telescope} \end{equation} where for $k=1,\ldots,(n+1)N$, the increment $\xi_{k}^{N}$ is measurable w.r.t.~$\mathcal{F}^{(k)}$ and satisfies \begin{equation} \mathbb{E}\left[\left.\xi_{k}^{N}\right|\mathcal{F}^{(k-1)}\right]=\mathbb{E}\left[\left.\xi_{k}^{N}\right|\mathcal{F}_{p-1}\right]=0,\quad\text{with}\quad p:=\left\lfloor (k-1)/N\right\rfloor .\label{eq:xi_cond_exp} \end{equation} For each $r\geq1$ there exists a universal constant $B(r)$ such that \begin{eqnarray} & & \mathbb{E}\left[\left|\overline{\Gamma}_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\nonumber \\ & & \leq B(r)^{1/r}\sum_{p=0}^{n}\mathrm{osc}\left(\overline{Q}_{p,n}(\varphi)\right)\mathbb{E}\left[\left|\sum_{i}\left(\beta_{p,n}^{i}\overline{W}_{p}^{i}\right)^{2}\right|^{r/2}\right]^{1/r}.\label{eq:martingale_burkholder_bound} \end{eqnarray} \end{prop} The proof of Theorem~\ref{thm:convergence}, which is mostly technical, is given in the appendix. Here we briefly discuss our assumptions and sketch some of the main arguments. Part 1) of Theorem~\ref{thm:convergence} follows immediately from (\ref{eq:Gamma_telescope}) and (\ref{eq:xi_cond_exp}) applied with $\varphi=1$. 
In turn, the martingale structure of (\ref{eq:Gamma_telescope}) and (\ref{eq:xi_cond_exp}) is underpinned by the measurability conditions $\mathbf{(A2)}$ and $\mathbf{(B)}$. The proofs of parts 2) and 3) of Theorem~\ref{thm:convergence} involve applying Proposition~\ref{prop:martingale} in conjunction with the identities \begin{eqnarray} Z_{n}^{N}-Z_{n} & = & \Gamma_{n}^{N}(1)-\gamma_{n}(1),\nonumber \\ \pi_{n}^{N}(\varphi)-\pi_{n}(\varphi) & = & \frac{\Gamma_{n}^{N}(\varphi)}{\Gamma_{n}^{N}(1)}-\frac{\gamma_{n}(\varphi)}{\gamma_{n}(1)}.\label{eq:convergence_sketch_id} \end{eqnarray} In order to prove that these errors converge to zero in probability we show that the quadratic variation term in (\ref{eq:martingale_burkholder_bound}) converges to zero. In general, we cannot hope for the latter convergence without some sort of negligibility hypothesis on the product terms $\left\{ \mathrm{osc}\left(\overline{Q}_{p,n}(\varphi)\right)\beta_{p,n}^{i}\overline{W}_{p}^{i};i\in[N]\right\} $. Assumption $\mathbf{(A1)}$ allows us to crudely upper-bound $\mathrm{osc}\left(\overline{Q}_{p,n}(\varphi)\right)$ and $\overline{W}_{p}^{i}$; the measurability condition $\mathbf{(B)}$ allows us to dispose of the expectation in (\ref{eq:martingale_burkholder_bound}); then, via Markov's inequality and the classical equivalence \[ \lim_{N\rightarrow\infty}\max_{i\in[N]}\beta_{p,n}^{i}=0\quad\Leftrightarrow\quad\lim_{N\rightarrow\infty}\sum_{i}\left(\beta_{p,n}^{i}\right)^{2}=0, \] which holds since $\left(\max_{i\in[N]}\beta_{p,n}^{i}\right)^{2}\leq\sum_{i}\left(\beta_{p,n}^{i}\right)^{2}\leq\max_{i\in[N]}\beta_{p,n}^{i}$, the negligibility part of $\mathbf{(B^{+})}$ guarantees that $\left|\Gamma_{n}^{N}(\varphi)-\gamma_{n}(\varphi)\right|$ converges to zero in probability. The stronger condition $\mathbf{(B^{++})}$ buys us the $\sqrt{N}$ scaling displayed in part 3).
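As an illustrative numerical aside, the backward recursion for $\beta_{p,n}$ and the two facts just used, that doubly stochastic $\alpha_{n}$ force $\beta_{p,n}^{i}=N^{-1}$ and that $\left(\max_{i}\beta^{i}\right)^{2}\leq\sum_{i}\left(\beta^{i}\right)^{2}\leq\max_{i}\beta^{i}$, can be checked directly; the random doubly stochastic matrices below (mixtures of permutation matrices) are an illustrative assumption.

```python
import numpy as np

# Numerical sketch of the recursion beta_{p,n} = beta_{p+1,n} alpha_p, of the
# fact that doubly stochastic alpha_n give beta_{p,n}^i = 1/N (used in the
# implication (B++) => (B+)), and of the sandwich inequality behind the
# negligibility equivalence.  The matrices are illustrative placeholders.
rng = np.random.default_rng(1)
N, n = 8, 5

def doubly_stochastic(N):
    """A random doubly stochastic matrix: a convex mixture of permutations."""
    perms = [np.eye(N)[rng.permutation(N)] for _ in range(4)]
    w = rng.dirichlet(np.ones(4))
    return sum(wi * P for wi, P in zip(w, perms))

alphas = [doubly_stochastic(N) for _ in range(n)]

beta = np.full(N, 1.0 / N)          # beta_{n,n}^i = 1/N
for p in reversed(range(n)):
    beta = beta @ alphas[p]         # beta_{p,n}^i = sum_j beta_{p+1,n}^j alpha_p^{ji}
assert np.allclose(beta, 1.0 / N)   # uniform distribution left invariant

beta_star = np.array([0.5] + [0.5 / (N - 1)] * (N - 1))   # a non-negligible 'hub'
assert beta_star.max() ** 2 <= (beta_star ** 2).sum() <= beta_star.max()
```

The hub-like profile \texttt{beta\_star} keeps $\sum_{i}(\beta^{i})^{2}$ bounded away from zero however large $N$ is, which is exactly the failure mode that $\mathbf{(B^{+})}$ rules out.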
In Section~\ref{sub:Ensuring-convergence} we discuss what can go wrong when $\mathbf{(B^{+})}$ does not hold. \section{Stability\label{sec:stability}} In this section we study the stability of approximation errors under the following regularity condition. \begin{assumption*} $\mathbf{(C)}$ There exists $\left(\delta,\epsilon\right)\in[1,\infty)^{2}$ such that \[ \sup_{n\geq0}\sup_{x,y}\frac{g_{n}(x)}{g_{n}(y)}\leq\delta,\quad\quad f(x,\cdot)\leq\epsilon f(y,\cdot),\quad(x,y)\in\mathsf{X}^{2}. \] \end{assumption*} \begin{rem} \label{rem:assumption_C}Assumption $\mathbf{(C)}$ is a standard hypothesis in studies of non-asymptotic stability properties of SMC algorithms. Similar conditions have been adopted in \citep[Chapter 7]{smc:theory:Dm04} and \citep{smc:the:LGO04}, amongst others. $\mathbf{(C)}$ guarantees that $Q_{p,n}$, and related objects, obey a variety of regularity conditions. In particular, we immediately obtain \begin{equation} \sup_{p,n}\sup_{x}\overline{Q}_{p,n}(1)(x)\leq\sup_{p,n}\sup_{x,y}\frac{Q_{p,n}(1)(x)}{Q_{p,n}(1)(y)}\leq\delta\epsilon<+\infty.\label{eq:Q_p,n_bounded} \end{equation} Furthermore if we introduce the following operators on probability measures: \begin{equation} \Phi_{n}:\mu\in\mathcal{P}\mapsto\frac{\mu Q_{n}}{\mu(g_{n-1})}\in\mathcal{P}\quad\quad n\geq1,\label{eq:Phi_defn} \end{equation} \begin{equation} \Phi_{p,n}:=\Phi_{n}\circ\cdots\circ\Phi_{p+1},\quad0\leq p<n,\label{eq:Phi_defn_semigroup} \end{equation} then by \citep[Theorem 4.3.1]{smc:theory:Dm04}, there exists a finite constant $C$ and $\rho\in[0,1)$ such that \begin{equation} \sup_{\mu,\mu^{\prime}\in\mathcal{P}}\left\Vert \Phi_{p,n}(\mu)-\Phi_{p,n}(\mu^{\prime})\right\Vert \leq C\rho^{n-p}.\label{eq:Phi_stability} \end{equation} It follows from (\ref{eq:filtering_recursion}), (\ref{eq:Phi_defn}) and (\ref{eq:Phi_defn_semigroup}) that \[ \pi_{n+1}=\Phi_{n+1}(\pi_{n})=\Phi_{p,n+1}(\pi_{p})=\Phi_{0,n+1}(\mu_{0}),\quad0\le p\leq n, \] so (\ref{eq:Phi_stability}) can be used 
to describe the forgetting of the initial distribution of the non-linear filter. Properties similar to (\ref{eq:Q_p,n_bounded}) and (\ref{eq:Phi_stability}) can be obtained under conditions weaker and more realistic than $\mathbf{(C)}$; see, e.g., \citep{whiteley2013}, but the developments involved are substantially more technical and lengthy to present. Our aim is to expedite the presentation of stability properties of $\alpha$SMC, and $\mathbf{(C)}$ allows this to be achieved whilst retaining some of the essence of more realistic hypotheses on $g_{n}$ and $f$. \end{rem} The main result of this section is the following theorem, whose proof we briefly postpone. \begin{thm} \label{thm:L_R_mix}Assume $\mathbf{(A2)}$, $\mathbf{(B^{++})}$ and $\mathbf{(C)}$. Then there exist finite constants $c_{1}$ and, for each $r\geq1$, $c_{2}(r)$, such that for any $\tau\in(0,1]$, $N\geq1$, and $\varphi\in\mathcal{L}$, \begin{equation} \inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau\quad\Rightarrow\quad\begin{cases} \;\;\sup_{n\geq1}\;\mathbb{E}\left[\left(\dfrac{Z_{n}^{N}}{Z_{n}}\right)^{2}\right]^{1/n}\;\leq\;1+\dfrac{c_{1}}{N\tau},\\ \\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\mathrm{and}\\ \\ \;\;\sup_{n\geq0}\;\mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\;\leq\;\dfrac{c_{2}(r)}{\sqrt{N\tau}}. \end{cases}\label{eq:them_Stability_Statement} \end{equation} \end{thm} \begin{rem} \label{rem:linear_variance}Using very similar arguments to those of \citet[Proof of Corollary 5.2]{smc:the:CdMG11}, it follows from the first inequality of (\ref{eq:them_Stability_Statement}) that \begin{eqnarray*} \left.\begin{array}{c} \inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau\\ \\ \text{and}\\ \\ N\tau\geq nc_{1} \end{array}\right\} & \quad\Rightarrow\quad & \mathbb{E}\left[\left(\dfrac{Z_{n}^{N}}{Z_{n}}-1\right)^{2}\right]\leq\frac{2nc_{1}}{N\tau}.
\end{eqnarray*} \end{rem} \begin{rem} It follows immediately from the second inequality in (\ref{eq:them_Stability_Statement}) that when $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau$ for all $N\geq1$, the prediction filter errors are time-uniformly convergent in the sense \[ \lim_{N\rightarrow\infty}\sup_{n\geq0}\mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}=0. \] \end{rem} \begin{rem} Further to the discussion of Section~\ref{sub:Instances-of-SMC}, in the case of the BPF we have $\mathcal{E}_{n}^{N}=1$ and hence $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau$ always, and for the ARPF we also have $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau$ always, by virtue of the ESS rule for selection of $\alpha_{n}$. In Section~\ref{sec:Discussion} we shall introduce new algorithms designed to guarantee $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau$. \end{rem} \begin{rem} It is possible to deduce estimates for the constants $c_{1}$ and $c_{2}(r)$ using the statements and proofs of Propositions~\ref{prop:norm_const_bound} and~\ref{prop:L_p_bound_mixing}, which are the main ingredients in the proof of Theorem~\ref{thm:L_R_mix}. We omit such expressions only for simplicity of presentation. \end{rem} The proofs of Propositions~\ref{prop:norm_const_bound} and~\ref{prop:L_p_bound_mixing} are given in the appendix. \begin{prop} \label{prop:norm_const_bound}Assume $\mathbf{(A2)}$, $\mathbf{(B^{++})}$\textbf{\textup{\emph{ }}}\textup{\emph{and}}\textup{ }$\mathbf{(C)}$\textup{\emph{. If for some sequence of constants $\left\{ \tau_{n};n\geq0\right\} \in(0,1]^{\mathbb{N}}$ and $N\geq1$, \[ \mathcal{E}_{n}^{N}\geq\tau_{n}, \] then for any $n\geq1$, \[ \mathbb{E}\left[\left(\frac{Z_{n}^{N}}{Z_{n}}-1\right)^{2}\right]\leq\sum_{p=0}^{n-1}\frac{\mathrm{osc}\left(\overline{Q}_{p,n}(1)\right)^{2}}{N\tau_{p}}\left(\mathbb{E}\left[\left(\frac{Z_{p}^{N}}{Z_{p}}-1\right)^{2}\right]+1\right). 
\] }} \end{prop} \medskip{} \begin{prop} \label{prop:L_p_bound_mixing}Consider the constants and Markov kernels: \[ \delta_{p,n}:=\sup_{x,y}\frac{Q_{p,n}(1)(x)}{Q_{p,n}(1)(y)},\quad\quad P_{p,n}(x,A):=\frac{Q_{p,n}(\mathbb{I}_{A})(x)}{Q_{p,n}(1)(x)},\quad x\in\mathsf{X},A\in\mathcal{X},\;0\leq p\leq n. \] Assume $\mathbf{(A2)}$,\emph{ }$\mathbf{(B)}$\textbf{\textup{\emph{ }}}\textup{\emph{and }}$\mathbf{(C)}$\textup{\emph{.}}\textbf{\textup{\emph{ }}}\textup{\emph{Then for any $r\geq1$ there exists a finite constant $B(r)$ such that for any $N\geq1$, $n\geq0$, and $\varphi\in\mathcal{L}$,}}\textup{ \begin{equation} \mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\leq4B(r)^{1/r}\sum_{p=0}^{n}\delta_{p,n}\left\Vert P_{p,n}(\bar{\varphi})\right\Vert \mathbb{E}\left[\left|\mathcal{C}_{p,n}^{N}\right|^{r}\right]^{1/r}.\label{eq:L_p_bound_mixing_statement} \end{equation} }\textup{\emph{where $\bar{\varphi}:=\varphi-\pi_{n}(\varphi)$ and}} \[ \mathcal{C}_{p,n}^{N}:=\frac{\sqrt{\sum_{i}\left(\beta_{p,n}^{i}W_{p}^{i}\right)^{2}}}{\sum_{i}\beta_{p,n}^{i}W_{p}^{i}}. \] \end{prop} \medskip{} \begin{proof} \emph{(of Theorem~\ref{thm:L_R_mix})} For the first bound on the right of (\ref{eq:them_Stability_Statement}) under the conditions of the Theorem we apply Proposition~\ref{prop:norm_const_bound} to give the following recursive bound: \begin{equation} v_{n}\leq\sum_{p=0}^{n-1}\frac{C}{N\tau}\left(v_{p}+1\right),\label{eq:v_n_recursion} \end{equation} where $v_{n}:=\mathbb{E}\left[\left(Z_{n}^{N}/Z_{n}-1\right)^{2}\right]$ and \[ C:=\sup_{p,n}\text{osc}\left(\overline{Q}_{p,n}(1)\right)^{2}\leq4\sup_{p,n}\left\Vert \overline{Q}_{p,n}(1)\right\Vert ^{2}<+\infty, \] under $\mathbf{(C)}$; see Remark~\ref{rem:assumption_C}, (\ref{eq:Q_p,n_bounded}). 
We shall now prove \begin{equation} v_{n}\leq\left(1+\frac{C}{N\tau}\right)^{n}-1,\quad\forall n\geq0,\label{eq:v_n_induction_hyp} \end{equation} which holds trivially if $C=0$, since in that case $v_{n}=0$ by (\ref{eq:v_n_recursion}). Therefore suppose $C>0$. The argument is inductive. To initialize, note that since by definition $Z_{0}^{N}=Z_{0}=1$, we have $v_{0}=0$. Now assume (\ref{eq:v_n_induction_hyp}) holds at all ranks strictly less than some fixed $n\geq1$. Using (\ref{eq:v_n_recursion}), we then have at rank $n$, \begin{eqnarray*} v_{n} & \leq & \frac{C}{N\tau}\sum_{p=0}^{n-1}\left(v_{p}+1\right)\\ & \leq & \frac{C}{N\tau}\sum_{p=0}^{n-1}\left(1+\frac{C}{N\tau}\right)^{p}\\ & = & \frac{C}{N\tau}\frac{\left(1+\frac{C}{N\tau}\right)^{n}-1}{\left(1+\frac{C}{N\tau}\right)-1}\\ & = & \left(1+\frac{C}{N\tau}\right)^{n}-1. \end{eqnarray*} This completes the proof of (\ref{eq:v_n_induction_hyp}), from which the first bound on the right of (\ref{eq:them_Stability_Statement}) follows immediately upon noting that by Theorem~\ref{thm:convergence}, $\mathbb{E}[Z_{n}^{N}]=Z_{n}$.
For the second bound on the right of (\ref{eq:them_Stability_Statement}), first note that as per Remark~\ref{rem:assumption_C}, under $\mathbf{(C)}$ we have \begin{eqnarray*} \left\Vert P_{p,n}(\bar{\varphi})\right\Vert & = & \sup_{x}\left|P_{p,n}(\varphi)(x)-\pi_{n}(\varphi)\right|\\ & = & \sup_{x}\left|\Phi_{p,n}(\delta_{x})(\varphi)-\Phi_{p,n}(\pi_{p})(\varphi)\right|\\ & \leq & \sup_{\mu,\nu\in\mathcal{P}}\left\Vert \Phi_{p,n}(\mu)-\Phi_{p,n}(\nu)\right\Vert \left\Vert \varphi\right\Vert \leq\left\Vert \varphi\right\Vert C\rho^{n-p}, \end{eqnarray*} and \[ \sup_{n\geq0}\sup_{p\leq n}\;\delta_{p,n}<+\infty. \] Using these upper bounds, the fact that under $\mathbf{(B^{++})}$ we have $\beta_{p,n}^{i}=1/N$, and Proposition~\ref{prop:L_p_bound_mixing}, we find that there exists a finite constant $\widetilde{B}(r)$ such that for any $N\geq1$, $n\geq0$, $\varphi\in\mathcal{L}$, \[ \mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\leq\left\Vert \varphi\right\Vert \frac{\widetilde{B}(r)}{\sqrt{N}}\sum_{p=0}^{n}\rho^{n-p}\mathbb{E}\left[\left|\mathcal{E}_{p}^{N}\right|^{-r/2}\right]^{1/r}, \] where \[ \mathcal{E}_{n}^{N}=\frac{\left(N^{-1}\sum_{i}W_{n}^{i}\right)^{2}}{N^{-1}\sum_{i}\left(W_{n}^{i}\right)^{2}}. \] \end{proof} \section{Discussion\label{sec:Discussion}} \subsection{Why not just run independent particle filters and average?\label{sub:Why-not-just}} One obvious approach to parallelization of SMC is to run a number of independent copies of a standard algorithm, such as the BPF, and then in some sense simply average their outputs. Let us explain possible shortcomings of this approach. Suppose we want to run $s\geq1$ independent copies of Algorithm~\ref{alg:boot_pf}, each with $q\geq1$ particles.
For purposes of exposition, it is helpful to express this collection of independent algorithms as a particular instance of $\alpha$SMC: for the remainder of Section~\ref{sub:Why-not-just}, we set $N=sq$ and consider Algorithm~\ref{alg:aSMC} with $\mathbb{A}_{N}$ chosen to consist only of the block diagonal matrix: \begin{equation} \left[\begin{array}{cccc} \mathbf{q^{-1}} & \mathbf{0} & \cdots & \mathbf{0}\\ \mathbf{0} & \mathbf{q^{-1}} & \cdots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{q^{-1}} \end{array}\right]\label{eq:alpha_block} \end{equation} where $\mathbf{q}^{-1}$ is a $q\times q$ submatrix with every entry equal to $q^{-1}$ and $\mathbf{0}$ is a submatrix of zeros, of the same size. In this situation, a simple application of Lemma~\ref{lem:W_n_representation} shows that for any $n\geq1$ and $\ell\in[s]$, if we define $B(\ell):=\{(\ell-1)q+1,(\ell-1)q+2,\ldots,\ell q\}$, then \begin{equation} \text{for all}\quad i_{n}\in B(\ell),\quad\quad W_{n}^{i_{n}}=\prod_{p=0}^{n-1}\left(N^{-1}\sum_{i_{p}\in B(\ell)}g_{p}\left(\zeta_{p}^{i_{p}}\right)\right)=:\mathbb{W}_{n}^{\ell},\label{eq:W_n^i_blocks} \end{equation} c.f. (\ref{eq:bootstrap_W_n^i})--(\ref{eq:bootstrap_Z_n^N}), and furthermore upon inspection of Algorithm~\ref{alg:aSMC}, we find \begin{equation} \text{for all }\ell\in[s]\text{ and }i\in B(\ell),\quad\quad\mathbb{P}\left(\left.\zeta_{n}^{i}\in A\right|\mathcal{F}_{n-1}\right)=\frac{\sum_{j\in B(\ell)}g_{n-1}\left(\zeta_{n-1}^{j}\right)f\left(\zeta_{n-1}^{j},A\right)}{\sum_{j\in B(\ell)}g_{n-1}\left(\zeta_{n-1}^{j}\right)},\label{eq:blocks_law} \end{equation} for any $A\in\mathcal{X}$. It follows that the blocks of particles \[ \hat{\zeta}_{n}^{k}:=\left\{ \zeta_{n}^{i}\right\} _{i\in B(\ell)},\quad\ell\in[s], \] are independent, and for each $\ell\in[s]$, the sequence $\left\{ \hat{\zeta}_{n}^{\ell};n\geq0\right\} $ evolves under the same law as a BPF, with $q$ particles. 
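As a small practical aside, the block-diagonal matrix (\ref{eq:alpha_block}) is easily assembled with a Kronecker product and checked numerically; the values of $s$ and $q$ below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the block-diagonal choice (eq:alpha_block): s independent blocks of
# q fully interacting particles.  np.kron builds the N x N matrix directly.
s, q = 3, 4
N = s * q
alpha_block = np.kron(np.eye(s), np.full((q, q), 1.0 / q))

assert alpha_block.shape == (N, N)
assert np.allclose(alpha_block.sum(axis=1), 1.0)   # Markov (row-stochastic)
assert np.allclose(alpha_block.sum(axis=0), 1.0)   # doubly stochastic, so (B++) holds
assert alpha_block[0, q] == 0.0                    # no interaction across blocks
```

Double stochasticity is immediate here since each $q\times q$ block is symmetric with constant entries, confirming that this instance of $\alpha$SMC satisfies $\mathbf{(B^{++})}$.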
Furthermore we notice \[ \pi_{n}^{N}=\pi_{n}^{sq}=\frac{\sum_{i}W_{n}^{i}\;\delta_{\zeta_{n}^{i}}}{\sum_{i}W_{n}^{i}}=\frac{\sum_{\ell\in[s]}\sum_{i\in B(\ell)}W_{n}^{i}\;\delta_{\zeta_{n}^{i}}}{\sum_{\ell\in[s]}\sum_{i\in B(\ell)}W_{n}^{i}}=\frac{\sum_{\ell\in[s]}\mathbb{W}_{n}^{\ell}\left(q^{-1}\sum_{i\in B(\ell)}\delta_{\zeta_{n}^{i}}\right)}{\sum_{\ell\in[s]}\mathbb{W}_{n}^{\ell}}, \] where $q^{-1}\sum_{i\in B(\ell)}\delta_{\zeta_{n}^{i}}$ may be regarded as the approximation of $\pi_{n}$ obtained from the $\ell$th block of particles. Since we have assumed that $\mathbb{A}_{N}$ consists only of the matrix (\ref{eq:alpha_block}), $\mathbf{(A2)}$ and $\mathbf{(B^{++})}$ hold, and by Theorem~\ref{thm:convergence} we are assured of the a.s.~convergence $\pi_{n}^{sq}(\varphi)\rightarrow\pi_{n}(\varphi)$ when $q$ is fixed and $s\rightarrow\infty$. In words, we have convergence as the total number of bootstrap algorithms tends to infinity, even though the number of particles within each algorithm is fixed. On the other hand, simple averaging of the output from the $s$ independent algorithms would entail reporting: \begin{equation} \frac{1}{sq}\sum_{i\in[sq]}\delta_{\zeta_{n}^{i}}\label{eq:naive} \end{equation} as an approximation of $\pi_{n}$; the problem is that (\ref{eq:naive}) is biased, in the sense that in general it is not true that, with $q$ fixed, $(sq)^{-1}\sum_{i\in[sq]}\varphi(\zeta_{n}^{i})\rightarrow\pi_{n}(\varphi)$ as $s\rightarrow\infty$ (although obviously we do have convergence if $q\rightarrow\infty$). In summary, simple averages across independent particle filters do not, in general, converge as the number of algorithms grows. We can also discuss the quality of an approximation of $Z_{n}$ obtained by simple averaging across the $s$ independent algorithms; let us consider the quantities \[ \mathbb{Z}_{n}^{(q,\ell)}:=\frac{1}{\ell}\sum_{j\in[\ell]}\mathbb{W}_{n}^{j},\quad\ell\in[s].
\] Comparing (\ref{eq:W_n^i_blocks}) with (\ref{eq:bootstrap_Z_n^N}), and noting (\ref{eq:blocks_law}) and the independence properties described above, we have \begin{equation} \mathbb{E}\left[\mathbb{Z}_{n}^{(q,s)}\right]=Z_{n},\quad\quad\mathbb{E}\left[\left(\frac{\mathbb{Z}_{n}^{(q,s)}}{Z_{n}}-1\right)^{2}\right]=\frac{1}{s}\mathbb{E}\left[\left(\frac{\mathbb{Z}_{n}^{(q,1)}}{Z_{n}}-1\right)^{2}\right],\label{eq:Z_naive_average} \end{equation} where the first equality holds due to the first part of Theorem~\ref{thm:convergence}: in this context the well-known lack-of-bias property of the BPF. Under certain ergodicity and regularity conditions, \citet[Proposition 4]{WhiteleyTPF} establishes that $\mathbb{E}\left[\left(\mathbb{Z}_{n}^{(q,1)}/Z_{n}\right)^{2}\right]$ grows exponentially fast along observation sample paths when $q$ is fixed and $n\rightarrow\infty$. When that occurs, it is clear from (\ref{eq:Z_naive_average}) that $s$ must be scaled up exponentially fast with $n$ in order to control the relative variance of $\mathbb{Z}_{n}^{(q,s)}$. On the other hand, by Theorem~\ref{thm:L_R_mix} and Remark~\ref{rem:linear_variance}, it is apparent that if we design an instance of $\alpha$SMC so as to enforce $\inf_{n\geq0}\mathcal{E}_{n}^{N}>0$, then we can control $\mathbb{E}\left[\left(Z_{n}^{N}/Z_{n}\right)^{2}\right]$ at a more modest computational cost. When $\mathbb{A}_{N}$ consists only of the matrix (\ref{eq:alpha_block}) we do not have a guarantee that $\inf_{n\geq0}\mathcal{E}_{n}^{N}>0$, but in Section~\ref{sub:Algorithms-with-adaptive} we shall suggest some novel algorithms which do guarantee this lower bound and therefore enjoy the time-uniform convergence and linear-in-time variance properties of Theorem~\ref{thm:L_R_mix}. Before addressing these stability issues we discuss the conditions under which the $\alpha$SMC algorithm converges.
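As a toy Monte Carlo illustration of the variance identity (\ref{eq:Z_naive_average}), averaging $s$ i.i.d.~unbiased estimates of $Z_{n}$ divides the relative variance by $s$; the log-normal law of the per-copy estimates below is an illustrative assumption standing in for the $\mathbb{W}_{n}^{\ell}$.

```python
import numpy as np

# Toy check of (eq:Z_naive_average): averaging s i.i.d. unbiased estimates of
# Z_n divides the relative variance by s.  The log-normal distribution of the
# per-copy estimates is an illustrative assumption; the point made in the text
# is that this per-copy relative variance can itself grow exponentially in n,
# so the 1/s reduction alone does not rescue the naive averaging strategy.
rng = np.random.default_rng(2)
Z_true = 1.0
s, reps = 16, 200_000

# E[W] = exp(-0.5 + 0.5) = 1 = Z_true, so each column is an unbiased estimate
W = Z_true * rng.lognormal(mean=-0.5, sigma=1.0, size=(reps, s))
rel_var_single = np.mean((W[:, 0] / Z_true - 1.0) ** 2)
rel_var_avg = np.mean((W.mean(axis=1) / Z_true - 1.0) ** 2)

assert abs(rel_var_avg - rel_var_single / s) < 0.05 * rel_var_single
```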
\subsection{Ensuring convergence\label{sub:Ensuring-convergence}} Throughout Section~\ref{sub:Ensuring-convergence}, we consider the generic Algorithm~\ref{alg:aSMC}. We describe what can go wrong if the conditions of Theorem~\ref{thm:convergence} do not hold. As an example of a situation in which $\mathbf{(B^{+})}$ fails, suppose that $\mathbb{A}_{N}$ consists only of the transition matrix of a simple random walk on the star graph with $N$ vertices, call it $\mathcal{S}_{N}$. That is, for $N>2$, $\mathcal{S}_{N}$ is an undirected tree with one internal vertex and $N-1$ leaves, and for $N\leq2$, all vertices are leaves. Examples of $\mathcal{S}_{N}$ are illustrated in Figure~\ref{fig:Star-graphs-of}. It is elementary that a simple random walk on $\mathcal{S}_{N}$ has a unique invariant distribution given by \[ \frac{d_{N}^{i}}{\sum_{j}d_{N}^{j}},\quad i\in[N],\quad\quad\text{ where}\quad d_{N}^{i}:=\text{ degree of vertex }i\text{ in }\mathcal{S}_{N}. \] Assuming that for every $N>2$ the internal vertex of $\mathcal{S}_{N}$ is labelled vertex $1$, we then have $\beta_{p,n}^{1}=1/2$ for all $0\leq p<n$ and all $N>2$, so $\mathbf{(B^{+})}$ does not hold, and thus part 2) of Theorem~\ref{thm:convergence} does not apply. \begin{figure} \hfill{}\includegraphics[width=1\textwidth]{stars2}\hfill{} \protect\caption{\label{fig:Star-graphs-of}Star graphs. } \end{figure} As a more explicit example of convergence failure, suppose that $\mathbb{A}_{N}$ consists only of the matrix which has $1$ for every entry in its first column, and zeros for all other entries. This is the transition matrix of a random walk on a directed graph of which all edges lead to vertex $1$. It follows that for all $0\leq p<n$, we have $\beta_{p,n}^{1}=1$ and $\beta_{p,n}^{i}=0$ for all $i\in[N]\setminus\{1\}$, so $\mathbf{(B^{+})}$ clearly does not hold.
If additionally $f(x,\cdot)=\delta_{x}(\cdot)$, then by inspection of Algorithm~\ref{alg:aSMC} we have $\mathbb{P}\left(\left\{ \zeta_{n}^{i}=\zeta_{0}^{1}\right\} \right)=1$ for all $i\in[N]$ and all $n\geq1$. We then also have $\mathbb{P}\left(\left\{ \pi_{n}^{N}=\delta_{\zeta_{0}^{1}}\right\} \right)=1$, so that we obtain a generally poor and non-convergent approximation of $\pi_{n}$. In both these situations vertex $1$ is, in graph-theoretic terms, a \emph{hub}, and an intuitive explanation of the convergence failure is that the contribution of particle $1$ to $\pi_{n}^{N}$ does not become negligible as $N\rightarrow\infty$, so that no ``averaging'' takes place. Assumption $\mathbf{(B^{+})}$ ensures enough negligibility to prove the weak laws of large numbers in Theorem~\ref{thm:convergence}. Assumption $\mathbf{(B^{++})}$ may be viewed as ensuring negligibility, and in such a way as to ensure the $\sqrt{N}$ rate of convergence and strong law in the final part of Theorem~\ref{thm:convergence}. As a practical summary, we recommend verifying $\mathbf{(B^{++})}$, or at least avoiding graphs with hubs, since otherwise $\alpha$SMC may not converge. \subsection{Provably stable algorithms with adaptive interaction\label{sub:Algorithms-with-adaptive}} There are of course many choices of $\mathbb{A}_{N}$ which do satisfy $\mathbf{(B^{++})}$. In this section we provide some guidance and suggestions on this matter.
In order to focus our attention we consider, in addition to $\mathbf{(B^{++})}$, the following criteria against which to assess candidates for $\mathbb{A}_{N}$ and whatever functional is used at line $(\star)$ of Algorithm~\ref{alg:aSMC}: \begin{enumerate}[\hspace{5 mm}(a)] \item\label{enu:criterion1}the condition $\inf_{n\geq0}\mathcal{E}_{n}^{N}>0$ should be enforced, so as to ensure stability; \item\label{enu:criterion2}the computational complexity of associated sampling, weight and ESS calculations should not be prohibitively high. \end{enumerate} The motivation for (\ref{enu:criterion1}) is the theoretical assurance given by Theorem~\ref{thm:L_R_mix}. The motivation for (\ref{enu:criterion2}) is simply that we do not want an algorithm which is much more expensive than any of the standard SMC methods, Algorithms~\ref{alg:SIS}--\ref{alg:boot_pf} and the ARPF. It is easily checked that the complexity of SIS is $O(N)$ per unit time step, which is the same as the complexity of the BPF \citep{carpenter1999improved} and the ARPF. Throughout the remainder of Section~\ref{sub:Algorithms-with-adaptive} we shall assume that $\mathbb{A}_{N}$ consists only of transition matrices of simple random walks on regular undirected graphs. We impose a little structure in addition to this as per the following definition, which identifies an object related to the standard notion of a block-diagonal matrix. \begin{defn*} A \textbf{B-matrix} is a Markov transition matrix which specifies a simple random walk on a regular undirected graph which has a self-loop at every vertex and whose connected components are all complete subgraphs. \end{defn*} Note that due to the graph regularity appearing in this definition, if $\mathbb{A}_{N}$ consists only of B-matrices, then $\mathbf{(B^{++})}$ is immediately satisfied. This regularity is also convenient for purposes of interpretation: it seems natural to use graph degree to give a precise meaning to ``degree of interaction''.
Indeed $Id$ and $\mathbf{1}_{1/N}$ are both B-matrices, respectively specifying simple random walks on $1$-regular and $N$-regular graphs, and recall that for the ARPF, $\mathbb{A}_{N}=\left\{ Id,\mathbf{1}_{1/N}\right\} $; the main idea behind the new algorithms below is to consider an instance of $\alpha$SMC in which $\mathbb{A}_{N}$ is defined to consist of B-matrices of various degrees $d\in[N]$, and to define adaptive algorithms which select the value of $\alpha_{n-1}$ by searching through $\mathbb{A}_{N}$ to find the graph with the smallest $d$ which achieves $\mathcal{E}_{n}^{N}\geq\tau>0$ and hence satisfies criterion (\ref{enu:criterion1}). In this way, we ensure provable stability whilst trying to avoid the complete interaction which occurs when $\alpha_{n-1}=\mathbf{1}_{1/N}$. Another appealing property of B-matrices is formalized in the following lemma; see criterion (\ref{enu:criterion2}) above. The proof is given in the appendix. \begin{lem} \label{lem:complexity}Suppose that $A=\left(A^{ij}\right)$ is a B-matrix of size $N$. Then given the quantities $\left\{ W_{n-1}^{i}\right\} _{i\in[N]}$ and $\left\{ g_{n-1}(\zeta_{n-1}^{i})\right\} _{i\in[N]}$, the computational complexity of calculating $\left\{ W_{n}^{i}\right\} _{i\in[N]}$ and simulating $\left\{ \zeta_{n}^{i}\right\} _{i\in[N]}$ as per Algorithm~\ref{alg:aSMC}, using $\alpha_{n-1}=A$, is $O(N)$. \end{lem} When calculating the overall complexity of Algorithm~\ref{alg:aSMC} we must also consider the complexity of line $(\star)$, which in general depends on $\mathbb{A}_{N}$ and the particular functional used to choose $\alpha_{n}$. We resume this complexity discussion after describing the specifics of some adaptive algorithms. \subsubsection*{Adaptive interaction} Throughout this section we set $m\in\mathbb{N}$ and then $N=2^{m}$. Consider Algorithm~\ref{alg:aSMC} with $\mathbb{A}_{N}$ chosen to be the set of B-matrices of size $N$.
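As an illustrative aside on Lemma~\ref{lem:complexity}, the key observation is that for a B-matrix the weighted sum $\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})$ is constant within each block and equals the block average of the incremental weights, so no $N\times N$ matrix product is needed. The following numpy sketch (with random placeholder inputs) makes this concrete.

```python
import numpy as np

# Sketch of the O(N) update behind the complexity lemma: when alpha_{n-1} is a
# B-matrix with blocks of size d, the new weight is simply the block average of
# W_{n-1}^j g_{n-1}(zeta_{n-1}^j).  The inputs here are random placeholders.
rng = np.random.default_rng(3)
N, d = 12, 3
blocks = [list(range(i, i + d)) for i in range(0, N, d)]
Wg = rng.random(N)                      # placeholder for W_{n-1}^j g_{n-1}(zeta_{n-1}^j)

W_new = np.empty(N)
for block in blocks:                    # one pass over the particles: O(N) overall
    W_new[block] = Wg[block].mean()     # block average, shared by the whole block

# Agrees with the full matrix-vector product against the B-matrix itself:
A = np.zeros((N, N))
for block in blocks:
    A[np.ix_(block, block)] = 1.0 / d   # uniform step within each complete block
assert np.allclose(W_new, A @ Wg)
```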
We suggest three adaptation rules at line $(\star)$ of Algorithm~\ref{alg:aSMC}: Simple, Random, and Greedy, all implemented via Algorithm~\ref{alg:generic adaptation} (note that dependence of some quantities on $n$ is suppressed from the notation there), but differing in the way they select the index list $\mathcal{I}_{k}$ which appears in the ``while'' loop of that procedure. The methods for selecting $\mathcal{I}_{k}$ are summarised in Table~\ref{tab:Choosing_I_k}: the Simple rule needs little explanation, the Random rule implements an independent random shuffling of indices, and the Greedy rule is intended, heuristically, to pair large weights, $\mathbb{W}_{k}^{i}$, with small weights in order to terminate the ``while'' loop with as small a value of $k$ as possible. Note that, formally, in order for our results for $\alpha$SMC to apply when the Random rule is used, the underlying probability space must be appropriately extended, but the details are trivial so we omit them. \begin{algorithm} \begin{raggedright} \qquad{}at iteration $n$ and line $(\star)$ of Algorithm~\ref{alg:aSMC}, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}for $i=1,\ldots,N$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}set $B(0,i)=\{i\}$, $\mathbb{W}_{0}^{i}=W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}set $k=0$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}set $\overline{\mathbb{W}}_{0}=N^{-1}\sum_{i}\mathbb{W}_{0}^{i}$ , $\mathcal{E}=\frac{\left(\overline{\mathbb{W}}_{0}\right)^{2}}{N^{-1}\sum_{i}\left(\mathbb{W}_{0}^{i}\right)^{2}}$, \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}while $\mathcal{E}<\tau$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}set $\mathcal{I}_{k}$ according to the Simple, Random or Greedy scheme of Table~\ref{tab:Choosing_I_k}. 
\par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}for $i=1,\ldots,N/2^{k+1}$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}\qquad{}set $B(k+1,i)=B(k,\mathcal{I}_{k}(2i-1))\cup B(k,\mathcal{I}_{k}(2i))$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}\qquad{}set $\mathbb{W}_{k+1}^{i}=\mathbb{W}_{k}^{\mathcal{I}_{k}(2i-1)}/2+\mathbb{W}_{k}^{\mathcal{I}_{k}(2i)}/2$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}set $k=k+1$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}\qquad{}set $\mathcal{E}=\frac{\left(\overline{\mathbb{W}}_{0}\right)^{2}}{N^{-1}2^{k}\sum_{i\in[N/2^{k}]}\left(\mathbb{W}_{k}^{i}\right)^{2}}$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}set $K_{n-1}=k$ \par\end{raggedright} \begin{raggedright} \qquad{}\qquad{}set $\alpha_{n-1}^{ij}=\begin{cases} 1/2^{K_{n-1}}, & \text{if }i\sim j\text{ according to \ensuremath{\left\{ B(K_{n-1},i)\right\} _{i\in[N/2^{K_{n-1}}]}}}\\ 0, & \text{otherwise}. \end{cases}$ \par\end{raggedright} \protect\caption{\label{alg:generic adaptation}Adaptive selection of $\alpha_{n-1}$} \end{algorithm} Following the termination of the ``while'' loop, Algorithm~\ref{alg:generic adaptation} outputs an integer $K_{n-1}$ and a partition $\left\{ B(K_{n-1},i)\right\} _{i\in[N/2^{K_{n-1}}]}$ of $[N]$ into $N/2^{K_{n-1}}$ subsets, each of cardinality $2^{K_{n-1}}$; this partition specifies $\alpha_{n-1}$ as a B-matrix and $2^{K_{n-1}}$ is the degree of the corresponding graph (we keep track of $K_{n-1}$ for purposes of monitoring algorithm performance in Section~\ref{sub:Numerical-illustrations}). Proposition~\ref{prop:Upon-termination-of} is a formal statement of its operation and completes our complexity considerations. The proof is given in the appendix. 
It can be checked by an inductive argument similar to the proof of Lemma~\ref{lem:ARPF_A2}, also in the appendix, that when $\alpha_{n}$ is chosen according to Algorithm~\ref{alg:generic adaptation} combined with any of the adaptation rules in Table~\ref{tab:Choosing_I_k}, \textbf{(A2)} is satisfied. \begin{prop} \label{prop:Upon-termination-of}The weights $\left\{ \mathbb{W}_{k}^{i}\right\} _{i\in[N/2^{k}]}$ calculated in Algorithm~\ref{alg:generic adaptation} obey the expression \begin{equation} \mathbb{W}_{k}^{i}=2^{-k}\sum_{j\in B(k,i)}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j}).\label{eq:W_k_explicit} \end{equation} Moreover, $\alpha_{n-1}$ delivered by Algorithm~\ref{alg:generic adaptation} is a B-matrix and when this procedure is used at line $(\star)$ of Algorithm~\ref{alg:aSMC}, the weights calculated in Algorithm~\ref{alg:aSMC} are given, for any $i\in[N/2^{K_{n-1}}]$, by \begin{equation} W_{n}^{j}=\mathbb{W}_{K_{n-1}}^{i},\quad\quad\text{for all \quad}j\in B(K_{n-1},i)\label{eq:W_equals_bb_W} \end{equation} and $\mathcal{E}_{n}^{N}\geq\tau$ always. The overall worst-case complexity of Algorithm~\ref{alg:aSMC} is, for the three adaptation rules in Table~\ref{tab:Choosing_I_k}, Simple: $O(N)$, Random: $O(N)$, and Greedy: $O(N\log_{2}N)$. 
\end{prop} \begin{table}[h] \begin{tabular}[c]{>{\raggedright}p{1.2cm}l} \toprule \addlinespace \textbf{\footnotesize{Simple}} & {\footnotesize{set $\mathcal{I}_{k}=(1,\ldots,N/2^{k})$}}\tabularnewline\addlinespace \midrule \addlinespace \textbf{\footnotesize{Random}} & {\footnotesize{if $k=0$, set $\mathcal{I}_{k}$ to a random permutation of $[N/2^{k}]$, otherwise $\mathcal{I}_{k}=(1,\ldots,N/2^{k})$}}\tabularnewline\addlinespace \midrule \addlinespace \multirow{2}{1.2cm}{\textbf{\footnotesize{Greedy}}} & {\footnotesize{set $\mathcal{I}_{k}$ such that}}\tabularnewline & {\hspace{\bigskipamount}\footnotesize{$\mathbb{W}_{k}^{\mathcal{I}_{k}(1)}\geq\mathbb{W}_{k}^{\mathcal{I}_{k}(3)}\geq\cdots\geq\mathbb{W}_{k}^{\mathcal{I}_{k}(N/2^{k}-1)}\geq\mathbb{W}_{k}^{\mathcal{I}_{k}(N/2^{k})}\geq\cdots\geq\mathbb{W}_{k}^{\mathcal{I}_{k}(4)}\geq\mathbb{W}_{k}^{\mathcal{I}_{k}(2)}$}}\tabularnewline\addlinespace \bottomrule\addlinespace \end{tabular}\protect\caption{\label{tab:Choosing_I_k}Adaptation rules for choosing $\mathcal{I}_{k}$} \end{table} \subsection{Numerical illustrations\label{sub:Numerical-illustrations}} We consider a stochastic volatility HMM: \begin{eqnarray*} & & X_{0}\sim\mathcal{N}(0,1),\quad X_{n}=aX_{n-1}+\sigma V_{n},\\ & & Y_{n}=\varepsilon W_{n}\exp(X_{n}/2), \end{eqnarray*} where $\left\{ V_{n}\right\} _{n\in\mathbb{N}}$ and $\left\{ W_{n}\right\} _{n\in\mathbb{N}}$ are sequences of mutually i.i.d.~$\mathcal{N}(0,1)$ random variables, $\left|a\right|<1$, and $\sigma,\varepsilon>0$. To study the behaviour of the different adaptation rules in terms of effective sample size, a sequence of $3\cdot10^{4}$ observations was generated from the model with $a=0.9$, $\sigma=0.25$, and $\varepsilon=0.1$. This model obviously does not satisfy $\mathbf{(C)}$, but $\mathbf{(A1)}$ is satisfied as long as the observation record does not include the value zero. 
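Before turning to the results, the pairwise merging in Algorithm~\ref{alg:generic adaptation} can be sketched in a few lines. The minimal Python below is illustrative only and implements just the Simple rule (the Random and Greedy rules differ only in how the index list is ordered before merging): starting from singleton blocks, it halves the number of blocks until the ESS criterion $\mathcal{E}\geq\tau$ is met.

```python
import numpy as np

def adapt_simple(w, tau):
    """Simple rule of the adaptive block-merging procedure: `w` holds
    the products W_{n-1}^i g_{n-1}(zeta_{n-1}^i) and len(w) must be a
    power of two.  Returns K, the blocks B(K, i) and block weights."""
    blocks = [[i] for i in range(len(w))]
    Wk = np.asarray(w, dtype=float)      # block-averaged weights W_k^i
    wbar = Wk.mean()                     # invariant under merging
    k = 0
    while wbar**2 / np.mean(Wk**2) < tau:
        # merge consecutive blocks pairwise (I_k = (1, ..., N/2^k))
        blocks = [blocks[2 * i] + blocks[2 * i + 1]
                  for i in range(len(blocks) // 2)]
        Wk = 0.5 * (Wk[0::2] + Wk[1::2])
        k += 1
    return k, blocks, Wk

rng = np.random.default_rng(1)
w = rng.lognormal(sigma=1.5, size=2**10)
K, blocks, Wk = adapt_simple(w, tau=0.6)
assert Wk.mean() ** 2 / np.mean(Wk**2) >= 0.6   # E >= tau on exit
assert all(len(b) == 2**K for b in blocks)       # equally sized blocks
```

The loop always terminates: after $m$ merges a single block remains, for which $\mathcal{E}=1\geq\tau$.
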
The ARPF and $\alpha$SMC with the Simple, Random and Greedy adaptation procedures specified in Section~\ref{sub:Algorithms-with-adaptive} were run on this data with $N=2^{10}$ and threshold $\tau=0.6$. To give some impression of ESS and interaction behaviour, Figure~\ref{fig:ESS-and-interaction} shows snapshots of $N_{n}^{\text{eff}}$ and $K_{n}$ versus $n$, for $575\leq n\leq825$. The sample path of $N_{n}^{\text{eff}}$ for the ARPF displays a familiar saw-tooth pattern, jumping back up to $N=2^{10}$ when resampling, i.e.~when $K_{n}=10$. The Simple adaptation scheme keeps $N_{n}^{\text{eff}}$ just above the threshold $\tau N=0.6\times2^{10}$, whereas the Greedy strategy is often able to keep $N_{n}^{\text{eff}}$ well above this threshold, with smaller values of $K_{n}$, i.e.~with a lower degree of interaction. The results for the Random adaptation rule, not shown in this plot, were qualitatively similar to those of the Greedy algorithm but slightly closer to those of the Simple adaptation. In order to examine the stationarity of the particle processes as well as the statistical behavior of the degree of interaction over time, Figure~\ref{fig:histograms_and_E_vs_k} shows two histograms of $K_{n}$ for each of the adaptation rules. One histogram is based on the sample of $K_{n}$ where $100<n\leq15050$, and the other is based on $K_{n}$ where $15050<n\leq30000$. For each algorithm, the similarity between the histograms for the two time intervals suggests that the process $\left\{ K_{n}\right\} _{n\geq0}$ is stationary. As expected, the distribution of $K_{n}$ for the ARPF is dichotomous, taking only the value $K_{n}=0$ when there is no interaction, i.e.~resampling is skipped, or $K_{n}=10$ for complete interaction, i.e.~resampling. It is apparent that the Simple, Random and Greedy algorithms move the distribution of $K_{n}$ towards smaller values and almost always manage to avoid complete interaction. 
For the Random and Greedy algorithms, $K_{n}$ rarely exceeds $1$, i.e.~in order to guarantee $\mathcal{E}_{n}^{N}\geq\tau$ it is rarely necessary to consider anything more than pair-wise interaction. \begin{figure} \hfill{}\includegraphics[width=1\textwidth]{figure1}\hfill{}\protect\caption{\label{fig:ESS-and-interaction}Snapshots of ESS and degree of interaction. Top: $N_{n}^{\text{eff}}$ vs.~$n$ (solid) and threshold $\tau N$ (dashed). Bottom: $K_{n}$ vs.~$n$ (stems) and the base two logarithm of the time-average of $2^{K_{n}}$ (dashed). Recall from Section~\ref{sub:Algorithms-with-adaptive} that $2^{K_{n}}$ is the degree of the graph corresponding to the matrix $\alpha_{n}$ selected by Algorithm~\ref{alg:generic adaptation}, and returned to line $(\star)$ of Algorithm~\ref{alg:aSMC}. } \end{figure} The plot on the right of Figure~\ref{fig:histograms_and_E_vs_k} shows, for each of the Simple, Random and Greedy adaptation rules, the relationship between the intermediate variables $\mathcal{E}$ and $k$ appearing in the ``while'' loop of Algorithm~\ref{alg:generic adaptation}. In order to obtain equal sample sizes for plotting purposes, Algorithm~\ref{alg:generic adaptation} was modified slightly so as to evaluate $\mathcal{E}$ for every value $k\in\{0,\ldots,m\}$, whilst still outputting $K_{n-1}$ as the smallest value of $k$ achieving $\mathcal{E}\geq\tau$. The plotted data were then obtained, for each $k$, by averaging the corresponding values of $\mathcal{E}$ over the time steps of the algorithm. It is apparent that, for small values of $k$, the Random and Greedy strategies achieve a faster increase in $\mathcal{E}$ than the Simple strategy, and this explains the shape of the histograms on the left of Figure~\ref{fig:histograms_and_E_vs_k}. \begin{figure} \hfill{}\includegraphics[width=1\textwidth]{figure32}\hfill{} \protect\caption{\label{fig:histograms_and_E_vs_k}Left: Histograms of $K_{n}$ for the ARPF and the three adaptation rules of Table~\ref{tab:Choosing_I_k}. 
The light bars were obtained from $\left\{ K_{n};n=101,\ldots,15050\right\} $ and the dark bars from $\left\{ K_{n};n=15051,\ldots,30000\right\} $. Right: Growth of $\mathcal{E}$ vs.~$k$ for the Simple (solid), Random (dash-dot) and Greedy (dashed) rules. } \end{figure} Figure~\ref{fig:Mean-square-error} shows a comparison of the mean squared errors (MSE) of approximating the conditional expectation of $\phi(X_{p})$ with respect to the underlying stochastic volatility HMM given the observations $\{y_{n};0\leq n\leq p+\ell\}$, where $\ell\in\{-5,0,1\}$ and $\phi$ is some test function. The cases, $\ell=-5$, $\ell=0$, and $\ell=1$ correspond to the lag 5 smoother, filter and one step predictor, respectively. The lag 5 smoother results were obtained by tracing back ancestral lineages. In order to estimate the approximation error, a reference value for the conditional expectation was evaluated by running a BPF with a large sample size $N=2^{17}$. Approximation errors were evaluated for $N_{\mathrm{MC}}=1000$ Monte Carlo runs of 1000 time steps each with $N=2^{9}$, and MSE was obtained by averaging over the time steps and the Monte Carlo runs. The first 30 time steps were excluded from the calculations to avoid any non-stationary effects due to initialization. The results show that the Random and Greedy algorithms produce consistently smaller errors than the Simple algorithm, and for large values of $\tau$ the Greedy algorithm appears to consistently outperform the ARPF. \begin{figure} \hfill{}\includegraphics[width=1\textwidth]{figure4}\hfill{}\protect\caption{\label{fig:Mean-square-error}MSE vs.~$\tau$ for the lag 5 smoother, filter, and one step predictor using the four algorithms ARPF (solid), Simple ($\triangle$), Random ($\circ$), and Greedy ($\times$) and three test functions $\phi$.} \end{figure} \subsection{Concluding remarks} \begin{itemize} \item The martingale decomposition presented in Proposition~\ref{prop:martingale} may also be exploited to pursue central limit theorems. 
A study of this will be conducted elsewhere, but we believe, further to Remark~\ref{rem_alpha_conv}, that it will in general involve some further hypotheses in order to ensure convergence of the covariance of this martingale and thus prove the existence of a well-defined asymptotic variance. \item It is worth pointing out that there are also SMC algorithms other than those listed in Section~\ref{sub:Instances-of-SMC} that can be formulated as instances of $\alpha$SMC, e.g.~the stratified resampling algorithm of \citet{Kitagawa1996} and the auxiliary particle filter of \citet{pitt1999filtering}. It should be kept in mind, however, that the successful formulation of any algorithm as an instance of $\alpha$SMC does not necessarily imply that the assumptions $\mathbf{(B)}$, $\mathbf{(B^{+})}$ or $\mathbf{(B^{++})}$ hold, and the validity of Theorems~\ref{thm:convergence} and~\ref{thm:L_R_mix} is in that sense not automatic. \end{itemize} \section{Appendix\label{sec:Appendix}} \begin{lem} \label{lem:ARPF_A2}If for every $n\geq0$, $\alpha_{n}$ is chosen according to the ESS thresholding rule (\ref{eq:alpha_ARPF}), then $\mathbf{(A2)}$ is satisfied. \end{lem} \begin{proof} The proof is by induction. To initialize, we have at rank $n=0$, \begin{equation} \alpha_{0}:=\begin{cases} \mathbf{1}_{1/N}, & \quad\text{ if }\quad\frac{\left(N^{-1}\sum_{i}W_{0}^{i}g_{0}(\zeta_{0}^{i})\right)^{2}}{N^{-1}\sum_{i}\left(W_{0}^{i}g_{0}(\zeta_{0}^{i})\right)^{2}}<\tau,\\ Id, & \quad\text{ otherwise}, \end{cases}\label{eq:alpha_ARPF-1} \end{equation} noting that by definition $W_{0}^{i}=1$, we find that the entries of $\alpha_{0}$ are measurable w.r.t.~$\mathcal{F}_{0}$. For the induction hypothesis, suppose that for some $n\geq0$ and all $p\leq n$, the entries of $\alpha_{p}$ are measurable w.r.t.~$\mathcal{F}_{n}$. 
It follows immediately from Lemma~\ref{lem:W_n_representation}, equation (\ref{eq:unwind2}), that $\left\{ W_{n+1}^{i}\right\} _{i\in[N]}$ are all measurable w.r.t.~$\mathcal{F}_{n+1}$, and it follows from (\ref{eq:alpha_ARPF}) applied at rank $n+1$ that the entries of $\alpha_{n+1}$ are measurable w.r.t.~$\mathcal{F}_{n+1}$, and hence the induction hypothesis holds at rank $n+1$. This completes the proof. \end{proof} \subsubsection*{Resampling times description of the ARPF} In order to derive expressions for $\pi_{n}^{N}$ and $Z_{n}^{N}$ in the case of the ARPF, define a family of random sets $\left\{ \sigma_{n};n\geq1\right\} $ and random times $\left\{ T_{n};n\geq1\right\} $ as follows: \begin{eqnarray} \sigma_{n} & := & \left\{ m;\;1\leq m\leq n\text{\;\ and }\;\alpha_{m-1}=\mathbf{1}_{1/N}\right\} ,\nonumber \\ T_{n} & := & \max\left(\sigma_{n}\right),\label{eq:T_n_defn} \end{eqnarray} with $T_{n}:=0$ on the event $\left\{ \sigma_{n}=\emptyset\right\} $. Intuitively, $T_{n}$ can be thought of as the last resampling time at or before $n$. Then by construction, using the recursive definition of $W_{n}^{i}$ in (\ref{eq:W_n_defn}), and (\ref{eq:T_n_defn}), we have on the event $\left\{ \sigma_{n}\neq\emptyset\right\} $, \begin{eqnarray} & W_{T_{n}}^{i} & =\sum_{j}\alpha_{T_{n}-1}^{ij}W_{T_{n}-1}^{j}g_{T_{n}-1}(\zeta_{T_{n}-1}^{j})\nonumber \\ & & =\frac{1}{N}\sum_{j}W_{T_{n}-1}^{j}g_{T_{n}-1}(\zeta_{T_{n}-1}^{j})\;=:\;\widetilde{W}_{n},\quad n\geq1,\label{eq:W_adapt} \end{eqnarray} which is independent of $i$. On the event $\{\sigma_{n}=\emptyset\}$, define $\widetilde{W}_{n}:=1$. On the event $\{\sigma_{n}\neq\emptyset\}\cap\{T_{n}=n\}$, we trivially have $W_{n}^{i}=W_{T_{n}}^{i}=\widetilde{W}_{n}$, by (\ref{eq:W_adapt}). 
On the event $\{\sigma_{n}\neq\emptyset\}\cap\{T_{n}<n\}$, applying equation (\ref{eq:unwind}) of Lemma~\ref{lem:W_n_representation} with $p=T_{n}$, and (\ref{eq:W_adapt}), yields \begin{eqnarray*} W_{n}^{i_{n}} & = & \sum_{\left(i_{T_{n}},\ldots,i_{n-1}\right)\in[N]^{n-T_{n}}}W_{T_{n}}^{i_{T_{n}}}\prod_{q=T_{n}}^{n-1}g_{q}(\zeta_{q}^{i_{q}})\alpha_{q}^{i_{q+1}i_{q}}\\ & = & \widetilde{W}_{n}\sum_{\left(i_{T_{n}},\ldots,i_{n-1}\right)\in[N]^{n-T_{n}}}\prod_{q=T_{n}}^{n-1}g_{q}(\zeta_{q}^{i_{q}})\mathbb{I}[i_{q+1}=i_{q}]\\ & = & \widetilde{W}_{n}\prod_{p=T_{n}}^{n-1}g_{p}(\zeta_{p}^{i_{n}}). \end{eqnarray*} Collecting the above definitions and substituting into (\ref{eq:pi^N_andZ^N}) gives \[ \pi_{n}^{N}=\frac{\sum_{i}\delta_{\zeta_{n}^{i}}\prod_{p=T_{n}}^{n-1}g_{p}(\zeta_{p}^{i})}{\sum_{i}\prod_{p=T_{n}}^{n-1}g_{p}(\zeta_{p}^{i})},\quad\quad Z_{n}^{N}=\widetilde{W}_{n}\cdot\frac{1}{N}\sum_{i}\prod_{p=T_{n}}^{n-1}g_{p}(\zeta_{p}^{i}), \] with the convention $\prod_{p=n}^{n-1}g_{p}(\zeta_{p}^{i})=1$. Similar elementary calculations can be used to derive expressions for the sampling steps of the ARPF; in the interests of brevity we leave it to the reader to write out the details. \subsection*{Proofs and auxiliary results for Section~\ref{sec:Martingale-approximations-and}} \begin{proof} \emph{(of Theorem~\ref{thm:convergence})}. For part 1), note \[ \overline{\Gamma}_{n}^{N}(1)-\pi_{n}(1)=\frac{Z_{n}^{N}}{Z_{n}}-1, \] then applying Proposition~\ref{prop:martingale} with $\varphi=1$ and using (\ref{eq:Gamma_telescope})--(\ref{eq:xi_cond_exp}) gives \[ \mathbb{E}[Z_{n}^{N}]=Z_{n}. \] Moving to the proof of part 2), let us assume, for now, only $\mathbf{(A1)}$, $\mathbf{(A2)}$ and $\mathbf{(B)}$, but not necessarily $\mathbf{(B^{+})}$. Define $c_{n}:=\sup_{x}g_{n}(x)/\pi_{n}(g_{n})$. 
Under $\mathbf{\mathbf{(A1)}}$, we have \[ \mathrm{osc}\left(\overline{Q}_{p,n}(\varphi)\right)\leq2\left\Vert \varphi\right\Vert \sup_{x}\overline{Q}_{p,n}(1)(x)\leq2\left\Vert \varphi\right\Vert \prod_{q=p}^{n-1}c_{q}<+\infty \] and also using Lemma~\ref{lem:W_n_representation}, (\ref{eq:W_bar_defn}) and the fact that each $\alpha_{p}$ is a Markov transition matrix, we obtain \[ 0<\overline{W}_{p}^{i_{p}}\leq\sum_{\left(i_{0},\ldots,i_{p-1}\right)\in[N]^{p}}\prod_{q=0}^{p-1}c_{q}\alpha_{q}^{i_{q+1}i_{q}}=\prod_{q=0}^{p-1}c_{q}<+\infty. \] From (\ref{eq:martingale_burkholder_bound}) we then obtain \begin{eqnarray} & & \mathbb{E}\left[\left|\overline{\Gamma}_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\nonumber \\ & & \leq2\left\Vert \varphi\right\Vert B(r)^{1/r}\left(\prod_{p=0}^{n-1}c_{p}\right)\sum_{p=0}^{n}\left|\sum_{i}\left(\beta_{p,n}^{i}\right)^{2}\right|^{1/2}\nonumber \\ & & \leq2\left\Vert \varphi\right\Vert B(r)^{1/r}\left(\prod_{p=0}^{n-1}c_{p}\right)\sum_{p=0}^{n}\left|\max_{i\in[N]}\beta_{p,n}^{i}\right|^{1/2},\label{eq:L_r_bound_proof_max} \end{eqnarray} where the final inequality holds because $\left\{ \beta_{p,n}^{i}\right\} _{i\in[N]}$ is a probability vector. Then invoking $\mathbf{(B^{+})}$, the convergence in (\ref{eq:convergence_L_r_statement_gam_weak}) follows from (\ref{eq:L_r_bound_proof_max}) applied with $\varphi=1$. 
For (\ref{eq:convergence_L_r_statement_pi_weak}), we apply Minkowski's inequality, the fact $\left|\Gamma_{n}^{N}(\varphi)/\Gamma_{n}^{N}(1)\right|\leq\left\Vert \varphi\right\Vert $ and (\ref{eq:L_r_bound_proof_max}) twice to obtain \begin{eqnarray} \mathbb{E}\left[\left|\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r} & \leq & \mathbb{E}\left[\left|\overline{\Gamma}_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\nonumber \\ & + & \mathbb{E}\left[\left|\frac{\Gamma_{n}^{N}(\varphi)}{\Gamma_{n}^{N}(1)}\right|^{r}\left|\overline{\Gamma}_{n}^{N}(1)-1\right|^{r}\right]^{1/r}\nonumber \\ & \leq & 4\left\Vert \varphi\right\Vert \left[B(r)\right]^{1/r}\left(\prod_{p=0}^{n-1}c_{p}\right)\sum_{p=0}^{n}\left|\max_{i\in[N]}\beta_{p,n}^{i}\right|^{1/2}.\label{eq:L_r_bound_proof_pi} \end{eqnarray} The convergence in probability then follows from Markov's inequality, completing the proof of part 2). For part 3), under $\mathbf{(B^{++})}$ we have $\beta_{p,n}^{i}=1/N$, and therefore $\left|\max_{i\in[N]}\beta_{p,n}^{i}\right|^{1/2}=N^{-1/2}$. Substituting this into (\ref{eq:L_r_bound_proof_max}) with $\varphi=1$, and into (\ref{eq:L_r_bound_proof_pi}), gives (\ref{eq:convergence_L_r_statement_gamm})--(\ref{eq:convergence_L_r_statement_pi}). The almost sure convergence follows from the Borel--Cantelli lemma. \end{proof} \begin{proof} \emph{(of Proposition~\ref{prop:martingale})}. 
Applying the identities $\beta_{p-1,n}^{i_{p-1}}=\sum_{i_{p}}\beta_{p,n}^{i_{p}}\alpha_{p-1}^{i_{p}i_{p-1}}$, see (\ref{eq:beta_defn}), and $\overline{W}_{p}^{i_{p}}=\sum_{i_{p-1}}\alpha_{p-1}^{i_{p}i_{p-1}}\overline{W}_{p-1}^{i_{p-1}}\overline{Q}_{p}(1)(\zeta_{p-1}^{i_{p-1}})$, see (\ref{eq:W_n_defn}), (\ref{eq:Q_bar_defn}), (\ref{eq:W_bar_defn}); with the conventions $\alpha_{-1}:=Id$, and $\overline{\Gamma}_{-1}^{N}\overline{Q}_{-1,n}(\varphi)=\overline{W}_{-1}^{i}\overline{Q}_{-1,n}(\varphi)(\zeta_{-1}^{i}):=\mu_{0}\overline{Q}_{0,n}(\varphi)=\pi_{n}(\varphi)$, we have \begin{eqnarray} & & \overline{\Gamma}_{n}^{N}(\varphi)-\pi_{n}(\varphi)\nonumber \\ & & =\sum_{p=0}^{n}\left[\overline{\Gamma}_{p,n}^{N}\overline{Q}_{p,n}(\varphi)-\overline{\Gamma}_{p-1,n}^{N}\overline{Q}_{p-1,n}(\varphi)\right]\nonumber \\ & & =\sum_{p=0}^{n}\left[\sum_{i_{p}}\beta_{p,n}^{i_{p}}\overline{W}_{p}^{i_{p}}\overline{Q}_{p,n}(\varphi)(\zeta_{p}^{i_{p}})-\sum_{i_{p-1}}\sum_{i_{p}}\beta_{p,n}^{i_{p}}\alpha_{p-1}^{i_{p}i_{p-1}}\overline{W}_{p-1}^{i_{p-1}}\overline{Q}_{p-1,n}(\varphi)(\zeta_{p-1}^{i_{p-1}})\right]\nonumber \\ & & =\sum_{p=0}^{n}\sum_{i_{p}}\beta_{p,n}^{i_{p}}\overline{W}_{p}^{i_{p}}\left[\overline{Q}_{p,n}(\varphi)(\zeta_{p}^{i_{p}})-\frac{\sum_{i_{p-1}}\alpha_{p-1}^{i_{p}i_{p-1}}\overline{W}_{p-1}^{i_{p-1}}\overline{Q}_{p-1,n}(\varphi)(\zeta_{p-1}^{i_{p-1}})}{\overline{W}_{p}^{i_{p}}}\right]\nonumber \\ & & =\sum_{p=0}^{n}\sum_{i_{p}}\beta_{p,n}^{i_{p}}\overline{W}_{p}^{i_{p}}\Delta_{p,n}^{i_{p}}=N^{-1/2}\sum_{k=1}^{(n+1)N}\xi_{k}^{N}.\label{eq:Gamma-gamma_decomp} \end{eqnarray} Each $\xi_{k}^{N}$ is measurable w.r.t.~$\mathcal{F}^{(k)}$ because, using Corollary~\ref{cor:measurability}, \textbf{(A2)} and $\mathbf{\mathbf{(B)}}$ we have that for any $k=1,\ldots,(n+1)N$, if we set $p:=\left\lfloor (k-1)/N\right\rfloor $ and $i:=k-pN$, the quantity $\Delta_{p,n}^{i}$ is measurable w.r.t.~$\mathcal{F}^{(k)}$ and $\beta_{p,n}^{i_{p}}\overline{W}_{p}^{i_{p}}$ is measurable 
w.r.t.~$\mathcal{F}_{p-1}$. To verify (\ref{eq:xi_cond_exp}), again use the fact that for any $i\in[N]$ and $0\leq p\leq n$, $\beta_{p,n}^{i}\overline{W}_{p}^{i}$ is measurable w.r.t.~$\mathcal{F}_{p-1}$, and note that given $\mathcal{F}_{p-1}$, the particles $\left\{ \zeta_{p}^{i}\right\} _{i=1}^{N}$ are conditionally independent, and distributed as specified in Algorithm~\ref{alg:aSMC}. Hence for any $k=1,\ldots,(n+1)N$, with $p:=\left\lfloor (k-1)/N\right\rfloor $ and $i:=k-pN$, we have $\mathbb{E}\left[\left.\xi_{k}\right|\mathcal{F}^{(k-1)}\right]=\sqrt{N}\beta_{p,n}^{i}\overline{W}_{p}^{i}\mathbb{E}\left[\left.\Delta_{p,n}^{i}\right|\mathcal{F}_{p-1}\right]=0$. For the inequality (\ref{eq:martingale_burkholder_bound}), by Minkowski's inequality and (\ref{eq:Gamma-gamma_decomp}), \begin{equation} \mathbb{E}\left[\left|\overline{\Gamma}_{n}^{N}(\varphi)-\pi_{n}(\varphi)\right|^{r}\right]^{1/r}\leq\sum_{p=0}^{n}\mathbb{E}\left[\left|\overline{\Gamma}_{p,n}^{N}\overline{Q}_{p,n}(\varphi)-\overline{\Gamma}_{p-1}^{N}\overline{Q}_{p-1,n}(\varphi)\right|^{r}\right]^{1/r}.\label{eq:Gamma-gamma_mink} \end{equation} For each term in (\ref{eq:Gamma-gamma_mink}), using the above-stated conditional independence and measurability properties, we may apply \citep[Lemma 7.3.3]{smc:theory:Dm04} to establish the existence of a universal constant $B(r)$, depending only on $r$, such that \begin{eqnarray} & & \mathbb{E}\left[\left.\left|\overline{\Gamma}_{p,n}^{N}\overline{Q}_{p,n}(\varphi)-\overline{\Gamma}_{p-1}^{N}\overline{Q}_{p-1,n}(\varphi)\right|^{r}\right|\mathcal{F}_{p-1}\right]\nonumber \\ & & =\mathbb{E}\left[\left.\left|\sum_{i}\beta_{p,n}^{i}\overline{W}_{p}^{i}\Delta_{p,n}^{i}\right|^{r}\right|\mathcal{F}_{p-1}\right]\nonumber \\ & & \leq B(r)\text{osc}\left(\overline{Q}_{p,n}(\varphi)\right)^{r}\left(\sum_{i}\left(\beta_{p,n}^{i}\overline{W}_{p}^{i}\right)^{2}\right)^{r/2},\label{eq:GammaN-GammaN} \end{eqnarray} almost surely. 
The proof is completed upon combining this estimate with (\ref{eq:Gamma-gamma_mink}). \end{proof} \subsection*{Proofs for Section~\ref{sec:stability}} \begin{proof} \emph{(of Proposition~\ref{prop:norm_const_bound})} The proof follows a similar line of argument to \citep[Proof of Theorem 16.4.1.]{DelMoral2013book}, but applies to a more general algorithm than considered there. To start, applying Proposition~\ref{prop:martingale}, equation (\ref{eq:Gamma_telescope}) with $\varphi=1$, and (\ref{eq:xi_cond_exp}), we obtain \[ \mathbb{E}\left[\left(\frac{Z_{n}^{N}}{Z_{n}}-1\right)^{2}\right]=\sum_{p=0}^{n}\sum_{i_{p}}\mathbb{E}\left[\left(\beta_{p,n}^{i_{p}}\overline{W}_{p}^{i_{p}}\Delta_{p,n}^{i_{p}}\right)^{2}\right]. \] Under $\mathbf{(B^{++})}$ we have $\beta_{p,n}^{i_{p}}=1/N$; then using the other hypotheses of the Proposition and noting $\text{osc}\left(Q_{n,n}(1)\right)=\text{osc}\left(1\right)=0$, we have for $n\geq1$, \begin{eqnarray*} \mathbb{E}\left[\left(\frac{Z_{n}^{N}}{Z_{n}}-1\right)^{2}\right] & = & \sum_{p=0}^{n}\sum_{i}\mathbb{E}\left[\frac{1}{N^{2}}\left(\overline{W}_{p}^{i}\right)^{2}\left(\Delta_{p,n}^{i}\right)^{2}\right]\\ & \leq & \sum_{p=0}^{n-1}\text{osc}\left(\overline{Q}_{p,n}(1)\right)^{2}\mathbb{E}\left[\frac{1}{N^{2}}\sum_{i}\left(\overline{W}_{p}^{i}\right)^{2}\right]\\ & = & \sum_{p=0}^{n-1}\text{osc}\left(\overline{Q}_{p,n}(1)\right)^{2}\frac{1}{N}\mathbb{E}\left[\frac{1}{\mathcal{E}_{p}^{N}}\left(\frac{1}{N}\sum_{i}\overline{W}_{p}^{i}\right)^{2}\right]\\ & \leq & \sum_{p=0}^{n-1}\frac{\text{osc}\left(\overline{Q}_{p,n}(1)\right)^{2}}{N\tau_{p}}\mathbb{E}\left[\left(\frac{1}{N}\sum_{i}\overline{W}_{p}^{i}-1\right)^{2}+1\right]\\ & = & \sum_{p=0}^{n-1}\frac{\text{osc}\left(\overline{Q}_{p,n}(1)\right)^{2}}{N\tau_{p}}\left(\mathbb{E}\left[\left(\frac{Z_{p}^{N}}{Z_{p}}-1\right)^{2}\right]+1\right), \end{eqnarray*} where the last two lines use $N^{-1}\sum_{i}\overline{W}_{p}^{i}=Z_{p}^{N}/Z_{p}$ and by Theorem~\ref{thm:convergence}, 
$\mathbb{E}\left[Z_{p}^{N}\right]=Z_{p}$. \end{proof} \begin{proof} \emph{(of Proposition~\ref{prop:L_p_bound_mixing})} First note that by exactly the same arguments as in the proof of Proposition~\ref{prop:martingale}, equation (\ref{eq:GammaN-GammaN}), we have for any $\phi\in\mathcal{L}$, $0\leq p\leq n$, \begin{eqnarray} & & \mathbb{E}\left[\left.\left|\Gamma_{p,n}^{N}Q_{p,n}(\phi)-\Gamma_{p-1}^{N}Q_{p-1,n}(\phi)\right|^{r}\right|\mathcal{F}_{p-1}\right]\nonumber \\ & & \leq B(r)\text{osc}\left(Q_{p,n}(\phi)\right)^{r}\left(\sum_{i}\left(\beta_{p,n}^{i}W_{p}^{i}\right)^{2}\right)^{r/2},\label{eq:GammaN-GammaN_mixing_proof} \end{eqnarray} with the convention $\Gamma_{-1}^{N}Q_{-1,n}(\phi)=\gamma_{n}(\phi)$. For the remainder of the proof, fix $\varphi\in\mathcal{L}$ arbitrarily, and set $\bar{\varphi}:=\varphi-\pi_{n}(\varphi)$. Defining \[ D_{p,n}^{N}:=\frac{\Gamma_{p,n}^{N}Q_{p,n}\left(\bar{\varphi}\right)}{\Gamma_{p,n}^{N}Q_{p,n}(1)},\quad0\leq p\leq n, \] and then noting \[ D_{n,n}^{N}=\frac{\Gamma_{n}^{N}\left(\bar{\varphi}\right)}{\Gamma_{n}^{N}(1)}=\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi), \] we shall focus on the decomposition: \begin{eqnarray} \pi_{n}^{N}(\varphi)-\pi_{n}(\varphi) & = & D_{0,n}^{N}+\sum_{p=1}^{n}D_{p,n}^{N}-D_{p-1,n}^{N},\label{eq:pi_n^N_decomp} \end{eqnarray} with the convention that the summation is zero when $n=0$. For $1\leq p\leq n$, write \[ D_{p,n}^{N}-D_{p-1,n}^{N}=T_{p,n}^{\left(N,1\right)}+T_{p,n}^{\left(N,2\right)}, \] where \begin{eqnarray*} T_{p,n}^{\left(N,1\right)} & := & \frac{1}{\Gamma_{p,n}^{N}Q_{p,n}(1)}\left[\Gamma_{p,n}^{N}Q_{p,n}\left(\bar{\varphi}\right)-\Gamma_{p-1,n}^{N}Q_{p-1,n}\left(\bar{\varphi}\right)\right]\\ T_{p,n}^{\left(N,2\right)} & := & \frac{\Gamma_{p-1,n}^{N}Q_{p-1,n}(\bar{\varphi})}{\Gamma_{p-1,n}^{N}Q_{p-1,n}(1)}\frac{\left[\Gamma_{p-1,n}^{N}Q_{p-1,n}\left(1\right)-\Gamma_{p,n}^{N}Q_{p,n}\left(1\right)\right]}{\Gamma_{p,n}^{N}Q_{p,n}\left(1\right)}. 
\end{eqnarray*} We have the estimates \begin{equation} \frac{\text{osc}\left(Q_{p,n}(\phi)\right)}{\inf_{x}Q_{p,n}\left(1\right)(x)}\leq2\delta_{p,n}\left\Vert P_{p,n}(\phi)\right\Vert ,\label{eq:osc/inf_bound} \end{equation} (which is finite under assumption $(\mathbf{C})$; see also (\ref{eq:Q_p,n_bounded})), and \begin{eqnarray} \left|\frac{\Gamma_{p-1,n}^{N}Q_{p-1,n}(\phi)}{\Gamma_{p-1,n}^{N}Q_{p-1,n}(1)}\right| & \leq & \left\Vert P_{p-1,n}(\phi)\right\Vert .\label{eq:Gamma/Gamma_bound} \end{eqnarray} Applying (\ref{eq:GammaN-GammaN_mixing_proof}) with $\phi=\bar{\varphi}$, using (\ref{eq:osc/inf_bound}) and noting that $\Gamma_{p,n}^{N}(1)$ is measurable w.r.t.~$\mathcal{F}_{p-1}$, we obtain \begin{eqnarray*} \mathbb{E}\left[\left.\left|T_{p,n}^{\left(N,1\right)}\right|^{r}\right|\mathcal{F}_{p-1}\right]^{1/r} & \leq & B(r)^{1/r}2\delta_{p,n}\frac{\left\Vert P_{p,n}(\bar{\varphi})\right\Vert }{\Gamma_{p,n}^{N}(1)}\left(\sum_{i}\left(\beta_{p,n}^{i}W_{p}^{i}\right)^{2}\right)^{1/2}. \end{eqnarray*} Applying (\ref{eq:GammaN-GammaN_mixing_proof}) with $\phi=1$, using (\ref{eq:Gamma/Gamma_bound}) and the same measurability condition, we obtain \begin{eqnarray*} \mathbb{E}\left[\left.\left|T_{p,n}^{\left(N,2\right)}\right|^{r}\right|\mathcal{F}_{p-1}\right]^{1/r} & \leq & B(r)^{1/r}2\delta_{p,n}\frac{\left\Vert P_{p-1,n}(\bar{\varphi})\right\Vert }{\Gamma_{p,n}^{N}(1)}\left(\sum_{i}\left(\beta_{p,n}^{i}W_{p}^{i}\right)^{2}\right)^{1/2}. 
\end{eqnarray*} Therefore, via Minkowski's inequality and using \[ \left\Vert P_{p-1,n}(\bar{\varphi})\right\Vert =\left\Vert Q_{p-1,n}(\bar{\varphi})/Q_{p-1,n}(1)\right\Vert =\left\Vert Q_{p}Q_{p,n}(\bar{\varphi})/Q_{p-1,n}(1)\right\Vert \leq\left\Vert P_{p,n}(\bar{\varphi})\right\Vert , \] we have \begin{equation} \mathbb{E}\left[\left|D_{p,n}^{N}-D_{p-1,n}^{N}\right|^{r}\right]^{1/r}\leq B(r)^{1/r}4\delta_{p,n}\left\Vert P_{p,n}(\bar{\varphi})\right\Vert \mathbb{E}\left[\left|\mathcal{C}_{p,n}^{N}\right|^{r}\right]^{1/r}.\label{eq:D_p-D_p_proof} \end{equation} For the remaining term, $D_{0,n}^{N}$, we have \begin{eqnarray*} D_{0,n}^{N} & = & \frac{1}{\Gamma_{0,n}^{N}Q_{0,n}(1)}\left(\Gamma_{0,n}^{N}Q_{0,n}\left(\bar{\varphi}\right)-\gamma_{n}(\bar{\varphi})\right), \end{eqnarray*} where the final equality holds since $\gamma_{n}(\bar{\varphi})=\gamma_{n}(\varphi)-\gamma_{n}(1)\pi_{n}(\varphi)=0$. Using (\ref{eq:GammaN-GammaN_mixing_proof}) and (\ref{eq:osc/inf_bound}) in a similar fashion to above we obtain \begin{equation} \mathbb{E}\left[\left|D_{0,n}^{N}\right|^{r}\right]^{1/r}\leq B(r)^{1/r}2\delta_{0,n}\left\Vert P_{0,n}(\bar{\varphi})\right\Vert \mathbb{E}\left[\left|\mathcal{C}_{0,n}^{N}\right|^{r}\right]^{1/r}.\label{eq:D_0_proof} \end{equation} The proof is complete upon using Minkowski's inequality to bound the moments of (\ref{eq:pi_n^N_decomp}) using (\ref{eq:D_p-D_p_proof}) and (\ref{eq:D_0_proof}). \end{proof} \subsection*{Proofs for Section~\ref{sec:Discussion}} \begin{proof} \emph{(of Lemma~\ref{lem:complexity})} Label the vertices of the graph corresponding to $A$ arbitrarily with the integers $[N]$. Let $s\geq1$ be the number of connected components of this graph. Then for each $\ell\in[s]$ let $B(\ell)$ be the set of labels of the $\ell$'th connected component. 
Since $A$ is a B-matrix, each connected component is complete, so we have for any $\ell\in[s]$ and $i\in B(\ell)$, \begin{equation} W_{n}^{i}=\sum_{j}\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})=\left|B(\ell)\right|^{-1}\sum_{j\in B(\ell)}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j}).\label{eq:W_n_complexity} \end{equation} The complexity of calculating $W_{n}^{i}$ is thus $O(\left|B(\ell)\right|)$, and since $W_{n}^{i}=W_{n}^{j}$ for all $i,j\in B(\ell)$, the complexity of calculating $\left\{ W_{n}^{i}\right\} _{i\in[N]}$ is $O\left(\sum_{\ell\in[s]}\left|B(\ell)\right|\right)=O(N)$. Arguing similarly to (\ref{eq:W_n_complexity}), with $\alpha_{n-1}=A$ we find that under Algorithm~\ref{alg:aSMC}, for each $\ell\in[s]$, the $\left\{ \zeta_{n}^{i}\right\} _{i\in B(\ell)}$ are conditionally i.i.d.~according to \begin{equation} \frac{\sum_{j\in B(\ell)}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})f(\zeta_{n-1}^{j},\cdot)}{\sum_{j\in B(\ell)}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})}.\label{eq:complexity_block_distbn} \end{equation} By the same arguments used in \citep{carpenter1999improved} to address the BPF, drawing $\left|B(\ell)\right|$ samples from (\ref{eq:complexity_block_distbn}) can be achieved at $O(\left|B(\ell)\right|)$ complexity, and thus the overall complexity of the sampling part of Algorithm~\ref{alg:aSMC} is $O(\sum_{\ell\in[s]}\left|B(\ell)\right|)=O(N)$. \end{proof} \begin{proof} \emph{(of Proposition~\ref{prop:Upon-termination-of})} We prove (\ref{eq:W_k_explicit}) by induction. 
We have \[ \mathbb{W}_{0}^{i}=W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})=\sum_{j\in B(0,i)}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j}) \] and, when (\ref{eq:W_k_explicit}) holds at rank $k$, we have at rank $k+1$, \begin{eqnarray*} \mathbb{W}_{k+1}^{i} & = & \mathbb{W}_{k}^{\mathcal{I}_{k}(2i-1)}/2+\mathbb{W}_{k}^{\mathcal{I}_{k}(2i)}/2\\ & = & 2^{-(k+1)}\sum_{j\in B(k,\mathcal{I}_{k}(2i-1))}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})\\ & + & 2^{-(k+1)}\sum_{j\in B(k,\mathcal{I}_{k}(2i))}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})\\ & = & 2^{-(k+1)}\sum_{j\in B(k,\mathcal{I}_{k}(2i-1))\cup B(k,\mathcal{I}_{k}(2i))}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})\\ & = & 2^{-(k+1)}\sum_{j\in B(k+1,i)}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j}). \end{eqnarray*} Finally, for any $i\in[N/2^{k}]$ and $j\in B(k,i)$, \[ W_{n}^{j}=\sum_{\ell}\alpha_{n-1}^{i\ell}W_{n-1}^{\ell}g_{n-1}(\zeta_{n-1}^{\ell})=2^{-k}\sum_{\ell\in B(k,i)}W_{n-1}^{\ell}g_{n-1}(\zeta_{n-1}^{\ell})=\mathbb{W}_{k}^{i}, \] which establishes (\ref{eq:W_k_explicit})--(\ref{eq:W_equals_bb_W}). No matter what adaptation rule of Table~\ref{tab:Choosing_I_k} is used, the collection $\left\{ B(k,i)\right\} _{i\in[N/2^{k}]}$ obtained by Algorithm~\ref{alg:generic adaptation} is, by construction, a partition of $[N]$ and thus the $\alpha_{n-1}$ output by Algorithm \ref{alg:generic adaptation} is a B-matrix. Noting that a B-matrix always admits the uniform distribution on $[N]$ as an invariant distribution, we have for any B-matrix, say $A$, the identity $\overline{\mathbb{W}}_{0}=N^{-1}\sum_{i}\sum_{j}A^{ij}W_{n-1}^{i}g_{n-1}(\zeta_{n-1}^{i})$ and so upon termination of the ``while'' loop in Algorithm~\ref{alg:generic adaptation}, $\mathcal{E}=\mathcal{E}_{n}^{N}$ and hence $\mathcal{E}_{n}^{N}\geq\tau$ always. For the Simple and Random adaptation rules, the worst case complexity of Algorithm~\ref{alg:generic adaptation} is as follows. The part of the algorithm preceding the ``while'' loop is $O(N)$.
The complexity of iteration $k$ of the ``while'' loop is $O(N/2^{k})$; the worst case is when the loop terminates with $k=m$, in which case the complexity of the ``while'' loop is $O(\sum_{k=0}^{m}N/2^{k})$, and thus the overall complexity is no more than $O(N)$. For the Greedy procedure, the sort operation required to obtain $\mathcal{I}_{k}$ is of complexity $O\left(N/2^{k}\log_{2}\left(N/2^{k}\right)\right)$, and so in the worst case, the complexity of the ``while'' loop is of the order \[ t(N):=\sum_{k=0}^{m}\frac{N}{2^{k}}\log_{2}\left(\frac{N}{2^{k}}\right), \] or expressed recursively, $t(N)=t(N/2)+N\log_{2}N$, with $t(2)=2$. A simple induction shows that this recursion has solution $t(N)=2[1+N(\log_{2}N-1)]$: indeed, $t(N/2)+N\log_{2}N=2[1+\tfrac{N}{2}(\log_{2}N-2)]+N\log_{2}N=2+2N\log_{2}N-2N=2[1+N(\log_{2}N-1)]$. Hence the overall worst case complexity of the ``while'' loop is $O(N\log_{2}N)$. The proof is complete since by Lemma~\ref{lem:complexity}, the complexity of operations in Algorithm~\ref{alg:aSMC} other than line $(\star)$ is $O(N)$. \end{proof} \bibliographystyle{imsart-nameyear}
\section{Introduction} \label{i} Amongst the different heavy-fermion materials, CeCoIn$_5$ has a prominent role as the Ce-based heavy-fermion superconductor with the highest $T_c$~= 2.3~K and as the most studied material of the \lq 115\rq -family of Ce compounds.\cite{petrovic2001,sarrao2007,thompson2013} Detailed information about electronic excitations and charge dynamics in heavy fermions can be obtained by optical spectroscopy in different spectral ranges.\cite{basov2011,scheffler2013} Concerning infrared optics, CeCoIn$_5$ has been examined in great detail.\cite{singley2002,mena2005,burch2007,okamura2015} These studies focused on signatures of the hybridization of conduction and $f$ electrons, whereas the dynamics of the mobile charge carriers in heavy-fermion materials have to be probed at even lower frequencies, in the GHz and THz ranges.\cite{scheffler2013,webb1986,degiorgi1999,marc_nature,dressel2006,scheffler2006,scheffler10} Previous GHz and THz studies on CeCoIn$_5$ have explicitly addressed the superconducting transition,\cite{ormeno2002,nevirkovets2008,SudhakarRao2009,truncik2013} while the heavy-electron charge dynamics in the metallic state of CeCoIn$_5$ have hardly been addressed by optics so far. The main reason here is that the sensitivity of broadband GHz and THz experiments is limited, so that studies on highly conductive materials require thin-film samples,\cite{marc_rsi,marc_strip,Pra13} which are difficult to grow in the case of heavy-fermion metals. Here we explicitly study the charge response of a CeCoIn$_5$ thin film. \section{Methods} \label{m} The sample under study is a 70\,nm thick film of CeCoIn$_5$ deposited via molecular beam epitaxy on a dielectric $5\times5\times0.5$\,mm$^3$ MgF$_2$ substrate.
This technique has already been applied for growth of several thin-film systems based on CeCoIn$_5$ and CeIn$_3$.\cite{shishido2010,mizukami2011,shimozawa2012,goh2012,shimozawa2014} In contrast to previous THz studies \cite{Sch13} on CeCoIn$_5$ thin films, we do not need additional metallic buffer layers to achieve high-quality samples. THz transmission measurements, however, still remain challenging for several reasons. First, the film needs to be thin enough to allow for a detectable transmission signal, while the sample quality favors thick films. Here we have chosen a thickness of 70\,nm, which is a compromise between sample quality and suitability for our experimental technique. Second, thin films of CeCoIn$_5$ often rapidly degrade in ambient air conditions so that exposure time must be cut to a minimum.\cite{Sch13} Third, the aperture, through which the focused THz radiation passes before it is transmitted through the sample, needs to have a diameter $d_a$ smaller than the sample. We have chosen $d_a=$ 3\,mm, which restricts our accessible spectral range to wavelengths shorter than 1\,mm due to diffraction effects. After deposition in Kyoto, the film was sealed in a glass tube under vacuum conditions before it was shipped to Stuttgart. Right after removal from the glass tube, the sample was mounted onto the THz sample holder, transferred to the cryostat, and rapidly cooled down in He-gas atmosphere. Here, the overall exposure time to ambient air was less than 5 minutes. All optical measurements were subsequently performed within a period of about 36 hours. During this time, the sample was always kept below 150\,K. Afterwards, it was removed from the cryostat and contacted in standard 4-point geometry in order to measure the dc sheet resistance, from which the dc transport resistivity $\rho_{dc}$ was obtained.
Even after a measurement time of $\sim$\,60 hours, including an exposure time of $\sim$\,30 minutes, we could not infer any signs of degradation from the THz spectra. \begin{figure} \begin{centering} \includegraphics[scale=0.3]{res-eps-converted-to.pdf} \caption{\label{res} (Color online) dc transport resistivity $\rho_{dc}$ versus temperature of the CeCoIn$_5$ film studied in this work. Though the film is rather thin, all characteristics known from single-crystal CeCoIn$_5$ are well recovered. The inset shows a schematic drawing of the bilayer system. } \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[scale=0.15]{Tr_SO34.pdf} \caption{\label{Tr_SO34} (Color online) Raw transmittance of the 70\,nm CeCoIn$_5$ film as a function of temperature and frequency. The pronounced oscillation pattern is caused by the dielectric substrate acting as a Fabry-Perot resonator. At high temperatures, the oscillation pattern is constant, while towards low temperatures it acquires a strong frequency dependence: the transmittance increases with increasing frequency and decreasing temperature, except in the low-frequency and low-temperature limit, where it is suppressed.} \end{centering} \end{figure} The transmittance was measured using a set of tunable backward-wave oscillators as sources of coherent and monochromatic THz radiation and a He-cooled bolometer as detector.\cite{Pra13} Measurements were conducted in a spectral range spanning 13 - 46\,cm$^{-1}$ (i.e. 0.4 - 1.4\,THz). Sample temperatures between 6\,K and 150\,K were maintained in a home-built cryostat. We omitted measurements at higher and lower temperatures in order to restrain the measurement time and avoid degradation. Since our transmittance data is for a two-layer system (substrate + film), we measured a bare reference substrate in the same run to disentangle the properties of both layers.
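The spacing of the Fabry-Perot fringes is set by the optical thickness of the substrate, $\nu_{FSR}=c/(2nd)$. A quick consistency check in a few lines; the THz refractive index $n\approx2.0$ of MgF$_2$ used here is an assumed, illustrative value, not determined in this work:

```python
# Free spectral range of a plane-parallel dielectric etalon: nu_FSR = c / (2 n d).
# d = 0.5 mm is the substrate thickness quoted in the text; the THz refractive
# index n ~ 2.0 of MgF2 is an assumed, illustrative value.
c = 2.998e8   # speed of light (m/s)
n = 2.0       # assumed THz refractive index of MgF2
d = 0.5e-3    # substrate thickness (m)

nu_fsr = c / (2 * n * d)              # fringe spacing in Hz (~150 GHz)
fsr_wavenumber = nu_fsr / (c * 100)   # the same spacing in cm^-1 (~5 cm^-1)

# roughly six to seven fringes then fit into the 13-46 cm^-1 window
assert 6 < (46 - 13) / fsr_wavenumber < 7
```

With these numbers one expects a fringe every few wavenumbers, consistent with the pronounced oscillation pattern visible in the raw transmittance.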
\section{Results and Discussion} \label{r} The temperature dependence of the dc transport resistivity $\rho_{dc}$ is shown in Fig. \ref{res}. The sample exhibits all characteristic regimes well known for CeCoIn$_5$,\cite{petrovic2001,malinowski2005} which is consistent with an excellent film quality. Starting at room temperature, the system behaves like a normal metal and $\rho_{dc}$ decreases slightly, passes a minimum at around $T_{min}=165$\,K and then increases again due to incoherent Kondo scattering. This increase levels off at around $T_{max}=40$\,K, where the system enters the coherent heavy-fermion state, which is accompanied by a rapid reduction of $\rho_{dc}$ upon further cooling, before the curve bends down towards the superconducting transition at presumably $T_c\approx 1.8$\,K, slightly below our lowest measured temperature. The raw transmittance is displayed in Fig. \ref{Tr_SO34} as a function of frequency and temperature. The spectra feature pronounced Fabry-Perot (FP) oscillations that stem from multiple reflections inside the substrate.\cite{Pra13} At $T=150$\,K, the highly-conductive metallic film suppresses the overall transmittance (at the FP peaks) to less than 1\%. Upon cooling, the spectra acquire a strong frequency dependence beyond the FP pattern. This is most pronounced in the high-frequency limit, where at $T=6$\,K the transmittance is about 4 times larger than at high temperatures. At intermediate frequencies, this enhancement is less pronounced, and at the lowest frequencies it is even reversed at around 30\,K. Measurements of the bare substrate reveal only minor losses at 150\,K, which disappear below $\sim$80\,K. Thus, the observed frequency and temperature dependence of the transmittance can be attributed entirely to the electronic properties of the CeCoIn$_5$ film.
Such a behavior at THz frequencies is expected for a good metal, where the electron scattering rate $\Gamma = 1/\tau$, with $\tau$ the time between two scattering events, shifts into the examined spectral range upon cooling. In our case of CeCoIn$_5$ the shift of $\Gamma$ is attributed to the gradual emergence of a coherent heavy-fermion state with a concomitant slowing down of the Drude relaxation rate, combined with a reduction of temperature-dependent scattering, e.g.\ due to phonons.\cite{millis1986} \begin{figure} \begin{centering} \includegraphics[scale=0.41]{DrudeParametersSO34-eps-converted-to.pdf} \caption{\label{Drude} (Color online) Temperature dependence of (a) the dc resistivity obtained from optical and transport probes, (b) the scattering time $\tau$, (c) $\tau/\sigma_0$, which is a measure for the electron-mass enhancement, and (d) the scattering rate $\Gamma=1/\tau$.} \end{centering} \end{figure} For further analysis, we fitted the transmittance (see Fig. \ref{fits}(a) below) to well-established Fresnel equations for multiple reflections \cite{Dre02}, where the complex refractive index $n+ik$ is expressed in terms of the Drude conductivity \begin{equation} \sigma_1(\omega)+i\sigma_2(\omega)=\sigma_0\left(\frac{1}{1+(\omega \tau)^2}+i\frac{\omega \tau}{1+(\omega \tau)^2}\right)\label{eq:Drude} \end{equation} with angular frequency $\omega=2\pi \nu$ and the temperature-dependent parameters, the dc conductivity $\sigma_0$ and the scattering time $\tau=1/\Gamma$. Fig. \ref{Drude} (a) displays $\rho_0=1/\sigma_0$, i.e.\ the dc resistivity from optical Drude analysis, and compares it to $\rho_{dc}$ obtained from the transport measurement. In the regime of incoherent Kondo scattering, i.e.\ between $T_{min}$ and $T_{max}$, the results of both the optical and transport measurements coincide. Between $T_{min}$ and the inflection point of the $\rho_{dc}(T)$ curve, $\tau$ and $\Gamma$, see Fig. \ref{Drude} (b) and inset (d), remain roughly constant.
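The Drude form of Eq. (\ref{eq:Drude}) entering these fits can be sketched in a few lines; the parameters below are illustrative, not the fitted values. The characteristic crossover occurs at $\omega\tau=1$, where $\sigma_1=\sigma_2=\sigma_0/2$:

```python
import numpy as np

def drude_sigma(nu, sigma0, tau):
    """Complex Drude conductivity of Eq. (1):
    sigma1 = sigma0 / (1 + (w t)^2), sigma2 = sigma0 * w t / (1 + (w t)^2)."""
    wt = 2 * np.pi * nu * tau
    return sigma0 * (1 + 1j * wt) / (1 + wt**2)

# illustrative (not fitted) parameters: tau = 0.16 ps places the crossover
# Gamma / (2 pi) = 1 / (2 pi tau) ~ 1 THz inside the measured 0.4-1.4 THz window
tau = 0.16e-12
nu_cross = 1 / (2 * np.pi * tau)          # frequency where omega * tau = 1
s = drude_sigma(nu_cross, sigma0=1.0, tau=tau)
assert np.isclose(s.real, 0.5) and np.isclose(s.imag, 0.5)
```

When $\Gamma$ moves into the measured window upon cooling, the spectral weight of $\sigma_1$ narrows and the transmittance acquires the observed frequency dependence.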
For lower temperatures, $\tau$ and $\Gamma$ tend to increase and decrease, respectively, signaling the emergence of the underlying heavy-fermion state. At around 25\,K, results from the optical and transport probes in Fig. \ref{Drude} (a) start to deviate. We observe a more rapid decrease of $\rho_{dc}$ than of $\rho_0$, the latter even leveling off at around 13\,K. Down to this temperature, $\tau$ and $\Gamma$ display a strong temperature dependence in the now well-developed heavy-fermion state. At even lower temperatures, $\rho_0$ remains fairly constant and tends to a slight increase, which is in clear contrast to the transport result. At the same time, the temperature dependence of $\tau$ and $\Gamma$ becomes less pronounced. In the Drude framework, $\sigma_0$ is given by \begin{equation} \sigma_0=\frac{Ne^2\tau}{m}\label{eq:mass} \end{equation} where $N,e$ and $m$ are the electron density, charge, and mass. In this free electron picture, the temperature dependence of $\rho_0$ is usually determined by $\tau$, given that the number of carriers is constant. In heavy-fermion metals, however, $m$ is renormalized by electron-electron interactions and the effective mass $m^*$ becomes strongly temperature dependent. While $\rho_0$ remains fairly constant in the heavy-fermion regime, $\tau$ drastically increases. Even without knowledge of $N$ we can infer the temperature dependence of $m^*$ by plotting $\tau/\sigma_0=m^*/(Ne^2)$, see Fig. \ref{Drude}(c). Upon cooling, the mass enhancement already sets in well before the heavy-fermion state is fully developed, speeds up, and eventually levels off towards the lowest temperature. Assuming a constant value of $N$, this would translate to a mass enhancement of roughly four between the highest and lowest temperature of this study.
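Rearranging Eq. (\ref{eq:mass}), the mass proxy $\tau/\sigma_0$ follows directly from the two fitted Drude parameters. A minimal sketch with hypothetical numbers, chosen only to reproduce a factor-of-four enhancement (these are not the measured values):

```python
# Rearranging Eq. (2): tau / sigma0 = m / (N e^2); for a constant carrier
# density N, the ratio of this proxy between two temperatures gives the
# relative mass enhancement m*(T_low) / m*(T_high).
def mass_proxy(tau, sigma0):
    return tau / sigma0

# hypothetical fitted values (tau in s, sigma0 in S/m), not the measured ones
proxy_high_T = mass_proxy(tau=0.05e-12, sigma0=2.0e6)
proxy_low_T = mass_proxy(tau=0.40e-12, sigma0=4.0e6)

enhancement = proxy_low_T / proxy_high_T   # factor of 4 for these assumed numbers
```

The point of the proxy is that $N$ and $e$ cancel in the ratio, so only the two directly fitted quantities enter.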
This value is small when compared to those found for CeCoIn$_5$ from other techniques such as specific heat \cite{petrovic2001,kim2001} or quantum oscillations \cite{settai2001,mccollam2005}, but those probes usually reveal the effective mass only for very low temperatures, whereas our work indicates that the mass enhancement upon cooling is not yet complete at our lowest temperature of 6~K. Closer to our approach are optical studies at higher frequencies, in the infrared regime,\cite{singley2002,mena2005} which found a frequency-dependent effective-mass enhancement amounting to approximately 20 and 13, respectively, at the lowest frequencies and temperatures probed (30~cm$^{-1}$, 10~K and 40~cm$^{-1}$, 8~K, respectively). Since our optical measurements reach lower frequencies and temperatures, a more pronounced mass enhancement is expected, which is not found in the data of Fig. \ref{Drude}(c). This can be explained by the fact that the simple Drude analysis performed here does not take into account any possible frequency dependence of scattering rate and effective mass. As we will show below, this description becomes more inaccurate with decreasing temperature. Here, a more detailed analysis based on the full optical response (real and imaginary parts) is desirable. \begin{figure}[tb] \begin{centering} \includegraphics[scale=0.45]{Tr_fitted-eps-converted-to.pdf} \caption{\label{fits} (Color online) (a) Transmittance spectra and Fresnel fits including Drude conductivity for several temperatures. The spectra and fits are shifted for clarity. While at high temperatures the transmittance is well described by the fit, the fidelity gradually decreases towards low temperatures, signaling a breakdown of the simple single-particle Drude description.
(b) and (c) schematically explain such behavior by comparing the Drude and Fermi-liquid (FL) predictions for the optical response: in FL theory the scattering rate $\Gamma$ (dashed) is frequency dependent, which leads to different spectra in the real part $\sigma_1$ of the conductivity (full) and in the transmittance.} \end{centering} \end{figure} However, the trends that we find for $\rho_0, \tau, \Gamma$ and $m^*$ in the present work are in good agreement with previous results \cite{Sch13} on CeCoIn$_5$ thin films, where the analysis was less straightforward due to metallic buffer layers beneath the thin film. In \cite{Sch13} the discrepancy between optical and transport probes was explained by a rapid degradation of the film during the measurement. Here, this seems unlikely, as we could not observe any signs of degradation in the optical spectra. Fig. \ref{fits} (a) shows the transmittance together with the Fresnel fits based on the single-particle Drude formula, Eq. (\ref{eq:Drude}). At high temperatures, where no mismatch between optical and transport probes was visible, the fit captures the transmittance in the entire frequency range very well. Upon cooling down, the agreement between the simple theory and the experimental data deteriorates: the most severe deviations arise in the low- and high-frequency limits, where the actual transmittance drops below the theoretical expectation. Furthermore, a small phase shift arises at intermediate frequencies. Judging from the fit fidelity, the discrepancy between $\rho_0$ and $\rho_{dc}$ can be understood as an increasing inability of the single-particle Drude theory to reproduce the dynamical properties of CeCoIn$_5$. Indeed, in a number of correlated electron systems a frequency dependence of $\tau$ resulting from electron-electron interactions was found, and by the Kramers-Kronig relations one then also expects a frequency-dependent effective mass.
This might also explain the failure of the Drude theory to describe the many-body heavy-fermion state in CeCoIn$_5$ at sufficiently low temperatures. In Fig. \ref{fits}(b) and (c) we qualitatively explain how such deviations from Drude behavior in the transmittance spectra can arise due to electronic correlations. Our reference for an interacting electron system is a Fermi liquid (FL) with optical properties well understood from the theoretical side \cite{gurzhi1959,maslov2012,Berthod2013} and recently studied experimentally.\cite{scheffler2013,schneider2014,stricker2014} For CeCoIn$_5$, non-FL behavior (with a linear temperature dependence of the dc resistivity, compared to the quadratic FL behavior) was found experimentally for many properties and is expected for the THz response, but in the absence of an appropriate non-FL prediction we here refer to the FL case to demonstrate the overall behavior. FL theory predicts a quadratic frequency dependence of the scattering rate $\Gamma(\omega) = \Gamma(\omega=0) + b \omega^2$ (with a material-dependent prefactor $b$), compared to the frequency-independent $\Gamma$ within the Drude response. As a result, $\sigma_1(\omega)$ for a FL notably deviates from the Drude case, with the characteristic feature being a higher $\sigma_1$ at higher frequency (\lq non-Drude tail\rq).\cite{Berthod2013} Such differences in $\sigma_1$ correspond to differences in the transmittance, as is shown in Fig. \ref{fits}(c): if one compares the transmittance spectra of an interacting electron system (our CeCoIn$_5$ data at low temperature and the FL in the schematic figure) with the spectrum for a Drude metal, one finds that the maxima of the FP oscillations can be modeled properly in a limited (intermediate) frequency range, while the maxima in the Drude case surpass those of the interacting case for both lower and higher frequency.
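This qualitative Drude-versus-FL comparison can be reproduced in a few lines. The sketch below uses a naive generalized Drude form in which $\Gamma(\omega)=\Gamma_0+b\omega^2$ is simply inserted into the Drude expression, with dimensionless, hypothetical parameters; it deliberately neglects the frequency-dependent effective mass that a Kramers-Kronig-consistent treatment would require:

```python
import numpy as np

def sigma1(omega, gamma):
    # real part of a (generalized) Drude conductivity, in units of N e^2 / m:
    # sigma1(w) = Gamma(w) / (Gamma(w)^2 + w^2)
    return gamma / (gamma**2 + omega**2)

gamma0, b = 1.0, 0.1                # hypothetical, in units of the dc rate
omega = np.linspace(0.1, 5.0, 200)

s1_drude = sigma1(omega, gamma0)                 # frequency-independent Gamma
s1_fl = sigma1(omega, gamma0 + b * omega**2)     # FL: Gamma(w) = Gamma0 + b w^2

# at high frequency the FL curve lies above the Drude one (non-Drude tail),
# while both tend to the same dc value in the zero-frequency limit
assert s1_fl[-1] > s1_drude[-1]
assert np.isclose(s1_fl[0], s1_drude[0], rtol=1e-2)
```

The excess $\sigma_1$ of the FL curve at high frequency is exactly what suppresses the high-frequency FP maxima relative to the Drude expectation.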
Therefore we interpret the insufficient Drude fits of our low-temperature transmittance data as evidence for electronic correlations in the THz response of CeCoIn$_5$. Whether these can be described as FL optics or whether, as expected, non-FL features govern the THz properties remains to be seen from further studies that should address the phase shift in the THz response in addition to the transmittance. \section{Summary} \label{s} In summary, we discussed the transmittance of THz radiation through a high-quality thin film of CeCoIn$_5$ measured by quasi-optical spectroscopy and compared it to transport measurements of the dc resistivity $\rho_{dc}$. We found excellent agreement between the dc resistivity $\rho_0=1/\sigma_0$ obtained from the Drude analysis of the optical data and $\rho_{dc}$ in the regime of incoherent Kondo scattering. At lower temperatures, the scattering time $\tau$ and effective mass $m^*$ acquire a strong temperature dependence and the agreement between $\rho_{dc}$ and $\rho_0$ deteriorates. We attribute this to the increasing inadequacy of the simple single-particle picture, i.e.\ the Drude theory, and argue in favor of a more advanced description that accounts for the electronic correlations associated with the low-temperature heavy-fermion state. With the recent improvements in growing high-quality thin films, optical experiments at THz and GHz frequencies become feasible and we hope that our results motivate further investigations illuminating the unconventional charge carrier dynamics in CeCoIn$_5$. \section{Acknowledgements} \label{a} This study was supported by the DFG. The work in Japan was supported by KAKENHI from JSPS. U.S.P. acknowledges financial support from the Studienstiftung des deutschen Volkes. \section{References}
\section{Introduction} We recently succeeded in synthesizing the entire series of new ternary R$_3$Pt$_{23}$Si$_{11}$ compounds, where R is a lanthanide element.\cite{opagiste2012,opagiste2014,opagiste2015} They crystallize within the same face-centered cubic structure ($Fm\bar3m$ space group) as reported for the first time by Tursina \textit{et al.} for Ce$_3$Pt$_{23}$Si$_{11}$.\cite{tursina2002} The evolution of the lattice parameter in the series is consistent with the lanthanide contraction effect, except for Eu$_3$Pt$_{23}$Si$_{11}$ and Yb$_3$Pt$_{23}$Si$_{11}$, where the rare earth ion is divalent. As a consequence the Eu ions bear a magnetic moment, while those of Yb are not magnetic. Joint studies of the magnetic and thermodynamic properties show that almost all the R$_3$Pt$_{23}$Si$_{11}$ compounds present a magnetic ordering at low temperature. The ordered phase is ferromagnetic for Ce-, Sm-, Eu-, Gd-, Tb-, Dy-, Ho- and Er$_3$Pt$_{23}$Si$_{11}$ and antiferromagnetic for Nd$_3$Pt$_{23}$Si$_{11}$.\cite{opagiste2013} The ordering temperatures of compounds with trivalent rare earths follow the de Gennes law, as expected when the magnetic interactions are mediated via indirect RKKY-type interactions. The reduced magnetic moments observed in the ordered phases for the $L\neq0$ ions and the low-temperature Van Vleck-type paramagnetism in the Pr and Tm compounds evidence important crystalline electric field (CEF) effects.\\ Extensive studies of the magnetic properties of Ce$_3$Pt$_{23}$Si$_{11}$ were performed on a high-quality single-crystal sample. The magnetic susceptibility is isotropic in the paramagnetic phase, as expected for a cubic compound, while in the ferromagnetic phase (T$_C$= 0.44 K) one observes an easy magnetization axis along the [111] direction of the cube.\cite{opagiste2011} Neutron diffraction experiments, performed on the same single crystal, reveal a quite singular spin arrangement of the six Ce ions in the unit cell.
They divide into pairs, each pair having its moments aligned with one of the three fourfold axes of the cubic structure. The combination of these three directions leads to a magnetization along a threefold direction. The CEF investigation in Ce$_3$Pt$_{23}$Si$_{11}$ by neutron spectroscopy (NS) reveals two magnetic excitations at 139$\pm$5 K (12$\pm$0.5 meV) and 227$\pm$2 K (19.6$\pm$0.2 meV).\\ New NS studies have been carried out on the Pr$_3$Pt$_{23}$Si$_{11}$ and Nd$_3$Pt$_{23}$Si$_{11}$ compounds. We report here the experimental results and compare them with those of Ce$_3$Pt$_{23}$Si$_{11}$. The first section describes the experimental details. In the second section the NS results are presented. The last section is devoted to the analysis of the experimental results and to the conclusion.\\ \section{Experimental} High-quality polycrystalline samples of Pr$_3$Pt$_{23}$Si$_{11}$ and Nd$_3$Pt$_{23}$Si$_{11}$ were prepared for neutron experiments. The stoichiometric proportions of the different constituents, Pr or Nd (99.99\%, Johnson Matthey), Pt (99.95\%, Alfa Aesar) and Si (99.9999\%, Alfa Aesar), were melted by an induction technique in a cold copper crucible under a high-purity argon atmosphere. Samples were melted several times to improve the homogeneity. Mass losses during this first step were less than 0.1\%. The sample quality was checked by the conventional X-ray powder diffraction technique using Cu-K$\alpha$ radiation on a Philips PW1730 diffractometer. Diffraction patterns are consistent with the face-centered cubic structure ($Fm\bar{3}m$ space group) and confirm that, within the experimental accuracy, no impurity phases are present.\\ Inelastic neutron scattering experiments were carried out at the Institut Laue-Langevin (ILL) in Grenoble on the IN4C time-of-flight spectrometer. Measurements were performed on samples of 2.772 g of Pr$_3$Pt$_{23}$Si$_{11}$ and 2.785 g of Nd$_3$Pt$_{23}$Si$_{11}$.
The sample holder used for these experiments consisted of a thin aluminum foil, thus reducing the contribution of the empty cell to a minimum. Three incident wavelengths, $\lambda_i$, have been selected in order to investigate the excitations over an extended energy range. The associated instrumental resolutions are determined by the full width at half maximum (FWHM) of the incoherent elastic peak: $\lambda_i$ = 1.493 {\AA} (incident energy E$_i$ = 36.7 meV) with FWHM = 1.54 meV; $\lambda_i$ = 2.22 {\AA} (E$_i$ = 16.6 meV) with FWHM = 0.83 meV; $\lambda_i$ = 2.98 {\AA} (E$_i$ = 9.21 meV) with FWHM = 0.35 meV. The spectra were collected in the temperature range 4 K to 150 K for scattering angles ranging from 13$^{\circ}$ to 135$^{\circ}$, and were normalized to the incident flux and to a vanadium standard. In order to highlight the dependence on the scattering vector \textit{Q}, spectra were further averaged into three groups, the mean scattering angles $\theta$ of which are 31.81$^{\circ}$, 67.16$^{\circ}$ and 102.77$^{\circ}$, respectively. The scattering by magnetic excitations is enhanced at low \textit{Q} values whereas phonon excitations are dominant at high \textit{Q}. Likewise, the scattering by magnetic excitations is stronger at low temperature, while phonon scattering becomes dominant at high temperature.\\ \section{Neutron spectroscopy} For the Pr$_3$Pt$_{23}$Si$_{11}$ compound, figure~\ref{fig1} compares the three angular groups of inelastic spectra collected at T = 4 K with E$_i$ = 36.7 meV. The spectrum at $\theta$ = 31.81$^{\circ}$ shows three excitations centered at E$_1$ = 4.7$\pm$0.2 meV (FWHM = 2.04 meV), E$_2$ = 12.2$\pm$0.5 meV (FWHM = 2.50 meV) and E$_3$ = 23.8$\pm$0.2 meV (FWHM = 1.5 meV). Note that the FWHM of the first two excitations are larger than the experimental resolution. This may indicate that several excitations, too close in energy to be resolved, contribute to these peaks.
The progressive decrease of the intensity of the peaks at 4.7 meV and 23.8 meV in the spectra at $\theta$ = 67.16$^{\circ}$ and $\theta$ = 102.77$^{\circ}$ is consistent with the evolution expected for magnetic scattering. As shown in figure~\ref{fig1}, the excitation at 12.2 meV is located in an energy region highly populated with phonons. Therefore the intensity decrease at high \textit{Q} is partially masked by the increase of the phonon intensity. Very likely the peak contains both phonon and magnetic scattering. The magnetic origin of the peak at 12.2 meV is however indisputably confirmed in figure~\ref{fig2}, which illustrates the thermal evolution of the inelastic spectra at $\theta$ = 31.81$^{\circ}$. At this angle the phonon contribution is negligible and one observes the progressive decrease of the intensity for the three peaks with increasing temperature. In the figure the arrows indicate the positions of magnetic excitations. At 10 K a fourth excitation appears at 18.8$\pm$0.2 meV (FWHM = 1.9 meV). Upon warming the sample, its intensity progressively decreases in a similar way to that of the other magnetic excitations. As pointed out above, some of the magnetic excitations are unusually broad; therefore higher-resolution spectra have been collected with E$_i$ = 9.21 meV. They are reported in figure~\ref{fig3}, where two magnetic excitations can be distinguished at 4.33$\pm$0.05 meV (FWHM = 0.4 meV) and 5.26$\pm$0.05 meV (FWHM = 0.4 meV). Consequently the excitation at 18.8 meV in figure~\ref{fig2} corresponds to the transition from the second excited level at 5.26 meV to the level at 23.8 meV. \begin{figure} \includegraphics[width=\columnwidth]{figure1.eps} \caption{\label{fig1} (Color online) Pr$_3$Pt$_{23}$Si$_{11}$, incident neutron energy E$_i$ = 36.7 meV: evolution of the down-scattering processes as a function of the scattering angle at T = 4 K.
Dots represent the spectrum at $\theta$ = 31.81$^{\circ}$, triangles the spectrum at $\theta$ = 67.16$^{\circ}$ and stars the spectrum at $\theta$ = 102.77$^{\circ}$. Black arrows indicate the positions of the magnetic excitations at E$_1$ = 4.7 meV, E$_2$ = 12.2 meV and E$_3$ = 23.8 meV.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure2.eps} \caption{\label{fig2} (Color online) Pr$_3$Pt$_{23}$Si$_{11}$, incident neutron energy E$_i$ = 36.7 meV: thermal evolution of the inelastic spectra. For clarity the spectra are vertically shifted by 0.5. Arrows show the positions of the magnetic excitations.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure3.eps} \caption{\label{fig3} (Color online) Pr$_3$Pt$_{23}$Si$_{11}$, incident neutron energy E$_i$ = 9.21 meV: thermal evolution of the inelastic spectra. The weak structures pointed out by stars have an incoherent thermal evolution and are ascribed to spurious effects. For clarity the spectra are vertically shifted by 0.5.} \end{figure} For the Nd$_3$Pt$_{23}$Si$_{11}$ compound, the evolution of the inelastic spectra as a function of the scattering angle and of the temperature is displayed in figures~\ref{fig4} and~\ref{fig5}, respectively. Spectra in figure~\ref{fig4} reveal, at 4 K, three magnetic excitations at 8.8$\pm$0.2 meV (FWHM = 1.6 meV), 11.8$\pm$0.5 meV (FWHM = 1.9 meV) and 25.8$\pm$0.2 meV (FWHM = 1.4 meV) and a bump around 4 meV. In order to resolve this bump, spectra have been collected at 4 K with $E_i$ = 16.6 meV. As shown in the inset in figure~\ref{fig4}, a well-defined peak is located at 4.46$\pm$0.10 meV (FWHM = 0.73 meV). Its evolution with the scattering angle confirms a magnetic origin. In the same figure, the profile and the large FWHM of the peak centered at 11.72$\pm$0.15 meV suggest that the peak is in fact double. It was seen in Pr$_3$Pt$_{23}$Si$_{11}$ that phonon scattering is important in this energy range (8 - 16 meV).
As the phonon scattering in Nd$_3$Pt$_{23}$Si$_{11}$ should not be very different from that in Pr$_3$Pt$_{23}$Si$_{11}$, a weak phonon contribution is very likely. In figure~\ref{fig5} the thermal evolution of the intensity of these four peaks appears fully consistent with that expected for magnetic scattering. At 50 K a fifth peak emerges at 21.5$\pm$0.2 meV (FWHM = 1.5 meV) and progressively decreases with temperature. It can be ascribed to the transition between the first excited level at 4.46 meV and the level at 25.8 meV.\\ \begin{figure} \includegraphics[width=\columnwidth]{figure4.eps} \caption{\label{fig4} (Color online) Nd$_3$Pt$_{23}$Si$_{11}$, incident neutron energy E$_i$ = 36.7 meV: evolution of the down-scattering processes as a function of the scattering angle at T = 4 K. Dots represent the spectrum at $\theta$ = 31.81$^{\circ}$, triangles the spectrum at $\theta$ = 67.16$^{\circ}$ and stars the spectrum at $\theta$ = 102.77$^{\circ}$. Black arrows indicate the positions of the magnetic excitations at E$_1$ = 4.46 meV, E$_2$ = 8.75 meV, E$_3$ = 11.72 meV and E$_4$ = 25.80 meV. The inset shows the spectra obtained with a better resolution (E$_i$ = 16.6 meV). They definitely confirm the magnetic excitation at E$_1$ = 4.46 meV.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure5.eps} \caption{\label{fig5} (Color online) Nd$_3$Pt$_{23}$Si$_{11}$, incident neutron energy E$_i$ = 36.7 meV: thermal evolution of the inelastic spectra. For clarity the spectra are vertically shifted by 0.5. Arrows show the positions of the magnetic excitations.} \end{figure} In order to compare the magnetic scattering of the different compounds, the data obtained previously for the Ce$_3$Pt$_{23}$Si$_{11}$ and La$_3$Pt$_{23}$Si$_{11}$ samples have been treated in the same manner as those of Pr$_3$Pt$_{23}$Si$_{11}$ and Nd$_3$Pt$_{23}$Si$_{11}$. They are displayed in figures~\ref{fig6} and \ref{fig7}.
At T = 2 K in the Ce$_3$Pt$_{23}$Si$_{11}$ spectrum (see figure~\ref{fig6}-a) two main inelastic structures are observed, at E$_1$ = 12$\pm$0.5 meV (FWHM = 2.0 meV) and at E$_2$ = 19.6$\pm$0.2 meV (FWHM = 1.7 meV). Their thermal evolution is consistent with that of magnetic excitations. The excitation at 12 meV is broader than the experimental resolution. The existence of wide phonon structures around 8.8 and 13.6 meV is confirmed by the La$_3$Pt$_{23}$Si$_{11}$ spectra (see figure~\ref{fig6}-b). As in Pr- and Nd$_3$Pt$_{23}$Si$_{11}$, in the Ce compound the excitation at 12 meV is mixed with a phonon contribution. Figure~\ref{fig7} shows that the energies of the two excitations, the magnetic one at 12 meV and the phonon structure around 13.6 meV, are too close to be resolved within the experimental accuracy. Further spectra of La$_3$Pt$_{23}$Si$_{11}$ collected with an incident energy of 16.6 meV, not shown here, confirm the existence of a large phonon structure (FWHM = 2 meV) at 8.8 meV. Thus the bump around 8.5 meV (FWHM = 2.4 meV) in the Ce spectra is due to phonon excitations.\\ \begin{figure} \includegraphics[width=\columnwidth]{figure6.eps} \caption{\label{fig6} (Color online) Thermal evolution of the inelastic spectra with incident neutrons of E$_i$ = 36.7 meV (a) for Ce$_3$Pt$_{23}$Si$_{11}$, the arrows show the positions of the magnetic excitations, (b) for La$_3$Pt$_{23}$Si$_{11}$. For clarity the spectra are shifted vertically by 0.5.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure7.eps} \caption{\label{fig7} (Color online) Thermal evolution of the inelastic spectra (incident neutron energy E$_i$ = 16.6 meV) in Ce$_3$Pt$_{23}$Si$_{11}$.
The asymmetric decrease with increasing temperature of the intensity of the peak at 12 meV reveals a phonon contribution around 12.5 - 13.5 meV.} \end{figure} \section{Analysis and discussion} Although the R$_3$Pt$_{23}$Si$_{11}$ compounds crystallize in a highly symmetric face centered cubic structure ($Fm\bar{3}m$ space group), the point symmetry of the rare earth site (24$d$) is orthorhombic (\textit{m.mm}, group $D_{2h}$). The rare earth ions are coordinated by tetragonal prisms of eight Pt atoms (in the sites 96$k$), R[Pt$_8$]. As shown in figure~\ref{fig8}, the orientation of the tetragonal prism differs for the six equivalent rare earth sites within the unit cell.\\ \begin{figure} \includegraphics[width=\columnwidth]{figure8.eps} \caption{\label{fig8} (Color online) The R[Pt$_8$] coordination in the face centered cubic structure ($Fm\bar{3}m$ space group) of the R$_3$Pt$_{23}$Si$_{11}$: projection on the \textit{b,c} plane of the R[Pt$_8$] tetragonal prisms in the unit cell. The rare earth ions are in the \textit{24d} sites and the first-neighbor Pt ions in the \textit{96k} sites. The orientation of these tetragonal prisms depends on the coordinates of the six equivalent rare earth positions in the cell. For rare earths at the ($\frac{1}{4},\frac{1}{4},0$) or ($\frac{3}{4},\frac{1}{4},0$) sites the eight Pt first neighbors form two rectangles lying in ($x,y$) planes at $z/a=\pm$0.0843 respectively (3).
For rare earths at the sites ($\frac{1}{4},0,\frac{1}{4}$) or ($\frac{1}{4},0,\frac{3}{4}$), the Pt rectangles lie in ($x,z$) planes at $y/a=\pm$0.0843 (1), while at the ($0,\frac{1}{4},\frac{1}{4}$) or ($0,\frac{3}{4},\frac{1}{4}$) sites they lie in ($y,z$) planes at $x/a$=$\pm$0.0843 (2).} \end{figure} \subsection{Orthorhombic crystal field} Group theory predicts how the degeneracy of the rare earth ion \textit{J} multiplets is lifted by the CEF interactions.~\cite{tinkham2003} At a site of orthorhombic symmetry, a complete splitting is achieved. For half integer \textit{J}, the multiplets split into $\frac{2J+1}{2}$ magnetic doublets. Therefore, within the fundamental multiplet, the maximum number of CEF transitions from the ground state is $\frac{2J+1}{2}-1$. For integer \textit{J} (non-Kramers ions), the multiplets split into $2J+1$ non-magnetic singlets. At most $2J$ CEF transitions can be expected from the ground state within the fundamental multiplet.\\ For Ce$^{3+}$ (\textit{J}=5/2) and Nd$^{3+}$ (\textit{J}=9/2) ions, the fundamental multiplet is decomposed into three and five magnetic doublets respectively. This is consistent with the number of magnetic excitations observed in Ce$_3$Pt$_{23}$Si$_{11}$ and Nd$_3$Pt$_{23}$Si$_{11}$. This also agrees with the value, $R\ln2$ per ion, of the magnetic entropy at the ordering temperature deduced from specific heat measurements in both compounds.\cite{opagiste2012}\\ In the case of Pr$^{3+}$ ions, the CEF interactions split the \textit{J}=4 multiplet into 9 non-magnetic singlets. This explains the low temperature Van Vleck susceptibility of Pr$_3$Pt$_{23}$Si$_{11}$. However only four transitions are observed in the neutron inelastic spectra among the eight expected.
This suggests that some transitions are weak or forbidden.\\ The CEF Hamiltonian acting on a rare earth ion at a site of \textit{mmm} point group symmetry is written as:~\cite{hutchings1964} \begin{eqnarray} \mathcal{H}_{CEF}=&&\alpha_J(V^{0}_{2}O^{0}_{2}+V^{2}_{2}O^{2}_{2})\nonumber\\&&+\beta_J(V^{0}_{4}O^{0}_{4}+V^{2}_{4}O^{2}_{4}+V^{4}_{4}O^{4}_{4})\nonumber\\&&+\gamma_J(V^{0}_{6}O^{0}_{6}+V^{2}_{6}O^{2}_{6}+V^{4}_{6}O^{4}_{6}+V^{6}_{6}O^{6}_{6}) \label{eq:one} \end{eqnarray} The $O^{m}_{l}$ are the Stevens operators.\cite{stevens1952} $\alpha_J$, $\beta_J$ and $\gamma_J$ are the 2nd-, 4th- and 6th-order Stevens coefficients, the values of which depend on the rare earth. The $V^{m}_{l}$ are the CEF parameters that depend on the surroundings. The number of CEF parameters (nine for Pr and Nd, five for Ce since $\gamma_J$ = 0) is too large for them to be determined from neutron spectroscopy alone. Many sets of CEF parameters can be found that comply with the energy scheme deduced from the NS spectra. However, CEF-based calculations show that only very few are simultaneously consistent with the magnetic properties. In particular, calculations should account for the thermal variation of the first-order magnetic susceptibility, for the anisotropy, and for the field and temperature dependence of the magnetization. In an orthorhombic system, the second-order CEF parameters can be deduced from the anisotropy of the magnetic susceptibility measured on single crystals along the three axes of the orthorhombic structure.~\cite{fillion1984} In the present case, these parameters cannot be obtained since the symmetry of the crystal is cubic and the first-order magnetic susceptibility is isotropic.
Therefore an alternative approach has to be implemented in order to determine the full sets of CEF parameters.\\ \subsection{Genetic algorithm: search for CEF parameters} As no single crystal measurements on Pr$_3$Pt$_{23}$Si$_{11}$ and Nd$_3$Pt$_{23}$Si$_{11}$ are available, the determination of the nine CEF parameters for these compounds appears hopeless. In this regard, the case of Ce$_3$Pt$_{23}$Si$_{11}$ seems more favorable. Indeed only five CEF parameters are to be determined and more experimental data are available: the spin arrangement in the ordered phase, the value of the moment at 100 mK, and magnetic measurements on a single crystal in both the paramagnetic and ferromagnetic phases.\\ In the search for the CEF parameters of Ce$_3$Pt$_{23}$Si$_{11}$, we proceed first by efficiently exploring the space formed by the sets of $V^{m}_{l}$ parameters that yield an energy level scheme compatible with the energies of the experimental excitations, E$_1$ = 139 K and E$_2$ = 227.5 K. In this optimization problem, many sets of values can be found because of the small number of imposed constraints. We therefore developed a numerical program based on genetic algorithms (GA), which are well suited to problems without a unique solution.~\cite{goldberg1989} In our implementation the GA population of candidate solutions was limited to 100 and its evolution was studied during 400 generations. The $V^{m}_{l}$ parameters were bounded to a reasonable energy range, [-500 K, 500 K], and coded on 32-bit words, each word representing a chromosome and each bit a gene in the GA terminology. The quality of a candidate solution was evaluated through its fitness function, which, in our case, measures the sum of the squared differences between the calculated energy gaps and the expected ones.
At each generation, the GA population was submitted to crossover operations between chromosomes with a rate of 90\%, allowing local exploration of the parameter space. Mutation operations were also applied with a rate of 0.2\%, allowing large jumps in parameter space and the capture of other possible solutions. Only the best candidate solution was kept. We then proceeded to a local minimization based on a simplex method to refine the $V^{m}_{l}$ values, thus increasing their precision.~\cite{nelder1965}\\ This procedure was used iteratively to sample the subspace of CEF parameters that comply with the energy level scheme. The compliance of these sets with the magnetic properties has yet to be considered. An important criterion is the value of the ordered magnetic moment at low temperature. An experimental value of 1.2$\pm0.2 \mu_B$ was deduced from neutron diffraction measurements at 100 mK.~\cite{opagiste2011} Among the solutions of the genetic algorithm, only the sets that give a value for the moment of the fundamental doublet below 1.7 $\mu_B$/Ce were retained for the next step.\\ \subsection{Molecular field model} In order to compare the magnetic experimental data with CEF based calculations, a model adapted to the particular crystalline structure is required. The unit cell of the R$_3$Pt$_{23}$Si$_{11}$ compounds contains six equivalent rare-earth sites. The simplest microscopic model has to account for the effects of the CEF, the molecular exchange field $\bm{H}_m = n \bm{M}$ ($n$ is the molecular field constant and $\bm{M}$ the magnetization) and the applied magnetic field $\bm{H}$ (both $\bm{H}_m$ and $\bm{H}$ in tesla).
The Hamiltonian describing site $i$ then reads: \begin{equation} \label{hamilt} \mathcal{H}_{i} = {\mathcal{H}_{CEF}}_i + \mu_B\,{g_J} \bm{H} \cdot \bm{J}_i + \mu_B\,{g_J} {\bm{H}_m}_i \cdot \bm{J}_i \end{equation} As the system orders ferromagnetically, the molecular and applied magnetic fields can be merged into a total field $\bm{H}_T$, identical on all sites, in both the paramagnetic and ordered states. \begin{equation} \mathcal{H}_{i} = {\mathcal{H}_{CEF}}_i + \mu_B\,{g_J} \bm{H}_T \cdot \bm{J}_i \end{equation} Since the orientation of the orthorhombic axes varies from site to site (see figure~\ref{fig8}), the six ions cannot be described by the same CEF Hamiltonian $\mathcal{H}_{CEF}$. In order to use the same CEF expression for all sites, one would need six symmetry-equivalent sets of CEF parameters. An alternative consists in using the same CEF Hamiltonian, with a single set of CEF parameters describing a reference site (with index 0). Any other site $i$ can be brought into coincidence with this reference by application of a transformation $T_i$ consisting of rotations from the cubic point group of the crystallographic cell. Then, instead of treating the Hamiltonian $\mathcal{H}_{i}$ in order to obtain the statistical moment $\bm{m}_i$, one can solve the problem using the Hamiltonian $\mathcal{H}_{0}$ where the total field is replaced by $T_i(\bm{H}_T)$, which yields a magnetic moment $\bm{m}$. The magnetic moment $\bm{m}_i$ is then obtained by rotating $\bm{m}$ back to the original orientation of site $i$: $\bm{m}_i = T_i^{-1}(\bm{m})$. In this way, all six magnetic moments in the unit cell can be computed, which defines the magnetization $\bm{M}$ and, therefore, the molecular field $\bm{H}_m = n \bm{M}$ for the next iteration, until convergence.\\ We can now proceed, for a given set of CEF parameters, to the calculation of the thermal variation of the susceptibility.
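The site-wise self-consistent loop described above can be sketched numerically. The toy below is a deliberately simplified single-site version: the moment is constrained along $z$, the CEF spectrum is replaced by an illustrative diagonal level scheme attached to pure $J_z$ states (the true eigenstates are mixed), and the field, temperature and starting values are arbitrary. It illustrates only the feedback structure, not the actual six-site calculation.

```python
import numpy as np

# Toy single-site molecular-field loop (sketch): moment constrained along z,
# CEF replaced by an illustrative diagonal spectrum attached to Jz states.
gJ, J = 6.0 / 7.0, 2.5          # Ce3+ Lande factor and angular momentum
muB_K = 0.6717                  # Bohr magneton in K/T
mz = np.arange(-J, J + 1)       # Jz eigenvalues
E_cef = np.array([-121.8, 16.8, 105.0, 105.0, 16.8, -121.8])  # toy levels (K)

def moment(H_T, T):
    """Boltzmann-averaged moment (in muB) in a total field H_T (tesla)."""
    E = E_cef - gJ * muB_K * H_T * mz       # Zeeman-shifted toy levels
    w = np.exp(-(E - E.min()) / T)
    return gJ * float(np.sum(mz * w) / np.sum(w))

def self_consistent_moment(H, T, n=0.5, tol=1e-9, max_iter=10_000):
    """Iterate the molecular-field feedback H_m = n*M until convergence."""
    m = 0.1                                  # initial guess (muB)
    for _ in range(max_iter):
        m_new = moment(H + n * m, T)         # feed the molecular field back
        if abs(m_new - m) < tol:
            break
        m = 0.5 * (m + m_new)                # damped update for stability
    return m
```

With these toy numbers the damped iteration converges in a few steps; in the full calculation the same feedback of $\bm{H}_m = n \bm{M}$ runs over the six rotated sites with the complete CEF Hamiltonian.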
These CEF parameters are then optimized by minimizing a $\chi^2$ that accounts for the deviations from both the experimental low temperature susceptibility and energy scheme. The optimized set is qualified only if the associated deviations are within the experimental errors.\\ In a last step, we checked whether the low temperature transition probabilities between the CEF ground state and the two excited doublets are compatible with the intensity of the observed magnetic transitions (see figure~\ref{fig6}). Gaussian fits to the 2 K spectra show that the intensity ratio, I(E$_2$)/I(E$_1$), is of the order of 0.4. Among the qualified sets of CEF parameters, only one nicely fulfills this last criterion. \begin{table} \caption{\label{table1}{Ce$_3$Pt$_{23}$Si$_{11}$: Eigenstates and eigenvalues of the CEF Hamiltonian, $\mathcal{H}_{0}$ for the reference site, calculated with the set of CEF parameters $V^{0}_{2}$ = 182 K, $V^{2}_{2}$ = 133.9 K, $V^{0}_{4}$ = -1.7 K, $V^{2}_{4}$ = -17.2 K and $V^{4}_{4}$ = 285.2 K.}} \begin{ruledtabular} \begin{tabular}{cc} Eigenstates & Eigenvalues (K)\\ \colrule $+0.950\left|\pm\frac{5}{2}\right\rangle-0.305\left|\mp\frac{3}{2}\right\rangle+0.064\left|\pm\frac{1}{2}\right\rangle$ & -121.8\\ $-0.207\left|\pm\frac{5}{2}\right\rangle-0.462\left|\mp\frac{3}{2}\right\rangle+0.862\left|\pm\frac{1}{2}\right\rangle$ & +16.8\\ $\mp0.233\left|\pm\frac{5}{2}\right\rangle\mp0.833\left|\mp\frac{3}{2}\right\rangle\mp0.502\left|\pm\frac{1}{2}\right\rangle$ & +105\\ \end{tabular} \end{ruledtabular} \end{table} This set of CEF parameters is: $V^{0}_{2}$ = 182 K, $V^{2}_{2}$ = 133.9 K, $V^{0}_{4}$ = -1.7 K, $V^{2}_{4}$ = -17.2 K and $V^{4}_{4}$ = 285.2 K. The eigenstates and eigenvalues of the CEF Hamiltonian are given in table~\ref{table1}. An exchange constant of \textit{n} = 0.5 T/$\mu_B$ has been determined from the value of the inverse CEF susceptibility at \textit{T}$_C$ = 0.44 K.
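As a rough illustration of how such a level scheme follows from the Hamiltonian of Eq.~(\ref{eq:one}), the sketch below builds the $J = 5/2$ angular momentum matrices, assembles the rank-2 and rank-4 Stevens operators in the Hutchings conventions, and diagonalizes the orthorhombic Hamiltonian with the quoted parameters. The Stevens coefficients $\alpha_J = -2/35$ and $\beta_J = 2/315$ for Ce$^{3+}$ are standard tabulated values; the output is a consistency sketch, not a substitute for the full analysis.

```python
import numpy as np

J = 2.5
dim = int(2 * J + 1)
m = np.arange(J, -J - 1, -1)             # Jz eigenvalues J, J-1, ..., -J
Jz = np.diag(m)
Jp = np.zeros((dim, dim))                # J+|J,m> = sqrt(J(J+1)-m(m+1)) |J,m+1>
for i in range(1, dim):
    Jp[i - 1, i] = np.sqrt(J * (J + 1) - m[i] * (m[i] + 1))
Jm = Jp.T
X = J * (J + 1)
I = np.eye(dim)

# Stevens operators (Hutchings conventions)
O20 = 3 * Jz @ Jz - X * I
O22 = 0.5 * (Jp @ Jp + Jm @ Jm)
Jz2 = Jz @ Jz
Jz4 = np.linalg.matrix_power(Jz, 4)
O40 = 35 * Jz4 - (30 * X - 25) * Jz2 + (3 * X**2 - 6 * X) * I
A = 7 * Jz2 - (X + 5) * I
B = Jp @ Jp + Jm @ Jm
O42 = 0.25 * (A @ B + B @ A)
O44 = 0.5 * (np.linalg.matrix_power(Jp, 4) + np.linalg.matrix_power(Jm, 4))

alpha_J, beta_J = -2.0 / 35.0, 2.0 / 315.0                   # Ce3+ coefficients
V20, V22, V40, V42, V44 = 182.0, 133.9, -1.7, -17.2, 285.2   # CEF parameters (K)

H = alpha_J * (V20 * O20 + V22 * O22) \
    + beta_J * (V40 * O40 + V42 * O42 + V44 * O44)
E = np.sort(np.linalg.eigvalsh(H))       # three Kramers doublets expected
```

Since the Stevens operators are traceless and Ce$^{3+}$ is a Kramers ion, the six eigenvalues sum to zero and come in degenerate pairs, consistent with the three-doublet scheme of Table~\ref{table1}.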
The calculated inverse susceptibility is compared in figure~\ref{fig9} with the experimental data.\\ \begin{figure} \includegraphics[width=\columnwidth]{figure9.eps} \caption{\label{fig9} (Color online) Calculated (lines) and experimental (dots) thermal variation of the inverse susceptibility. The inset details the low temperature curves. The calculations are performed with the set of CEF parameters given in Table~\ref{table1} and the molecular field constant \textit{n} = 0.5 T/$\mu_B$. The systematic error in the experimental susceptibility curve is of the order of 2\%.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure10.eps} \caption{\label{fig10} (Color online) Calculated magnetization curves for fields applied along the x, y and z axes and at T = 4.2 K, for a Ce$^{3+}$ ion at the reference orthorhombic 0 site, under the influence of the CEF defined by the parameters in Table~\ref{table1}.} \end{figure} \subsection{Analysis of the magnetization processes} As shown in figure~\ref{fig9} the calculated and experimental thermal variations of the inverse susceptibility are in good agreement. The isotropy of the first-order magnetic susceptibility conceals the large magnetic anisotropy resulting from the orthorhombic CEF at the Ce$^{3+}$ site. This anisotropy is illustrated in Figure~\ref{fig10}, which shows the magnetization curves calculated, at 4.2 K for the reference Ce$^{3+}$ site, for fields applied along the \textit{x}, \textit{y} and \textit{z} axes. It appears that a significant magnetization is obtained only along the \textit{z} axis. In the specific crystal structure of the R$_3$Pt$_{23}$Si$_{11}$ compounds, at each R site this axis coincides with one of the fourfold axes of the cubic cell. In the case of a magnetic field applied along a fourfold axis of the cubic cell, only one third of the Ce$^{3+}$ ions substantially contribute to the magnetization.
Conversely, for a field applied along a threefold axis of the cubic cell, all Ce$^{3+}$ ions contribute to the magnetization via a $1/\sqrt{3}$ projection ratio.\\ Indeed, the magnetization measurements (see figure~\ref{fig11}) show that the effective easy axis is the threefold one, in agreement with the CEF-based calculations for the cubic system. Figure~\ref{fig11}-a compares the calculated magnetization at 4.2 K with the experimental curves measured with fields applied along the three main symmetry axes of the cube. The calculations reproduce the experimental magnetization curves qualitatively well and, as expected, identify the easy and hard magnetization axes, along the threefold and fourfold axes respectively. However, above 2 T, the calculated magnetization is systematically larger than the experimental one by 8 to 13\%. In the ferromagnetic phase (see figure~\ref{fig11}-b), the calculations confirm an easy magnetization axis along the threefold direction, in agreement with the experiments. At 100 mK, the experimental curves along the [110] and [001] axes show kinks at about 2.5 and 1 T respectively. Calculations qualitatively reproduce these discontinuities, which correspond to the end of the rotation of the moments towards the direction of the applied field. The calculated magnetization again exceeds the experimental values. \begin{figure} \includegraphics[width=\columnwidth]{figure11.eps} \caption{\label{fig11} (Color online) Calculated (lines) and experimental (dots) magnetization curves for fields applied along the threefold, twofold and fourfold axes of the cubic cell at (a) 4.2 K (b) 100 mK. The relative experimental error on the magnetization value is estimated to be of the order of 3\%. The calculations are performed with the set of CEF parameters given in Table~\ref{table1} and a molecular field constant \textit{n} = 0.5 T/$\mu_B$.} \end{figure} The model used accounts only for the CEF effects on the magnetic properties of the Ce ions.
In fact, other contributions to the magnetization exist, from the matrix and/or from the conduction electrons. They may become sizable with respect to that of the rare earth ions in this rather diluted system. Depending on their signs they can lead to an effective reduction of the magnetic signal. In most cases, these contributions can be well accounted for by the signal of the isostructural La compound. In the present case the value of the diamagnetic susceptibility in La$_3$Pt$_{23}$Si$_{11}$, $\chi\approx-2\times10^{-5}$ $\mu_B/T$, cannot alone explain the disagreement between calculations and experimental data. Another contribution, due to the local polarization of the conduction electrons by the rare earth ions, can lead to a reduction of the moment. As Ce$_3$Pt$_{23}$Si$_{11}$ shows no evidence of Kondo coupling, this local polarization is opposite to that of the rare earth. According to the calculated moment for the Ce ions at 100 mK, $\mu_{cal}$ = 1.57 $\mu_B$, and the value of the moment determined by neutron diffraction, $\mu$ = 1.2$\pm0.2~\mu_B$, this polarization would be of the order of 0.2-0.4 $\mu_B$ per Ce site. The existence of such a polarization could be confirmed experimentally by polarized neutron diffraction. It should be emphasized, however, that in rare earth compounds, higher-order effects, such as magnetostriction, may appear under large magnetic fields. These effects, which are not accounted for in the Hamiltonian of Eq.~\ref{hamilt}, may also explain the excess in the calculated magnetization.\\ \section{Conclusion} The number of magnetic excitations in the neutron spectroscopy spectra of the Ce, Pr and Nd compounds in the R$_3$Pt$_{23}$Si$_{11}$ series is consistent with the orthorhombic symmetry at the rare earth site. Unfortunately, for Pr- and Nd$_3$Pt$_{23}$Si$_{11}$ the number of CEF parameters in the Hamiltonian is too large to allow for their unambiguous determination.
This impedes further analysis of the magnetic properties. In the case of Ce$_3$Pt$_{23}$Si$_{11}$, the reduced number of CEF parameters (5) and the availability of comprehensive experimental data alleviate the difficulty. To determine the CEF at the cerium site, a protocol has been developed. First, a collection of sets of CEF parameters that comply with the neutron energy level scheme is obtained by a genetic algorithm method. Second, using a microscopic model adapted to the special crystal structure of the R$_3$Pt$_{23}$Si$_{11}$, the initial sets are self-consistently optimized in order to simultaneously describe the CEF energy scheme and the first-order magnetic susceptibility. Last, the transition probabilities between the ground and the excited CEF states, as derived from the intensities in the NS spectra, are confronted with those computed for the optimized sets of CEF parameters. This identifies a unique set of optimized parameters. Calculations show that this CEF is responsible for a large magnetic anisotropy of the Ce ions. Combining the response of the six Ce sites in the cubic crystallographic cell, the magnetization processes in the paramagnetic range are well reproduced. The large orthorhombic anisotropy is directly responsible for the easy threefold and hard fourfold axes of this cubic system. Beyond the Ce case, this should hold for all elements in the R$_3$Pt$_{23}$Si$_{11}$ series. The ferromagnetic phase of Ce$_3$Pt$_{23}$Si$_{11}$ is governed by the competition between the local orthorhombic anisotropy and the exchange couplings, resulting in a magnetization along a threefold axis. Calculations including an applied magnetic field allow the interpretation of the evolution of the magnetization at very low temperature: the compromise between the anisotropy and the total field results in progressive rotations of the magnetic moments.
Ce$_3$Pt$_{23}$Si$_{11}$ is thus an interesting model compound that helps in understanding the behavior of systems where conflicting local anisotropies are forced to cooperate within the overall higher symmetry of the crystal.\\ \begin{acknowledgments} The authors gratefully acknowledge R. Haettel from the Institut N\'eel and O. Meulien from the Institut Laue-Langevin for their technical assistance. \end{acknowledgments}
\section{Introduction}\label{ss:18O} An overwhelming majority of the matter within our solar system shares a common $^{18}$O/$^{16}$O isotopic signature. However, a collection of presolar grain samples features peculiar oxygen isotopic ratios. These outliers are found within the trove of presolar grains gathered over the years from primitive meteorites and interplanetary dust particles. This study is motivated by observations of presolar grains that nucleated in the atmospheres of distant, evolved stars before the formation of the Sun. These grains retain the isotopic ratios of the stellar surface they originated from. During the birth of the Sun, most presolar grains were destroyed as gas and dust collapsed to form the nascent star. As the solar system cooled and the Sun ascended the main sequence, the presolar grains that survived were incorporated into primitive meteorites. The study of their abnormal isotopic ratios provides crucial constraints for astrophysical models. This paper focuses on oxide grains referred to as $\it{Group~2}$ grains, approximately 15$\%$ of all presolar oxides~\cite{NIT08}. They exhibit a characteristic $^{18}$O/$^{16}$O abundance ratio $\leq$1.5$\times$10$^{-3}$~\cite{PAL11}, reflecting a substantial $^{18}$O depletion~\cite{HOP10} with respect to the solar value, (2.09$^{+0.13}_{-0.12}$)$\times$10$^{-3}$~\cite{SCO06}. It has been hypothesized that asymptotic giant branch (AGB) stars are an $^{18}$O depletion site~\cite{NIT08}. During the AGB stage---the final phase of nucleosynthesis during the evolution of a 0.8$-$8.0 M$_{\odot}$ star~\cite{LAT99,NOW01}---a star undergoes substantial nucleosynthesis and mass loss. Peeling away the surface layers enveloping an AGB star reveals numerous burning sites and a complex interplay between these regions. A stellar core, composed of electron degenerate carbon and oxygen, is surrounded by alternately burning helium and hydrogen shells.
During periods of helium-burning, referred to as thermal pulses, thermonuclear runaway (TNR) occurs and drives convection between the two burning sites. When the TNR subsides, the star compensates for this period of activity by expanding and cooling. The hydrogen burning shell is quenched during expansion, and the convective envelope dredges the products of nucleosynthesis to the surface of the star (third dredge-up). After this dredge-up event, the star contracts, and the hydrogen shell reignites. This interplay between the helium and hydrogen shells repeats episodically~\cite{LAT99}. During the AGB phase, $^{18}$O depletion may occur due to $\it{cool~bottom~processing}$ (CBP)~\cite{NIT08}. This $\it{extra~mixing}$ process was proposed by~\citet{WAS95a} to account for isotopic anomalies, including $^{18}$O depletion, in presolar grains. During CBP, material circulates between the convective envelope and the radiative zone that separates the envelope from the hydrogen burning shell. The base of the convective envelope remains cool, thus distinguishing this process from $\it{hot~bottom~burning}$ that occurs in 4$-$7 M$_{\odot}$ AGB stars~\cite{WAS95a,NIT08}. As the circulated matter approaches the hydrogen shell, it reaches temperatures high enough to destroy $^{18}$O via hydrogen burning. The processed material is then recirculated into the convective envelope and transported to the stellar surface. Grains nucleate in the stellar atmosphere depleted in $^{18}$O due to processes that occurred deep within the star. Then, powerful stellar winds inject these grains into the interstellar medium. The mechanism driving CBP is not understood, and several explanations have been proposed, including magnetic buoyancy~\cite{BUS10}, gravity waves~\cite{DEN03}, shear instability~\cite{ZAH92,MAE98}, meridional circulation~\cite{SWE79}, and convective overshoot~\cite{HER97}. 
The observed $^{18}$O depletion in some presolar oxide grains and AGB stellar atmospheres helped motivate the introduction of CBP into AGB stellar models. These models provided some insight into the class of AGB stars that might experience CBP and the temperature of the stellar plasma at the site of this extra mixing. According to~\citet{PAL11}, $^{18}$O depletion by CBP may occur in 1.5$-$1.7~M$_{\odot}$ AGB stars. Cool bottom processing models predict stellar plasma temperature regimes within low-mass AGB stars that allow $^{18}$O~+~$p$ reactions to occur. According to~\citet{NOL03}, a temperature in excess of 39.8$-$44.7~MK (depending on the evolution of the core, hydrogen burning shell, and convective envelope boundaries) is sufficient for $^{18}$O depletion during CBP in a 1.5~M$_{\odot}$ model star. The depletion of $^{18}$O in a stellar plasma at low temperatures is driven by $^{18}$O($p$,$\alpha$)$^{15}$N and, to a lesser extent, $^{18}$O($p$,$\gamma$)$^{19}$F. The former reaction was recently studied indirectly by \citet{LAC10}. In the present work, we report a direct, low-energy measurement of the $^{18}$O($p$,$\gamma$)$^{19}$F reaction. The goal of this measurement was to improve our knowledge of levels in the $^{19}$F compound nucleus that are relevant to nuclear astrophysics.
\begin{figure}[!bp] \begin{center} \thinlines \setlength{\unitlength}{1.1mm} \begin{picture}(70,100) \put(15,31.26){\line(0,1){58.73}} \multiput(15,22)(0,4){3}{\line(0,1){2}} \put(20,22){\line(0,1){68}} \put(65,22){\line(0,1){68}} \put(20,5){\line(0,1){16}} \put(65,5){\line(0,1){16}} \put(18,21){\line(1,0){4}} \put(18,22){\line(1,0){4}} \put(63,21){\line(1,0){4}} \put(63,22){\line(1,0){4}} \put(0,27){\large $\mathsf{^{18}O+\it{p}}$} \put(39,0){\large $\mathsf{^{19}F}$} \put(4.6,96){\large $\mathsf{E^{lab}_{R}}$} \put(3.6,91){$\mathsf{[keV]}$} \put(22,96){\large $\mathsf{E_{x}}$} \put(20.5,91){$\mathsf{[keV]}$} \put(58.25,96){\large $\mathsf{2J^{\pi}}$} \put(20,5){\line(1,0){45}} \put(22,5.25){\footnotesize $\mathsf{0}$} \put(58.25,5.25){\footnotesize $\mathsf{1^{+}}$} \put(20,12.33){\line(1,0){45}} \put(22,12.58){\footnotesize $\mathsf{110}$} \put(58.25,12.58){\footnotesize $\mathsf{1^{-}}$} \put(20,18.13){\line(1,0){45}} \put(22,18.38){\footnotesize $\mathsf{197}$} \put(58.25,18.38){\footnotesize $\mathsf{5^{+}}$} \put(20,25){\line(1,0){1.75}} \put(22,24.25){\footnotesize $\mathsf{7900}$} \put(27.75,25){\line(1,0){37.25}} \put(20,26.93){\line(1,0){9.85}} \put(35.5,26.93){\line(1,0){18}} \put(59.75,26.93){\line(1,0){5.25}} \put(30,25.6){{\footnotesize $\mathsf{7929}$}} \put(53.75,25.6){\footnotesize $\mathsf{7^{+},9}$} \put(20,27.46){\line(1,0){45}} \put(22,27.71){\footnotesize $\mathsf{7937}$} \put(57.1,27.71){\footnotesize $\mathsf{11^{+}}$} \put(0,31.27){\line(1,0){15}} \put(0,31.51){\footnotesize $\mathsf{7994}$} \put(7,32.0){\footnotesize $\mathsf{22}$} \put(10, 32.6){\line(1,0){0.5}} \put(10.75, 32.6){\line(1,0){0.5}} \put(11.5, 32.6){\line(1,0){0.5}} \put(12.25, 32.6){\line(1,0){0.5}} \put(13, 32.6){\line(1,0){0.5}} \put(15, 32.6){\vector(1, 0){0}} \put(20,32.6){\line(1,0){45}} \put(22,32.85){\footnotesize $\mathsf{8014}$} \put(58.25,32.85){\footnotesize $\mathsf{5^{+}}$} \put(7,36.66){\footnotesize $\mathsf{95}$} \put(10, 37.26){\line(1,0){0.5}} \put(10.75, 
37.26){\line(1,0){0.5}} \put(11.5, 37.26){\line(1,0){0.5}} \put(12.25, 37.26){\line(1,0){0.5}} \put(13, 37.26){\line(1,0){0.5}} \put(15, 37.26){\vector(1, 0){0}} \put(20,37.26){\line(1,0){45}} \put(22,37.51){\footnotesize $\mathsf{8084}$} \put(58.25,37.51){\footnotesize $\mathsf{3^{+}}$} \put(5.8,40.26){\footnotesize $\mathsf{151}$} \put(10, 40.86){\line(1,0){4}} \put(15, 40.86){\vector(1, 0){0}} \put(20,40.86){\line(1,0){1.75}} \put(22,40.11){\footnotesize $\mathsf{8138}$} \put(58.25,40.11){\footnotesize $\mathsf{1^{+}}$} \put(27.75,40.86){\line(1,0){30.25}} \put(61,40.86){\line(1,0){4}} \put(20,42.33){\line(1,0){45}} \put(22,42.58){\footnotesize $\mathsf{8160}$} \put(20,44.93){\line(1,0){45}} \put(22,45.18){\footnotesize $\mathsf{8199}$} \put(57.3,45.18){\footnotesize $\mathsf{(5^{+})}$} \put(5.8,44.33){\footnotesize $\mathsf{217}$} \put(10, 44.93){\line(1,0){4}} \put(15, 44.93){\vector(1, 0){0}} \put(20,44.93){\line(1,0){1.75}} \put(5.8,48){\footnotesize $\mathsf{275}$} \put(10, 48.6){\line(1,0){4}} \put(15, 48.6){\vector(1, 0){0}} \put(20,48.6){\line(1,0){1.75}} \put(22,48.1){\footnotesize $\mathsf{8254}$} \put(53.7,48.1){\footnotesize $\mathsf{(5,7)^{-}}$} \put(27.75,48.6){\line(1,0){25.5}} \put(61,48.6){\line(1,0){4}} \put(20,50.86){\line(1,0){1.75}} \put(22,50.25){\footnotesize $\mathsf{8288}$} \put(27.75,50.86){\line(1,0){29}} \put(61,50.86){\line(1,0){4}} \put(56.9,50.25){\footnotesize $\mathsf{13^{-}}$} \put(5.8,51.73){\footnotesize $\mathsf{334}$} \put(10, 52.33){\line(1,0){4}} \put(15, 52.33){\vector(1, 0){0}} \put(20,52.33){\line(1,0){45}} \put(22,52.58){\footnotesize $\mathsf{8310}$} \put(58.25,52.58){\footnotesize $\mathsf{5^{+}}$} \put(20,56.33){\line(1,0){45}} \put(22,56.58){\footnotesize $\mathsf{8370}$} \put(53.75,56.58){\footnotesize $\mathsf{5^{+},7}$} \put(5.8,69){\footnotesize $\mathsf{623}$} \put(10, 70.6){\line(1,0){4}} \put(15, 70.6){\vector(1, 0){0}} \put(20,70.6){\line(1,0){45}} \put(22,68.5){\footnotesize $\mathsf{8584}$} 
\put(58.25,68.5){\footnotesize $\mathsf{5^{+}}$} \put(5.8,71){\footnotesize $\mathsf{632}$} \put(10, 71.13){\line(1,0){4}} \put(15, 71.13){\vector(1, 0){0}} \put(20,71.13){\line(1,0){1.75}} \put(22,71){\footnotesize $\mathsf{8592}$} \put(58.25,71){\footnotesize $\mathsf{3^{-}}$} \put(27.75,71.13){\line(1,0){30.25}} \put(61,71.13){\line(1,0){4}} \put(20,73.6){\line(1,0){1.75}} \put(22,73){\footnotesize $\mathsf{8629}$} \put(27.75,73.6){\line(1,0){30.25}} \put(61,73.6){\line(1,0){4}} \put(58.25,73){\footnotesize $\mathsf{7^{-}}$} \put(5.8,74.4){\footnotesize $\mathsf{693}$} \put(10, 75){\line(1,0){4}} \put(15, 75){\vector(1, 0){0}} \put(20,75){\line(1,0){45}} \put(22,75.25){\footnotesize $\mathsf{8650}$} \put(58.25,75.25){\footnotesize $\mathsf{1^{+}}$} \put(5.8,83.93){\footnotesize $\mathsf{844}$} \put(10, 84.53){\line(1,0){4}} \put(15, 84.53){\vector(1, 0){0}} \put(20,84.53){\line(1,0){45}} \put(22,84.78){\footnotesize $\mathsf{8793}$} \put(58.25,84.78){\footnotesize $\mathsf{1^{+}}$} \end{picture} \caption{\label{fig:lev}Truncated $^{19}$F level diagram and $^{18}$O~+~$p$ resonances~\cite{SYM78,LOR79,WIE80,VOG90,TIL95,BEC95} through E$^{\mathrm{lab}}_{\mathrm{R}}$~=~844~keV. The E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance corresponds to the E$_{\mathrm{x}}$~=~8084~keV excited state. For an explanation of our reported spin and parity for this level, see Sec.~\ref{ss:18O}. Dashed arrows indicate unobserved resonances and solid arrows indicate observed resonances with known $\gamma$-ray decays. The proton threshold, Q$_{p\gamma}$~=~7994~keV, was taken from Ref.~\cite{AUD11}.} \end{center} \end{figure} Within the CBP temperature regime, the $^{18}$O($p$,$\gamma$)$^{19}$F reaction rate may be influenced by an unobserved, low-energy resonance at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~$\pm$~3~keV~\cite{TIL95,AUD11} (see Fig.~\ref{fig:lev}). 
In the competing $^{18}$O($p$,$\alpha$)$^{15}$N reaction, a strength of $\omega\gamma_{p\alpha}$~=~(1.6~$\pm$~0.5)$\times$10$^{-7}$~eV at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV was directly measured by~\citet{LOR79}. The resonance strength, which is proportional to the energy-integrated cross section of a narrow resonance, is defined as~\cite{ILI07}: \begin{equation}\label{wgpx} \omega\gamma~=~\frac{(2\mathrm{J}~+~1)}{(2\mathrm{J}_{p}~+~1)(2\mathrm{J}_{\mathrm{t}}~+~1)}\frac{\Gamma_{p}\Gamma_{\mathrm{x}}}{(\Gamma_{p}~+~\Gamma_{\alpha}~+~\Gamma_{\gamma})} \end{equation} where J is the compound nucleus spin, J$_{p}$~=~1/2 is the proton spin, J$_{\mathrm{t}}$~=~0 is the $^{18}$O target nucleus spin, $\Gamma_{p}$ is the proton partial width, $\Gamma_{\gamma}$ is the $\gamma$-ray partial width, $\Gamma_{\alpha}$ is the $\alpha$-particle partial width, $\Gamma_{\mathrm{x}}~=~\Gamma_{\gamma}$ for photon emission, and $\Gamma_{\mathrm{x}}~=~\Gamma_{\alpha}$ for $\alpha$-particle emission. In the $^{18}$O($p$,$\gamma$)$^{19}$F reaction, the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance has never been observed, and none of the $\gamma$-ray decays from the resonance level are known. Upper limits were placed on the resonance strength in the past, first by~\citet{WIE80} with $\omega\gamma_{p\gamma}$~$\leq$~5$\times$10$^{-8}$~eV and then by~\citet{VOG90} with $\omega\gamma_{p\gamma}$~$\leq$~4$\times$10$^{-8}$~eV. With a proton separation energy of Q$_{p\gamma}$~=~7993.5994~$\pm$~0.0011~keV~\cite{AUD11}, the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance corresponds to the E$_{\mathrm{x}}$~=~8084~$\pm$~3~keV~\cite{TIL95} level in the $^{19}$F nucleus. The previous experimental information regarding the structure of this compound nucleus level is summarized in Table~\ref{table:levparam}. From the $^{18}$O($^{3}$He,$d$)$^{19}$F experiment performed by~\citet{SCH70}, it is clear that the proton angular momentum transfer for the 8084~keV level is restricted to ${\ell_{p}}$~=~(2,~3).
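For orientation, Eq.~(\ref{wgpx}) is simple to evaluate once the spins and partial widths are specified. The following Python sketch performs this bookkeeping for the $^{18}$O~+~$p$ system (J$_{p}$~=~1/2, J$_{\mathrm{t}}$~=~0); the partial widths passed to it are hypothetical placeholders, not measured values.

```python
from fractions import Fraction

def spin_factor(J, J_p=Fraction(1, 2), J_t=0):
    """Statistical factor (2J+1)/((2J_p+1)(2J_t+1)) from Eq. (1)."""
    return Fraction(2 * J + 1) / Fraction((2 * J_p + 1) * (2 * J_t + 1))

def resonance_strength(J, Gamma_p, Gamma_x, Gamma_alpha, Gamma_gamma):
    """omega*gamma from Eq. (1), in the same units as the widths."""
    total = Gamma_p + Gamma_alpha + Gamma_gamma
    return float(spin_factor(J)) * Gamma_p * Gamma_x / total

# For the 8084 keV level, J = 3/2, so the statistical factor is 2:
omega = spin_factor(Fraction(3, 2))

# Placeholder widths (eV), for illustration only:
wg_pa = resonance_strength(Fraction(3, 2), 1e-7, 3e3, 3e3, 0.6)
```

Note that when $\Gamma_{\alpha}$ dominates the total width, as in the placeholder example above, $\omega\gamma_{p\alpha}$ reduces to approximately $\omega\Gamma_{p}$, which is why the measured ($p$,$\alpha$) strength constrains the proton partial width.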
In~\citet{LAC08}, the Trojan Horse Method was used to investigate this level with the $^{2}$H($^{18}$O,$\alpha$$^{15}$N)n reaction. They determined that J~=~3/2 and $\ell_{\alpha}$~=~1. Consequently, based on the angular momentum coupling rules, the spin-parity and orbital angular momentum amount to J$^{\pi}$~=~(3/2)$^{+}$ and $\ell_{p}$~=~2, respectively. Note that an incorrect spin and parity of J$^{\pi}$~=~(3/2)$^{-}$ was assumed previously for this level~\cite{ILI10b}. \begin{table}[!bp] \begin{center} \caption{\label{table:levparam}E$_{\mathrm{x}}$~=~8084~keV level parameters.} \begin{tabular}{cccccc} \hline\hline \multicolumn{2}{c}{Parameter} & \multicolumn{2}{c}{Value (eV)} & \multicolumn{2}{c}{Reference} \\ \hline \multicolumn{2}{c}{$\omega\gamma_{p\alpha}$} & \multicolumn{2}{c}{(1.6~$\pm$~0.5)$\times$10$^{-7}$} & \multicolumn{2}{c}{~\cite{LOR79}} \\ \multicolumn{2}{c}{$\omega\gamma_{p\gamma,\mathrm{UL}}$} & \multicolumn{2}{c}{$\leq$ 4.0$\times$10$^{-8}$} & \multicolumn{2}{c}{~\cite{VOG90}} \\ \multicolumn{2}{c}{$\omega\gamma_{p\gamma,\mathrm{LL}}$} & \multicolumn{2}{c}{$\geq$ 1.3$\times$10$^{-11}$} & \multicolumn{2}{c}{Sec.~\ref{ss:rr}} \\ \multicolumn{2}{c}{$\Gamma_{\gamma}$} & \multicolumn{2}{c}{(6.0~$\pm$~2.5)$\times$10$^{-1}$~\footnote{Private communication from K. Allen quoted in~\citet{WIE80}.}} & \multicolumn{2}{c}{~\cite{WIE80}} \\ \multicolumn{2}{c}{$\Gamma$} & \multicolumn{2}{c}{$\leq$ 3.0$\times$10$^{3}$~\footnote{Total width determined from slope of front edge of thick-target yield curve.}} & \multicolumn{2}{c}{~\cite{LOR79}} \\ \hline\hline \end{tabular} \end{center} \end{table} Here we report on a new search for the E$^{\mathrm{lab}}_{\mathrm{R}}$ = 95 keV resonance in $^{18}$O($p$,$\gamma$)$^{19}$F with significantly improved sensitivity compared to previous studies \cite{WIE80,VOG90}. 
In the following we will discuss the experimental setup, including a brief outline of the accelerators (Sec.~\ref{ss:lena}) and detector system (Sec.~\ref{ss:detect}). Our oxygen target fabrication process is described in Sec.~\ref{ss:trgt}. The methodology we employed to characterize our detector efficiencies is discussed in Sec.~\ref{ss:detect}, and Sec.~\ref{ss:daq} outlines some of the features of our data acquisition electronics. Results for resonant and non-resonant proton capture on $^{18}$O are presented in Sec.~\ref{ss:res} and Sec.~\ref{ss:nonres}, respectively. An improved $^{18}$O($p$,$\gamma$)$^{19}$F reaction rate is presented in Sec.~\ref{ss:rr}. Concluding remarks are given in Sec.~\ref{ss:conc}. \section{Experiment} \subsection{Accelerators}\label{ss:lena} The Laboratory for Experimental Nuclear Astrophysics (LENA) is dedicated to the measurement of low-energy nuclear reactions relevant to stellar nucleosynthesis. The cross sections measured at LENA lie within an energy regime that is susceptible to Coulomb suppression, and the LENA facility features key tools that increase the detection sensitivity. The LENA facility is a two-accelerator laboratory and consists of a high-current, low-energy Electron Cyclotron Resonance Ion Source (ECRIS) and an upgraded HVEC 1~MV JN Van de Graaff. The LENA ECRIS produces average beam currents of I$_{p}$~=~1.5~mA on target within a bombarding energy range of 50~keV~$\leq$~E$^{\mathrm{lab}}_{p}$~$\leq$~215~keV. This high current allows for a substantial increase in low-energy nuclear reaction yields. The LENA 1~MV JN Van de Graaff is capable of producing H$^{+}$ beam currents of I$_{p}$~$\leq$~250~$\mu$A at the target. Typical beam energy resolution achieved with the JN ranges between 1$-$2~keV. In this study, it was primarily used to test our targets by measuring excitation functions ($\gamma$-ray yield vs.
bombarding energy) of the well-known $^{18}$O($p$,$\gamma$)$^{19}$F resonance at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~150.82~$\pm$~0.09~keV~\cite{BEC95}. These excitation functions provided information on target thickness and stability during the experiment. See Sec.~\ref{ss:trgt} for more information on our $^{18}$O targets. A detailed description of the LENA accelerator facility can be found in~\citet{CES10}. \subsection{Targets}\label{ss:trgt} The anodic oxidation of tantalum targets was first outlined by~\citet{AMS64}. The anodization process allows target thicknesses to be consistently reproduced, and it also allows the production of robust oxygen targets that remain stable when exposed to intense H$^{+}$ beam. A new anodization chamber was designed and assembled for our measurement according to the description in Ref.~\cite{AMS78b}. During fabrication, all tantalum backings were etched in an acid bath in order to reduce beam induced backgrounds by removing surface contaminants ($^{11}$B and $^{19}$F). Subsequently, all etched tantalum backings were resistively heated. These outgassed target backings were anodized at 64~V using 99.3$\%$ enriched $^{18}$O water to produce Ta$_{2}$$^{18}$O$_{5}$ targets with an expected target thickness of $\sim$18~keV at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~151~keV. Excitation functions were collected during this experiment at the well-known E$^{\mathrm{lab}}_{\mathrm{R}}$~=~151~keV resonance in the $^{18}$O($p$,$\gamma$)$^{19}$F reaction with the JN Van de Graaff. 
Target thicknesses near 100~keV were estimated with the relationship~\cite{ILI07}: \begin{equation}\label{trgtthcknss} \frac{\Delta \mathrm{E}(151)}{\epsilon_{\mathrm{eff}}(151)}~=~\frac{\Delta \mathrm{E}(\mathrm{E}_{p})}{\epsilon_{\mathrm{eff}}(\mathrm{E}_{p})} \end{equation} where $\Delta$E is the measured target thickness in energy units, and $\epsilon_{\mathrm{eff}}$ is the effective stopping power in the center-of-mass system, derived from Bragg's rule~\cite{FOX05,ILI07}: \begin{equation}\label{eps_eff} \epsilon_{\mathrm{eff}}~=~\frac{\mathrm{M}_{\mathrm{^{18}O}}}{\mathrm{M}_{\mathit{p}}+\mathrm{M}_{\mathrm{^{18}O}}}\Big(\frac{\mathrm{N}_{\mathrm{O}}}{\mathrm{N}_{\mathrm{^{18}O}}}\epsilon_{\mathrm{^{18}O}}~+~\frac{\mathrm{N}_{\mathrm{Ta}}}{\mathrm{N}_{\mathrm{^{18}O}}}\epsilon_{\mathrm{Ta}}\Big) \end{equation} where M$_{p}$ and M$_{\mathrm{^{18}O}}$ are the masses of the proton and the $^{18}$O atom, $\epsilon_{\mathrm{^{18}O}}$ and $\epsilon_{\mathrm{Ta}}$ are the laboratory stopping powers of protons in $^{18}$O and Ta (calculated with $\mathtt{SRIM}$~\cite{ZIE04}), and N$_{i}$ are number densities (N$_{\mathrm{O}}$~=~N$_{\mathrm{^{16}O}}$~+~N$_{\mathrm{^{17}O}}$~+~N$_{\mathrm{^{18}O}}$). We found that our targets could withstand proton accumulations of Q$_{p}$ $>$ 45 C without significant degradation at I$^{\mathrm{ECRIS}}_{p}$~=~0.5$-$1.0~mA. \subsection{Detectors}\label{ss:detect} Almost all $^{18}$O($p$,$\gamma$)$^{19}$F resonances are known to decay via emission of multiple, coincident $\gamma$-rays. Therefore, the simultaneous detection of two or more photons provides an opportunity to increase the signal-to-noise ratio significantly. To accomplish this signal optimization, a $\gamma$-ray spectrometer consisting of several detectors was used. The LENA $\gamma\gamma$-coincidence detector system was assembled with a 135$\%$ HPGe detector at 0 degrees to the beam and in close running geometry with the target chamber.
The distance between the HPGe detector and the target midpoint was 1.1 cm \cite{LON06}. The target chamber and HPGe detector were surrounded by a 16-segment NaI(Tl) annulus. Plastic scintillator paddles covered the two detectors on five sides and suppressed cosmic-ray muon events. In a two-dimensional NaI(Tl) vs. HPGe energy spectrum, appropriate gates were set during off-line data sorting. The low-energy thresholds set on these gates removed events caused by environmental background ($^{40}$K, $^{208}$Tl), and the high-energy thresholds excluded events with a total energy that exceeded the excitation energy of the decaying $^{19}$F compound nucleus. Most of the latter events were presumably caused by cosmic ray interactions. The LENA $\gamma\gamma$-coincidence detector was described in detail by~\citet{LON06}. The internal geometry of the HPGe detector is well known and was measured previously~\cite{CAR10} using computed tomography (CT). Based on the known dimensions, relative peak efficiencies were simulated using G$\textsc{eant}$4~\cite{AGO03,ALL06} by assuming mono-energetic $\gamma$-rays (E$_{\gamma}$~=~0.05$-$15.0 MeV) emitted from an extended beamspot (1.2 cm diameter) on the target. The sum-peak method~\cite{KIM03,ILI07} was used to obtain absolute peak and total efficiencies for $^{60}$Co (E$_{\gamma}$~=~1173~keV, 1332~keV). The absolute peak efficiency of the LENA HPGe detector was determined to be $\eta^{\mathrm{Ge,P}}_{1332}$~=~0.040~$\pm$~0.003, and the absolute total efficiency was $\eta^{\mathrm{Ge,T}}_{1253}$~=~0.188~$\pm$~0.012. Corrections for the finite beamspot size were estimated with G$\textsc{eant}$4~\cite{LON06}. Peak efficiencies at lower energies were measured with $^{56}$Co and at higher energies (up to 10 MeV) with nuclear reactions. 
These reactions included $^{14}$N($p$,$\gamma$)$^{15}$O at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~278~keV~\cite{MAR11} and $^{27}$Al($p$,$\gamma$)$^{28}$Si at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~406~keV~\cite{END90,POW98}. All efficiency data were corrected for coincidence summing effects using the matrix method outlined by~\citet{SEM90}. The separate data sets were bootstrapped together across the full energy range to determine the experimental peak efficiencies of the detector. Between measured energies, the peak efficiencies were found by interpolation using G$\textsc{eant}$4. Total efficiencies were also simulated in G$\textsc{eant}$4 and then normalized to the measured $^{60}$Co sum-peak total efficiency. The estimation of the $\gamma$-ray coincidence efficiencies needed for the data analysis is detailed in Sec.~\ref{ss:res}. \subsection{Data Acquisition and Procedure}\label{ss:daq} As discussed in Sec.~\ref{ss:detect}, the LENA spectrometer is composed of three different detectors: a HPGe detector, a 16-segment NaI(Tl) annulus, and plastic scintillator plates. Timing and energy signals were processed using standard NIM and VME modules. The HPGe signals served as master triggers for the electronics. Coincidence and anti-coincidence events were sorted using the acquisition software $\mathtt{JAM}$~\cite{SWA02}; this system was convenient for applying the desired software energy and timing gates. Additional details concerning the electronics setup for the LENA $\gamma\gamma$-coincidence detector can be found in~\citet{LON06}. Initial excitation functions were produced with the JN Van de Graaff at the well-known E$^{\mathrm{lab}}_{\mathrm{R}}$~=~151~keV resonance for each target. In order to search for the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance, a charge of 80~C was accumulated on-resonance at a bombarding energy of E$^{\mathrm{lab}}_{p}$~=~105~keV, and 40~C were accumulated off-resonance at E$^{\mathrm{lab}}_{p}$~=~85~keV. 
The average beam current on target amounted to I$_{p}$~=~754~$\mu$A. \section{Results} \subsection{Resonant Data Analysis}\label{ss:res} \begin{figure}[!bp] \begin{center} \includegraphics[scale=0.45]{coin} \caption{\label{fig:coin}(Color online) HPGe singles spectrum (blue) and $\gamma\gamma$-coincidence spectrum (red). Reduction in background amounts to a factor of 100. The prominent background peak at 511~keV arises from the annihilation of pair-produced positrons. Dashed lines indicate the anticipated locations of the 1~$\rightarrow$~0 (110~keV) and 2~$\rightarrow$~0 (197~keV) transitions in $^{19}$F. The spectra shown represent on-resonance data, with a total charge accumulation of 80 C at E$^{\mathrm{lab}}_{p}$ = 105~keV.} \end{center} \end{figure} Gates were constructed in $\mathtt{JAM}$ to produce $\gamma\gamma$-coincidence spectra, uncover the $\gamma$-ray decay fingerprint of the resonance, and reduce background contributions. Sample spectra are displayed in Fig.~\ref{fig:coin}, showing the on-resonance (ungated) singles HPGe detector spectrum in blue and the coincidence gated spectrum in red. For the latter spectrum, only those events in the HPGe detector that are coincident with events of energy 4.25~MeV~$\leq$~E$^{\mathrm{NaI(Tl)}}_{\gamma}$~$\leq$~10.0~MeV in the NaI(Tl) counter are accepted. It can be seen in Fig.~\ref{fig:coin} that this condition suppresses the environmental background by two orders of magnitude. Most $^{19}$F levels decay by $\gamma\gamma$-cascades through the first (110~keV) excited state, and all $^{19}$F levels (with known decay schemes) de-excite through the second (197~keV) excited state. In Fig.~\ref{fig:coin}, vertical dashed lines indicate anticipated locations of the $\gamma$-rays originating from the de-excitation of the first excited state (110~keV) and second excited state (197~keV). Note that because of their low energy, the 110~keV photons would be significantly attenuated.
No peaks were observed for these two secondary decays. In fact, although we achieved a considerably improved detection sensitivity compared to previous studies (by about half an order of magnitude; see below), no $\gamma$-rays from the decay of the E$^{\mathrm{lab}}_{\mathrm{R}}$ = 95 keV resonance were observed in any of our singles or coincidence spectra. An improved upper limit on the resonance strength of the unobserved E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance was determined relative to the strength of the well-known resonance at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~151~keV. The resonance strength is given by~\cite{GOV59,ILI07}: \begin{equation} \label{eqn:resstreng} \omega\gamma~=~\frac{2 \epsilon_{\mathrm{eff}}}{\lambda^{2}}\frac{\mathcal{N}_{\mathrm{max}}}{\mathcal{N}_{p} \mathcal{B} \eta \mathcal{W}} \end{equation} where $\epsilon_{\mathrm{eff}}$ is the effective stopping power at the resonance energy as defined in Eq.~(\ref{eps_eff}), $\lambda$ is the de Broglie wavelength, where~\cite{ILI07}: \begin{equation} \label{eqn:debroglie} \frac{\lambda^{2}}{2}~=~\Bigg(\frac{\mathrm{M}_{p}+\mathrm{M}_{\mathrm{t}}}{\mathrm{M}_{p} \mathrm{M}_{\mathrm{t}}}\Bigg) \frac{4.125\times 10^{-18}}{\mathrm{E}_{\mathrm{R}}^{\mathrm{c.m.}}}~(\mathrm{cm^{2}}), \end{equation} $\mathcal{N}_{\mathrm{max}}$ is the total number of detected $\gamma$-rays if the target is considered infinitely thick, $\mathcal{N}_{p}$ is the number of incident protons: \begin{equation} \label{eqn:particles} \mathcal{N}_{p}~=~\frac{\mathrm{Q}}{e} \end{equation} where Q is the accumulated charge on target and $e$ is the unit charge in Coulomb, $\mathcal{B}$ is the branching ratio, $\eta$ is the efficiency of the detector, and $\mathcal{W}$ is the angular correlation. 
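A minimal numerical sketch of the kinematic pieces of Eqs.~(\ref{eqn:resstreng})$-$(\ref{eqn:particles}) is given below; the stopping power, yield, and detection inputs to the strength formula are left as hypothetical placeholders.

```python
E_CHARGE = 1.602e-19  # elementary charge in Coulombs, for Eq. (5)

def de_broglie_sq_half(E_cm_eV, M_p=1.0078, M_t=18.0):
    """Eq. (4): lambda^2 / 2 in cm^2, with masses in u and E_R^c.m. in eV."""
    return (M_p + M_t) / (M_p * M_t) * 4.125e-18 / E_cm_eV

def thick_target_strength(eps_eff, N_max, Q, B, eta, W, lam2_half):
    """Eq. (3): omega*gamma = (2 eps_eff / lambda^2) * N_max / (N_p B eta W)."""
    N_p = Q / E_CHARGE  # Eq. (5): number of incident protons
    return (eps_eff / lam2_half) * N_max / (N_p * B * eta * W)

# c.m. resonance energy corresponding to a 95 keV lab bombarding energy:
E_cm = 95.0e3 * 18.0 / (1.0078 + 18.0)  # ~90 keV in eV
lam2_half = de_broglie_sq_half(E_cm)    # of order 5e-23 cm^2
```

The strong energy dependence of $\lambda^{2}/2$ is one reason the strength ratio in the next equation is taken between the two resonances rather than absolute quantities.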
For the ratio of resonance strengths we obtain~\cite{ILI07}: \begin{equation} \label{eqn:relres} \frac{\omega\gamma_{95}}{\omega\gamma_{151}}~=~ \Bigg(\frac{\epsilon_{\mathrm{eff}}\mathcal{N}_{\mathrm{max}}}{\lambda^{2}\mathcal{N}_{p}\mathcal{B}\eta\mathcal{W}}\Bigg)_{95}\times\Bigg(\frac{\epsilon_{\mathrm{eff}}\mathcal{N}_{\mathrm{max}}}{\lambda^{2}\mathcal{N}_{p}\mathcal{B}\eta\mathcal{W}}\Bigg)_{151}^{-1}. \end{equation} In this equation, $\omega\gamma_{151}$~=~(9.7~$\pm$~0.5)$\times$10$^{-4}$~eV~\cite{ILI07} from the weighted mean of the resonance strengths reported in Refs.~\cite{WIE80,BEC82,VOG90}. All $^{19}$F levels decay through the second excited state (2~$\rightarrow$~0), and we chose not to exclude the possibility that the 8084 keV level decays with a substantial primary ground state branch. Therefore, we used the following expression to estimate an upper limit for the number of $^{19}$F compound nuclei produced~\cite{ROW02,ILI07}: \begin{equation} \label{eqn:ulresfrac} \Bigg(\frac{\mathcal{N}_{\mathrm{max}}}{\mathcal{B}\eta\mathcal{W}}\Bigg)_{95}~=~\frac{\mathcal{N}_{\mathrm{R}0}}{\eta_{\mathrm{R}0}^{\mathrm{\mathrm{Ge,P}}}}~+~\frac{\mathcal{N}_{20}}{\eta_{20}^{\mathrm{Ge,P}}f_{\gamma}} \end{equation} where $\mathcal{N}_{\mathrm{R}0}$ is the upper limit on the intensity of the ground state transition in the singles HPGe spectrum, $\mathcal{N}_{20}$ is the upper limit on the intensity of the decay from the $^{19}$F second excited state to the ground state (2~$\rightarrow$~0; see Fig.~\ref{fig:lev}) in the coincidence-gated HPGe spectrum, $\eta_{\mathrm{R}0}^{\mathrm{Ge,P}}$ is the HPGe peak efficiency for the ground state transition, $\eta_{20}^{\mathrm{Ge,P}}$ is the HPGe peak efficiency of the 2~$\rightarrow$~0 transition, and $f_{\gamma}$ is a $\gamma\gamma$-coincidence correction factor that depends on the $\gamma$-ray decay scheme and the coincidence gate selected. 
To calculate the correction factor $f_\gamma$, a G$\textsc{eant}$4 simulation was conducted that, for a given energy level, used the known emission probabilities to predict the total number of detected $\gamma$-rays arising from the 2~$\rightarrow$~0 transition for a variety of coincidence gates. Our new upper limit was extracted by requiring a rectangular energy gate of 4.25~MeV~$\leq$~E$^{\mathrm{NaI(Tl)}}_{\gamma}$~$\leq$~10.0~MeV in the two-dimensional NaI(Tl) vs. HPGe coincidence energy spectrum. The simulated coincidence histograms could then be sorted with the same energy gates and conditions that were used to analyze the experimental data. The correction factor, $f_\gamma$, was calculated by solving the following equation~\cite{ILI07}: \begin{equation} \label{eqn:postpro} \mathcal{N}'_{20}~=~\mathcal{N}_{\mathrm{R}}\eta^{\mathrm{Ge,P}}_{20}\mathit{f}_{\gamma} \end{equation} where $\mathcal{N}'_{20}$ is the simulated intensity of the 197~keV peak in the coincidence spectrum, $\mathcal{N}_{\mathrm{R}}$ is the total number of simulated reactions, and $\eta^{\mathrm{Ge,P}}_{20}$ is the 2~$\rightarrow$~0 singles peak efficiency (Sec.~\ref{ss:detect}). This procedure was tested at the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~151~keV resonance, where the simulated intensities agreed with the experimental values within uncertainty (4$\%$ for the 2$\rightarrow$0 decay in a rectangular coincidence spectrum). \begin{figure}[!bp] \begin{center} \includegraphics[scale=0.45]{fg_mean} \caption{\label{fig:fg}{(Color online) $\gamma\gamma$-coincidence correction factors ($f_{\gamma}$) for all $^{19}$F energy levels with J~$< $~9/2 (open blue circles) and E$_{\mathrm{x}}$~$\geq$~5500~keV; levels with high ground state decay modes were excluded. Additionally, correction factors for levels with J$^{\pi}$~=~(3/2)$^{+}$ are indicated by solid red circles. These correction factors were generated by applying a 4.25~MeV~$\leq$~E$^{\mathrm{NaI(Tl)}}_{\gamma}$~$\leq$~10.0 MeV gate. 
The mean $f_{\gamma}$ value for the entire distribution, $f_{\gamma}$~=~0.17~$\pm$~0.09, is represented in this figure by the solid blue line. The two dashed blue lines represent the uncertainty (in this instance, the root-mean-square).}} \end{center} \end{figure} Because the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance has no known decay scheme or branching ratios, we first calculated values of $f_\gamma$, according to the procedure described above, for all bound and unbound $^{19}$F levels with known decay schemes~\cite{TIL95}. We then adopted a reasonable average from the ensemble of values. The statistical analysis was restricted to levels with J~$<$~9/2 (open blue circles in Fig.~\ref{fig:fg}) and E$_{\mathrm{x}}$~$\geq$~5500~keV. The constraint on the spin was chosen to associate the calculated mean $f_{\gamma}$ value with low-spin states, while the energy threshold was set so that the $f_{\gamma}$ values were associated with complex $\gamma$-ray decay routes to the $^{19}$F second excited state. As an additional constraint, no level with a ground state branching ratio that exceeded the total probability of emission to the $^{19}$F second excited state was included in this analysis. This final constraint was added because the ground state decay mode is already included in the strength upper limit calculation---see Eq.~(\ref{eqn:ulresfrac}). Results of this analysis for a 4.25~MeV~$\leq$~E$^{\mathrm{NaI(Tl)}}_{\gamma}$~$\leq$~10.0 MeV gate are shown in Fig.~\ref{fig:fg}. An average value of $f_{\gamma}$~=~0.17~$\pm$~0.09 represents a reasonable $\gamma\gamma$-coincidence correction factor estimate for the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance. The quoted uncertainty of the mean correction factor is the root-mean-square of the distribution. In Fig.~\ref{fig:fg}, the J$^{\pi}$~=~(3/2)$^{+}$ levels (levels with the same spin and parity as the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance level) are indicated with solid red circles.
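The adopted correction factor and its uncertainty follow from a simple mean and root-mean-square spread of the ensemble. A minimal sketch of this averaging step is shown below; the $f_{\gamma}$ values listed are illustrative placeholders, not the actual G$\textsc{eant}$4 results.

```python
import math

def mean_and_rms(values):
    """Mean of the ensemble and the root-mean-square spread about the mean."""
    n = len(values)
    mean = sum(values) / n
    rms = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return mean, rms

# Hypothetical ensemble of gamma-gamma correction factors, for illustration:
f_gamma_values = [0.05, 0.10, 0.12, 0.15, 0.18, 0.22, 0.25, 0.30]
f_gamma_mean, f_gamma_rms = mean_and_rms(f_gamma_values)
```

Using the RMS of the distribution, rather than the standard error of the mean, reflects the fact that the spread of $f_{\gamma}$ over plausible decay schemes, not sampling noise, dominates the uncertainty.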
The peak intensity upper limit for the 2$\rightarrow$0 transition (197~keV) was obtained from the HPGe coincidence spectrum using the Bayesian statistical approach outlined in~\citet{ZHU07}. According to this method, conditional, non-informative posterior probability density functions were generated for each energy region, and peak intensity upper limits were calculated. The E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance strength upper limit was then determined by generating normally distributed probability density functions for all of the other quantities that entered into the resonance strength calculation---Eqs.~(\ref{eqn:debroglie}$-$\ref{eqn:ulresfrac}). All probability density functions were then sampled iteratively, producing a resonance strength probability density function that was integrated to the 90$\%$ confidence level. A new resonance strength upper limit of $\omega\gamma_{95}$~$\leq$~7.8$\times$10$^{-9}$~eV (90$\%$~CL) was obtained for the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance in the $^{18}$O($p$,$\gamma$)$^{19}$F reaction. This new upper limit improves upon the upper limit presented in~\citet{VOG90} by about a factor of 5. \subsection{Non-resonant Data Analysis}\label{ss:nonres} Data collected during this study at E$^{\mathrm{lab}}_{p}$~=~105~keV are also important for obtaining improved estimates for the direct capture cross section of $^{18}$O($p$,$\gamma$)$^{19}$F. During direct capture, the incident proton is captured directly into a final bound state with the emission of a $\gamma$-ray; this process occurs without the formation of a compound nucleus~\cite{ILI04}. The experimental $^{18}$O($p$,$\gamma$)$^{19}$F direct capture cross section at E$^{\mathrm{lab}}_{p}$~=~1850~keV was measured previously by~\citet{WIE80}.
While the $f_{\gamma}$ estimation explained in Sec.~\ref{ss:res} relied upon a statistical argument, no such assumption was necessary to determine the direct capture correction factor, $f_{\gamma}^{\mathrm{DC}}$. To calculate the required direct capture branching ratios, we first extrapolated the experimental cross section to E$^{\mathrm{lab}}_{p}$~=~105~keV for all direct capture transitions observed in Ref.~\cite{WIE80}. To this end, two different direct capture codes were employed. The code $\mathtt{TEDCA}$~\cite{KRA92} was used to compute the direct capture cross section for a zero scattering potential. The bound state and scattering state potential parameters used were adopted from~\citet{ILI04}. The code $\mathtt{DIRCAP}$~\cite{ILI04} was utilized to perform the same calculation with a hard-sphere scattering potential. The calculated cross sections (from E$^{\mathrm{c.m.}}_{p}$~=~0.03$-$1.99~MeV) were normalized to the measured direct capture cross sections at E$^{\mathrm{lab}}_{p}$~=~1850~keV~\cite{WIE80}. The direct capture branching ratios derived from this procedure were required for the calculation of coincidence efficiency correction factors, $f_{\gamma}^{\mathrm{DC}}$, using the code G$\textsc{eant}$4 (Sec.~\ref{ss:res}). \begin{figure}[!bp] \begin{center} \includegraphics[scale=0.45]{Stotal} \caption{\label{fig:totals}(Color online) Total direct capture S-factor for $^{18}$O($p$,$\gamma$)$^{19}$F. The solid lines represent direct capture model calculations: (black)~\citet{WIE80}; (blue) using code $\mathtt{DIRCAP}$; (red) using code $\mathtt{TEDCA}$; the latter two results are normalized to the measured direct capture cross section at E$^{\mathrm{lab}}_{p}$~=~1850~keV~\cite{WIE80}. 
Our measured upper limits (90$\%$, 95$\%$, 99$\%$ confidence levels) at E$^{\mathrm{c.m.}}_{p}$~=~99.4~keV are displayed as three black arrows.} \end{center} \end{figure} No direct capture transitions were observed in any of the singles or coincidence spectra accumulated at E$^{\mathrm{lab}}_{p}$~=~105~keV. An experimental upper limit on the total direct capture cross section was obtained from~\cite{ILI07}: \begin{equation} \label{eqn:dcyield} \mathcal{Y}~=~\frac{\mathcal{N}_{20}}{\mathcal{N}_{p}f^{\mathrm{DC}}_{\gamma}}~=~\frac{1}{\epsilon_{\mathrm{eff}}}\int^{\mathrm{E}_{p}^{\mathrm{c.m.}}}_{\mathrm{E}_{p}^{\mathrm{c.m.}}-\Delta \mathrm{E}} \sigma^{\mathrm{DC}}(\mathrm{E})~\mathrm{dE} \end{equation} where $\mathcal{Y}$ is the measured yield upper limit, $\mathcal{N}_{20}$ is the intensity upper limit of the 2~$\rightarrow$~0 transition from the Bayesian treatment discussed in Sec.~\ref{ss:res}, and $\sigma^{\mathrm{DC}}$(E) is the total direct capture cross section. This expression assumes that the effective stopping power is approximately constant over the target thickness, as was the case in the present experiment. The cross section can be rewritten in the form~\cite{ILI07}: \begin{equation} \label{eqn:sigmas} \sigma(\mathrm{E})~=~\frac{\mathcal{S}(\mathrm{E})}{\mathrm{E}}e^{-2\pi\eta} \end{equation} where $\mathcal{S}$(E) is the astrophysical S-factor, E is the center-of-mass energy, and $e^{-2\pi\eta}$ is the Gamow factor. By assuming a nearly constant S-factor over the target thickness, Eqs.~(\ref{eqn:dcyield}$-$\ref{eqn:sigmas}) can be integrated numerically to extract $\sigma$(E) or $\mathcal{S}$(E) from the measured yield. This set of calculations was performed for the same $\gamma\gamma$-coincidence gate used in Sec.~\ref{ss:res}. 
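For a thin target and a slowly varying S-factor, Eqs.~(\ref{eqn:dcyield})$-$(\ref{eqn:sigmas}) reduce to a pointwise conversion between $\mathcal{S}$(E) and $\sigma$(E). The sketch below assumes the common parameterization $2\pi\eta~\approx~0.9895\,\mathrm{Z}_{1}\mathrm{Z}_{2}\sqrt{\mu/\mathrm{E}}$ (E in MeV, reduced mass $\mu$ in u); it is an illustration of the conversion, not the analysis code used here.

```python
import math

def gamow_2pi_eta(E_MeV, Z1=1, Z2=8, mu_u=1.0078 * 18.0 / (1.0078 + 18.0)):
    """Sommerfeld exponent 2*pi*eta for p + 18O (assumed constant 0.9895)."""
    return 0.9895 * Z1 * Z2 * math.sqrt(mu_u / E_MeV)

def cross_section_b(S_keV_b, E_MeV):
    """Eq. (11): sigma(E) = S(E)/E * exp(-2*pi*eta), returned in barns."""
    return (S_keV_b * 1e-3 / E_MeV) * math.exp(-gamow_2pi_eta(E_MeV))

# An S-factor of 8.1 keV b at E_cm = 99.4 keV maps to a picobarn-scale
# cross section, because the Gamow factor here is of order 1e-11:
sigma_b = cross_section_b(8.1, 0.0994)
```

The steep energy dependence of the Gamow factor is why the S-factor, rather than the cross section itself, is the natural quantity to extrapolate and tabulate at these energies.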
For the total experimental astrophysical S-factor, we found an upper limit of $\mathcal{S}^{\mathrm{DC}}_{\mathrm{total}}~\leq$~8.1~keV~b (90$\%$~CL), corresponding to a direct capture cross section upper limit of $\sigma^{\mathrm{DC}}_{\mathrm{total}}~\leq$~1.8~pb (90$\%$~CL). Note that these values are nearly independent (within 2$\%$) of the direct capture code used to calculate the branching ratios at E$^{\mathrm{lab}}_{p}$~=~105~keV. Our experimental total S-factor upper limit (90$\%$ CL) at E$^{\mathrm{lab}}_{p}$~=~105~keV is shown in Fig.~\ref{fig:totals}, along with the values corresponding to the 95$\%$ and 99$\%$ confidence levels. It is interesting to compare our measured upper limit values with direct capture model calculations. The black solid curve represents the total S-factor reported by~\citet{WIE80}, while the red and blue solid lines were calculated in the present work using the codes $\mathtt{TEDCA}$~\cite{KRA92} and $\mathtt{DIRCAP}$~\cite{ILI04}, respectively. The latter two were normalized to the previously measured direct capture cross section at E$^{\mathrm{lab}}_{p}$~=~1850~keV~\cite{WIE80}. At E$^{\mathrm{lab}}_{p}$~=~105~keV, our measured upper limits are smaller than the prediction of~\citet{WIE80} by about a factor of 2. The $\mathtt{DIRCAP}$ S-factor (blue line) was only marginally consistent with our experimental upper limit (90$\%$ CL) while the extrapolation derived from the code $\mathtt{TEDCA}$ (red line) fell within the 90$\%$ confidence level. \subsection{Reaction Rates}\label{ss:rr} Thermonuclear reaction rates for $^{18}$O($p$,$\gamma$)$^{19}$F were calculated using the Monte Carlo method of~\citet{LON10b}. In the Monte Carlo calculation, using the code $\mathtt{RatesMC}$, we adopted the same nuclear physics input as in Ref.~\cite{ILI10c}, except for the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance strength, the total direct capture S-factor, the Q-value~\cite{AUD11}, and the resonance energies. 
Based on~\citet{ILI04}, we adopted the $\mathtt{TEDCA}$ extrapolation of the total direct capture S-factor, normalized at E$^{\mathrm{lab}}_{p}$~=~1850~keV~\cite{WIE80}. For bombarding energies below E$^{\mathrm{c.m.}}_{p}$~=~2.0~MeV, our adopted total S-factor can be expanded around $\mathrm{E}$~=~0, with the result: \begin{align} \label{eqn:sfit} &\mathcal{S}(\mathrm{E})~\approx~\mathcal{S}(0)~+~\mathcal{S}'(0)\mathrm{E}~+~\frac{1}{2}\mathcal{S}''(0)\mathrm{E}^{2} \\ \nonumber &=~7.06~+~2.98\times10^{-3}\mathrm{E}~-~2.60\times10^{-7}\mathrm{E}^{2}~(\mathrm{keV~b}), \end{align} where E is the center-of-mass energy. Note that at low energies, our new direct capture S-factor is significantly smaller than the result reported in~\citet{WIE80}. Also note that the $\mathcal{S'}$(0) coefficient presented in~\citet{WIE80} was reported incorrectly and should in fact be $\mathcal{S'}$(0)~=~$-0.34$$\times$10$^{-3}$ b~\cite{WIE80b}. This correction is already applied to the black line displayed in Fig.~\ref{fig:totals}. In the present work, we reported on an improved upper limit of the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance strength, $\omega\gamma$~$\leq$~7.8$\times$10$^{-9}$~eV (90$\%$ CL). For this particular $^{19}$F level, we may also estimate a lower limit for the resonance strength based on the available resonance properties (see Table~\ref{table:levparam}). The ratio of resonance strengths in the ($p$,$\gamma$) and ($p$,$\alpha$) channels, according to Eq.~(\ref{wgpx}), is given by: \begin{equation} \label{eqn:pgparatio} \frac{\omega\gamma_{p\gamma}}{\omega\gamma_{p\alpha}}~=~\frac{\Gamma_{\gamma}}{\Gamma_{\alpha}}. \end{equation} The ($p$,$\alpha$) strength was measured by~\citet{LOR79}, with the result $\omega\gamma_{p\alpha}$~=~(1.6~$\pm$~0.5)$\times$10$^{-7}$~eV.
An upper limit for the total width of $\Gamma$~$<$~3$\times$10$^{3}$~eV was obtained from the slope of the low-energy edge of the thick-target yield curve~\cite{LOR79}, implying an upper limit of $\Gamma_{\alpha}$~$<$~3$\times$10$^{3}$~eV for the $\alpha$-particle partial width. Finally, a value of $\Gamma_{\gamma}$~=~(6.0~$\pm$~2.5)$\times$10$^{-1}$~eV was reported for the $\gamma$-ray partial width in~\citet{WIE80}. With these input values and their associated uncertainties, we found, from Eq.~(\ref{eqn:pgparatio}), a lower limit on the ($p$,$\gamma$) strength of $\omega\gamma_{p\gamma}$~$\geq$~1.3$\times$10$^{-11}$~eV. \begin{figure}[!bp] \begin{center} \begin{picture}(70,260) \put(-78.5,175){\includegraphics[scale=0.21]{RatesMC_0}} \put(5.5,238){(a)} \put(40.75,175){\includegraphics[scale=0.21]{RatesMC_1}} \put(124.75,238){(b)} \put(-93.5,91){\includegraphics[scale=0.21]{RatesMCAxis}} \put(-78.5,94.5){\includegraphics[scale=0.21]{RatesMC_2}} \put(5.5,157.5){(c)} \put(40.75,94.5){\includegraphics[scale=0.21]{RatesMC_3}} \put(124.75,157.5){(d)} \put(-32,0){\includegraphics[scale=0.21]{RatesMCAxis}} \put(-78.5,14){\includegraphics[scale=0.21]{RatesMC_4}} \put(5.5,77){(e)} \put(40.75,14){\includegraphics[scale=0.21]{RatesMC_5}} \put(124.75,77){(f)} \end{picture} \caption{\label{fig:ratesmc}(Color online)(Left) Reaction rate probability density functions (red) for $^{18}$O($p$,$\gamma$)$^{19}$F at 0.02~GK, 0.2~GK, and 2.0~GK generated by the $\mathtt{RatesMC}$ Monte Carlo code. The lognormal approximations are overlaid in black. (Right) The corresponding cumulative probability functions used to define the low, median and high rates as 0.16, 0.5, and 0.84 quantiles, respectively. 
} \end{center} \end{figure} Since we were able to calculate both an upper and a lower limit on the strength, we estimated a recommended value and a factor uncertainty using the following equations~\cite{LON10b}: \begin{align} \label{eqn:expect} &\omega\gamma = \sqrt{\omega\gamma_{\mathrm{LL}}\times\omega\gamma_{\mathrm{UL}}} = 3.2\times10^{-10}~\mathrm{eV},\\ &\mathrm{f.u.} = \sqrt{\frac{\omega\gamma_{\mathrm{UL}}}{\omega\gamma_{\mathrm{LL}}}} = 25. \end{align} In our Monte Carlo procedure, the rate contribution of the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance strength was found by randomly sampling a lognormal distribution constructed from the mean value and factor uncertainty. Associating resonance strengths with lognormal distributions is discussed in~\citet{LON10b}. \begin{table}[!tp] \begin{center} \begin{tabular}{cccccccc} \hline\hline \multicolumn{2}{c}{T (GK)} & \multicolumn{2}{c}{Low rate} & \multicolumn{2}{c}{Median rate} & \multicolumn{2}{c}{High rate} \\ \hline \multicolumn{2}{c}{0.010}&\multicolumn{2}{c}{2.923$\times$10$^{-24}$}&\multicolumn{2}{c}{4.967$\times$10$^{-24}$}&\multicolumn{2}{c}{8.785$\times$10$^{-24}$}\\ \multicolumn{2}{c}{0.011}&\multicolumn{2}{c}{2.581$\times$10$^{-23}$}&\multicolumn{2}{c}{4.222$\times$10$^{-23}$}&\multicolumn{2}{c}{7.344$\times$10$^{-23}$}\\ \multicolumn{2}{c}{0.012}&\multicolumn{2}{c}{1.719$\times$10$^{-22}$}&\multicolumn{2}{c}{2.676$\times$10$^{-22}$}&\multicolumn{2}{c}{4.468$\times$10$^{-22}$}\\ \multicolumn{2}{c}{0.013}&\multicolumn{2}{c}{9.299$\times$10$^{-22}$}&\multicolumn{2}{c}{1.370$\times$10$^{-21}$}&\multicolumn{2}{c}{2.181$\times$10$^{-21}$}\\ \multicolumn{2}{c}{0.014}&\multicolumn{2}{c}{4.223$\times$10$^{-21}$}&\multicolumn{2}{c}{5.973$\times$10$^{-21}$}&\multicolumn{2}{c}{8.985$\times$10$^{-21}$}\\ \multicolumn{2}{c}{0.015}&\multicolumn{2}{c}{1.709$\times$10$^{-20}$}&\multicolumn{2}{c}{2.321$\times$10$^{-20}$}&\multicolumn{2}{c}{3.309$\times$10$^{-20}$}\\ 
\multicolumn{2}{c}{0.016}&\multicolumn{2}{c}{6.180$\times$10$^{-20}$}&\multicolumn{2}{c}{8.151$\times$10$^{-20}$}&\multicolumn{2}{c}{1.113$\times$10$^{-19}$}\\ \multicolumn{2}{c}{0.018}&\multicolumn{2}{c}{6.283$\times$10$^{-19}$}&\multicolumn{2}{c}{8.027$\times$10$^{-19}$}&\multicolumn{2}{c}{1.041$\times$10$^{-18}$}\\ \multicolumn{2}{c}{0.020}&\multicolumn{2}{c}{4.820$\times$10$^{-18}$}&\multicolumn{2}{c}{6.045$\times$10$^{-18}$}&\multicolumn{2}{c}{7.721$\times$10$^{-18}$}\\ \multicolumn{2}{c}{0.025}&\multicolumn{2}{c}{3.235$\times$10$^{-16}$}&\multicolumn{2}{c}{4.041$\times$10$^{-16}$}&\multicolumn{2}{c}{5.168$\times$10$^{-16}$}\\ \multicolumn{2}{c}{0.030}&\multicolumn{2}{c}{8.728$\times$10$^{-15}$}&\multicolumn{2}{c}{1.093$\times$10$^{-14}$}&\multicolumn{2}{c}{1.402$\times$10$^{-14}$}\\ \multicolumn{2}{c}{0.040}&\multicolumn{2}{c}{1.206$\times$10$^{-12}$}&\multicolumn{2}{c}{1.555$\times$10$^{-12}$}&\multicolumn{2}{c}{2.340$\times$10$^{-12}$}\\ \multicolumn{2}{c}{0.050}&\multicolumn{2}{c}{9.349$\times$10$^{-11}$}&\multicolumn{2}{c}{1.227$\times$10$^{-10}$}&\multicolumn{2}{c}{2.056$\times$10$^{-10}$}\\ \multicolumn{2}{c}{0.060}&\multicolumn{2}{c}{9.553$\times$10$^{-9}$}&\multicolumn{2}{c}{1.274$\times$10$^{-8}$}&\multicolumn{2}{c}{1.818$\times$10$^{-8}$}\\ \multicolumn{2}{c}{0.070}&\multicolumn{2}{c}{3.573$\times$10$^{-7}$}&\multicolumn{2}{c}{4.764$\times$10$^{-7}$}&\multicolumn{2}{c}{6.503$\times$10$^{-7}$}\\ \multicolumn{2}{c}{0.080}&\multicolumn{2}{c}{5.495$\times$10$^{-6}$}&\multicolumn{2}{c}{7.304$\times$10$^{-6}$}&\multicolumn{2}{c}{9.891$\times$10$^{-6}$}\\ \multicolumn{2}{c}{0.090}&\multicolumn{2}{c}{4.553$\times$10$^{-5}$}&\multicolumn{2}{c}{6.049$\times$10$^{-5}$}&\multicolumn{2}{c}{8.137$\times$10$^{-5}$}\\ \multicolumn{2}{c}{0.100}&\multicolumn{2}{c}{2.437$\times$10$^{-4}$}&\multicolumn{2}{c}{3.239$\times$10$^{-4}$}&\multicolumn{2}{c}{4.347$\times$10$^{-4}$}\\ 
\multicolumn{2}{c}{0.110}&\multicolumn{2}{c}{9.505$\times$10$^{-4}$}&\multicolumn{2}{c}{1.263$\times$10$^{-3}$}&\multicolumn{2}{c}{1.692$\times$10$^{-3}$}\\ \multicolumn{2}{c}{0.120}&\multicolumn{2}{c}{2.921$\times$10$^{-3}$}&\multicolumn{2}{c}{3.880$\times$10$^{-3}$}&\multicolumn{2}{c}{5.191$\times$10$^{-3}$}\\ \multicolumn{2}{c}{0.130}&\multicolumn{2}{c}{7.484$\times$10$^{-3}$}&\multicolumn{2}{c}{9.946$\times$10$^{-3}$}&\multicolumn{2}{c}{1.330$\times$10$^{-2}$}\\ \multicolumn{2}{c}{0.140}&\multicolumn{2}{c}{1.663$\times$10$^{-2}$}&\multicolumn{2}{c}{2.211$\times$10$^{-2}$}&\multicolumn{2}{c}{2.957$\times$10$^{-2}$}\\ \multicolumn{2}{c}{0.150}&\multicolumn{2}{c}{3.300$\times$10$^{-2}$}&\multicolumn{2}{c}{4.386$\times$10$^{-2}$}&\multicolumn{2}{c}{5.867$\times$10$^{-2}$}\\ \multicolumn{2}{c}{0.160}&\multicolumn{2}{c}{5.970$\times$10$^{-2}$}&\multicolumn{2}{c}{7.942$\times$10$^{-2}$}&\multicolumn{2}{c}{1.062$\times$10$^{-1}$}\\ \multicolumn{2}{c}{0.180}&\multicolumn{2}{c}{1.581$\times$10$^{-1}$}&\multicolumn{2}{c}{2.102$\times$10$^{-1}$}&\multicolumn{2}{c}{2.814$\times$10$^{-1}$}\\ \multicolumn{2}{c}{0.200}&\multicolumn{2}{c}{3.388$\times$10$^{-1}$}&\multicolumn{2}{c}{4.507$\times$10$^{-1}$}&\multicolumn{2}{c}{6.033$\times$10$^{-1}$}\\ \multicolumn{2}{c}{0.250}&\multicolumn{2}{c}{1.274$\times$10$^{0}$}&\multicolumn{2}{c}{1.694$\times$10$^{0}$}&\multicolumn{2}{c}{2.266$\times$10$^{0}$}\\ \multicolumn{2}{c}{0.300}&\multicolumn{2}{c}{2.932$\times$10$^{0}$}&\multicolumn{2}{c}{3.903$\times$10$^{0}$}&\multicolumn{2}{c}{5.212$\times$10$^{0}$}\\ \multicolumn{2}{c}{0.350}&\multicolumn{2}{c}{5.153$\times$10$^{0}$}&\multicolumn{2}{c}{6.853$\times$10$^{0}$}&\multicolumn{2}{c}{9.134$\times$10$^{0}$}\\ \multicolumn{2}{c}{0.400}&\multicolumn{2}{c}{7.695$\times$10$^{0}$}&\multicolumn{2}{c}{1.020$\times$10$^{1}$}&\multicolumn{2}{c}{1.360$\times$10$^{1}$}\\ 
\multicolumn{2}{c}{0.450}&\multicolumn{2}{c}{1.037$\times$10$^{1}$}&\multicolumn{2}{c}{1.370$\times$10$^{1}$}&\multicolumn{2}{c}{1.819$\times$10$^{1}$}\\ \multicolumn{2}{c}{0.500}&\multicolumn{2}{c}{1.303$\times$10$^{1}$}&\multicolumn{2}{c}{1.715$\times$10$^{1}$}&\multicolumn{2}{c}{2.269$\times$10$^{1}$}\\ \multicolumn{2}{c}{0.600}&\multicolumn{2}{c}{1.841$\times$10$^{1}$}&\multicolumn{2}{c}{2.387$\times$10$^{1}$}&\multicolumn{2}{c}{3.123$\times$10$^{1}$}\\ \multicolumn{2}{c}{0.700}&\multicolumn{2}{c}{2.464$\times$10$^{1}$}&\multicolumn{2}{c}{3.121$\times$10$^{1}$}&\multicolumn{2}{c}{3.988$\times$10$^{1}$}\\ \multicolumn{2}{c}{0.800}&\multicolumn{2}{c}{3.356$\times$10$^{1}$}&\multicolumn{2}{c}{4.137$\times$10$^{1}$}&\multicolumn{2}{c}{5.129$\times$10$^{1}$}\\ \multicolumn{2}{c}{0.900}&\multicolumn{2}{c}{4.759$\times$10$^{1}$}&\multicolumn{2}{c}{5.709$\times$10$^{1}$}&\multicolumn{2}{c}{6.938$\times$10$^{1}$}\\ \multicolumn{2}{c}{1.000}&\multicolumn{2}{c}{6.916$\times$10$^{1}$}&\multicolumn{2}{c}{8.167$\times$10$^{1}$}&\multicolumn{2}{c}{9.819$\times$10$^{1}$}\\ \multicolumn{2}{c}{1.250}&\multicolumn{2}{c}{1.719$\times$10$^{2}$}&\multicolumn{2}{c}{2.000$\times$10$^{2}$}&\multicolumn{2}{c}{2.380$\times$10$^{2}$}\\ \multicolumn{2}{c}{1.500}&\multicolumn{2}{c}{3.630$\times$10$^{2}$}&\multicolumn{2}{c}{4.213$\times$10$^{2}$}&\multicolumn{2}{c}{4.975$\times$10$^{2}$}\\ \multicolumn{2}{c}{1.750}&\multicolumn{2}{c}{6.403$\times$10$^{2}$}&\multicolumn{2}{c}{7.430$\times$10$^{2}$}&\multicolumn{2}{c}{8.726$\times$10$^{2}$}\\ \multicolumn{2}{c}{2.000}&\multicolumn{2}{c}{9.921$\times$10$^{2}$}&\multicolumn{2}{c}{1.149$\times$10$^{3}$}&\multicolumn{2}{c}{1.342$\times$10$^{3}$}\\ \multicolumn{2}{c}{2.500}&\multicolumn{2}{c}{1.842$\times$10$^{3}$}&\multicolumn{2}{c}{2.129$\times$10$^{3}$}&\multicolumn{2}{c}{2.487$\times$10$^{3}$}\\ 
\multicolumn{2}{c}{3.000}&\multicolumn{2}{c}{2.798$\times$10$^{3}$}&\multicolumn{2}{c}{3.230$\times$10$^{3}$}&\multicolumn{2}{c}{3.769$\times$10$^{3}$}\\ \multicolumn{2}{c}{3.500}&\multicolumn{2}{c}{3.777$\times$10$^{3}$}&\multicolumn{2}{c}{4.369$\times$10$^{3}$}&\multicolumn{2}{c}{5.130$\times$10$^{3}$}\\ \multicolumn{2}{c}{4.000}&\multicolumn{2}{c}{4.758$\times$10$^{3}$}&\multicolumn{2}{c}{5.507$\times$10$^{3}$}&\multicolumn{2}{c}{6.507$\times$10$^{3}$}\\ \multicolumn{2}{c}{5.000}&\multicolumn{2}{c}{6.600$\times$10$^{3}$}&\multicolumn{2}{c}{7.729$\times$10$^{3}$}&\multicolumn{2}{c}{9.353$\times$10$^{3}$}\\ \multicolumn{2}{c}{6.000}&\multicolumn{2}{c}{(8.727$\times$10$^{3}$)}&\multicolumn{2}{c}{(1.056$\times$10$^{4}$)}&\multicolumn{2}{c}{(1.277$\times$10$^{4}$)}\\ \multicolumn{2}{c}{7.000}&\multicolumn{2}{c}{(1.167$\times$10$^{4}$)}&\multicolumn{2}{c}{(1.411$\times$10$^{4}$)}&\multicolumn{2}{c}{(1.707$\times$10$^{4}$)}\\ \multicolumn{2}{c}{8.000}&\multicolumn{2}{c}{(1.452$\times$10$^{4}$)}&\multicolumn{2}{c}{(1.757$\times$10$^{4}$)}&\multicolumn{2}{c}{(2.125$\times$10$^{4}$)}\\ \multicolumn{2}{c}{9.000}&\multicolumn{2}{c}{(1.718$\times$10$^{4}$)}&\multicolumn{2}{c}{(2.078$\times$10$^{4}$)}&\multicolumn{2}{c}{(2.514$\times$10$^{4}$)}\\ \multicolumn{2}{c}{10.000}&\multicolumn{2}{c}{(2.032$\times$10$^{4}$)}&\multicolumn{2}{c}{(2.458$\times$10$^{4}$)}&\multicolumn{2}{c}{(2.974$\times$10$^{4}$)}\\ \hline\hline \end{tabular} \caption{\label{table:reactionrate}Experimental Monte Carlo-based $^{18}$O($p$,$\gamma$)$^{19}$F reaction rates (in units of cm$^{3}~$mol$^{-1}$~s$^{-1}$). For T~$\geq$~5.5~GK, rates were matched to Hauser-Feshbach results~\cite{GOR08}.} \end{center} \end{table} Our new low, median, and high $^{18}$O($p$,$\gamma$)$^{19}$F reaction rates (corresponding to 0.16, 0.50 and 0.84 quantiles, respectively, of the cumulative reaction rate distribution) are tabulated in Tab.~\ref{table:reactionrate} over a stellar plasma temperature range of 0.01$-$10.00~GK. 
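The recommended strength and factor uncertainty of Eq.~(\ref{eqn:expect}), and the lognormal sampling of this resonance used in the Monte Carlo rate calculation, can be sketched as follows (our own illustration; the published rates were computed with $\mathtt{RatesMC}$):

```python
import math
import random

# Limits on the E_R(lab) = 95 keV resonance strength (eV), from the text:
wg_ll = 1.3e-11   # lower limit, from the (p,gamma)/(p,alpha) strength ratio
wg_ul = 7.8e-9    # upper limit (90% CL), this work

# Recommended value and factor uncertainty as geometric mean and geometric spread:
wg_rec = math.sqrt(wg_ll * wg_ul)   # ~3.2e-10 eV
f_u = math.sqrt(wg_ul / wg_ll)      # ~25

# A lognormal distribution reproducing these has median wg_rec and sigma = ln(f.u.);
# each Monte Carlo iteration draws one strength sample from it.
mu, sigma = math.log(wg_rec), math.log(f_u)
sample = math.exp(random.gauss(mu, sigma))
```

A factor uncertainty of this size means the sampled strength spans roughly the interval [wg_rec/f.u., wg_rec*f.u.] at the one-sigma level, which is why the low and high rates in Tab.~\ref{table:reactionrate} differ noticeably at temperatures where this resonance contributes.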
Reaction rate probability density functions at a few sample temperatures (0.02, 0.2, 2.0~GK) are displayed as red histograms in Fig.~\ref{fig:ratesmc} (left panel), with the lognormal approximations shown as black solid lines. On the right, the corresponding cumulative probability functions are shown with the dashed lines indicating the low, median, and high rates. It can be seen that a lognormal function approximates the actual Monte Carlo distribution rather well. Figure~\ref{fig:buc12vili10pg} compares our new reaction rate with the one published by~\citet{ILI10c}. The new (solid lines) and previous (dotted lines) high and low rates are normalized to the previous recommended rate~\cite{ILI10c}. Note that the previous rates contained two small mistakes: (i) an erroneous assignment of J$^{\pi}$~=~(3/2)$^{-}$, and (ii) the incorrectly reported value of $\mathcal{S'}$(0)~=~0.34$\times$10$^{-3}$~b from Ref.~\cite{WIE80}. The dashed vertical line at 44.7~MK indicates the highest temperature threshold at which, according to~\citet{NOL03}, CBP can occur. The vertical dashed line at 5.5~GK represents the stellar temperature beyond which the rates must be found with the aid of Hauser-Feshbach calculations. This threshold was computed based on the methodology outlined by~\citet{NEW08}. \begin{figure}[!bp] \begin{center} \includegraphics[scale=0.45]{buc12vili10pg} \caption{\label{fig:buc12vili10pg}Present (solid lines) and previous~\cite{ILI10c} (dotted lines) high and low reaction rates, normalized to the recommended previous rates. The vertical dashed line at 44.7~MK represents the highest lower limit on CBP temperatures within a low-mass AGB star according to Ref.~\cite{NOL03}. 
The vertical dashed line at 5.5~GK represents the temperature at which the experimental rates need to be extrapolated with the aid of Hauser-Feshbach results~\cite{GOR08}.} \end{center} \end{figure} \begin{figure}[!bp] \begin{center} \includegraphics[scale=0.45]{Fraction} \caption{\label{fig:fraction}(Color online) Fractional contributions at low temperatures (T$_{9}$~$<$~0.2) to the $^{18}$O($p$,$\gamma$)$^{19}$F reaction rate are shown. The blue line represents the direct capture contribution, the red line is the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV contribution, and the dashed, dashed-dotted and dashed-double-dotted lines are the contributions from the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~22~keV, E$^{\mathrm{lab}}_{\mathrm{R}}$~=~151~keV and E$^{\mathrm{lab}}_{\mathrm{R}}$~=~844~keV resonances, respectively.} \end{center} \end{figure} The difference, in Fig.~\ref{fig:buc12vili10pg}, between new and previous rates at temperatures below 50~MK can be explained by our lower estimates both for the contributions from direct capture and the resonance at E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV. Since the new rates are smaller at CBP threshold temperatures compared to the previous results, it is even less likely that the $^{18}$O($p$,$\gamma$)$^{19}$F reaction contributes significantly to the depletion of $^{18}$O observed in stellar atmospheres and presolar grain samples. The slight increase in the rate above 44.7~MK is dependent upon the calculated E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV strength upper limit and cannot account for observed $^{18}$O depletions. 
The difference at temperatures in excess of 5~GK is solely caused by our treatment of the direct capture contribution: the S-factor expansion was artificially cut off at E$^{\mathrm{c.m.}}_{p}$~=~1.0~MeV in previous work~\cite{WIE80,ILI10c}, while in the present work the S-factor is calculated up to energies of E$^{\mathrm{c.m.}}_{p}$~=~2.0~MeV (Fig.~\ref{fig:totals}), resulting in a much higher cutoff value and a significantly increased direct capture contribution. \begin{figure}[!tp] \begin{center} \includegraphics[scale=0.45]{buc12vili10pa} \caption{\label{fig:buc12vili10pa}Ratios between ($p$,$\alpha$) low and high reaction rates from~\citet{ILI10c} and the present ($p$,$\gamma$) high and low reaction rates, respectively (solid black lines). The corresponding ratios based solely on the previous rates~\cite{ILI10c} are shown as dotted lines. The vertical dashed line at 44.7~MK indicates the highest CBP temperature threshold according to~\citet{NOL03}.} \end{center} \end{figure} The fractional contributions to the total $^{18}$O($p$,$\gamma$)$^{19}$F reaction rates are shown in Fig.~\ref{fig:fraction} for low temperatures. It can be seen that our lower estimate for the direct capture process (blue solid line) contributes significantly ($>$10$\%$) at temperatures of 0.011$-$0.05~GK. Our considerably lower estimate for the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance (red solid line) yields a negligible contribution ($<$3$\%$) in the temperature region relevant to cool bottom processing. The dashed lines represent the reaction rate contributions of the other resonances (E$^{\mathrm{lab}}_{\mathrm{R}}$~=~22, 151 and 844~keV) that influence the rate at stellar plasma temperatures below 0.2~GK. The ratio of $^{18}$O($p$,$\alpha$)$^{15}$N and $^{18}$O($p$,$\gamma$)$^{19}$F high and low rates is shown in Fig.~\ref{fig:buc12vili10pa}. 
The dotted lines are based on the results of Ref.~\cite{ILI10c} alone, while the solid lines incorporate the new $^{18}$O($p$,$\gamma$)$^{19}$F rates. For the temperature region relevant to CBP, the established ($p$,$\alpha$) rate~\cite{ILI10c} exceeds the ($p$,$\gamma$) rate by a factor of 5100$-$1700 over the temperature range 0.03$-$0.05~GK. From our improved E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance strength upper limit and our refined direct capture S-factor, we support the conclusion that the ($p$,$\gamma$) reaction does not contribute significantly to the overall $^{18}$O destruction at temperatures suggested for CBP to occur in low-mass AGB stars. Future efforts to study $^{18}$O depletion by CBP in AGB stars should focus on direct measurement of the $^{18}$O($p$,$\alpha$)$^{15}$N reaction at low energies. \section{Conclusion}\label{ss:conc} A study of the $^{18}$O($p$,$\gamma$)$^{19}$F reaction was performed at the Laboratory for Experimental Nuclear Astrophysics (LENA). A new resonance strength upper limit of $\omega\gamma~\leq$~7.8$\times$10$^{-9}$~eV (90$\%$ CL) for the E$^{\mathrm{lab}}_{\mathrm{R}}$~=~95~keV resonance was measured that improves upon the previous ($p$,$\gamma$) upper limit published by~\citet{VOG90} by about half an order of magnitude. Our data also allow for a significant improvement of the total direct capture S-factor prediction. Our direct capture S-factor amounts to about half of the previously accepted value at low energies~\cite{WIE80}. With this experimental information, new Monte Carlo-based reaction rates for $^{18}$O($p$,$\gamma$)$^{19}$F are derived. We find that the new reaction rates in the hypothesized CBP temperature regime are even smaller than previously assumed. Clearly, $^{18}$O depletion in low-mass AGB stellar atmospheres and some presolar oxide grains is dominated by the competing $^{18}$O($p$,$\alpha$)$^{15}$N reaction. 
Future studies of $^{18}$O depletion by cool bottom processing in low-mass AGB stars should focus on direct measurement of the ($p$,$\alpha$) reaction at low energies. \acknowledgments The authors would like to thank TUNL technical staff members J. Addison, B. P. Carlin, J. Dunham, B. Jelinek, P. Mulkey, R. O'Quinn, and C. Westerfeldt. Special thanks to A. L. Sallaska, L. N. Downen and B. M. Oginni. Additional thanks to R. Longland, J. R. Newton, and C. W. Arnold. The authors would also like to thank A. Coc (CSNSM, Orsay) and G. Angelou (Monash University). This work was supported in part by the US Department of Energy under Contract no. DE-FG02-97ER41041. Additional support was provided for MQB by the DOE NNSA Stewardship Science Graduate Fellowship under Grant no. DE-FC52-08NA28752.
\section{Introduction}\label{sect:intro} In the scenario of \textit{secure network coding} introduced by Cai et al.\@ \cite{Cai2011}, a source node transmits $n$ packets on $n$ outgoing links to sink nodes through a network that implements network coding \cite{Ahlswede2000a,Koetter2003,Li2003}, and each sink node receives $n$ packets on $n$ incoming links. In the network, there is a wiretapper who observes $\mu (< n)$ links. The problem is how to encode a secret message into the $n$ transmitted packets at the source node in such a way that the wiretapper obtains no information about the message, in the sense of information-theoretic security. As shown in \cite{ElRouayheb2012}, secure network coding can be seen as a generalization of the wiretap channel II \cite{Ozarow1984}, or of secret sharing schemes based on linear codes \cite{Chen2007,Duursma2010}, to network coding. Hence, in secure network coding, secrecy is realized by introducing randomness into the $n$ transmitted packets as follows. Suppose the message is represented by $l$ packets $S_1,\dots,S_l$ $(1 \leq l \leq n)$. Then, the source node encodes $(S_1,\dots,S_l)$ together with $n-l$ random packets by a linear code, and generates the $n$ transmitted packets \cite{Silva2011,Ngai2011,ElRouayheb2012}. Silva et al.\@ \cite{Silva2011} proposed \textit{universal secure network coding} based on maximum rank distance (MRD) codes \cite{Gabidulin1985}. Their scheme is universal in the sense that, over \textit{any} underlying network code, no information about the message leaks out even if any $n-l$ links are observed by a wiretapper. As shown in \cite{Silva2011}, their scheme with MRD codes is optimal in terms of security and communication rate. However, there exist some restrictions in universal secure network coding with MRD codes. In their scheme, the network must transport packets of size $m \geq n$.
The MRD code used in the scheme is defined over $\mathbb{F}_{q^m}^n$, where $\mathbb{F}_{q^m}$ is an $m$-degree field extension of a field $\mathbb{F}_q$ of order $q$. Thus, the size of the field $\mathbb{F}_{q^m}$ increases exponentially with $m$, and the restriction of MRD codes to $m \geq n$ incurs a large computational cost for the encoding and decoding of MRD codes when $n$ is large. This is undesirable, especially in resource-constrained environments. Considering secure network coding without such a restriction, Ngai et al.\@ \cite{Ngai2011}, and later Zhang et al.\@ \cite{Zhang2009}, investigated the security performance of secure network coding based on general linear codes. They introduced a new parameter of linear codes, called the \textit{relative network generalized Hamming weight} (RNGHW), and revealed that the security performance is expressed in terms of the RNGHW. The RNGHW depends on the set of coding vectors of the underlying network code; hence, the RNGHW is not universal. The aim of this paper is to investigate the security performance of universal secure network coding based on general linear codes, {i.\@e.\@,~}performance that is always guaranteed over \textit{any} underlying network code, even a random network code. This paper defines the universal security performance by the following two criteria. One is the \textit{universal equivocation} ${\Theta}_\mu$, the minimum uncertainty of the message under observation of $\mu (< n)$ links, guaranteed independently of the underlying network code. The other is the \textit{universal ${\Omega}$-strong security}, where ${\Omega}$ is a performance measure such that no part of the secret message is deterministically revealed even if at most ${\Omega}$ links are observed. The paper \cite{Kurihara2012} proposed a specific construction of secure network coding that attains universal $(n-1)$-strong security; such a scheme is called universal strongly secure network coding \cite{Silva2009}.
Namely, the definition of universal ${\Omega}$-strong security given in this paper generalizes the universal strongly secure network coding considered in \cite{Kurihara2012,Silva2009} to an arbitrary number of tapped links. In order to express ${\Theta}_\mu$ and ${\Omega}$ in terms of code parameters, this paper introduces two parameters of linear codes, called the \textit{relative dimension/intersection profile} (RDIP) and the \textit{relative generalized rank weight} (RGRW). The RGRW is a generalization of the minimum rank distance \cite{Gabidulin1985} of a code. We reveal that ${\Theta}_\mu$ and ${\Omega}$ can be expressed in terms of the RDIP and the RGRW of the codes. Duursma et al.\@ \cite{Duursma2010} first observed that the \textit{relative generalized Hamming weight} \cite{Luo2005} exactly expresses the security performance and the error correction capability of secret sharing. Our definitions of the RGRW and RDIP are motivated by their result \cite{Duursma2010}. Assume that the attacker is able not only to eavesdrop but also to inject erroneous packets anywhere in the network. Also assume that the network may suffer from rank deficiency of the transfer matrix at a sink node. Silva et al.\@'s scheme based on MRD codes \cite{Silva2011} enables each sink node to correct such errors and rank deficiency, and its error correction capability is guaranteed over any underlying network code, {i.\@e.\@,~}it is universal. This paper also generalizes their result and reveals that the universal error correction capability of secure network coding based on arbitrary linear codes can be expressed in terms of the RGRW of the codes. The remainder of this paper is organized as follows. \Sect{sect:prelimi} presents basic notations and introduces linear network coding. \Sect{sect:wiretap} defines the universal security performance and universal error correction capability of secure network coding over a wiretap network.
\Sect{sect:defrdiprgrw} defines the RDIP and RGRW of linear codes, and introduces their basic properties. In \Sect{sect:universalsecure}, the universal security performance is expressed in terms of the RDIP and RGRW. The security of existing schemes \cite{Kurihara2012,Silva2009,Silva2011} is also analyzed as an application of the RDIP and RGRW in Examples \ref{ex1} and \ref{ex2}. \Sect{sect:errorcorrection} gives the expression of the universal error correction capability in terms of the RGRW, and also analyzes the error correction of \cite{Silva2011} by the RGRW in Example \ref{ex3}. \section{Preliminary}\label{sect:prelimi} \subsection{Basic Notations} Let $H(X)$ be the Shannon entropy of a random variable $X$, $H(X|Y)$ be the conditional entropy of $X$ given $Y$, and $I(X;Y)$ be the mutual information between $X$ and $Y$ \cite{Cover2006}. We write $|\mathcal{X}|$ for the cardinality of a set $\mathcal{X}$. The entropy and the mutual information are always computed using $\log_{q^m}$. Let $\mathbb{F}_q$ stand for a finite field containing $q$ elements and $\mathbb{F}_{q^m}$ be an $m$-degree field extension of $\mathbb{F}_q$ ($m \geq 1$). Let $\mathbb{F}_q^n$ denote an $n$-dimensional row vector space over $\mathbb{F}_q$. Similarly, $\mathbb{F}_{q^m}^n$ stands for an $n$-dimensional row vector space over $\mathbb{F}_{q^m}$. Unless otherwise stated, we consider subspaces, ranks, dimensions, etc., over the field extension $\mathbb{F}_{q^m}$ instead of the base field $\mathbb{F}_q$. An $[n,k]$ linear code $\mathcal{C}$ over $\mathbb{F}_{q^m}$ is a $k$-dimensional subspace of $\mathbb{F}_{q^m}^n$. Let $\mathcal{C}^{\perp}$ denote the \textit{dual code} of a code $\mathcal{C}$. A subspace of a code is called a \textit{subcode} \cite{MacWilliams1977}. For $\mathcal{C} \subseteq \mathbb{F}_{q^m}^n$, we denote by $\mathcal{C}|\mathbb{F}_{q}$ the \textit{subfield subcode} of $\mathcal{C}$ over $\mathbb{F}_q$ \cite{MacWilliams1977}.
Observe that $\mathsf{dim\,} \mathcal{C}$ means the dimension of $\mathcal{C}$ as a vector space over $\mathbb{F}_{q^m}$, whereas $\mathsf{dim\,} \mathcal{C}|\mathbb{F}_q$ is the dimension of $\mathcal{C}|\mathbb{F}_q$ over $\mathbb{F}_q$. For a vector $\vec{v}=[v_1,\dots,v_n]\in\mathbb{F}_{q^m}^n$ and a subspace $V \subseteq \mathbb{F}_{q^m}^n$, we denote $\vec{v}^{\,q} = [v_1^q,\dots,v_n^q]$ and $V^q = \{\vec{v}^{\,q} : \vec{v} \in V\}$. Define the family of subspaces $V\subseteq\mathbb{F}_{q^m}^n$ satisfying $V = V^q$ by $\Gamma(\F_{q^m}^n) \triangleq \{\text{subspace } V \subseteq \mathbb{F}_{q^m}^n : V = V^q\}$. Also define $\colinvi{i} \triangleq \{V\in\Gamma(\F_{q^m}^n) : \mathsf{dim\,} V = i\}$. For a subspace $V \!\subseteq\! \mathbb{F}_{q^m}^n$, the following are equivalent: 1) $V \!\in\! \Gamma(\F_{q^m}^n)$; 2) $\mathsf{dim\,} V \!=\! \mathsf{dim\,} V|\mathbb{F}_q$ \cite[Lemma 1]{Stichtenoth1990}. \subsection{Linear Network Coding}\label{sect:linearnetwork} As in \cite{Silva2011,Ngai2011,Zhang2009,Cai2011,ElRouayheb2012}, we consider a multicast communication network represented by a directed multigraph with unit-capacity links, a single source node, and multiple sink nodes. We assume that \textit{linear network coding} \cite{Li2003,Koetter2003} is employed over the network. Elements of the column vector space $\mathbb{F}_q^{m \times 1}$ are called \textit{packets}. Assume that each link in the network can carry a single $\mathbb{F}_q$-symbol per time slot, and that each link transports a single packet over $m$ time slots without delays, erasures, or errors. The source node produces $n$ packets $X_1$, \ldots, $X_n\in \mathbb{F}_q^{m \times 1}$ and transmits $X_1$, \ldots, $X_n$ on its $n$ outgoing links over $m$ consecutive time slots. Define the $m \times n$ matrix $X=[X_1,\dots,X_n]$. The data flow on any link can be represented as an $\mathbb{F}_q$-linear combination of the packets $X_1,\dots,X_n \in \mathbb{F}_q^{m \times 1}$.
Namely, the information transmitted on a link $e$ can be denoted as $b_e X^{\rm T} \in \mathbb{F}_q^{1 \times m}$, where $b_e \in \mathbb{F}_q^n$ is called the \textit{global coding vector} (GCV) of $e$. Suppose that a sink node has $N$ incoming links. Then, the information received at the sink node can be represented as an $N \times m$ matrix $AX^{\rm T} \in\mathbb{F}_q^{N \times m}$, where $A\in\mathbb{F}_q^{N \times n}$ is the transfer matrix constructed by gathering the GCV's of the $N$ incoming links. The network code is called \textit{feasible} if every transfer matrix to a sink node has rank $n$ over $\mathbb{F}_q$. The system is called \textit{coherent} if $A$ is known to each sink node; otherwise, it is called \textit{noncoherent}. \section{Universal Security Performance and Universal Error Correction Capability of Secure Network Coding}\label{sect:wiretap} This section introduces the wiretap network model with packet errors and the nested coset coding scheme in secure network coding \cite{ElRouayheb2012,Silva2011,Zhang2009,Ngai2011}. Then, we define the universal security performance in terms of the \textit{universal equivocation} and the \textit{universal ${\Omega}$-strong security} on the wiretap network model. We also define the universal error correction capability of secure network coding. From now on, only one sink node is assumed without loss of generality. In addition, due to space constraints, we focus on the fundamental case of coherent systems in this paper. However, as in \cite{Silva2011}, all the analysis in this paper can easily be adapted to the case of noncoherent systems. \subsection{Wiretap Networks with Errors, and Nested Coset Coding}\label{sect:nestedcoding} Following \cite{Cai2011,Silva2011,Ngai2011,Zhang2009,ElRouayheb2012}, assume that in the setup of \Sect{sect:linearnetwork}, there is a wiretapper who has access to the packets transmitted on any $\mu$ links. Let $\mathcal{W}$, with $|\mathcal{W}|=\mu$, be the set of links observed by the wiretapper.
Then the packets observed by the wiretapper are given by $W^{\rm T} = B_\mathcal{W} X^{\rm T}$, where the rows of $B_\mathcal{W} \in \mathbb{F}_q^{\mu \times n}$ are the GCV's associated with the links in $\mathcal{W}$. In the scenario of \cite{ElRouayheb2012,Silva2011,Zhang2009,Ngai2011}, the source node first regards the $m$-dimensional column vector space $\mathbb{F}_q^{m \times 1}$ as $\mathbb{F}_{q^m}$, and fixes $l$ with $1 \!\leq\! l \!\leq\! n$. Let $S\!=\![S_1,\dots,S_l] \!\in\! \mathbb{F}_{q^m}^l$ be the secret message, and assume that $S$ is uniformly distributed over $\mathbb{F}_{q^m}^l$, {i.\@e.\@,~}$S_1,\dots,S_l$ are mutually independent and uniform over $\mathbb{F}_{q^m}$. Under the wiretapper's observation, the source node wants to transmit $S$ without information leakage to the wiretapper. To protect $S$ from the wiretapper, the source node encodes $S$ into a transmitted vector $X\!=\![X_1,\dots,X_n]\!\in\!\mathbb{F}_{q^m}^n$ of $n$ packets by applying the \textit{nested coset coding scheme} \cite{Zamir2002,Subramanian2009,Chen2007,Duursma2010} to $S$. In \cite{Duursma2010,Chen2007}, a special case of this scheme is called a \textit{secret sharing scheme based on linear codes}. \begin{definition}[Nested Coset Coding Scheme]\label{def:nestedcoding} Let $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ be a linear code over $\mathbb{F}_{q^m}$ ($m \geq 1$), and $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$ be its subcode with dimension $\mathsf{dim\,} \mathcal{C}_2 = \mathsf{dim\,} \mathcal{C}_1 - l$ over $\mathbb{F}_{q^m}$. Let $\psi:\mathbb{F}_{q^m}^l\rightarrow\mathcal{C}_1/\mathcal{C}_2$ be an arbitrary isomorphism. For a secret message $S\in\mathbb{F}_{q^m}^l$, we choose $X$ from the coset $\psi(S) \in \mathcal{C}_1/\mathcal{C}_2$ uniformly at random, with randomness independent of $S$. \end{definition} Then, the source node transmits $X$ over the network. \Def{def:nestedcoding} includes the Ozarow-Wyner coset coding scheme \cite{Ozarow1984} as a special case with $\mathcal{C}_1=\mathbb{F}_{q^m}^n$.
Hence, when we set $\mathcal{C}_1=\mathbb{F}_{q^m}^n$, the scheme reduces to the secure network coding based on the Ozarow-Wyner coset coding scheme \cite{Ngai2011,Silva2011,ElRouayheb2012}. Corresponding to $X$ transmitted from the source node, the sink node receives a vector of $N$ packets $Y \in \mathbb{F}_{q^m}^N$. Here we extend the basic network model described in \Sect{sect:linearnetwork} to incorporate packet errors and rank deficiency of the transfer matrix $A \in \mathbb{F}_q^{N \times n}$ of the sink node. Suppose that at most $t$ errors can occur on any of the links, causing the corresponding packets to become corrupted. Then, as in \cite{Silva2009a}, $Y$ can be expressed as \begin{align*} Y^{\rm T}=AX^{\rm T}+DZ^{\rm T}, \end{align*} where $Z\in\mathbb{F}_{q^m}^t$ is the vector of $t$ error packets, and $D\in\mathbb{F}_q^{N \times t}$ is the transfer matrix of $Z$. We define $\rho \triangleq n-\mathsf{rank\,} A$ as the rank deficiency of $A$. In this setup, we want to decode $S$ correctly from $Y$. If the network is free of errors and the network code used is feasible, $X$ can always be reconstructed from $Y^{\rm T}=AX^{\rm T}$ as described in \Sect{sect:linearnetwork}. Then, the coset $\psi(S)$, and hence $S$, is uniquely determined from $X$ by \Def{def:nestedcoding}. \subsection{Definition of Universal Security Performance}\label{sect:securityperformance} The security performance of secure network coding in the above model was measured by the following criterion \cite{Zhang2009,Ngai2011}. \begin{definition}[Equivocation]\label{def:nonuniversalperformance} The minimum uncertainty ${\theta}_\mu$ of $S$ given $B_\mathcal{W}X^{\rm T}$ over all possible $\mathcal{W}$'s ($|\mathcal{W}|=\mu$) in the network is called the \textit{equivocation}, defined as ${\displaystyle {\theta}_\mu \!\triangleq\! \min_{\mathcal{W}: |\mathcal{W}|=\mu} H(S|B_\mathcal{W}X^{\rm T})}$. \end{definition} As defined in \Def{def:nonuniversalperformance}, ${\theta}_\mu$ depends on the underlying network code.
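For intuition, the conditional entropy $H(S|B X^{\rm T})$ appearing in the equivocation can be evaluated by brute force for tiny parameters. The sketch below uses illustrative parameters not taken from this paper ($q=2$, $m=1$, $n=3$, $l=1$, $\mathcal{C}_1=\mathbb{F}_2^3$, $\mathcal{C}_2$ the even-weight code) and scans every one-row tap matrix $B\in\mathbb{F}_2^{1\times 3}$; minimizing over all $B$ gives the network-independent worst case.

```python
from itertools import product
from math import log2

# Brute-force evaluation of H(S | B X^T) for a toy nested coset code with
# illustrative parameters q = 2, m = 1, n = 3, l = 1: C1 = F_2^3 and
# C2 = the even-weight code.  S is uniform on F_2 and, given S, the
# transmitted X is uniform on the coset psi(S).

cosets = {0: [(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)],   # psi(0) = C2
          1: [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]}   # psi(1)

def cond_entropy(b):
    """H(S | b X^T) in bits for a single tap row b in F_2^3."""
    joint = {}                        # each pair (s, x) has probability 1/8
    for s, coset in cosets.items():
        for x in coset:
            t = sum(bi * xi for bi, xi in zip(b, x)) % 2
            joint[(s, t)] = joint.get((s, t), 0) + 1 / 8
    h = 0.0
    for (s, t), p in joint.items():
        p_t = sum(mass for (_, t2), mass in joint.items() if t2 == t)
        h -= p * log2(p / p_t)        # H(S|T) = -sum_{s,t} p(s,t) log p(s|t)
    return h

# Worst case over every 1-row tap matrix: 0 bits here, because the row
# b = (1,1,1) lies in the dual of C2 and reveals S completely.  With m = 1
# no nested pair can resist *every* B, which is why m >= n and MRD codes
# matter for universal security.
worst = min(cond_entropy(b) for b in product((0, 1), repeat=3))
```

Most individual rows $b$ leave $S$ fully hidden ($H(S|bX^{\rm T})=1$ bit), yet the minimum over all $B$ is $0$; this is exactly the gap between per-network equivocation and the universal notion defined next.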
In \cite{Ngai2011,Zhang2009}, ${\theta}_\mu$ for $m=1$ was expressed in terms of the relative network generalized Hamming weight (RNGHW) of $\mathcal{C}_1$ and $\mathcal{C}_2$. The RNGHW is determined by the GCV's of all links in the network. Hence, the RNGHW cannot determine the equivocation under random linear network coding \cite{Ho2006}. Here, we extend \Def{def:nonuniversalperformance} by requiring independence from the underlying network code, as follows. \begin{definition}[Universal Equivocation]\label{def:universalperformance} The \textit{universal equivocation} ${\Theta}_\mu$ is the minimum uncertainty of $S$ given $BX^{\rm T}$ over all $B\in\mathbb{F}_q^{\mu \times n}$, defined as \begin{align*} {\Theta}_{\mu} \triangleq \min_{B \in \mathbb{F}_q^{\mu \times n}}H(S|BX^{\rm T}). \end{align*} \end{definition} As defined in \Def{def:universalperformance}, ${\Theta}_\mu$ does not depend on the set of $\mathcal{W}$'s in the network. Silva et al.\@'s universal secure network coding scheme based on MRD codes \cite{Silva2011} achieves ${\Theta}_{n-l} = H(S)$ in \Def{def:universalperformance} provided $m \geq n$. \Def{def:universalperformance} defines security for the message $S=[S_1,\dots,S_l]$ as a whole. Here we focus on the security of every part of $S$, and give the following definition. \begin{definition}[Universal ${\Omega}$-Strong Security]\label{def:universalalpha} Let $S_\mathcal{Z}=(S_i:i\in\mathcal{Z})$ be a tuple for a subset $\mathcal{Z}\subseteq\{1,\dots,l\}$.
We say that a secure network coding scheme attains the \textit{universal ${\Omega}$-strong security} if we have \begin{align} I(S_\mathcal{Z};BX^{\rm T})&=0,\quad \forall \mathcal{Z}, \forall B \in \mathbb{F}_q^{({\Omega}-|\mathcal{Z}|+1) \times n}.\label{eq:universalalpha} \end{align} \end{definition} As in \cite{Harada2008,Matsumoto2011,Silva2009}, a scheme with universal ${\Omega}$-strong security leaks no information about any $|\mathcal{Z}|$ components of $S$ even if at most ${\Omega}-|\mathcal{Z}|+1$ links are observed by the wiretapper. Moreover, as with ${\Theta}_\mu$, this guarantee holds over any underlying network code. We note that if a scheme achieves the universal ${\Omega}$-strong security, the universal equivocation for $\mu={\Omega}-l+1$ must satisfy ${\Theta}_{{\Omega} -l+1}=H(S)$, as seen from \Def{def:universalalpha} with $\mathcal{Z}=\{1,\dots,l\}$. However, the converse does not always hold. The scheme in \cite{Kurihara2012} achieves ${\Omega}=n-1$ provided $m \geq l+n$ by nested coset coding with MRD codes. The universal strong security in \cite{Silva2009} is a special case of \Def{def:universalalpha} with ${\Omega} =n-1$. \subsection{Definition of the Universal Error Correction Capability of Secure Network Coding}\label{sect:erroneousnetwork} In the model described in \Sect{sect:nestedcoding}, the error correction capability of secure network coding, guaranteed over any underlying network code, is defined as follows. \begin{definition}[Universally $t$-Error-$\rho$-Erasure-Correcting Secure Network Coding] A secure network coding scheme is called \textit{universally $t$-error-$\rho$-erasure-correcting}, if \begin{align*} &H(S|Y)=0,\quad Y^{\rm T}=AX^{\rm T}+DZ^{\rm T},\\ &\quad \forall A \!\in\!\mathbb{F}_q^{N \times n}: \mathsf{rank\,} A\!\geq\!n\!-\!\rho, \forall X \in \psi(S), \forall D \!\in\!\mathbb{F}_q^{N \times t}, \forall Z \!\in\!
\mathbb{F}_{q^m}^t, \end{align*} {i.\@e.\@,~} $S$ can be uniquely determined from $Y$ against $t$ errors over any underlying network code with at most $\rho$ rank deficiency. \end{definition} Silva et al.\@'s scheme \cite[Section VI]{Silva2011} is universally $t$-error-$\rho$-erasure-correcting when the minimum rank distance \cite{Gabidulin1985} of $\mathcal{C}_1$ is greater than $2t+\rho$. \section{New Parameters of Linear Codes and Their Properties}\label{sect:defrdiprgrw} This section introduces the \textit{relative dimension/intersection profile} (RDIP) and the \textit{relative generalized rank weight} (RGRW) of linear codes. In the following sections, these parameters are used to characterize the universal security performance and the universal error correction capability of secure network coding. \subsection{Definition} We first define the \textit{relative dimension/intersection profile} (RDIP) of linear codes as follows. \begin{definition}[Relative Dimension/Intersection Profile]\label{def:rdip} Let $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ be a linear code and $\mathcal{C}_2\subsetneqq\mathcal{C}_1$ be its subcode. Then, the $i$-th relative dimension/intersection profile (RDIP) of $\mathcal{C}_1$ and $\mathcal{C}_2$ is the maximum difference between the dimensions over $\mathbb{F}_{q^m}$ of the intersections of $\mathcal{C}_1$ and $\mathcal{C}_2$ with a subspace $V\in\colinvi{i}$, defined as \begin{align} K_{R,i} (\mathcal{C}_1,\mathcal{C}_2) \triangleq \max_{V \in \colinvi{i}} \left\{ \mathsf{dim\,}(\mathcal{C}_1 \cap V) - \mathsf{dim\,}(\mathcal{C}_2 \cap V) \right\}, \label{eq:defrdip} \end{align} for $0 \leq i \leq n$. \end{definition} Next, we define the \textit{relative generalized rank weight} (RGRW) of linear codes as follows. \begin{definition}[Relative Generalized Rank Weight]\label{def:rgrw} Let $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ be a linear code and $\mathcal{C}_2\subsetneqq\mathcal{C}_1$ be its subcode.
Then, the $i$-th relative generalized rank weight (RGRW) of $\mathcal{C}_1$ and $\mathcal{C}_2$ is defined by \begin{align} &M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) \nonumber\\ & \!\triangleq\! \min \left\{ \mathsf{dim\,} V : V \!\in\! \Gamma(\F_{q^m}^n), \mathsf{dim\,}(\mathcal{C}_1 \!\cap\! V) \!-\! \mathsf{dim\,}(\mathcal{C}_2 \!\cap\! V) \!\geq\! i \right\},\label{eq:defrgrw} \end{align} for $0 \leq i \leq \mathsf{dim\,} (\mathcal{C}_1/\mathcal{C}_2)$. \end{definition} The relative dimension/length profile and the relative generalized Hamming weight introduced in \cite{Luo2005} are equivalent to \Eqs{eq:defrdip} and (\ref{eq:defrgrw}) with $\colinvi{i}$ and $\Gamma(\F_{q^m}^n)$ replaced by suitable smaller sets, respectively. \subsection{Basic Properties of the RDIP and the RGRW, and the Relation between the Rank Distance and the RGRW} This subsection introduces some basic properties of the RDIP and the RGRW, and also shows the relation between the RGRW and the rank distance \cite{Gabidulin1985}. These will be used in the expressions of the universal security performance and the universal error correction capability of secure network coding. First, we introduce the following theorem and lemma about the RDIP and the RGRW. \begin{theorem}[Monotonicity of the RDIP]\label{thm:monotonerdip} Let $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ be a linear code and $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$ be its subcode. Then, the $i$-th RDIP $K_{R,i}(\mathcal{C}_1,\mathcal{C}_2)$ is nondecreasing with $i$ from $K_{R,0}(\mathcal{C}_1,\mathcal{C}_2)=0$ to $K_{R,n}(\mathcal{C}_1,\mathcal{C}_2)=\mathsf{dim\,}(\mathcal{C}_1/\mathcal{C}_2)$, and $0 \leq K_{R,i+1}(\mathcal{C}_1,\mathcal{C}_2)-K_{R,i}(\mathcal{C}_1,\mathcal{C}_2)\leq 1$ holds. \end{theorem} \begin{IEEEproof} $K_{R,0}(\mathcal{C}_1,\mathcal{C}_2)=0$ and $K_{R,n}(\mathcal{C}_1,\mathcal{C}_2)=\mathsf{dim\,}(\mathcal{C}_1/\mathcal{C}_2)$ are obvious from \Def{def:rdip}.
Recall that \begin{align*} \colinvi{i} = \left\{V \subseteq \mathbb{F}_{q^m}^n : V=\{\vec{u}G: \vec{u}\in\mathbb{F}_{q^m}^i\}, G\in\mathbb{F}_q^{i \times n}, \mathsf{rank\,} G = i \right\}, \end{align*} for $1 \leq i \leq n$ from \cite[Lemma 1]{Stichtenoth1990}. This implies that for any subspace $V_1 \in \colinvi{i+1}$, there always exist some $V_2$'s satisfying $V_2 \in \colinvi{i}$ and $V_2 \subsetneqq V_1$. This yields $K_{R,i}(\mathcal{C}_1,\mathcal{C}_2)\leq K_{R,i+1}(\mathcal{C}_1,\mathcal{C}_2)$. Next we show that the increment at each step is at most $1$. Consider arbitrary subspaces $V, V' \in \Gamma(\F_{q^m}^n)$ such that $\mathsf{dim\,} V'=\mathsf{dim\,} V + 1$ and $V \subsetneqq V'$. Let $f = \mathsf{dim\,}(\mathcal{C}_1 \cap V) - \mathsf{dim\,}(\mathcal{C}_2 \cap V)$; $g = \mathsf{dim\,}(\mathcal{C}_1 \cap V') - \mathsf{dim\,}(\mathcal{C}_2 \cap V')$. Since $\mathsf{dim\,} (\mathcal{C}_1 \cap V) +1 \!\geq\! \mathsf{dim\,} (\mathcal{C}_1 \cap V') \!\geq\! \mathsf{dim\,} (\mathcal{C}_1 \cap V)$ and $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$, we have $f+1 \geq g \geq f$ and hence $K_{R,i}(\mathcal{C}_1,\mathcal{C}_2)+1 \geq K_{R,i+1}(\mathcal{C}_1,\mathcal{C}_2) \geq K_{R,i}(\mathcal{C}_1,\mathcal{C}_2)$. \end{IEEEproof} \begin{lemma}\label{lma:rgrw} Let $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ be a linear code and $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$ be its subcode. Then, the $i$-th RGRW $M_{R,i}(\mathcal{C}_1,\mathcal{C}_2)$ is strictly increasing with $i$. Moreover, $M_{R,0}(\mathcal{C}_1,\mathcal{C}_2)=0$ and \begin{align*} &M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) = \min \left\{ j : K_{R,j}(\mathcal{C}_1,\mathcal{C}_2) = i \right\}\\ &\ = \min \left\{ \mathsf{dim\,} V : V \in \Gamma(\F_{q^m}^n), \mathsf{dim\,}(\mathcal{C}_1 \cap V) - \mathsf{dim\,}(\mathcal{C}_2 \cap V) = i \right\}, \end{align*} where $0\leq i \leq \mathsf{dim\,} (\mathcal{C}_1/\mathcal{C}_2)$. 
\end{lemma} \begin{IEEEproof} First we have \begin{align*} &\min\left\{ j : K_{R,j}(\mathcal{C}_1,\mathcal{C}_2) \geq i \right\}\\ &\!=\! \min \big\{ j : \exists V \!\in\! \colinvi{j}, \text{ such that } \mathsf{dim\,}(\mathcal{C}_1 \!\cap\! V) \!-\!\mathsf{dim\,}(\mathcal{C}_2 \!\cap\! V) \!\geq\! i \big\}\\ &\!=\! \min\left\{ \mathsf{dim\,} V : V \in \Gamma(\F_{q^m}^n), \mathsf{dim\,}(\mathcal{C}_1 \cap V) -\mathsf{dim\,}(\mathcal{C}_2 \cap V) \geq i \right\}\\ &\!=\! M_{R,i}(\mathcal{C}_1,\mathcal{C}_2). \end{align*} From \Thm{thm:monotonerdip}, we have $\left\{ j : K_{R,j}(\mathcal{C}_1,\mathcal{C}_2) = i \right\} \cap \left\{ j : K_{R,j}(\mathcal{C}_1,\mathcal{C}_2) \geq i+1 \right\} =\emptyset$. We thus have \begin{align*} M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) &= \min\left\{ j : K_{R,j}(\mathcal{C}_1,\mathcal{C}_2) \geq i \right\}\\ &= \min\left\{ j : K_{R,j}(\mathcal{C}_1,\mathcal{C}_2)= i \right\}. \end{align*} Therefore the RGRW is strictly increasing with $i$ and thus \begin{align*} &M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) \\ &= \min \left\{ \mathsf{dim\,} V : V \in \Gamma(\F_{q^m}^n), \mathsf{dim\,}(\mathcal{C}_1 \cap V) -\mathsf{dim\,}(\mathcal{C}_2 \cap V) = i \right\}, \end{align*} is established. \end{IEEEproof} Next, we show the relation between the rank distance \cite{Gabidulin1985} and the RGRW. Let $\phi_m:\mathbb{F}_{q^m}\rightarrow \mathbb{F}_q^{m \times 1}$ be an $\mathbb{F}_q$-linear isomorphism that expands an element of $\mathbb{F}_{q^m}$ as a column vector over $\mathbb{F}_q$ with respect to some fixed basis for $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$. Then, we define the \textit{rank over $\mathbb{F}_q$} of a vector $\vec{x}=[x_1,\dots,x_n] \in \mathbb{F}_{q^m}^n$, denoted by $\mathsf{rank}_{\mathbb{F}_q} (\vec{x})$, as the rank of $m \times n$ matrix $\left[\phi_m(x_1), \dots, \phi_m(x_n) \right]$ over $\mathbb{F}_q$. 
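For $q=2$ the expansion $\phi_m$ simply reads off the coordinates of each $x_i$ in a fixed $\mathbb{F}_2$-basis of $\mathbb{F}_{2^m}$, so $\mathsf{rank}_{\mathbb{F}_q}(\vec{x})$ reduces to the $\mathbb{F}_2$-rank of $n$ bit-columns. A minimal sketch (an illustrative helper, not part of the scheme; field elements are encoded as $m$-bit integers with respect to an arbitrary fixed basis):

```python
def rank_f2(columns):
    """F_2-rank of the columns phi_m(x_1), ..., phi_m(x_n), each encoded
    as an integer whose bits are the coordinates in the fixed basis."""
    pivots = {}                       # leading-bit position -> basis vector
    for v in columns:
        while v:
            p = v.bit_length() - 1    # position of the highest set bit
            if p not in pivots:       # new pivot: v joins the basis
                pivots[p] = v
                break
            v ^= pivots[p]            # eliminate the leading bit and retry
    return len(pivots)

# Example in F_8 = F_2[a]/(a^3 + a + 1) with basis {1, a, a^2}:
# x = [1, a, 1 + a] encodes as [0b001, 0b010, 0b011]; since 1 + a is the
# XOR of the first two columns, rank_{F_2}(x) = 2.
assert rank_f2([0b001, 0b010, 0b011]) == 2
```

In particular $d_R(\vec{x},\vec{0})=\mathsf{rank}_{\mathbb{F}_q}(\vec{x})$, so the same routine evaluates rank distances of small codes by exhaustive search.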
The rank distance \cite{Gabidulin1985} between two vectors $\vec{x},\vec{y}\in\mathbb{F}_{q^m}^n$ is given by $d_R(\vec{x},\vec{y}) \triangleq \mathsf{rank\,}_{\mathbb{F}_q}(\vec{y}-\vec{x})$. The minimum rank distance \cite{Gabidulin1985} of a code $\mathcal{C}$ is given as $d_R(\mathcal{C}) \!\triangleq\! \min\{d_R(\vec{x},\vec{y}): \vec{x},\vec{y} \!\in\! \mathcal{C},\vec{x} \!\neq\! \vec{y}\} \!=\! \min\{d_R(\vec{x},\vec{0}): \vec{x} \!\in\! \mathcal{C}, \vec{x} \!\neq\! \vec{0}\}$. For a subspace $V\subseteq\mathbb{F}_{q^m}^n$, we define by $V^* \triangleq \sum_{i=0}^{m-1} V^{q^i}$ the sum of subspaces $V,V^q,\dots,V^{q^{m-1}}$. \begin{lemma}\label{lma:xxxx} For a subspace $V \subseteq \mathbb{F}_{q^m}^n$ with $\mathsf{dim\,} V = 1$, we have $\mathsf{dim\,} V^* = d_R(V)$. \end{lemma} \begin{IEEEproof} Let $\vec{b} \!=\! [b_1,\dots,b_n] \!\in\! V$ be a nonzero vector, which implies $\mathsf{rank\,}_{\mathbb{F}_q}(\vec{b})\!=\!d_R(V)$. Let $M \!\triangleq\! \left[a_{i,j}\right]_{i,j=1}^{m,n} \!\in\! \mathbb{F}_{q^m}^{m \times n}$, $a_{i,j}\!=\!b_j^{q^{i-1}}$. Each vector in $V^*$ is represented by an $\mathbb{F}_{q^m}$-linear combination of $\vec{b},\vec{b}^q,\dots,\vec{b}^{q^{m-1}}$, and hence $\mathsf{dim\,} V^* \!=\! \mathsf{rank\,} M$. For $\alpha_1,\alpha_2\in\mathbb{F}_q$, $\beta_1,\beta_2\in\mathbb{F}_{q^m}$, we have $\alpha_1 \phi_m(\beta_1) + \alpha_2 \phi_m(\beta_2) \!=\! \phi_m(\alpha_1\beta_1+\alpha_2\beta_2)$. This implies that there always exists some $P\!\in\!\mathbb{F}_{q}^{n \times n}$ with $\mathsf{rank\,} P\!=\!n$ satisfying \begin{align} \vec{b} P \!=\! [g_1,\dots,g_{d_R(V)},0,\dots,0] \!\in\! \mathbb{F}_{q^m}^n, g_j \!\neq\! 0, \label{eq:transform} \end{align} where $g_1,\dots,g_{d_R(V)}$ are linearly independent over $\mathbb{F}_q$, and note that $P$ represents the elementary column operation on $[\phi_m(b_1),\dots,\phi_m(b_n)]$. 
Also for $\alpha_1,\alpha_2 \!\in\!\mathbb{F}_q$, $\beta_1,\beta_2\!\in\!\mathbb{F}_{q^m}$, we have $\alpha_1\beta_1^{q^i} \!+\! \alpha_2\beta_2^{q^i}\!=\!(\alpha_1\beta_1 \!+\! \alpha_2\beta_2)^{q^i}$ ($0 \!\leq\! i \!\leq\! m-1$). Hence, for $P\!\in\!\mathbb{F}_{q}^{n \times n}$ satisfying \Eq{eq:transform}, we also have $\vec{b}^{q^i} P \!=\! [g_1^{q^i},\dots,g_{d_R(V)}^{q^i},0,\dots,0] \!\in\! \mathbb{F}_{q^m}^n$ for all $0 \leq i \leq m-1$. Thus, by the elementary column operations on $M$ over $\mathbb{F}_q$ represented by $P$, we get $MP$. By eliminating zero columns from $MP$, we obtain a matrix $M' = \left[ f_{i,j}\right]_{i,j=1}^{m,d_R(V)}$, $f_{i,j}=g_j^{q^{i-1}}$, where $\mathsf{rank\,} M'=\mathsf{rank\,} M$. Let $M'_k\in\mathbb{F}_{q^m}^{k \times d_R(V)}$ $(1\leq k\leq d_R(V))$ be the submatrix consisting of the first $k$ rows of $M'$. Since $d_R(V) \!\leq\! \min\{m,n\}$ and $g_1,\dots,g_{d_R(V)}$ are linearly independent, $M'_k$ is a generator matrix of a $[d_R(V),k]$ Gabidulin code and $\mathsf{rank\,} M'_k=k$ \cite{Gabidulin1985}. Thus, $M'_{d_R(V)}$ is nonsingular, and hence we have $\mathsf{rank\,} M'_{d_R(V)}=\mathsf{rank\,} M'\!=\!d_R(V)$. Therefore, $\mathsf{dim\,} V^* \!=\! \mathsf{rank\,} M \!=\! \mathsf{rank\,} M' \!=\! d_R(V)$. \end{IEEEproof} \begin{lemma}\label{lma:rankdistance} For a code $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ and its subcode $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$, the first RGRW can be represented as $M_{R,1}(\mathcal{C}_1,\mathcal{C}_2) = \min \left\{ d_R(\vec{x},\vec{0}) : \vec{x} \in \mathcal{C}_1\backslash\mathcal{C}_2 \right\}$. \end{lemma} \begin{IEEEproof} $M_{R,1}(\mathcal{C}_1,\mathcal{C}_2)$ can be represented as \begin{align} &M_{R,1}(\mathcal{C}_1,\mathcal{C}_2)\nonumber\\ &= \min \left\{ \mathsf{dim\,} W : W \!\in\! \Gamma(\F_{q^m}^n), \mathsf{dim\,} (\mathcal{C}_1 \cap W)\! -\! \mathsf{dim\,} (\mathcal{C}_2 \cap W)\!\geq\!
1 \right\}\nonumber\\ &= \min \Big\{ \mathsf{dim\,} W : W \in \Gamma(\F_{q^m}^n),\nonumber\\ & \exists V \!\subseteq\! W \text{\,such that\,} V \!\subseteq\! (\mathcal{C}_1 \cap W), V \!\nsubseteq\! (\mathcal{C}_2 \cap W), \mathsf{dim\,} V \!\geq\! 1 \Big\}\label{eq:rankdistance1}. \end{align} For any subspace $V \subseteq \mathbb{F}_{q^m}^n$ with $\mathsf{dim\,} V \!\geq\! 1$, there always exists some $W \!\in\! \Gamma(\F_{q^m}^n)$ satisfying $W \!\supseteq\! V$, because we have $V^\ast \!\in\! \Gamma(\F_{q^m}^n)$ and $V^* \!\supseteq\! V$. Also, for subspaces $W$ and $V \!\subseteq\! W$ with $\mathsf{dim\,} V \!\geq\! 1$, if $W$ is the smallest space in $\Gamma(\F_{q^m}^n)$ including $V$, then $W\!=\!V^*$ \cite{Stichtenoth1990}. Thus \Eq{eq:rankdistance1} can be rewritten as \begin{align} &\min \Big\{ \mathsf{dim\,} W : V \!\subseteq\! \mathbb{F}_{q^m}^n, \mathsf{dim\,} V \!\geq\! 1\nonumber\\ &\quad \exists W \!\supseteq\! V, W\!\in\!\Gamma(\F_{q^m}^n), \text{\,such that\,} V \!\subseteq\! (\mathcal{C}_1 \cap W), V \!\nsubseteq\! (\mathcal{C}_2 \cap W) \Big\}\nonumber\\ &=\min \left\{ \mathsf{dim\,} V^* : V \!\subseteq\!\mathbb{F}_{q^m}^n, V \!\subseteq\! (\mathcal{C}_1 \!\cap\! V^*), V \!\nsubseteq\! (\mathcal{C}_2 \!\cap\! V^*), \mathsf{dim\,} V \!\geq\! 1 \right\}\nonumber\\ &= \min \left\{ \mathsf{dim\,} V^* : V \subseteq \mathcal{C}_1, V \nsubseteq \mathcal{C}_2, \mathsf{dim\,} V \geq 1 \right\},\label{eq:midrankdistance} \end{align} where the last equality of \Eq{eq:midrankdistance} is obtained by $V\subseteq(\mathcal{C}_1\cap V^*) \Leftrightarrow V\subseteq\mathcal{C}_1$, and $V\nsubseteq(\mathcal{C}_2\cap V^*) \Leftrightarrow V\nsubseteq\mathcal{C}_2$, from $V^*\supseteq V$. For subspaces $V$ and $V' \supseteq V$, we have $\mathsf{dim\,} V^* \leq \mathsf{dim\,} V'^*$. Therefore, \Eq{eq:midrankdistance} can be rewritten as follows.
\begin{align*} &\min \left\{ \mathsf{dim\,} V^* : V \subseteq \mathcal{C}_1, V \nsubseteq \mathcal{C}_2, \mathsf{dim\,} V \geq 1 \right\}\\ &=\min \left\{ \mathsf{dim\,} V^* : V \subseteq \mathcal{C}_1, V \nsubseteq \mathcal{C}_2, \mathsf{dim\,} V = 1 \right\}\\ &= \min \left\{ d_R(V) : V \subseteq \mathcal{C}_1, V \nsubseteq \mathcal{C}_2, \mathsf{dim\,} V = 1 \right\}\ \text{(by \Lma{lma:xxxx})}\\ &= \min \left\{ d_R(\vec{x},\vec{0}) : \vec{x} \in \mathcal{C}_1\backslash \mathcal{C}_2 \right\}. \end{align*}\\[-6.3ex] \end{IEEEproof} \Lma{lma:rankdistance} immediately yields the following corollary. \begin{corollary}\label{coro:rankdistance} For a linear code $\mathcal{C}$, $d_{R}(\mathcal{C}) = M_{R,1}(\mathcal{C},\{\vec{0}\})$ holds. \end{corollary} This shows that $M_{R,1}(\cdot,\{\vec{0}\})$ is a generalization of $d_R(\cdot)$. Now we present the following proposition that generalizes the Singleton-type bound of the rank distance \cite{Gabidulin1985}. \begin{proposition}[Generalization of Singleton-Type Bound]\label{prop:generalizedsingleton} Let $\mathcal{C}_1 \subseteq \mathbb{F}_{q^m}^n$ be a linear code and $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$ be its subcode. Then, the RGRW of $\mathcal{C}_1$ and $\mathcal{C}_2$ is upper bounded by \begin{align} M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) \leq \min\left\{1, \frac{m}{(n-\mathsf{dim\,} \mathcal{C}_2)}\right\}(n-\mathsf{dim\,} \mathcal{C}_1)+i, \label{eq:singleton} \end{align} for $1 \leq i \leq \mathsf{dim\,}(\mathcal{C}_1/\mathcal{C}_2)$. \end{proposition} \begin{IEEEproof} Without loss of generality, we may assume that $\mathcal{C}_2$ is systematic, {i.\@e.\@,~}$\mathcal{C}_2$ has a generator matrix whose leftmost $\mathsf{dim\,} \mathcal{C}_2$ columns form the identity matrix. Let $\mathcal{S}\subsetneqq\mathbb{F}_{q^m}^n$ be a linear code such that $\mathcal{C}_1$ is the direct sum of $\mathcal{C}_2$ and $\mathcal{S}$.
Then, after a suitable permutation of coordinates, a basis of $\mathcal{S}$ can be chosen such that the first $\mathsf{dim\,} \mathcal{C}_2$ coordinates of each basis vector are zero. Then, the effective length \cite{Forney1994} of the code $\mathcal{S}$ is less than or equal to $n-\mathsf{dim\,} \mathcal{C}_2$. Hence we have \begin{align} d_R(\mathcal{S}) &\leq \min\left\{1,\frac{m}{n-\mathsf{dim\,} \mathcal{C}_2}\right\}(n-\mathsf{dim\,} \mathcal{C}_2 - \mathsf{dim\,} \mathcal{S})+1,\nonumber\\ &= \min\left\{1,\frac{m}{n-\mathsf{dim\,} \mathcal{C}_2}\right\}(n-\mathsf{dim\,} \mathcal{C}_1)+1,\label{eq:singletonproof} \end{align} from the Singleton-type bound for the rank metric \cite{Gabidulin1985}. Here we write $\kappa=\min\left\{1,m/(n-\mathsf{dim\,} \mathcal{C}_2)\right\}$ for the sake of simplicity. Recall that $d_R(\mathcal{S})=M_{R,1}(\mathcal{S},\{\vec{0}\})$ from \Coro{coro:rankdistance}, and $M_{R,1}(\mathcal{S},\{\vec{0}\}) \leq \kappa(n-\mathsf{dim\,}\mathcal{C}_1)+1$ holds from \Eq{eq:singletonproof}. We now show by mathematical induction on $t$ that \begin{align} M_{R,t}(\mathcal{S},\{\vec{0}\}) \leq \kappa(n-\mathsf{dim\,}\mathcal{C}_1) + t, \label{eq:midproofsingleton} \end{align} holds for $1 \leq t \leq \mathsf{dim\,} (\mathcal{C}_1/\mathcal{C}_2)$. As shown above, \Eq{eq:midproofsingleton} is true for $t=1$. Assume that \Eq{eq:midproofsingleton} is true for some $t \geq 1$. Then, by the monotonicity shown in \Lma{lma:rgrw}, \begin{align*} M_{R,t +1}(\mathcal{S},\{\vec{0}\}) &\leq M_{R,t}(\mathcal{S},\{\vec{0}\}) +1 \leq \kappa(n-\mathsf{dim\,}\mathcal{C}_1)+t +1, \end{align*} holds. Thus, \Eq{eq:midproofsingleton} holds for $1 \leq t \leq \mathsf{dim\,} (\mathcal{C}_1/\mathcal{C}_2)$. Lastly, we prove \Eq{eq:singleton} by the above discussion about the RGRW of $\mathcal{S}$ and $\{\vec{0}\}$.
For an arbitrary fixed subspace $V \subseteq \mathbb{F}_{q^m}^n$, we have $\mathsf{dim\,} (\mathcal{C}_1 \cap V) \geq \mathsf{dim\,} (\mathcal{S} \cap V) + \mathsf{dim\,} (\mathcal{C}_2 \cap V)$, because $\mathcal{C}_1$ is a direct sum of $\mathcal{S}$ and $\mathcal{C}_2$. Hence, $\mathsf{dim\,} (\mathcal{C}_1 \cap V) - \mathsf{dim\,} (\mathcal{C}_2 \cap V)\geq \mathsf{dim\,} (\mathcal{S} \cap V)$ holds, and we have $M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) \leq M_{R,i}(\mathcal{S},\{\vec{0}\})$ for $1 \leq i \leq \mathsf{dim\,} (\mathcal{C}_1/\mathcal{C}_2)$ from \Def{def:rgrw}. Therefore, from the foregoing proof, we have \begin{align*} M_{R,i}(\mathcal{C}_1,\mathcal{C}_2) \leq M_{R,i}(\mathcal{S},\{\vec{0}\}) \leq \kappa(n-\mathsf{dim\,}\mathcal{C}_1)+i, \end{align*} for $1 \leq i \leq \mathsf{dim\,} (\mathcal{C}_1/\mathcal{C}_2)$, and the proposition is proved. \end{IEEEproof} \Prop{prop:generalizedsingleton} immediately yields the following corollary. \begin{corollary}\label{coro:mrdrgrw} For a linear code $\mathcal{C} \subseteq \mathbb{F}_{q^m}^n$, $M_{R,i}(\mathcal{C},\{\vec{0}\}) \leq \min\{1,m/n\}(n-\mathsf{dim\,} \mathcal{C})+i$ for $1 \leq i \leq \mathsf{dim\,} \mathcal{C}$. The equality holds for all $i$ if and only if $\mathcal{C}$ is an MRD code. \end{corollary} \section{Universal Security Performance on Wiretap Networks}\label{sect:universalsecure} In this section, we express ${\Theta}_\mu$ and ${\Omega}$ given in \Sect{sect:securityperformance} in terms of the RDIP and RGRW. From now on, we use the following definition. \begin{definition} For $B\!\in\!\mathbb{F}_{q}^{\mu \times n}$, we define $V_B\!\triangleq\!\{\vec{u}B : \vec{u}\!\in\!\mathbb{F}_{q^m}^\mu\} \!\subseteq\! \mathbb{F}_{q^m}^n$. \end{definition} Recall that if an $\mathbb{F}_{q^m}$-linear space $V \subseteq \mathbb{F}_{q^m}^n$ admits a basis in $\mathbb{F}_{q}^n$ then $V \in \Gamma(\F_{q^m}^n)$ \cite{Stichtenoth1990}, which implies \begin{equation} V_B \in \Gamma(\F_{q^m}^n). 
\label{eq:vb} \end{equation} First, we give the following theorem for the universal equivocation ${\Theta}_\mu$ given in \Def{def:universalperformance}. \begin{theorem}\label{thm:equivocation} Consider the nested coset coding in \Def{def:nestedcoding}. Then, the universal equivocation ${\Theta}_\mu$ of $\mathcal{C}_1,\mathcal{C}_2$ is given by \begin{align*} {\Theta}_\mu &= l-K_{R,\mu}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1). \end{align*} \end{theorem} \begin{IEEEproof} Let $B\in\mathbb{F}_{q}^{\mu \times n}$ be an arbitrary matrix. By the chain rule \cite{Cover2006}, we have the following equation for the conditional entropy of $S$ given $BX^{\rm T}$: \begin{align} H(S|BX^{\rm T}) &= H(S,X|BX^{\rm T}) - H(X|S, BX^{\rm T}) \nonumber\\ &= H(X|BX^{\rm T}) + H(S | X, BX^{\rm T}) - H(X|S, BX^{\rm T}) \nonumber\\ &= H(X|BX^{\rm T}) - H(X|S, BX^{\rm T}). \label{eq:nonuniforms} \end{align} Then, from \cite[Proof of Lemma 4.2]{Zhang2009}, we have \begin{align*} H(X|BX^{\rm T}) &= n-\mathsf{dim\,} \mathcal{C}^{\perp}_1 - \mathsf{dim\,} V_B + \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V_B), \\ H(X|S, BX^{\rm T}) &= n-\mathsf{dim\,} \mathcal{C}^{\perp}_2 - \mathsf{dim\,} V_B + \mathsf{dim\,}(\mathcal{C}^{\perp}_2 \cap V_B). \end{align*} By substituting these equations into \Eq{eq:nonuniforms}, we have \begin{align} H(S|BX^{\rm T}) &= \mathsf{dim\,} \mathcal{C}^{\perp}_2 \!-\! \mathsf{dim\,} \mathcal{C}^{\perp}_1 \!-\! \mathsf{dim\,}(\mathcal{C}^{\perp}_2 \cap V_B) \!+\! \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V_B) \nonumber\\ &= l - \mathsf{dim\,}(\mathcal{C}^{\perp}_2 \cap V_B) + \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V_B).\label{eq:nonuniforms2} \end{align} By \Eq{eq:vb} we have \begin{align} \left\{ V_B : B\in \mathbb{F}_{q}^{\mu\times n}\right\} = \bigcup_{i\leq \mu} \colinvi{i}. \label{eq:m3} \end{align} Thus, by \Eq{eq:nonuniforms2} and \Def{def:rdip}, the universal equivocation ${\Theta}_\mu$ is given as follows.
\begin{align*} &{\Theta}_\mu = \min_{B \in \mathbb{F}_{q}^{\mu\times n}} H(S|BX^{\rm T})\\ &= l- \max_{B\in \mathbb{F}_{q}^{\mu\times n}} \left\{ \mathsf{dim\,}(\mathcal{C}^{\perp}_2 \cap V_B) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V_B) \right\}\\ &= l- \max_{V \in \bigcup_{i\leq \mu} \colinvi{i}} \left\{ \mathsf{dim\,}(\mathcal{C}^{\perp}_2 \cap V) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V) \right\}\mbox{(by \Eq{eq:m3})}\\ &= l- \max_{V \in \colinvi{\mu}} \left\{ \mathsf{dim\,}(\mathcal{C}^{\perp}_2 \cap V) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V) \right\}\mbox{(by Thm.\ \ref{thm:monotonerdip})}\\ &= l-K_{R,\mu}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1). \end{align*}\\[-6.3ex] \end{IEEEproof} \begin{example}\label{ex1} The existing schemes \cite{Kurihara2012,Silva2009,Silva2011} used MRD codes as $\mathcal{C}^{\perp}_1$ and $\mathcal{C}^{\perp}_2$, where $m \geq n$. By \Coro{coro:rankdistance}, we have $\mathsf{dim\,} (V \cap \mathcal{C}^{\perp}_2) = 0$ for any $V \in \colinvi{\mathsf{dim\,} \mathcal{C}_2}$. This implies $K_{R,\mu}(\mathcal{C}^{\perp}_2, \mathcal{C}^{\perp}_1)=K_{R,\mu}(\mathcal{C}^{\perp}_2, \{\vec{0}\})=0$ for $0\leq \mu \leq \mathsf{dim\,} \mathcal{C}_2$. On the other hand, $K_{R,\mathsf{dim\,} \mathcal{C}_1}(\mathcal{C}^{\perp}_2, \{\vec{0}\}) = \mathsf{dim\,} \mathcal{C}_1 - \mathsf{dim\,} \mathcal{C}_2$ by \Coro{coro:mrdrgrw}. Since $\mathsf{dim\,} (V \cap \mathcal{C}^{\perp}_1) \!=\! 0$ for any $V \!\in\! \colinvi{\mathsf{dim\,} \mathcal{C}_1}$ by \Coro{coro:rankdistance}, we have $K_{R,\mathsf{dim\,} \mathcal{C}_1}(\mathcal{C}^{\perp}_2, \mathcal{C}^{\perp}_1) \!=\! \mathsf{dim\,} \mathcal{C}_1 \!-\! \mathsf{dim\,} \mathcal{C}_2$. By \Thm{thm:monotonerdip}, $K_{R,\mu}(\mathcal{C}^{\perp}_2$, $\mathcal{C}^{\perp}_1) \!=\! \mu \!-\! \mathsf{dim\,} \mathcal{C}_2$ for $\mathsf{dim\,} \mathcal{C}_2 \!\leq\! \mu \!\leq\! \mathsf{dim\,} \mathcal{C}_1$. By \Thm{thm:equivocation}, we see that ${\Theta}_\mu \!=\! l \!-\! 
\max\{0, \mu \!-\! \mathsf{dim\,}\mathcal{C}_2\}$ for $0 \!\leq\! \mu \!\leq\! \mathsf{dim\,} \mathcal{C}_1 (= l \!+\! \mathsf{dim\,} \mathcal{C}_2)$ in the schemes \cite{Kurihara2012,Silva2009,Silva2011}. \end{example} We then have the following corollary in terms of the RGRW. \Coro{prop:perfectsecrecy} shows that the wiretapper obtains no information about $S$ from any $M_{R,1}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1)-1$ links. \begin{corollary}\label{prop:perfectsecrecy} Consider the nested coset coding in \Def{def:nestedcoding}. Then, the wiretapper must observe at least $M_{R,j}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1)$ links to obtain mutual information $j$ ($1 \leq j \leq l$) between $S$ and the observed packets. \end{corollary} \begin{IEEEproof} From \Eq{eq:nonuniforms2}, the smallest number $\mu$ of tapped links satisfying $I(S;BX^{\rm T})=j$ $(1 \leq j \leq l)$ is \begin{align*} &\min \left\{ \mu: \exists B \in \mathbb{F}_q^{\mu \times n}, I(S;BX^{\rm T})=j\right\} \\ &\ = \min \left\{\mu: \exists B \in \mathbb{F}_q^{\mu \times n}, l-H(S|BX^{\rm T})=j\right\}\\ &\ = \min \left\{\mu : \exists B \in \mathbb{F}_q^{\mu \times n}, \mathsf{dim\,} (\mathcal{C}^{\perp}_2 \cap V_B) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V_B)=j\right\}. \end{align*} From \cite[Lemma 1]{Stichtenoth1990} and \Lma{lma:rgrw}, this equation can be rewritten as follows. \begin{align*} &\min \left\{\mu : \exists B \in \mathbb{F}_q^{\mu \times n}, \mathsf{dim\,} (\mathcal{C}^{\perp}_2 \cap V_B) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V_B)=j\right\} \\ &= \min \left\{\mathsf{dim\,} V : V \in \Gamma(\F_{q^m}^n), \mathsf{dim\,} (\mathcal{C}^{\perp}_2 \cap V) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1 \cap V)=j\right\} \\ &= M_{R,j}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1).
\end{align*}\\[-6.3ex] \end{IEEEproof} Although the message $S$ has been assumed to be uniformly distributed over $\mathbb{F}_{q^m}^l$ in \Sect{sect:nestedcoding}, the following proposition reveals that the wiretapper still obtains no information about $S$ from any $M_{R,1}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1)-1$ links even if $S$ is arbitrarily distributed. \begin{proposition}\label{coro:distribution} Fix the transfer matrix $B$ to the wiretapper. Suppose that the wiretapper obtains no information about $S$ from $BX^{\rm T}$ when $S$ is uniformly distributed over $\mathbb{F}_{q^m}^l$ as described in \Sect{sect:nestedcoding}. Then, even if $S$ is chosen according to an arbitrary distribution over $\mathbb{F}_{q^m}^l$, the wiretapper still obtains no information about $S$ from $BX^{\rm T}$, that is, $I(S;BX^{\rm T})=0$. \end{proposition} \begin{IEEEproof} When we assume that $S$ is arbitrarily distributed over $\mathbb{F}_{q^m}^l$, $H(X|S, BX^{\rm T})$ is upper bounded as follows from \cite[Proof of Lemma 6]{Silva2011} and \cite[Proof of Lemma 4.2]{Zhang2009}. \begin{align*} H(X|S, BX^{\rm T}) &\leq n-\mathsf{dim\,}\mathcal{C}^{\perp}_2-\mathsf{dim\,} V_B + \mathsf{dim\,}(\mathcal{C}^{\perp}_2\cap V_B). \end{align*} Also, since $X$ is uniformly distributed over a coset $\psi(S)\in\mathcal{C}_1/\mathcal{C}_2$ for fixed $S$, we have $H(X|S)=\mathsf{dim\,} \mathcal{C}_2=n-\mathsf{dim\,}\mathcal{C}^{\perp}_2$. For the dimension of a subspace $\{B X^{\rm T} : X\in\mathcal{C}_1\}$, we have \begin{align*} &\mathsf{dim\,} \{B X^{\rm T} : X \in \mathcal{C}_1\} =\mathsf{rank\,} B G^{\rm T} =\mathsf{rank\,} G B^{\rm T}\\ &\quad=\mathsf{dim\,} \{G \vec{v}^{\rm T} : \vec{v} \in V_B\} =\mathsf{dim\,} V_B - \mathsf{dim\,} (\mathcal{C}^{\perp}_1 \cap V_B), \end{align*} where $G\in\mathbb{F}_{q^m}^{\mathsf{dim\,} \mathcal{C}_1 \times n}$ is a generator matrix of $\mathcal{C}_1$. Hence we have $H(BX^{\rm T})\leq \mathsf{dim\,} V_B-\mathsf{dim\,} (\mathcal{C}^{\perp}_1 \cap V_B)$.
We thus have \begin{align} I(S;BX^{\rm T}) &=I(S,X;BX^{\rm T}) - I(X;BX^{\rm T}|S)\nonumber\\ &=H(BX^{\rm T}) - H(X|S) + H(X|S, BX^{\rm T})\nonumber\\ &\leq \mathsf{dim\,}(\mathcal{C}^{\perp}_2\cap V_B) - \mathsf{dim\,}(\mathcal{C}^{\perp}_1\cap V_B)\label{eq:m1} \end{align} for any distribution of $S$. By $I(S;BX^{\rm T})=H(S)-H(S|BX^{\rm T})$ and \Eq{eq:nonuniforms2} we can see that the equality holds if $S$ is uniformly distributed. Therefore, for fixed $B$, if $I(S;BX^{\rm T})=0$ holds for uniformly distributed $S$, then the right-hand side of \Eq{eq:m1} is zero, which implies that $I(S;BX^{\rm T})=0$ also holds for arbitrarily distributed $S$ from the nonnegativity of mutual information \cite{Cover2006}. \end{IEEEproof} Lastly, we express ${\Omega}$ in \Def{def:universalalpha} in terms of the RGRW. For a subset $\mathcal{J} \subseteq \{1,\dots,N\}$ and a vector $\vec{c}=[c_1,\dots,c_N]\in\mathbb{F}_{q^m}^N$, let $P_\mathcal{J}(\vec{c})$ be a vector of length $|\mathcal{J}|$ over $\mathbb{F}_{q^m}$, obtained by removing the $t$-th components $c_t$ for $t \notin \mathcal{J}$. For example, for $\mathcal{J}=\{1,3\}$ and $\vec{c}=[1,1,0,1]$ ($N=4$), we have $P_\mathcal{J}(\vec{c})=[1,0]$. The \textit{punctured code} $P_\mathcal{J}(\mathcal{C})$ of a code $\mathcal{C} \subseteq \mathbb{F}_{q^m}^N$ is given by $P_\mathcal{J}(\mathcal{C}) \triangleq \left\{P_\mathcal{J}(\vec{c}) : \vec{c}\in \mathcal{C} \right\}$. The \textit{shortened code} $\mathcal{C}_\mathcal{J}$ of a code $\mathcal{C} \subseteq \mathbb{F}_{q^m}^N$ is defined by $\mathcal{C}_\mathcal{J} \triangleq \left\{ P_\mathcal{J}(\vec{c}) : \vec{c}=[c_1,\dots,c_N] \in \mathcal{C}, c_i = 0 \text{ for } i \notin \mathcal{J} \right\}$. For example, for $\mathcal{C}=\{[0,0,0],[1,1,0],[1,0,1],[0,1,1]\}$ ($N=3$) and $\mathcal{J}=\{2,3\}$, we have $\mathcal{C}_\mathcal{J}=\{[0,0],[1,1]\}$. We then have the following theorem for the universal ${\Omega}$-strong security defined in \Def{def:universalalpha}. 
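The puncturing and shortening operations defined above are easy to check computationally. A minimal Python sketch (illustrative only, not part of the coding scheme; index sets are 1-based as in the text) reproduces the worked examples over $\mathbb{F}_2$:

```python
def puncture(c, J):
    # P_J(c): keep only the coordinates indexed by J (1-based indices)
    return tuple(c[t - 1] for t in sorted(J))

def punctured_code(C, J):
    # P_J(C) = { P_J(c) : c in C }
    return {puncture(c, J) for c in C}

def shortened_code(C, J):
    # C_J: puncture only those codewords that vanish outside J
    N = len(C[0])
    outside = set(range(1, N + 1)) - set(J)
    return {puncture(c, J) for c in C if all(c[t - 1] == 0 for t in outside)}

# Examples from the text:
print(puncture([1, 1, 0, 1], {1, 3}))   # (1, 0)
C = [[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]]
print(shortened_code(C, {2, 3}))        # {(0, 0), (1, 1)}
```

Note that shortening first restricts to codewords that vanish outside $\mathcal{J}$ and then punctures, so $\mathcal{C}_\mathcal{J}$ is in general a subcode of $P_\mathcal{J}(\mathcal{C})$.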
\begin{theorem}\label{thm:universalalpha} Let $\bari{i}\triangleq\{1,\dots,l+n\}\backslash\{i\}$. Fix $\mathcal{C}_1$, $\mathcal{C}_2$ and $\psi$ in \Def{def:nestedcoding} and consider the corresponding nested coset coding scheme in \Def{def:nestedcoding}. By using $\mathcal{C}_1$, $\mathcal{C}_2$ and $\psi$, define \begin{align*} \mathcal{C}'_1 \triangleq \left\{[S,X]: S \in \mathbb{F}_{q^m}^l\text{ and } X\in\psi(S)\right\} \subseteq \mathbb{F}_{q^m}^{l+n}. \end{align*} For each index $1 \leq i \leq l$, we define a punctured code $\mathcal{D}_{1,i}$ of $\mathcal{C}'_1$ as $\mathcal{D}_{1,i} \triangleq P_{\bari{i}}(\mathcal{C}'_1) \subseteq\mathbb{F}_{q^m}^{l+n-1}$, and a shortened code $\mathcal{D}_{2,i}$ of $\mathcal{C}'_1$ as $\mathcal{D}_{2,i} \triangleq (\mathcal{C}'_1)_{\bari{i}} \subseteq\mathbb{F}_{q^m}^{l+n-1}$. Then, the value ${\Omega}$ in \Def{def:universalalpha} is given by \begin{align} {\Omega} = \min \left\{ M_{R,1}(\mathcal{D}^{\perp}_{2,i},\mathcal{D}^{\perp}_{1,i}) : 1 \leq i \leq l \right\} -1. \label{eq:universalalphamin} \end{align} \end{theorem} \begin{IEEEproof} Define $\mathcal{C}'_2 \triangleq \{[\vec{0},\vec{c}_2] : \vec{c}_2 \in \mathcal{C}_2\}\subseteq\mathbb{F}_{q^m}^{l+n}$. Since $\mathcal{C}_2 \subsetneqq \mathcal{C}_1$, $\mathcal{C}'_2$ is also a subcode of $\mathcal{C}'_1$. Thus, in terms of $\mathcal{C}'_1$ and $\mathcal{C}'_2$, we can see that the vector $[S,X]\in\mathbb{F}_{q^m}^{l+n}$ is generated by a nested coset coding scheme of $\mathcal{C}'_1$ and $\mathcal{C}'_2$ from $S$. Then, from the definition of $\mathcal{C}'_1$ and $\mathcal{C}'_2$, we can see that $\mathcal{D}_{2,i}$ is a subcode of $\mathcal{D}_{1,i}$ with dimension $\mathsf{dim\,} \mathcal{D}_{2,i}=\mathsf{dim\,} \mathcal{D}_{1,i} - 1 = \mathsf{dim\,} \mathcal{C}_1 - 1$ over $\mathbb{F}_{q^m}$ for each $i \in \{1,\dots,l\}$. 
Let $\mathcal{L}\triangleq\{1,\dots,l\}$ and $S_{\mathcal{L}\backslash\{i\}} \triangleq [S_1,\dots,S_{i-1},S_{i+1},\dots,S_l]$ for each $1\leq i \leq l$. For $S_i \in\mathbb{F}_{q^m}$ define a coset \begin{align*} \phi(S_i) &\triangleq \left\{[S_{\mathcal{L}\backslash\{i\}},X]: S_{\mathcal{L}\backslash\{i\}} \in\mathbb{F}_{q^m}^{l-1} \text{ and } X \in \psi(S)\right\} \in\mathcal{D}_{1,i}/\mathcal{D}_{2,i}. \end{align*} Here we define $Z_{\bari{i}} \triangleq P_{\bari{i}}([S,X])= [S_{\mathcal{L}\backslash\{i\}},X] \in \mathcal{D}_{1,i}$. Recall that $S_1,\dots,S_l$ are mutually independent and uniformly distributed over $\mathbb{F}_{q^m}$. Thus, considering a nested coset coding scheme that generates $Z_{\bari{i}}$ from a secret message $S_i \in \mathbb{F}_{q^m}$ with $\mathcal{D}_{1,i},\mathcal{D}_{2,i}$, we can see that $Z_{\bari{i}} \in \phi(S_i) \in \mathcal{D}_{1,i}/\mathcal{D}_{2,i}$ is chosen uniformly at random from $\phi(S_i)$. Therefore, we have $I(S_i;DZ_{\bari{i}}^{\rm T})=0$ for any $D\in\mathbb{F}_q^{\mu \times (n+l-1)}$ whenever $\mu < M_{R,1}(\mathcal{D}^{\perp}_{2,i},\mathcal{D}^{\perp}_{1,i})$ from \Coro{prop:perfectsecrecy}. For an arbitrary subset $\mathcal{R}\!\subseteq\!\mathcal{L}\backslash\{i\}$, define a matrix $F_\mathcal{R}$ that consists of $|\mathcal{R}|$ rows of an $(l-1)\times(l-1)$ identity matrix, satisfying $[S_j : j \!\in\! \mathcal{R}]^{\rm T} \!=\! F_\mathcal{R} S_{\mathcal{L}\backslash\{i\}}^{\rm T}$. For an arbitrary matrix $B\!\in\!\mathbb{F}_q^{k \times n}$ ($0 \!\leq\! k \!\leq\! n$), set $D\!=\!\left[\begin{smallmatrix}F_\mathcal{R} & O \\ O & B\end{smallmatrix}\right]$. Then, from the foregoing proof, we have \begin{align*} 0=I(S_i;DZ_{\bari{i}}^{\rm T})&= I(S_i; S_\mathcal{R},BX^{\rm T}) =H(S_i|S_\mathcal{R}) - H(S_i | BX^{\rm T},S_{\mathcal{R}}) \\ &= H(S_i) - H(S_i | BX^{\rm T},S_{\mathcal{R}}) =I(S_i; BX^{\rm T}|S_\mathcal{R}), \end{align*} whenever $|\mathcal{R}| \!+\! k \!<\! 
M_{R,1}(\mathcal{D}^{\perp}_{2,i},\mathcal{D}^{\perp}_{1,i})$. Since $I(S_i; BX^{\rm T}|S_\mathcal{R})\!=\!0$ is equivalent to \Eq{eq:universalalpha} from \cite[Prop.\@\,5]{Silva2009}, we have \Eq{eq:universalalphamin} by selecting the minimum value of $M_{R,1}(\mathcal{D}^{\perp}_{2,i},\mathcal{D}^{\perp}_{1,i})\!-\! 1$ for $1 \!\leq\! i \!\leq\! l$. \end{IEEEproof} \begin{example}\label{ex2} The scheme proposed in \cite{Kurihara2012} used a systematic MRD code as $\mathcal{C}'_1$ (not $\mathcal{C}_1$), where $m \geq l+n$. We proved $\min \left\{ M_{R,1}(\mathcal{D}^{\perp}_{2,i},\mathcal{D}^{\perp}_{1,i}) : 1 \leq i \leq l \right\}=n$ in \cite[Proof of Theorem 4]{Kurihara2012}. By \Thm{thm:universalalpha}, we see that the scheme \cite{Kurihara2012} attains the universal $(n-1)$-strong security in the sense of \Def{def:universalalpha}, while \cite{Kurihara2012} proved it by adapting the proof argument in \cite{Silva2009}. \end{example} As shown in \Prop{coro:distribution}, no information of $S$ is leaked from fewer than $M_{R,1}(\mathcal{C}^{\perp}_2,\mathcal{C}^{\perp}_1)$ tapped links even if $S$ is arbitrarily distributed. In contrast, $S$ must be uniformly distributed over $\mathbb{F}_{q^m}^l$ to establish \Thm{thm:universalalpha}. This is because elements of $S$ need to be treated as extra random packets, as in strongly secure network coding schemes \cite{Silva2009,Harada2008,Matsumoto2011}. \section{Universal Error Correction Capability of Secure Network Coding}\label{sect:errorcorrection} This section derives the universal error correction capability by the approach of \cite[Section III]{Silva2009a}. Recall that the received packets $Y$ are given by $Y^{\rm T}=AX^{\rm T}+DZ^{\rm T}$ in the setup of \Sect{sect:nestedcoding}, and that $X$ is chosen from the coset $\psi(S)\in\mathcal{C}_1/\mathcal{C}_2$ corresponding to $S$ by the nested coset coding in \Def{def:nestedcoding}. From now on, we write $\mathcal{X}\triangleq\psi(S)$ for the sake of simplicity. 
First, we define the \textit{discrepancy} \cite{Silva2009a} between $\mathcal{X}$ and $Y$ by \begin{align} \Delta_A (\mathcal{X}, Y) &\!\triangleq\! \min \{r \!\in\! \mathbb{N} : D \!\in\! \mathbb{F}_{q}^{N \times r}, Z \!\in\! \mathbb{F}_{q^m}^{r}, X \!\in\! \mathcal{X}, Y^{\rm T}\!=\!AX^{\rm T}\!+\!DZ^{\rm T}\} \nonumber\\ &\!=\! \min \left\{ d_R(XA^{\rm T},Y) : X \in \mathcal{X} \right\},\label{eq:defdescrepancy} \end{align} where the second equality is derived from \cite[Lemma 4]{Silva2009a}. This definition of $\Delta_A(\mathcal{X},Y)$ represents the minimum number $r$ of error packets $Z$ required to be injected in order to transform at least one element of $\mathcal{X}$ into $Y$, as \cite[Eq.\,(9)]{Silva2009}. Next, we define the \textit{$\Delta$-distance} \cite{Silva2009a} between $\mathcal{X}$ and $\mathcal{X}'$, induced by $\Delta_A(\mathcal{X},Y)$, as \begin{align} \delta_A(\mathcal{X},\mathcal{X}') \triangleq \min \left\{\Delta_A(\mathcal{X},Y)+ \Delta_A(\mathcal{X}',Y) : Y \in \mathbb{F}_{q^m}^N\right\}, \label{eq:defdelta} \end{align} for $\mathcal{X},\mathcal{X}'\in\mathcal{C}_1/\mathcal{C}_2$. \begin{lemma}\label{lma:deltadistance} For $\mathcal{X}, \mathcal{X}' \in \mathcal{C}_1/\mathcal{C}_2$, we have \begin{align} \delta_A(\mathcal{X},\mathcal{X}') &= \min \left\{ d_R(XA^{\rm T},X'A^{\rm T}): X \in \mathcal{X}, X' \in \mathcal{X}' \right\}. \label{eq:deltalma} \end{align} \end{lemma} \begin{IEEEproof} First we have \begin{align} &\delta_A(\mathcal{X},\mathcal{X}') =\min \left\{\Delta_A(\mathcal{X},Y)+ \Delta_A(\mathcal{X}',Y) : Y \in \mathbb{F}_{q^m}^N\right\} \nonumber\\ &\!=\!\min \Big\{ \min \left\{ d_R(XA^{\rm T},Y) : X \in \mathcal{X} \right\} \nonumber\\ &\qquad\qquad+ \min \left\{ d_R(X'A^{\rm T},Y) : X' \in \mathcal{X}' \right\} : Y \in \mathbb{F}_{q^m}^N \Big\}\nonumber\\ &\!=\! \min \left\{ d_R(XA^{\rm \!T},Y) \!+\! d_R(X'\!A^{\rm \!T},Y): X \!\in\! \mathcal{X}, X'\! \!\in\! \mathcal{X}'\!, Y \!\in\! 
\mathbb{F}_{q^m}^N \right\}.\label{eq:triangle} \end{align} The rank distance satisfies the triangle inequality $d_R(XA^{\rm T},X'A^{\rm T}) \leq d_R(XA^{\rm T},Y) + d_R(X'A^{\rm T},Y)$ for all $Y\in\mathbb{F}_{q^m}^N$ \cite{Gabidulin1985}. This lower bound can be achieved by choosing, {e.\@g.\@,~} $Y =X A^{\rm T}$. Therefore, from \Eq{eq:triangle}, we have \Eq{eq:deltalma}. \end{IEEEproof} The next lemma shows that $\Delta_A(\mathcal{X},Y)$ is \textit{normal} \cite[Definition 1]{Silva2009a}. \begin{lemma}\label{lma:normal} For all $\mathcal{X},\mathcal{X}'\in\mathcal{C}_1/\mathcal{C}_2$ and all $0 \leq i \leq \delta_A(\mathcal{X},\mathcal{X}')$, there exists some $Y\in\mathbb{F}_{q^m}^N$ such that $\Delta_A(\mathcal{X},Y)=i$ and $\Delta_A(\mathcal{X}',Y)=\delta_A(\mathcal{X},\mathcal{X}')-i$. \end{lemma} \begin{IEEEproof} Let $\mathcal{X},\mathcal{X}'\in\mathcal{C}_1/\mathcal{C}_2$ and let $0 \leq i \leq d=\delta_A(\mathcal{X},\mathcal{X}')$. Then, $d=\min \left\{ d_R(XA^{\rm T},X'A^{\rm T}): X \in \mathcal{X}, X' \in \mathcal{X}'\right\}$ from \Lma{lma:deltadistance}. Let $\bar{X}\in\mathcal{X}$ and $\bar{X'}\in\mathcal{X}'$ be vectors satisfying $d=d_R(\bar{X}A^{\rm T},\bar{X}'A^{\rm T})$. From the proof of \cite[Theorem 6]{Silva2009a}, we can always find two vectors $W,W'\in\mathbb{F}_{q^m}^N$ such that $W+W' = (\bar{X}'-\bar{X})A^{\rm T}$, $\mathsf{rank\,}_{\mathbb{F}_q}(W)=i$ and $\mathsf{rank\,}_{\mathbb{F}_q}(W')=d-i$. Taking $\bar{Y}=\bar{X}A^{\rm T}+W=\bar{X}'A^{\rm T}-W'$, we have $d_R(\bar{X}A^{\rm T},\bar{Y}) = i$ and $d_R(\bar{X}'A^{\rm T},\bar{Y}) = d-i$. We thus obtain $\Delta_A(\mathcal{X},\bar{Y}) \leq i$ and $\Delta_A(\mathcal{X}',\bar{Y}) \leq d-i$ from \Eq{eq:defdescrepancy}. On the other hand, since $\delta_A(\mathcal{X},\mathcal{X}')=d$, we have $\Delta_A(\mathcal{X},Y) + \Delta_A(\mathcal{X}',Y) \geq d$ for any $Y \in \mathbb{F}_{q^m}^N$ from \Eq{eq:defdelta}. 
Therefore, $\Delta_A(\mathcal{X},\bar{Y}) = i$ and $\Delta_A(\mathcal{X}',\bar{Y}) = d-i$ hold. \end{IEEEproof} Let $\delta_A(\mathcal{C}_1/\mathcal{C}_2)$ be the minimum $\Delta$-distance given by \begin{align*} \delta_A (\mathcal{C}_1/\mathcal{C}_2) \triangleq \min \left\{ \delta_A(\mathcal{X},\mathcal{X}') : \mathcal{X},\mathcal{X}' \in \mathcal{C}_1/\mathcal{C}_2, \mathcal{X}\neq\mathcal{X}' \right\}. \end{align*} As \cite[Theorem 7]{Silva2009a}, from \Lma{lma:normal} and \cite[Theorem 3]{Silva2009a}, we have the following proposition. \begin{proposition}\label{prop:deltacorrection} A nested coset coding scheme with $\mathcal{C}_1,\mathcal{C}_2$ is guaranteed to determine the unique coset $\mathcal{X}$ against any $t$ packet errors for any fixed $A$ if and only if $\delta_A(\mathcal{C}_1/\mathcal{C}_2) \!>\! 2t$. \hfill\IEEEQED \end{proposition} Here we note that if $\mathcal{X}$ is uniquely determined, $S$ is also uniquely determined from \Def{def:nestedcoding}. \begin{lemma}\label{eq:cosetdelta} $\delta_A(\mathcal{C}_1/\mathcal{C}_2) = \min\{ d_R(XA^{\rm\! T}, X'\!A^{\rm\! T}) : X,X' \!\!\in\! \mathcal{C}_1, X'\!\!-\!X \!\notin\! \mathcal{C}_2 \}$. \end{lemma} \begin{IEEEproof} \begin{align*} &\delta_A(\mathcal{C}_1/\mathcal{C}_2) =\min \left\{ \delta_A(\mathcal{X},\mathcal{X}') : \mathcal{X},\mathcal{X}' \in \mathcal{C}_1/\mathcal{C}_2, \mathcal{X}\neq\mathcal{X}' \right\}\\ &\!=\! \min \!\Big\{ \!\min\!\left\{ d_R(XA^{\rm\! T},X'\!A^{\rm\! T})\!:\!X \!\in\!\mathcal{X},X'\!\!\in\!\mathcal{X}' \right\} \!:\! \mathcal{X},\mathcal{X}'\!\!\in\! \mathcal{C}_1/\mathcal{C}_2, \mathcal{X}\!\neq\!\mathcal{X}' \Big\}\\ &\!=\! \min\Big\{ d_R(XA^{\rm T},X'A^{\rm T}): X \!\in\!\mathcal{X} \!\in\! \mathcal{C}_1/\mathcal{C}_2, X'\!\in\!\mathcal{X}'\!\in\!\mathcal{C}_1/\mathcal{C}_2, \mathcal{X}\!\neq\!\mathcal{X}' \Big\}\\ &\!=\! \min \left\{ d_R(XA^{\rm T},X'A^{\rm T}) : X,X' \in \mathcal{C}_1, X'-X \notin \mathcal{C}_2 \right\}. 
\end{align*}\\[-6.3ex] \end{IEEEproof} \begin{theorem}\label{thm:errorcorrectioncap} Consider the nested coset coding in \Def{def:nestedcoding}. Then, the scheme is a universally ({i.\@e.\@,~} simultaneously for all $A \in \mathbb{F}_q^{N \times n}$ with rank deficiency at most $\rho$) $t$-error-$\rho$-erasure-correcting secure network coding scheme if and only if $M_{R,1}(\mathcal{C}_1,\mathcal{C}_2) > 2t + \rho$. \end{theorem} \begin{IEEEproof} For the rank deficiency $\rho \!=\! n\!-\!\mathsf{rank\,} A$, we have $d_R(X,X')\!-\!\rho \!\leq\! d_R(XA^{\rm T},X'A^{\rm T})$, and there always exists $A \in \mathbb{F}_q^{N\times n}$ depending on $(X,X')$ such that the equality holds. Thus, from \Lma{eq:cosetdelta}, we have \begin{align*} \min_{\substack{A \in \mathbb{F}_q^{N \times n}:\\ \mathsf{rank\,} A = n-\rho}} \delta_A(\mathcal{C}_1/\mathcal{C}_2) &\!=\! \min \left\{ d_R(X,X') : X,X' \!\in\! \mathcal{C}_1, X'\!-\!X \!\notin\! \mathcal{C}_2 \right\}\!-\!\rho\\[-2ex] &\!=\! \min \left\{ d_R(X,\vec{0}) : X \in \mathcal{C}_1, X \notin \mathcal{C}_2 \right\}-\rho\\ &\!=\! M_{R,1}(\mathcal{C}_1,\mathcal{C}_2) -\rho. \quad \text{(by \Lma{lma:rankdistance})} \end{align*} Therefore, we have ${\displaystyle \min_{A: \mathsf{rank\,} A = n-\rho}\!\!\delta_A(\mathcal{C}_1/\mathcal{C}_2) \!<\! \min_{A: \mathsf{rank\,} A = n-\rho'}\!\!\delta_A(\mathcal{C}_1/\mathcal{C}_2)}$ for $\rho > \rho'$, and hence we obtain ${\displaystyle \min_{A: \mathsf{rank\,} A \geq n-\rho}\delta_A(\mathcal{C}_1/\mathcal{C}_2)=}$ ${\displaystyle \min_{A: \mathsf{rank\,} A = n-\rho}\delta_A(\mathcal{C}_1/\mathcal{C}_2)= M_{R,1}(\mathcal{C}_1,\mathcal{C}_2) \!-\!\rho}$. \end{IEEEproof} \begin{example}\label{ex3} The existing scheme \cite{Silva2011} used MRD codes as $\mathcal{C}_1,\mathcal{C}_2$, where $m \geq n$. Then, by \Coro{coro:mrdrgrw}, we have $M_{R,1}(\mathcal{C}_1,\{\vec{0}\})=n-\mathsf{dim\,}\mathcal{C}_1+1$. 
Since $\mathsf{dim\,} (V \cap \mathcal{C}_2)=0$ for any $V \in \colinvi{\mathsf{dim\,}\mathcal{C}^{\perp}_2}$ by \Coro{coro:rankdistance} and $\mathsf{dim\,} \mathcal{C}^{\perp}_2 > n-\mathsf{dim\,} \mathcal{C}_1$, we have $M_{R,1}(\mathcal{C}_1,\mathcal{C}_2)=M_{R,1}(\mathcal{C}_1,\{\vec{0}\})$. Thus, by \Thm{thm:errorcorrectioncap} and \Coro{coro:rankdistance}, the scheme is universally $t$-error-$\rho$-erasure-correcting when $M_{R,1}(\mathcal{C}_1,\{\vec{0}\})=d_R(\mathcal{C}_1) > 2t+\rho$, as shown in \cite[Theorem 11]{Silva2011}. \end{example} \textsc{Acknowledgment:} This research was partially supported by the MEXT Grant-in-Aid for Scientific Research (A) No.~23246071.
\section{Introduction} The idea that small planets cool faster than larger ones stems from an area-to-volume argument. For a planet of radius $R_p$, heat flow scales with surface area while heat produced within its interior scales with volume. Taking the ratio, cooling scales as $1/R_p$. For scaling between planets, this assumes that planets have the same internal heat source concentrations, valid for planets of similar chemical composition. It also assumes equivalent surface heat flux for a given internal temperature. It has been noted that the relationship between heat flux and internal temperature depends on the tectonic mode of a planet. Planets with plate tectonics will have a different cooling efficiency than single plate planets \citep{Stevenson2003}. Although that potential has been acknowledged, it is still commonly assumed that surface-to-volume arguments remain valid for planets with the same tectonic modes - in particular, plate tectonics. This assumption has not been called out to date, and it too is invalid. Plate tectonics is a kinematic theory \citep{McKenzie1967,Morgan1991,LePichon1968}. Connecting plate tectonics to interior cooling is a dynamic problem, and a dynamic theory of plate tectonics is not agreed upon at present. It remains an active research problem with no shortage of hypotheses. Different assumptions regarding the balance between the forces driving and resisting plate motions lead to different scaling relationships between heat flux and internal temperature \citep{Tozer1972b,Christensen1985,Conrad1999b,Crowley2012}. Such relationships have been used in models that track the evolution of the Earth's internal temperature over time (thermal history models). Proponents of different hypotheses regarding the cooling efficiency of plate tectonics have argued that their models can match thermal history constraints. 
However, using an agnostic approach that accounted for model and observational uncertainties, \citet{Seales2019,Seales2020} showed that multiple hypotheses remain viable. More critically for planetary studies, no observational data demands equivalency between the cooling efficiency of planets within a tectonic mode akin to Earth and the Earth itself. In this note we explore how variances in plate tectonic cooling efficiencies couple with variable planetary size to determine cooling rates. \section{Thermal History Models} \indent Plate tectonics, on Earth and potentially other terrestrial planets, is a surface manifestation of thermal convection within a planet's rocky interior layer (i.e., its mantle). Thermal history models predict mantle cooling trajectories based on how internal heat sources ($H$) and convective heat flux ($q_{conv}$) evolve with time. A large class of such models use a global energy balance that determines the spherically averaged temperature of the mantle, $T$, according to \begin{equation} \label{Tdot} \rho c_pV\dot{T}=H-Aq_{conv} \end{equation} where $\rho$, $c_p$, and $\dot{T}$ are the mantle density, specific heat capacity, and time derivative of mantle temperature. The volume of the convecting mantle is $V=\frac{4\pi}{3}\left(R_p^3-R_c^3\right)$ and its surface area is $A=4\pi R_p^2$, where $R_c$ is the radius of the iron core of a terrestrial planet/moon. Radiogenic decay produces heat within the mantle according to \begin{equation} \label{H} H=VH_oexp(-\lambda t) \end{equation} where $H_o$ is a scaling constant representing the heat produced per unit time per unit volume, $\lambda$ is the decay constant, and $t$ is time. The heat flux through the surface depends on convective vigor in the mantle. 
It is typically parameterized using a scaling equation given by \citet{Schubert1979,Schubert1980}: \begin{equation}\label{eqparam} Nu=aRa^{\beta} \end{equation} where $Nu$ is the Nusselt number (a measure of surface heat flux), $Ra$ is the Rayleigh number (a measure of convective vigor), $a$ is a scaling constant that accounts for geometric effects (e.g., the wavelength of convection), and $\beta$ is a scaling exponent that encapsulates the efficiency of convective cooling. The value of $\beta$ varies between different hypotheses for the dynamics of plate tectonics. We will return to this issue after we develop the final model equations. The Nusselt number is the convective heat flux, $q_{conv}$, normalized by the amount of heat that would be conducted through the layer of thickness $D$. The conductive flux is given by Fourier's Law, $q_{cond}=\frac{k\Delta T}{D}$. The values $k$ and $\Delta T$ are the thermal conductivity and the difference between surface and interior temperatures. $Ra$ is defined as \begin{equation} \label{Ra} Ra=\frac{\rho g\alpha\Delta T D^3}{\kappa \eta (T)} \end{equation} where $\rho$, $g$, $\alpha$ and $\kappa$ are density, gravity, thermal expansivity and thermal diffusivity. The temperature-dependent mantle viscosity is defined as \begin{equation} \eta(T)=\eta_{ref}exp\left[\frac{A_e}{R}\left(\frac{1}{T}-\frac{1}{T_{ref}}\right)\right] \end{equation} where $A_e$ is the activation energy and $R$ the universal gas constant, and $\eta_{ref}$ and $T_{ref}$ are reference values \citep{Karato1993}. Conduction becomes unstable when $Ra$ exceeds a critical threshold value, $Ra_c$. Taking this into account, the convective heat flux is \begin{equation}\label{q_conv} q_{conv}=ak\frac{\Delta T}{D}\left(\frac{Ra}{Ra_c}\right)^\beta \end{equation} Combining the above we arrive at \begin{equation}\label{CTM} \dot{T}=\frac{1}{\rho c_p}\left[H_oexp(-\lambda t) - \frac{A}{V}\frac{ak\Delta T}{D}\left(\frac{Ra}{Ra_c}\right)^\beta\right]. 
\end{equation} If we assume that all values in Equation (\ref{Ra}) are constant except $T$ and $\eta (T)$, then combining Equations (\ref{eqparam}), (\ref{Ra}), and the definition of $Nu$ leads to \begin{equation} \label{Q} q_{conv}=a'\frac{T^{1+\beta}}{\eta (T)^{\beta}} \end{equation} \begin{equation} \label{a'} a'=\frac{ak}{D}\left(\frac{\rho g \alpha D^3}{\kappa}\right)^\beta \end{equation} where all constants have now been combined into $a'$. The material constants can be determined using experimental values. The geometric constant, $a$, can be determined from laboratory and/or numerical convection experiments in combination with boundary layer theory \citep{Davies1980,Schubert1980}. We will refer to that approach as a classic thermal history model (CTM). An alternative approach, that we refer to as a scaled thermal history model (STM), sets the constant $a'$ to a particular heat flow, $q_o$, at a scaling temperature, $T_o$, and viscosity, $\eta_o$ \citep{Christensen1985}. In doing so, we have an alternative formulation given by \begin{equation} \dot{T}=\frac{1}{\rho c_p}\left[H_oexp(-\lambda t)-\frac{Aq_o}{V}\left(\frac{T}{T_o}\right)^{1+\beta}\left(\frac{\eta_o}{\eta (T)}\right)^\beta\right]. \end{equation} CTMs integrate forwards in time from an initial mantle temperature value. STMs have historically built in Earth's present day heat flux, temperature, and viscosity directly into the model formulation (akin to a data assimilation approach). Following this rationale, STMs have integrated backwards in time to model past mantle temperatures starting from present day values. That is not conducive to modeling exoplanets, but the STM approach can be adapted for integrating forwards in time \citep{Seales2019,Seales2020}. For completeness, we evaluated how variable planetary mass/size and tectonic cooling efficiency (i.e., different $\beta$ values) affected thermal histories by evolving model paths of both CTMs and STMs forwards in time. 
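For concreteness, the STM energy balance can be integrated forwards in time with a simple explicit scheme. The sketch below (Python) uses the scaling values listed in Table \ref{table:convectiion_parameters}; the mantle density, core radius, and initial temperature are assumed nominal Earth-like numbers here, whereas the models in this paper calculate them from mass-radius scalings:

```python
import numpy as np

GYR = 3.156e16  # seconds per Gyr

# Scaling values from Table 1; mantle density, core radius, and the initial
# temperature are assumed nominal Earth-like numbers (illustrative only).
H0, lam = 1.25e-7, 0.34 / GYR            # W m^-3; decay constant in s^-1
rho, cp = 3300.0, 1400.0                 # assumed density; specific heat capacity
q0, T0, eta0 = 0.069, 1600.0, 4.45e19    # scaling heat flux, temperature, viscosity
eta_ref, Ae, Rgas, Tref = 1e21, 3e5, 8.314, 1855.0
Rp, Rc = 6.371e6, 3.48e6                 # planet and assumed core radii, m
A = 4.0 * np.pi * Rp**2
V = (4.0 * np.pi / 3.0) * (Rp**3 - Rc**3)

def eta(T):
    # Temperature-dependent Arrhenius viscosity
    return eta_ref * np.exp(Ae / Rgas * (1.0 / T - 1.0 / Tref))

def evolve(beta, T_init=2000.0, t_end=5.0, n=20000):
    # Forward-Euler integration of the STM energy balance over t_end Gyr
    dt = t_end * GYR / n
    T, t = T_init, 0.0
    for _ in range(n):
        q = q0 * (T / T0) ** (1.0 + beta) * (eta0 / eta(T)) ** beta
        T += dt / (rho * cp) * (H0 * np.exp(-lam * t) - A * q / V)
        t += dt
    return T

print(evolve(0.33), evolve(0.0))
```

With these assumptions, a larger $\beta$ gives a stronger viscosity feedback, so the final temperature depends only weakly on the starting temperature, while low-$\beta$ models retain much more memory of their initial conditions.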
\begin{table}[h] \caption{Model constants, scaling values and parameter ranges} \centering \begin{tabular}{ c c c c } \hline\hline Symbol & Parameter & Value & Units \\ \hline $H_o$ & Initial radiogenic concentration & 1.25e-7 & $Wm^{-3}$ \\ $\lambda$ & Decay constant & 0.34 & $Gyr^{-1}$ \\ $\alpha$ & Thermal expansivity & 2e-5 & $K^{-1}$ \\ $\kappa$ & Thermal diffusivity & 1e-6 & $m^2s^{-1}$ \\ $T_s$ & Surface Temperature & 273 & $K$ \\ $\eta_{ref}$ & Reference viscosity & $1e21 $ & $Pa*s$ \\ $A_e$ & Activation energy & 3e5 & $Jmol^{-1}$ \\ $R$ & Universal gas constant & 8.314 & $J(K*mol)^{-1}$ \\ $T_{ref}$ & Reference temperature & 1855 & $K$ \\ $Ra_c$ & Critical Rayleigh number & 1100 & - \\ $c_p$ & Specific heat capacity & 1400 & $J(kg*K)^{-1}$ \\ $k$ & Thermal conductivity & 4.2 & $W(m*K)^{-1}$ \\ $T_o$ & Scaling temperature & 1600 & $K$ \\ $q_o$ & Scaling convective heat flow & 0.069 & $Wm^{-2}$ \\ $\eta_o$ & Scaling viscosity & 4.45e19 & $Pa*s$ \\ $M_\oplus$ & Mass of Earth & 5.97e24 & $kg$ \\ $R_\oplus$ & Radius of Earth & 6371 & $km$ \\ $G$ & Gravitational constant & 6.67408e-11 & $Nm^2kg^{-2}$ \\ $\beta$ & Tectonic cooling efficiency constant & 0-0.33 & - \\ $M_p$ & Mass of Planet & 0.1-5 $M_\oplus$ & - \\ $R_p$ & Planet radius & Calculated & $km$ \\ $R_c$ & Core radius & Calculated & $km$ \\ $\rho$ & Mantle density & Calculated & $kgm^{-3}$ \\ $g$ & Surface gravity & Calculated & $ms^{-2}$ \\ \hline \end{tabular} \label{table:convectiion_parameters} \end{table} The cooling efficiency of plate tectonics remains a matter of debate. For this reason, thermal history models have assumed different values of $\beta$. Given that different $\beta$ values represent different physical assumptions regarding the dynamics of plate tectonics, and by association Earth cooling, it follows that different values of $\beta$ represent different hypotheses. The earliest thermal history models used a $\beta$ value of 0.33 \citep{Schubert1980,Spohn1982,Jackson1984}. 
This assumes that mantle viscosity dominantly resists convective motion \citep{Tozer1972b}. \citet{Gurnis1989} incorporated analogues to tectonic plates and showed this scaling could be recovered provided that weak plate boundaries were also incorporated. \citet{Moresi1998} allowed weak plate boundaries to develop dynamically, which led to a scaling exponent of 0.30. If plate boundaries are not assumed to be so weak that energy dissipation along them can be neglected and/or if plate strength offers resistance to convective motion, then the scaling exponent will be lower, with a range between $0\leq\beta\leq 0.15$ having been proposed \citep{Christensen1985,Giannandrea1993,Conrad1999a,Conrad1999b}. \citet{Hoink2011} and \citet{Crowley2012} argued that different sized plates can have different balances between plate driving and resisting forces. This leads to a mixed mode scaling that allows for $\beta$ values between 0.15 and 0.30 \citep{Hoink2013}. We will consider the full range of $\beta$ values cited above. As noted in the introduction, within data and model uncertainties, multiple models within that range can match observational constraints on the cooling of the Earth's interior over time \citep{Seales2019, Seales2020}. Our choice of constants, scaling values and parameter ranges are listed in Table \ref{table:convectiion_parameters}. We calculated thermal paths for planets ranging from 0.1 to 5 earth masses ($M_\oplus$). For the remainder of this paper $\oplus$ refers to Earth-referenced values. For scaling models with planetary mass ($M_p$), we followed \citet{Schaefer2015} in using the scalings of \citet{Valencia2006} to determine the planetary ($R_p$) and core ($R_c$) radii, assuming a constant core mass fraction of 0.3259. We calculated the average mantle density ($\rho$) from the planetary mass and mantle volume, and the surface gravitational acceleration ($g$), which scales as $GM_p/R_p^2$. 
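As a rough illustration of these mass scalings, the sketch below (Python) assumes a single power-law exponent of $\approx 0.27$ for the planetary radius (a commonly quoted super-Earth value; the models here use the full scalings of \citet{Valencia2006}) and, as a further simplifying assumption, applies the same exponent to the core radius:

```python
import numpy as np

# Crude sketch of the mass scalings described in the text. The single
# exponent (~0.27) and the Earth core radius are assumptions for
# illustration, not the paper's full Valencia et al. (2006) relations.
G = 6.67408e-11
M_E, R_E, Rc_E = 5.97e24, 6.371e6, 3.48e6  # Rc_E is an assumed Earth core radius
CMF = 0.3259                                # core mass fraction used in the text

def planet_properties(mass_ratio):
    Mp = mass_ratio * M_E
    Rp = R_E * mass_ratio ** 0.27           # assumed radius scaling exponent
    Rc = Rc_E * mass_ratio ** 0.27          # crude: same exponent for the core
    V_mantle = (4.0 * np.pi / 3.0) * (Rp**3 - Rc**3)
    rho_mantle = (1.0 - CMF) * Mp / V_mantle  # average mantle density
    g = G * Mp / Rp**2                        # surface gravity, as in the text
    return Rp, Rc, rho_mantle, g

Rp, Rc, rho, g = planet_properties(1.0)
print(rho, g)  # Earth case: g ~ 9.8 m s^-2
```

For the Earth case this recovers $g \approx 9.8$ m s$^{-2}$ and an average mantle density of roughly 4400 kg m$^{-3}$.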
We ran model suites with two different sets of initial temperatures. In one scenario, all planets began with the same average mantle temperature. The second suite of models started all planets with the same potential temperature - the temperature of the interior mantle removing the effects of adiabatic self-compression. We used the scaling of \citet{Schaefer2015} to convert between average mantle temperature and potential temperature. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\linewidth]{Sample_paths_SameTmi_CLASSIC.png} \caption{} \label{fig:Paths_SameTmi_classic} \end{subfigure}% \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\linewidth]{Sample_paths_DiffTmi_CLASSIC.png} \caption{} \label{fig:Paths_DiffTmi_classic} \end{subfigure}% \medskip \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\linewidth]{Sample_paths_SameTmi.png} \caption{} \label{fig:Paths_SameTmi} \end{subfigure}% \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\linewidth]{Sample_paths_DiffTmi.png} \caption{} \label{fig:Paths_DiffTmi} \end{subfigure}% \caption{Sample thermal histories of average mantle temperature for CTMs (a and b) and STMs (c and d) that begin at the same (a and c) and different (b and d) temperatures.} \label{fig:Paths} \end{figure} \section{Results} Figure \ref{fig:Paths} shows sample thermal histories of different models and different starting temperatures. For low $\beta$ values, temperatures were considerably warmer for CTMs than STMs. This behavior was first noted by \citet{Mcnamara2001}. It occurs principally because CTMs have one initial value, mantle temperature, while STMs have effectively two boundary values, temperature and heat flux. This difference did not impact our principal conclusions. For a fixed tectonic cooling efficiency, small planet models cool faster than larger ones. Allowing for different plate tectonic cooling efficiencies produced more nuanced results. 
For example, a 5 $M_\oplus$ planet with $\beta=0.2$ had nearly the same temperature as an order of magnitude less massive planet after ten billion years of model evolution. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{Contours_5gyr_SameTmiV2_CLASSIC.png} \caption{} \label{fig:Paths_SameTm_classic} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{Contours_5gyr_DiffTmiV2_CLASSIC.png} \caption{} \label{fig:Paths_DiffTm_classic} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{T2000_SameTmi_CLASSIC.png} \caption{} \label{fig:Sample_Paths_CTM} \end{subfigure}% \medskip \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{Contours11_SameTmiV3_noline.png} \caption{} \label{fig:Paths_SameTm} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{Contours1_DiffTmiV2.png} \caption{} \label{fig:Paths_DiffTm} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{T1400_STM_SameTmi.png} \caption{} \label{fig:Sample_Paths_STM} \end{subfigure}% \caption{Contoured mantle temperatures at 5 Gyr for CTMs (a and b) and STMs (d and e) with the same (a and d) and different (b and e) initial mantle temperatures. Sample paths from this space demonstrating that smaller planets can cool more slowly than larger ones (c and f).} \label{fig:Contours_Tmi} \end{figure} Figure \ref{fig:Contours_Tmi} shows contoured mantle temperatures after five billion years of model time plotted in planetary mass and cooling efficiency space. Models on the same contour have cooled to the same temperature. The contours show that differences in plate tectonic cooling efficiencies allowed planets of different masses/sizes to be at the same temperature, i.e., small planets can cool to the same temperature as larger planets over time. 
Figures \ref{fig:Sample_Paths_CTM} and \ref{fig:Sample_Paths_STM} demonstrate this using sample thermal paths, which are color-coded to the parameter space shown in Figures \ref{fig:Paths_SameTm_classic} and \ref{fig:Paths_SameTm}. The CTM samples had similar cooling histories despite an order of magnitude difference in planetary mass. Similar behavior occurred for STM models with the addition that less massive (smaller) planets could remain significantly warmer than more massive ones for five billion years of model time. \section{Discussion and Conclusions} The thermal history of a terrestrial planet affects its volcanic and geologic history. Volcanic/geologic history, in turn, affects the cycling of volatiles between a planet's interior and surface reservoirs, which is a critical factor in determining whether liquid water can exist at the surface of a planet over geological time \citep{Walker1981,Berner1983, Kasting1993,Kopparapu2014}. In addition to liquid water being key for life as we know it, life forms can also use a planet's internal energy as a fuel source for their survival \citep{Baross1985a,Jannasch1985}. For these reasons, the solid body thermal evolution of a terrestrial planet has had a long-standing connection to astrobiology. The discovery of terrestrial exoplanets has reinvigorated interest in that connection and in thermal history models. The discovery of terrestrial exoplanets larger and more massive than the Earth kick-started thinking about how differences in planetary size could affect a planet's thermal history and, by association, life potential \citep{Valencia2007}. A first wave of research into planetary size effects on geological history focused on whether larger planets would be more or less likely to have plate tectonics \citep[e.g.,][]{Valencia2007,ONeill2007}. The focus was on the initiation of plate tectonics. That is, would internal energy overcome rock strength such that plate margins could be generated. 
Although an interesting problem, the cooling efficiency of plate tectonics does not depend solely on whether a planet's internal energy sources can overcome rock strength to initiate plate subduction. It also depends on the source(s) of resistance to plate motions after plate tectonics is established. As discussed, that remains debated for the Earth, and exoplanets with plate tectonics can have different cooling efficiencies. Allowing for this leads to trade-offs between a planet's size/mass and tectonic cooling efficiency. A principal result is that planets smaller than the Earth, and of the same absolute age, can remain geologically and volcanically active. We have only considered differences in cooling efficiency for a particular tectonic mode (i.e., plate tectonics). Other tectonic regimes, such as episodic and stagnant lid, will further increase the possibility that planets of the same size as Earth may not have the same interior temperatures and/or that planets smaller than the Earth may have hotter interiors. Within our own solar system, it has been argued that Venus may have liquid magma at the base of its mantle \citep{ORourke2020}. This suggests the interior of Venus may be hotter than Earth's, despite the two planets having similar size and mass. Mars is considerably smaller and less massive than the Earth, yet estimates of its potential temperature are similar to Earth's \citep{Filiberto2017}. In addition, \citet{Ruiz2011} argued that the Martian mantle experienced recent warming. An added effect that could allow small planets to remain geologically active is tidal locking. In some cases, tidal heating may be the dominant heat source in the mantle. Rocky bodies in that setting may maintain the same interior temperature for billions of years. With a large enough volatile inventory, this could provide steady, persistent outgassing of life-essential elements \citep{Driscoll2015}.
By looking at a range of plate tectonic cooling efficiencies, we have shown that smaller planets can cool more slowly than larger ones. This implies that, to the degree that geological activity is critical for planetary habitability, exoplanets smaller than the Earth, and of the same age or older, should not be down-weighted in target selection strategies. An added implication is that planets sharing a range of Earth characteristics, including absolute age, can be at different times in their geological lifetimes -- the time window over which a planet can remain geologically active. To the degree that variations in volcanic/geologic/tectonic activity over time have influenced the evolution of life on Earth, this suggests that we should anticipate that Earth-like exoplanets, of the same age as Earth, need not be at the same evolutionary stages.
\section{Introduction} Massive multiple-input multiple-output (MIMO) is certainly the most noticeable technology to increase the throughput and guarantee reliability in modern and future wireless communication systems~\cite{marzetta}. With the deployment of large-scale antenna arrays, space diversity induces a remarkable improvement in the spectral efficiency and makes it possible to serve multiple users at the same time. However, to benefit from all the prospective advantages of massive MIMO, the high-dimensional channel frequency response must be accurately and promptly acquired at the base station (BS). Therefore, the strong reciprocity between the corresponding uplink (UL) and downlink (DL) channels makes time division duplex (TDD) networks one of the most prominent solution candidates under these strict constraints~\cite{sanguinetti2019massive}. In contrast, in frequency division duplex (FDD) systems, the absence of reciprocity between the UL and DL channel responses, and consequently the huge overhead for reporting the channel state information (CSI) from the mobile terminal (MT) to the BS, represent the major limitation for an effective deployment of massive MIMO communications. Although the TDD operation mode is the most commonly adopted, it has been shown that FDD massive MIMO would potentially handle the low latency requirements imposed by the standardization much better than TDD solutions~\cite{BjornsonLM16}. Hence, this premise has motivated and encouraged several studies that aim to reduce or eliminate the DL CSI acquisition overhead.
In addition to some well-known examples based on a particular model and sparsity assumptions that show how to extrapolate the DL covariance from the UL covariance~\cite{8542957}, there are a variety of data-driven approaches that address the challenge of recovering instantaneous DL CSI in FDD systems at the BS. Among these, to eliminate the need for feedback, several machine learning approaches have been proposed based on supervised learning of direct extrapolation of CSI across the frequency gap, using pairs of UL--DL training data~\cite{ArDoCaYaHoBr19,alk2019deep,8764345,han2020deep,safari,me}. \\ \indent A very innovative solution is represented by the concept of autoencoder neural networks, which are trained to learn a low-rate feedback from the MT to the BS~\cite{csinet,8638509,9090892,8972904,9279228,9347820}. In this setup, the DL CSI is encoded at the MT into a codeword, which is then fed back to the BS and decoded there, implying a distributed implementation of the parts of the autoencoder at the MT and BS. \\ \indent In this work, following this general approach, we propose a novel method which is again based on the autoencoding concept. However, motivated by the results in~\cite{utschick2021learning}, the unsupervised training of the autoencoder is conducted at the BS solely based on noisy UL training data, thus avoiding the issue that collecting DL data at the BS to enable the training would otherwise require an immense effort with respect to the overall network traffic. By the corresponding result from \cite{utschick2021learning}, we mean the equivalence of UL and DL CSI discovered therein with respect to their probability distributions. Thus, the core idea of our scheme is that the neural network encoder trained on UL data at the BS can be applied to DL data without any further adaptation, from any mobile device to which the encoder is offloaded. Training on the MT is no longer necessary at all, making it possible to quickly update the encoder on the MT at any time and place, e.g., when moving from one cell to another or for different locations in the cell. Compared to our approach, training at the MT with DL data has some disadvantages, e.g.: \textit{i)} the MT could spend only a short amount of time inside a cell and could not collect enough samples for training; \textit{ii)} if multiple MTs stayed in the same cell long enough to train different autoencoders, a lot of computational power would be wasted, since only one decoder would be deployed at the BS; \textit{iii)} there would be a high risk of overfitting, since it is unlikely that a MT visits all the locations in a cell because of the systematic behaviour of the users. Based on the presented simulation results, we are eventually able to demonstrate the excellent performance of the proposed technique. \section{System Architecture} In the following, we indicate with $\ulnoisy$ and $\dlnoisy \in \mathbb{C}^{N_{\text{a}}\times N_{\text{c}}}$ the noisy UL and DL CSI matrices of the transmission channel between the BS and the single-antenna MT, where $N_{\text{a}}$ and $N_{\text{c}}$ denote the number of antennas at the BS and the number of subcarriers, respectively. In addition, we can express $\ulnoisy$ as \begin{equation} \ulnoisy = \ulbig + \vN, \end{equation} where $\ulbig$ and $\vN \in \mathbb{C}^{N_{\text{a}}\times N_{\text{c}}}$ represent the true UL CSI matrix and the additive white Gaussian noise, respectively. An analogous expression holds for $\dlnoisy$. Note that throughout this work we assume that the true data for both UL and DL, namely $\ulbig$ and $\dlbig$, are inaccessible and only a noisy version of them is available.
\begin{figure}[t] \centering \hspace{-1cm} \csubfloat[Training of the autoencoder at the BS.]{\label{fig: aeUL} \input{intro_plots/ul_train2}}\\ \hspace{-1.5cm} \csubfloat[Codeword generation at the MT.]{\label{fig: aeDL}\input{intro_plots/dl_pred2}} \caption{Training of the autoencoder based on UL CSI at the BS, generation of the codeword by the offloaded encoder at the MT, transmission over the radio channel, and subsequent reconstruction of the DL CSI at the BS.} \label{fig: sole_ae} \vspace*{-4mm} \end{figure} The proposed method consists of two phases, which are illustrated in Fig.~\ref{fig: sole_ae}. First, an autoencoder $\vg_{\boldsymbol{\phi}} (\vf_{\boldsymbol{\theta}} (\cdot)) $ is trained at the BS based solely on noisy UL data $\ulnoisy$, which is collected in advance during the standard UL operation of the BS. Here, $\vf_{\boldsymbol{\theta}}$ denotes the encoder with parameters $\boldsymbol{\theta}$ and $\vg_{\boldsymbol{\phi}}$ denotes the decoder with parameters $\boldsymbol{\phi}$, see Fig.~\ref{fig: aeUL}. It is well known that autoencoders implicitly introduce regularization for the reconstruction of the input signal, cf.~\cite{10.1093/imaiai/iaaa011} for an introduction to the fundamentals behind denoising with deep neural networks. In essence, an autoencoder can be trained with the noisy data $\ulnoisy$ in an unsupervised fashion to obtain an estimate $\ulbighat$ which will be approximately equal to the unknown $\ulbig$.
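As a minimal numerical sketch of this two-phase scheme, a linear autoencoder (a PCA projection standing in for the deep network of Section~\ref{sec: ae}) can be fitted on noisy ``UL'' samples and its encoder applied to ``DL'' samples drawn from the same distribution; the synthetic channel model, all dimensions, and the names below are illustrative assumptions, not the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_z, n_train = 64, 8, 2000

# Hypothetical shared "propagation scenario": UL and DL samples follow the
# same low-dimensional distribution (toy stand-in for the CSI equivalence).
mixing = rng.standard_normal((d, d_z))

def draw_channels(n):
    return rng.standard_normal((n, d_z)) @ mixing.T

noise_std = 0.3
H_ul = draw_channels(n_train)
H_ul_noisy = H_ul + noise_std * rng.standard_normal(H_ul.shape)

# "Training at the BS": the optimal linear autoencoder on the noisy UL data
# is the projection onto its top-d_z principal directions.
_, _, Vt = np.linalg.svd(H_ul_noisy - H_ul_noisy.mean(0), full_matrices=False)
encode = lambda H: H @ Vt[:d_z].T   # this half is offloaded to the MT
decode = lambda Z: Z @ Vt[:d_z]     # this half stays at the BS

# "Deployment": the MT encodes noisy DL channels, the BS decodes them.
H_dl = draw_channels(200)
H_dl_noisy = H_dl + noise_std * rng.standard_normal(H_dl.shape)
H_dl_hat = decode(encode(H_dl_noisy))

nmse_raw = np.mean(np.sum((H_dl_noisy - H_dl)**2, 1) / np.sum(H_dl**2, 1))
nmse_ae = np.mean(np.sum((H_dl_hat - H_dl)**2, 1) / np.sum(H_dl**2, 1))
assert nmse_ae < nmse_raw  # UL-trained encoder/decoder denoises DL data
```

The denoising here comes from discarding the noise energy outside the learned signal subspace; the deep autoencoder generalizes this to a nonlinear bottleneck.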
It should be noted that for the proposed method there are no special requirements for the acquisition of the UL training data, except for the property that they come from the same propagation scenario as the subsequent DL data to which the encoder will be applied at the MTs. Subsequently, half of the autoencoder, namely the encoding part $\vf_{\boldsymbol{\theta}}(\cdot)$, is offloaded to the MT based on a respective network protocol, which due to space restrictions is not further considered here. In the second phase, similarly to what has been proposed in~\cite{utschick2021learning}, we reuse the UL-trained autoencoder neural network for the recovery of the complete DL CSI. In particular, each MT takes the noisy DL CSI estimate $\dlnoisy$ and feeds it into the offloaded UL-trained encoder to obtain the latent vector or codeword $\vz_{\text{DL}}$. Then, the codeword is fed back to the BS, which recovers $\dlbighat \approxeq \dlbig$ with the second half of the autoencoder, namely the UL-trained decoder. \section{Dataset Description} \label{sec: quad} Our study is based on a single urban microcell (UMi) with a $150$~meter radius, which has been simulated with the $\Matlab$-based software QuaDRiGa version 2.2~\cite{quad, quad2}. Specifically, we consider non-line-of-sight (NLoS) channels with $L=58$ multi-path components (MPCs), which corresponds to a rich scattering propagation environment. The BS is placed at a height of $10$ meters and is equipped with a uniform planar array (UPA) with $N_{\text{a}} = 8\times 8$ ``3GPP-3d'' antennas, while the users have a single omni-directional antenna each. In addition, the BS antennas are tilted by $6$ degrees towards the ground to point in the direction of the users. The UL center frequency is $2.5$~GHz, while the DL center frequencies are $2.62$~GHz and $2.98$~GHz, which correspond to FDD gaps of $120$~MHz and $480$~MHz, respectively.
For each frequency, we consider a bandwidth of approximately $8$~MHz divided over $ N_{\text{c}} = 160 $ subcarriers. The cell has been sampled at $60\times 10^3$ different MT locations, and for each sample the channels at the predefined frequencies are collected. The dataset is then split into three groups of $48\times 10^3$, $6\times 10^3$ and $6\times 10^3$ samples, where each sample consists of the three matrices ${\vH}_{\text{UL}} $, ${\vH}_{\text{DL-120}} $, and ${\vH}_{\text{DL-480}} \in \mathbb{C}^{N_{\text{a}}\times N_{\text{c}}} $. Note again that although the training of the autoencoder at the BS is based solely on the UL CSI, it still covers the distribution of the unseen DL CSI as well, since the UL and DL data ultimately follow the same propagation scenario, cf.~\cite{utschick2021learning}. With respect to testing, only the test sets of the two DL CSI datasets (DL@120, 480) will be used. Additionally, as in~\cite{utschick2021learning}, the channels are normalized with respect to their path-gain. \section{Autoencoder} \label{sec: ae} An autoencoder is a neural network that is trained in an unsupervised fashion to reconstruct its input. It has been introduced in~\cite{aebook} and its purpose is to find a compact representation of the data. The autoencoder consists of two parts: an encoder function $\vf_{\boldsymbol{\theta}}$ with parameters $\boldsymbol{\theta}$ and a decoder function $\vg_{\boldsymbol{\phi}}$ with parameters $\boldsymbol{\phi}$. The encoder projects a $d$-dimensional input vector $\vx$ into a typically lower dimensional latent space representation $\vz \in \mathbb{C}^{d_z} $ with $d_z \ll d$, whereas the decoder reconstructs the original input from $\vz$, i.e.,
\begin{equation} \boldsymbol{x} \stackrel{\vf_{\boldsymbol{\theta}}}{\longrightarrow} \boldsymbol{z} \stackrel{\vg_{\boldsymbol{\phi}}}{\longrightarrow} \hat{\vx} \approxeq \vx. \end{equation}
Note that the bottleneck or hourglass structure of the architecture is a key element of the autoencoding concept, as it forces the network to learn only the important features that allow reconstruction with the decoder, cf.~\cite{Goodfellow-et-al-2016} and~\cite{bank2021autoencoders}.
\begin{table}[t] \caption{Encoder architecture.} \label{tab: cnn_enc} \begin{center} \begin{tabular}{lcr} \hline Layer type& Output shape & \#Parameters $\boldsymbol{\theta} $\\ \hline Input & $64 \times 160 \times 2$ & 0\\ Conv2D, strides=2 & $32 \times 80 \times 8$ & 152\\ Batch normalization & $32 \times 80 \times 8$ & 32\\ ReLU & $32 \times 80 \times 8$ & 0\\ Conv2D, strides=2 & $16 \times 40 \times 16$ & 1168\\ Batch normalization & $16 \times 40 \times 16$ & 64\\ ReLU & $16 \times 40 \times 16$ & 0\\ Conv2D, strides=2 & $8 \times 20 \times 32$ & 4640\\ Batch normalization & $8 \times 20 \times 32$ & 128\\ ReLU & $8 \times 20 \times 32$ & 0\\ Conv2D, strides=2 & $4 \times 10 \times 64$ & 18496\\ Batch normalization & $4 \times 10 \times 64$ & 256\\ ReLU & $4 \times 10 \times 64$ & 0\\ Conv2D, strides=2 & $2 \times 5 \times 128$ & 73856\\ Batch normalization & $2 \times 5 \times 128$ & 512 \\ ReLU & $2 \times 5 \times 128$ & 0\\ Flatten & $1280$ & 0\\ Fully-connected & $256$ & 327936\\ Tanh & $256$ & 0\\ \hline \end{tabular} \vspace*{-4mm} \end{center} \end{table}
\begin{table}[t] \caption{Decoder architecture.} \label{tab: cnn_dec} \begin{center} \begin{tabular}{lcr} \hline Layer type& Output shape & \#Parameters $\boldsymbol{\phi} $\\ \hline Input & $256$ & 0\\ Fully-connected & $1280$ & 328960\\ Reshape & $2\times 5 \times 128$ & 0\\ Conv2D transposed, strides=2 & $4 \times 10 \times 128$ & 147584\\ Batch normalization & $4 \times 10 \times 128$ & 512\\ ReLU & $4 \times 10 \times 128$ & 0\\ Conv2D transposed, strides=2 & $8 \times 20 \times 64$ & 73792\\ Batch normalization & $8 \times 20 \times 64$ & 256\\ ReLU & $8 \times 20 \times 64$ & 0\\ Conv2D transposed, strides=2 & $16 \times 40 \times 32$ & 18464\\ Batch normalization & $16 \times 40 \times 32$ & 128\\ ReLU & $16 \times 40 \times 32$ & 0\\ Conv2D transposed, strides=2 & $32 \times 80 \times 16$ & 4624\\ Batch normalization & $32 \times 80 \times 16$ & 64\\ ReLU & $32 \times 80 \times 16$ & 0\\ Conv2D transposed, strides=2 & $64 \times 160 \times 8$ & 1160\\ Batch normalization & $64 \times 160 \times 8$ & 32\\ ReLU & $64 \times 160 \times 8$ & 0\\ Conv2D transposed & $64 \times 160 \times 2$ & 146\\ \hline \end{tabular} \vspace*{-4mm} \end{center} \end{table}
For the proposed autoencoder in this work, we use a deep neural network with several convolutional layers. The encoder and decoder architectures are described in Tables~\ref{tab: cnn_enc} and~\ref{tab: cnn_dec}. Firstly, the real and imaginary parts of the original noisy UL matrix $\ulnoisy \in \mathbb{C}^{64 \times 160} $ are stacked along the third dimension to form a real-valued tensor $\ulnoisy^{\text{real}} \in \mathbb{R}^{64 \times 160\times 2} $, which represents the input of the encoder. By observing the encoder in Table~\ref{tab: cnn_enc}, we can distinguish five consecutive blocks, each of them formed by the cascade of a convolutional layer, a batch normalization layer~\cite{SanturkarTIM18}, and the rectified linear unit (ReLU) activation function. A key attribute of this architecture is the use of strided convolutions~\cite{SpringenbergDBR14}, which progressively extract features and reduce the input dimension down to $1280$ units. After the progressive reduction of the input dimension, a fully connected layer with $\tanh(\cdot)$ activation functions completes the encoder and generates the codeword $\vz_{\text{UL}}$, which is a real-valued vector with $d_z = 256$ dimensions, leading to a compression factor of \begin{equation} \frac{64\times 160 \times 2}{256} = 80.
\end{equation} Note that having a deep architecture with multiple strided convolutional layers before the fully connected layer helps to substantially reduce the total number of trainable parameters, which is dominated by the parameter count of the fully connected layer. The decoder, which is displayed in Table~\ref{tab: cnn_dec}, maps the codeword back to the original input $\ulnoisy^{\text{real}} $, thereby benefiting from the regularizing effect (denoising) of the autoencoder concept. Its structure is the mirrored version of the encoder, where transposed convolutions take the place of convolutions and a final transposed convolution with two feature maps recovers the original input shape. Despite its size, this autoencoder architecture has fewer trainable parameters than autoencoders built on the same principle as CsiNet~\cite{csinet}. \begin{figure*}[t!] \centering \subfloat[][CDFs of the NMSE.]{\label{fig: cdf_nmse} \input{new_plots/nmse_new}}\hspace{10pt} \subfloat[][CDFs of the cosine similarity.]{\label{fig: cdf_rho}\input{new_plots/rho_new}} \caption{CDFs of the performance metrics of the different methods for $\text{SNR}=10$~dB.} \label{fig: cdfs} \end{figure*} \begin{figure} \centering \input{new_plots/box_plot2} \caption{Performance metrics of AE vs. IDFT for $\text{SNR}=0$~dB.} \label{fig: box_plot} \vspace*{-4mm} \end{figure} \begin{figure} \centering \input{new_plots/lisa_plot_quant} \caption{Per-user rate performance with LISA based on the DL CSI for $\mathbb{E}[\norm{\vH}^2_{\mathrm{F}}/ \norm{\vN}^2_{\mathrm{F}}] = 10$~dB and a multi-user scenario with 8 users.} \label{fig: lisa_plot} \vspace*{-4mm} \end{figure} \section{Simulations} \label{sec: simulations} The autoencoder neural network has been implemented with Tensorflow~\cite{tensorflow2015} and single precision has been utilized for the training.
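As a sanity check, the encoder parameter counts in Table~\ref{tab: cnn_enc} can be reproduced from the layer shapes; the short sketch below assumes $3\times 3$ kernels and Keras-style batch normalization with four parameters per channel (both assumptions are consistent with the listed counts):

```python
# Reproduce the encoder parameter counts from Table "Encoder architecture".
def conv2d_params(c_in, c_out, k=3):
    # k x k kernel weights per (input, output) channel pair, plus one bias
    # per output channel.
    return k * k * c_in * c_out + c_out

channels = [2, 8, 16, 32, 64, 128]          # input depth, then 5 conv blocks
conv = [conv2d_params(a, b) for a, b in zip(channels, channels[1:])]
bn = [4 * c for c in channels[1:]]           # gamma, beta, moving mean/var
fc = 1280 * 256 + 256                        # flatten (2*5*128) -> codeword

assert conv == [152, 1168, 4640, 18496, 73856]
assert bn == [32, 64, 128, 256, 512]
assert fc == 327936
```

The same counting rule, with transposed convolutions, reproduces the decoder table as well.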
We consider mini-batches of $64$ samples and we use the Adam optimization algorithm~\cite{adam} to tune the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ of the neural network. The weights are updated in order to minimize an empirical risk based on the least-squares loss function \begin{equation} \mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\phi}) = \left\|\vg_{\boldsymbol{\phi}} \left(\vf_{\boldsymbol{\theta}} \left(\ulnoisy^{\text{real}}\right)\right) - \ulnoisy^{\text{real}}\right\|^2. \label{eq: loss_f} \end{equation} The UL-trained encoder is then used at each MT to generate the codeword $\vz_{\text{DL}}$ from the noisy DL CSI estimate $\dlnoisy$. The codeword is then sent to the BS, which uses the UL-trained decoder to obtain a clean version of the DL CSI, $\dlbighat \approxeq \dlbig$. After the training, we measure the quality of the unsupervised denoising in terms of the normalized mean square error $ \varepsilon^2 $ and the cosine similarity $\rho$, where \begin{equation} \varepsilon^2 = \mathbb{E}\left[\frac{\norm{\hat{\vect{H}} - \vect{H}}_{\mathrm{F}}^2}{\norm{\vect{H}}_{\mathrm{F}}^2}\right] \end{equation} and \begin{equation} \rho = \mathbb{E}\left[\frac{1}{N_{\text{c}}}\sum_{n=1}^{N_{\text{c}}}\frac{\vert\hat{\vect{h}}^{\text{H}}_n\vect{h}_n\vert}{\norm{\hat{\vect{h}}_n}_2 \norm{\vect{h}_n}_2}\right], \end{equation} where $\vect{H} \in \mathbb{C}^{N_{\text{a}}\times N_{\text{c}}}$ is the true CSI, ${\vect{h}}_n$ is its $n$-th column, and $\hat{\vect{H}}$ and $\hat{\vect{h}}_n$ are their corresponding versions at the decoder output. In addition, we also evaluate the performance in terms of the average per-user rate with zero-forcing precoding. To this end, we consider two different values of the SNR, namely $10$~dB and $0$~dB, where the SNR represents the level of CSI corruption, i.e., $\mathbb{E}[\norm{\vH}^2_{\mathrm{F}}/ \norm{\vN}^2_{\mathrm{F}}]$.
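For concreteness, the two figures of merit can be computed as follows (a sketch operating on batches of complex CSI matrices; the array shapes and function names are our own):

```python
import numpy as np

def nmse(H_hat, H):
    # epsilon^2: per-sample Frobenius-norm ratio, averaged over the batch;
    # H, H_hat have shape (batch, Na, Nc), complex-valued.
    num = np.sum(np.abs(H_hat - H)**2, axis=(1, 2))
    den = np.sum(np.abs(H)**2, axis=(1, 2))
    return np.mean(num / den)

def cosine_similarity(H_hat, H):
    # rho: normalized correlation per subcarrier column, averaged over
    # subcarriers and the batch.
    num = np.abs(np.sum(np.conj(H_hat) * H, axis=1))        # (batch, Nc)
    den = np.linalg.norm(H_hat, axis=1) * np.linalg.norm(H, axis=1)
    return np.mean(num / den)

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 8, 16)) + 1j * rng.standard_normal((4, 8, 16))
assert np.isclose(nmse(H, H), 0.0)                   # perfect reconstruction
assert np.isclose(cosine_similarity(H, H), 1.0)
assert np.isclose(cosine_similarity(2 * H, H), 1.0)  # rho ignores scaling
```

Note that $\rho$ is invariant to a per-column scaling of the estimate, whereas $\varepsilon^2$ is not, which is why both metrics are reported.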
We further compare the results achieved with the UL-trained autoencoder with two methods that serve as references. In particular, we utilize CsiNet, which requires a learning phase and has been proposed in~\cite{csinet}, and another method based on the IDFT, which does not require any learning. CsiNet is based on an autoencoder approach trained on DL CSI that exploits the sparsity of the CSI in the space-delay domain, and is often used as a benchmark. After transforming the DL CSI into the space-delay domain, the authors in~\cite{csinet} propose to retain only a small fraction of the components in the delay domain, since the remaining components are close to zero, and to train an autoencoder with this ``cropped'' version of the CSI. Specifically, we keep $64$ out of $160$ time-delay instances, and to be consistent with the original paper, only for the CsiNet results, we decide not to add any noise to the DL CSI. For the approach based on the IDFT, we first transform the noisy DL CSI $\dlnoisy$ into the space-delay domain by a multiplication with a DFT matrix. Then, we only keep the first two columns in the space-delay domain, such that the total number of real coefficients is $256$, as assumed for the codeword. Afterwards, these coefficients are sent to the BS, which reconstructs the DL CSI in the space-frequency domain by zero-padding followed by the DFT transformation. The results of the NMSE and cosine similarity for $\text{SNR}= 10$~dB are displayed in the subplots of Fig.~\ref{fig: cdfs}.
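The IDFT baseline just described (crop in the delay domain, zero-pad and transform back at the BS) can be sketched in a few lines; the toy channel below, built from exactly two delay taps, is an illustrative assumption chosen so that the baseline is lossless on it:

```python
import numpy as np

def idft_feedback(H_noisy, n_keep=2):
    """IDFT baseline sketch: keep the first n_keep delay taps as the codeword,
    zero-pad and transform back at the BS."""
    Na, Nc = H_noisy.shape
    G = np.fft.ifft(H_noisy, axis=1)     # space-frequency -> space-delay
    codeword = G[:, :n_keep]             # fed back to the BS
    G_pad = np.zeros((Na, Nc), dtype=complex)
    G_pad[:, :n_keep] = codeword         # zero-padding at the BS
    return np.fft.fft(G_pad, axis=1), codeword

rng = np.random.default_rng(2)
Na, Nc, L = 64, 160, 2
# Toy channel with exactly L delay taps: cropping then loses nothing.
taps = rng.standard_normal((Na, L)) + 1j * rng.standard_normal((Na, L))
H = np.fft.fft(np.pad(taps, ((0, 0), (0, Nc - L))), axis=1)

H_hat, z = idft_feedback(H)
assert np.allclose(H_hat, H)   # lossless on a 2-tap channel
assert z.size == 128           # 128 complex = 256 real coefficients
```

On the simulated NLoS channels with $L=58$ MPCs, the cropped taps no longer capture the full delay spread, which explains the gap between the IDFT curves and the learned approaches.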
We can clearly observe that the UL-trained autoencoder (``AE DL $120$ MHz'', ``AE DL $480$ MHz'') performs very well on DL data too, with only a slight drop in performance when increasing the frequency gap from $120$ MHz to $480$ MHz. The ``AE UL'' curve demonstrates the reconstruction property of the autoencoder when applied to UL data, which serves as a further reference. Note that the other ``AE''-labeled solutions have never seen training samples of DL CSI. Nevertheless, it can be observed that the ``AE'' solutions show a considerable gain compared to the ``IDFT'' method and still some gain compared to the ``CsiNet'' curve. Analogous conclusions can be drawn from the performance metrics in Fig.~\ref{fig: box_plot} for $\text{SNR}= 0$~dB, where the NMSE and cosine similarity achieved with our approach are compared with those of the IDFT approach. Finally, results of the average per-user rate in a multi-user scenario with $8$ users are discussed. As in~\cite{utschick2021learning}, we adopt the LISA algorithm \cite{UtStJoLu18}, which is applied independently on each of the $160$ carriers, and the results are then averaged over the carriers. Fig.~\ref{fig: lisa_plot} shows the per-user rate for the $120$ and $480$ MHz frequency gaps, averaged over $100$ instances of LISA simulation runs for $\mathbb{E}[\norm{\vH}^2_{\mathrm{F}}/ \norm{\vN}^2_{\mathrm{F}}] = 10$~dB. The continuous lines represent the rates achievable with perfect DL CSI knowledge, the dashed lines represent the rates obtained with the DL CSI predicted with the same UL-trained autoencoder at each MT, and the dotted lines represent the rates with the IDFT method. We can observe that the per-user rates with the DL channels denoised with the AE are extremely close to the rates achieved with the true DL CSI and that there is a significant gain compared to the IDFT method.
Furthermore, we only notice a moderate degradation in the per-user rate when we apply uniform $8$-bit ($7$-bit) quantization to each element of the codewords, so that the total number of bits to be sent over the return channel is $256 \times 8 = 2048 $ bits ($256 \times 7 = 1792 $ bits). Note that the quantization of the codewords can be easily performed because the activation function at the end of the encoder forces the codeword values into the interval $[-1, 1]$. \section{Conclusions} In this work, following the idea of using autoencoders for noise reduction and codeword generation for the DL CSI in FDD systems, we presented a novel concept. It is based on the recently discovered equivalence of UL and DL data across the FDD frequency gap, which allows training the autoencoder at the BS instead of the MT, followed by offloading the same encoder to each MT. Training on the MT is no longer necessary, making it possible to quickly update the encoder on the MT at any time and place. The promising results presented validate the proposed method. \bibliographystyle{IEEEtran}
\section{Introduction} IP Pegasi is an interacting binary system containing a white dwarf receiving mass through an accretion disc from a Roche lobe filling late-type star. These accretion disc fed systems, called cataclysmic variables (see Warner (1995) for an excellent overview), provide one of the best laboratories for accretion physics due to their proximity and convenient time scales. The strong emission lines in their spectra originate in the accretion flow and are powerful observational probes of the local gas conditions. The picture of a viscous disc, transporting angular momentum outwards as material slowly spirals inwards, forms the basis of our understanding of accretion flows in X-ray binaries and AGNs as well. One of the main long-standing problems of accretion discs is the mechanism of angular momentum transport. In order to sustain the observed mass transfer rates, highly efficient viscous processes must be available to transport the angular momentum outwards. Although the famous $\alpha$ prescription (Shakura \& Sunyaev 1973), which scales the effective viscosity by a dimensionless parameter $\alpha$, has been very successful, it also shows how poorly these processes are understood. Turbulent magnetic fields (Tout \& Pringle 1992, Schramkowski \& Torkelsson 1996) and spiral shocks (Spruit et al. 1987) are two promising mechanisms, even though the effective $\alpha$ expected from such models is still low. A second issue that has received less attention is the removal of the angular momentum at the outer disc via a tidal torque between disc and companion star (e.g. Papaloizou \& Pringle 1977). IP Peg is a member of the subclass of CVs called dwarf novae that display semi-periodic outbursts during which the system brightens by several magnitudes as more mass is suddenly transferred through the disc. These systems provide a great test case for accretion disc models.
IP Peg is one of the few eclipsing dwarf novae, where the inclination of the orbital plane ($\sim$80$^{\circ}$) is large enough for the 0.5 M$_{\odot}$ companion star to cover the 1.02 M$_{\odot}$ white dwarf and most of the accretion disc as it passes in front every 3.8 hours. IP Peg's outbursts have an amplitude of about 2 magnitudes and recur roughly every 3 months, during which the accretion disc is the dominant light source. We present spectrophotometric observations of the dwarf nova IP Peg at the late stages of a rise to outburst and use Doppler imaging to map the accretion disc. Observations are presented in section 2, followed by the analysis of the tomograms in section 3. The tidal origin of the spirals is discussed in section 4. \section{Observations} The data we present here are part of a long term service program to study IP Peg throughout its outburst cycle. Time-resolved CCD spectrophotometry with the 2.5m Isaac Newton Telescope on La Palma was used to study the strong emission lines originating in the accretion disc both during quiescence and outburst. Here we will focus our attention on the data obtained during the night of 19 August, 1993. IP Peg had just gone into outburst a day before and was close to its maximum brightness level. The Intermediate Dispersion Spectrograph was used to obtain spectra between 6300 and 6800 \AA, covering H$\alpha$ and HeI($\lambda 6678$) at a mean dispersion of 0.56 \AA\ pixel$^{-1}$ or 38 km s$^{-1}$pixel$^{-1}$. A 1024$\times$1024-pixel TEK CCD chip recorded long slit spectra of IP Peg and a comparison star to account for slit-losses. Neon arc spectra were regularly recorded for wavelength calibration and the flux standard BD+28$^{\circ}$4211 was used for flux calibration. This setup allowed us to optimally extract spectra with an absolute flux scale. A total of 15 spectra with an exposure time of 360 s were obtained, sampling 60\% of the 3.8 hour binary orbit.
The top panels of Figure 1 show the H$\alpha$ and HeI(6678) line profiles as a function of binary phase after subtracting a low-order spline fit to the continuum of the individual spectra. Orbital phases were calculated using the Wolf et al. (1993) ephemeris without their quadratic term: \[ T_0(HJD)=2445615.4156 + 0.15820616 E \] \noindent with $T_0$ corresponding to mid-eclipse. The AB$\sim$12.6 mag continuum, increasing by $\sim 7\%$ during the 2 hour observing window, shows that IP Peg was near the top of its rise to outburst, which typically lasts 1--1.5 d. \section{Doppler Maps} To interpret the phase-dependent line profiles $f(v,\phi)$ (Fig. 1), we use Doppler tomography (Marsh \& Horne 1988), an indirect de-projection technique very similar to CAT scanning used in medical imaging. The Doppler map $I$(V$_x$,V$_y)$ gives the emission line flux of gas moving with velocity vector $V=(V_x,V_y)$ in the rotating frame of the binary. As the binary rotates, projections of the rotating velocity vector onto the line of sight trace the sinusoidal radial velocity curve: \[ V(\phi)=-V_x \cos\phi + V_y \sin\phi \] \noindent The observed line profiles $f(v,\phi)$ can therefore be modelled as projections of the map $I$(V$_x$,V$_y)$ without making specific assumptions about the form of the velocity field of the accretion flow (see also Robinson, Marsh \& Smak 1993 and Horne 1991). A maximum entropy implementation was used where the Doppler image is built up iteratively. Any given map is projected to produce the predicted line profiles for that map. A $\chi^2$ statistic is used to determine the goodness of fit, while the entropy is maximised to select the simplest image that can fit the data to the required $\chi^2$ value. This technique assumes that the disc pattern is constant throughout the data set (in the co-rotating frame of the binary) so that the line variations can be modelled by projection effects.
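The projection at the heart of the tomography can be sketched numerically; this is only a minimal illustration of the relation $V(\phi)=-V_x \cos\phi + V_y \sin\phi$, not the maximum entropy reconstruction code itself:

```python
import numpy as np

def radial_velocity(vx, vy, phase):
    # Line-of-sight velocity of gas with co-rotating velocity (vx, vy),
    # with the orbital phase given in cycles: V = -Vx cos(phi) + Vy sin(phi).
    phi = 2.0 * np.pi * phase
    return -vx * np.cos(phi) + vy * np.sin(phi)

# A blob at V = (0, 300) km/s, e.g. emission near the secondary star
# (K2 ~ 300 km/s), traces a pure sine wave over the orbit:
assert abs(radial_velocity(0.0, 300.0, 0.25) - 300.0) < 1e-9  # max redshift
assert abs(radial_velocity(0.0, 300.0, 0.75) + 300.0) < 1e-9  # max blueshift
assert abs(radial_velocity(0.0, 300.0, 0.0)) < 1e-9           # conjunction
```

Summing such contributions over all map pixels, broadened by the instrumental resolution, yields the predicted trailed spectrogram that is compared against the data via $\chi^2$.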
Transient features will therefore be averaged out over the map so that the average co-rotating pattern is recovered. Tidal distortions co-rotate in the binary frame, do not suffer from this restriction, and are therefore ideally recovered by Doppler tomography. A second problem is secular variability of the system within the data set used for tomography. In our case the continuum showed little increase during the course of our observations (i.e. the outburst was fully developed) and, as our observations cover only $\sim$2 h (still sufficient to calculate a Doppler image, since more than half of the orbital period is covered), secular changes were negligible. Furthermore, line flux variations were compatible with the changing contribution of the companion star as the illuminated inner face comes into view, while the disc contribution was stable. The middle panels of Figure 1 show the two maps constructed from the observed H$\alpha$ and HeI(6678) line flux. As a comparison, the bottom panels show the predicted data and can be used to check how well the Doppler image reproduces the observed line emission. The gas stream trajectory and the position of the companion star's Roche lobe are plotted based on the known system parameters (Marsh \& Horne 1990). Strong secondary star emission (K$_2$=300 km s$^{-1}$) is visible in both lines, a common feature of dwarf novae in outburst, and is thought to be due to irradiation of the inner face of the star. However, emission from the companion has also been observed during quiescence (Harlaftis et al. 1994) and can be related to intrinsic activity of the late-type star, as the secondary star is co-rotating in a binary with a period of only several hours. There is also a weak low-velocity component in the H$\alpha$ image, which was observed a week later by Steeghs et al. (1996), who propose prominence-like structures to be responsible for this feature. This emission is thus already present early in outburst, even though it is more pronounced a week later.
Disc emission is centered on the white dwarf (K$_1$=147 km s$^{-1}$) and has a strong azimuthal asymmetry in the form of a two armed spiral pattern. Both lines show similar structure, but the arms are more sharply defined in the HeI map. The line flux in these spirals is about a factor of $\sim$4 stronger than that of the disc emission outside the spirals, pointing to considerable heating and density enhancement. The velocities of the disc material in the two arms decrease from $\sim$700 km s$^{-1}$ down to $\sim$500 km s$^{-1}$ with increasing azimuth, suggesting a highly non-Keplerian flow. A Keplerian accretion disc, on the other hand, would produce circular rings of emission, each velocity corresponding to a particular Kepler radius ($V(r)=\sqrt{GM/r}$), as has been observed in tomographic studies of other binaries. Note that the two arms are not perfectly symmetric; the arm in the upper right of the tomogram is slightly stronger. \begin{figure*} \centerline{\psfig{figure=fig1.ps,width=13cm}} \caption{ Top panels show the observed line flux from IP Peg as a function of binary phase, with H$\alpha$ on the left and HeI(6678) on the right. Middle panels are the constructed Doppler tomograms, with the theoretical gas stream and Roche lobe plotted for comparison. The bottom cross denotes the white dwarf, the middle cross the system center of mass at V=(0,0). Bottom panels show the predicted data, constructed by projecting the Doppler image at the observed phases, used to determine how well the image fits our data.} \end{figure*} \section{Tides in the outer disc} The presence of the companion star will perturb the disc material from its circular Keplerian orbits in the outer disc, ultimately resulting in intersecting orbits outside the radius referred to as the tidal radius (Paczynski 1977). For IP Peg this occurs at $\sim$0.7 $R_{L_1}$ and is thought to represent the maximum radius of a quiescent disc. 
This tidal interaction is essential in extracting from the disc, via a tidal torque, the angular momentum transported outwards through the disc by viscous processes. Hydrodynamic simulations (Sawada et al. 1986, Savonije et al. 1994, Heemskerk 1994) and analytical work (Spruit et al. 1987, Dgani et al. 1992) on this tidal interaction show that spiral waves, and even shocks, are expected to be generated in the accretion disc down to quite small radii, depending on the Mach number of the disc flow. For hot accretion discs (low Mach numbers), these trailing waves can provide a steady mass transfer rate by transporting angular momentum outwards without the need of intrinsic disc viscosity. For the high Mach numbers expected in CV discs, however, the effective $\alpha$ of this mechanism is low ($\leq$ 0.01); it is therefore unlikely to be the dominant transport mechanism in the inner disc, but will still dominate the dynamics of the outer disc. Many Doppler maps have previously been constructed from observations of discs, but these have never shown obvious evidence for the spiral waves predicted by theory. Our observations now for the first time provide observational evidence for a two armed trailing spiral in a dwarf nova disc. To confirm whether a two armed spiral can indeed produce the observed line profiles, we constructed a Doppler map of a model disc containing two symmetric trailing spiral arms, as shown in Figure 2. This model assumes a two-armed trailing spiral pattern in the spatial line emissivity of the disc, covering the outer part of the disc between 0.4 and 0.9 $R_{L_1}$ (Figure 2, bottom). In velocity coordinates the pattern conserves its azimuthal shape, resulting in a model Doppler image with two spirals as well (Figure 2, middle panel). Note that the model was optimised to reproduce the velocities of the observed spirals. The arms span $\sim 110^{\circ}$ in azimuth, and appear to be very open. The quoted radii correspond to the Kepler orbits that limit the spirals. 
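For orientation, the mapping between such limiting velocities and Kepler radii, $r=GM/V^2$ from $V(r)=\sqrt{GM/r}$, can be sketched as follows. The white-dwarf mass used here is an assumed, purely illustrative value, not a measurement from this paper:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m
M_WD = 1.0 * M_SUN     # assumed white-dwarf mass (illustrative only)

def kepler_radius(v_kms):
    """Radius (in metres) of the circular Kepler orbit with speed v_kms (km/s)."""
    v = v_kms * 1e3
    return G * M_WD / v ** 2

# Spiral-arm velocities of ~700 and ~500 km/s bracket an annulus of the disc:
r_fast, r_slow = kepler_radius(700.0), kepler_radius(500.0)
print(r_fast / R_SUN, r_slow / R_SUN)   # slower gas sits at larger radii
```

Under a strictly Keplerian reading the two velocities would thus pin the arms to two circles; the observed azimuthal drift between them is what signals the non-Keplerian flow discussed above.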
The predicted line profiles of this model are shown in the top panel and demonstrate a close resemblance to the observed data of Figure 1. The key signature is the modulation of the double peak separation. The two peaks measure the radial velocity of material on either side of the disc moving almost directly towards and away from the observer. Their separation would be constant as a function of binary phase for an axisymmetric (Keplerian) disc, for example. Note also the jump in velocity around phase 0.7, where one crosses from one arm to the other. While general asymmetries in the local emissivity can be produced by non-circular orbits, the fact that it has the shape of a spiral strongly favors the interpretation that we are indeed seeing a spiral density wave in the outer disc. As the orbits start to intersect, pressure and viscous forces will set up density waves and possibly even shocks. \begin{figure} \centerline{\psfig{figure=fig2a.ps,width=5.5cm}} \centerline{\psfig{figure=fig2b.ps,angle=-90,width=5.5cm}} \caption{ A model Doppler tomogram containing a two armed trailing spiral superposed on symmetric disc emission. A Gaussian spot at the secondary is added to simulate its contribution to the data. Top panel shows the predicted data from such a system with the same signal to noise as our observations (compare with top panels of Figure 1). Middle panel is the model tomogram and bottom panel shows a spatial image of the disc emissivity pattern.} \end{figure} Tomography of the final stages of the outburst, about a week after our data (Steeghs et al. 1996), reveals a similar asymmetry pattern in the disc, most obviously in HeI. The much stronger companion star emission dominates over the disc emission, and the fainter disc structure suggests the disc is shrinking and the tidal distortions are damping out. Simulations suggest large, hot discs are needed to generate strong waves (Savonije et al. 1994). 
Dwarf novae discs are considerably larger and hotter during outburst than in quiescence (e.g. Ichikawa \& Osaki 1992, Wood et al. 1989) due to their high mass accretion rate state. Tidal forces will therefore be similarly enhanced. A combination of those two factors (temperature and size) would explain why quiescent discs do not seem to show such structure while (early) outburst discs do. Doppler mapping studies of dwarf novae in the early phase of outburst, on several consecutive days, may be able to record the dynamical behaviour of the spiral waves. The very start of the outburst is where the two competing models for the outburst, a disc instability (DI) on one hand (Osaki 1974) and a mass transfer instability (MTI) (Bath 1985) on the other, predict different disc behaviour. In the MTI model, the sudden addition of low angular momentum gas causes the disc to shrink initially before it grows again through viscous forces. In the DI model, the disc expands as soon as it switches to the high viscosity state at the onset of the outburst (e.g. Ichikawa \& Osaki 1992). Our data suggest that a large (almost filling the full Roche lobe), non-Keplerian accretion disc, possibly exceeding its tidal radius, is present very early on in the outburst, and therefore favor a DI as the trigger of the outburst. \section{Summary} The tidal interaction manifested in the spiral pattern turns out to be an important factor for outburst discs. Work is now in progress to use observations of this phenomenon in different emission lines and at different epochs to sample the physical conditions of the disc material. Observing high ionization lines like HeII can show the presence of shocks and will indicate the implications for the angular momentum budget. Furthermore, future observations of disc structure in different objects (with different mass ratios and disc sizes) will provide us with new insight into tidal theory and perhaps the outburst mechanism. 
In this way dwarf nova discs provide an excellent laboratory for tides in astrophysical discs, since the time scales of the outbursts, lasting a week and recurring every couple of months, allow one to study the dynamical behaviour of the disc and its tidal response. Tidal spirals in galaxies, for example, thought to be generated in the same manner by a companion galaxy, have very long dynamical time scales, making it impossible to study their evolution. \section*{Acknowledgments} We thank Tom Marsh for his valuable support in Doppler tomography and Henk Spruit for fruitful discussions. The Isaac Newton Telescope is operated on La Palma by the Isaac Newton Group of telescopes, Royal Observatories, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias.
\section{Introduction and statement of results} \label{sec:int-stat} \hfill Let $M$ be a compact $C^{\infty}$ Riemannian $n$-dimensional manifold, $n \geq 3$. Let $\Mundo$ be the set of $C^1$ vector fields on $M$, endowed with the $C^1$ topology, and, for $X \in \Mundo$, denote by $X_t: M \to M$ the $C^1$ flow generated by $X$. In a remarkable work, Morales, Pac\'ifico and Pujals \cite{MPP99} defined the so called \emph{singular hyperbolic systems} in order to describe the behaviour of the Lorenz attractor. This is an extension of the hyperbolic theory to invariant sets of flows which are not (uniformly) hyperbolic, but which have some robust properties and a certain kind of weaker hyperbolicity, and which also admit singularities. In \cite{MPP04}, the same authors proved that every robustly transitive singular set for a three dimensional flow is a partially hyperbolic attractor or repeller, and the singularities in this set must be Lorenz-like. In this paper, we prove a relation between the $\J$-algebra of Potapov \cite{Pota60,Pota79,Wojtk01} and a new definition of singular hyperbolicity, involving intermediate dimensions of the central subbundle. The $\J$-algebra here means a pseudo-euclidean structure given by a $C^1$ non-degenerate quadratic form $\J$, defined on $\Lambda$, which generates positive and negative cones with maximal dimensions $p$ and $q$, respectively. The maximal dimension of a cone in $T_xM$ is the maximal dimension of the subspaces contained in it. We are going to prove necessary and sufficient conditions for a flow to be singular hyperbolic of some order, in a sense to be clarified below. We also give a characterization of singular and sectional hyperbolicity for a flow over a compact invariant set, improving a result in \cite{ArSal2012}. The text is organized as follows. In the first section, we give the main definitions and state the results. In the second section, we present the main tools, using the notion of the $\J$-algebra of Potapov. 
In the third section, we prove the main theorems. \subsection{Preliminary definitions and Main results} \label{sec:prelim-definit} \hfill Before presenting the main statements, we give some definitions. Let $M$ be a connected compact $n$-dimensional manifold, $n \geq 3$, with or without boundary. We consider a vector field $X$ such that $X$ is inwardly transverse to the boundary $\partial M$, if $\partial M\neq\emptyset$. The flow generated by $X$ is denoted by $X_t$. An \emph{invariant set} $\Lambda$ for the flow of $X$ is a subset of $M$ which satisfies $X_t(\Lambda)=\Lambda$ for all $t\in\RR$. The \emph{maximal invariant set} of the flow is $M(X):= \cap_{t \geq 0} X_t(M)$, which is clearly a compact invariant set. A \emph{singularity} for the vector field $X$ is a point $\sigma\in M$ such that $X(\sigma)=0$ or, equivalently, $X_t(\sigma)=\sigma$ for all $t \in \RR$. The set formed by the singularities is the \emph{singular set of $X$}, denoted by $\sing(X)$, and $Per(X)$ is the set of periodic points of $X$. We say that a singularity is \emph{hyperbolic} if the eigenvalues of the derivative $DX(\sigma)$ of the vector field at the singularity $\sigma$ have nonzero real part. The set of critical elements of $X$ is the union of the singularities and the periodic orbits of $X$, and will be denoted by $\cri(X)$. 
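For a concrete instance of a hyperbolic singularity, consider the origin of the classical Lorenz vector field with parameters $(\sigma,\rho,\beta)=(10,28,8/3)$, the motivating example behind singular hyperbolicity. Since $DX(0)$ is block triangular, its eigenvalues can be computed by hand; the sketch below (an illustration of the definitions, not code from the paper) checks that they have nonzero real part and satisfy the Lorenz-like ordering $\lambda_{ss}<\lambda_{s}<0<\lambda_{u}$ with $\lambda_{s}+\lambda_{u}>0$:

```python
import math

# DX(0) for the Lorenz field (sigma*(y-x), rho*x - y - x*z, x*y - beta*z) is
# block triangular: a 2x2 block [[-sigma, sigma], [rho, -1]] plus the entry -beta.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
tr = -sigma - 1.0                 # trace of the 2x2 block
det = sigma - sigma * rho         # determinant of the 2x2 block
disc = math.sqrt(tr * tr - 4.0 * det)
eigs = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0, -beta])

print(eigs)                       # approximately [-22.83, -2.67, 11.83]
hyperbolic = all(e != 0 for e in eigs)                      # nonzero real parts
lorenz_like = eigs[0] < eigs[1] < 0 < eigs[2] and eigs[1] + eigs[2] > 0
print(hyperbolic, lorenz_like)    # True True
```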
We recall that a \emph{hyperbolic} set for a flow $X_t$ is an invariant subset $\Lambda \subset M$ with a continuous splitting $T_\Lambda M= E^s\oplus E^X \oplus E^u$ of the tangent bundle, where $E^X$ is the direction of the vector field, the subbundles are invariant under the derivative $DX_t$ of the flow \begin{align*} DX_t\cdot E^*_x=E^*_{X_t(x)},\quad x\in\Lambda, \quad t\in\RR,\quad *=s,X,u; \end{align*} $E^s$ is uniformly contracted by $DX_t$ and $E^u$ is uniformly expanded: there are $K,\lambda>0$ so that \begin{align}\label{eq:def-hyperbolic} \|DX_t\mid_{E^s_x}\|\le K e^{-\lambda t}, \quad \|DX_{-t} \mid_{E^u_x}\|\le K e^{-\lambda t}, \quad x\in\Lambda, \quad t\in\RR. \end{align} Recall that the index of a hyperbolic periodic orbit of a flow is the dimension of the contracting subbundle of its hyperbolic splitting. Our main results are stated below, after some further definitions. Let $\Lambda \subset M$ be a compact invariant subset for $X$. \begin{definition}\label{def1} A \emph{dominated splitting} over a compact invariant set $\Lambda$ of $X$ is a continuous $DX_t$-invariant splitting $T_{\Lambda}M = E \oplus F$ with $E_x \neq \{0\}$, $F_x \neq \{0\}$ for every $x \in \Lambda$ and such that there are positive constants $K, \lambda$ satisfying \begin{align}\label{eq:def-dom-split} \|DX_t|_{E_x}\|\cdot\|DX_{-t}|_{F_{X_t(x)}}\|<Ke^{-\la t}, \ \textrm{for all} \ x \in \Lambda, \ \textrm{and all} \,\,t> 0. \end{align} \end{definition} A compact invariant set $\Lambda$ is said to be \emph{partially hyperbolic} if it exhibits a dominated splitting $T_{\Lambda}M = E \oplus F$ such that the subbundle $E$ is uniformly contracted. In this case $F$ is the \emph{central subbundle} of $\Lambda$. 
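The domination inequality \eqref{eq:def-dom-split} can be illustrated with a toy diagonal linear flow (our own illustrative example, not one from the paper): for $DX_t=\diag(e^{-3t},e^{2t},e^{4t})$, with $E$ the first coordinate axis and $F$ the plane of the last two, the product of norms decays like $e^{-5t}$, so the splitting is dominated with $K=1$ and $\lambda=5$:

```python
import math

# Toy diagonal linear flow DX_t = diag(e^{-3t}, e^{2t}, e^{4t}); E is the first
# coordinate axis, F the plane of the last two.  Then
#   ||DX_t|_E|| = e^{-3t}   and   ||DX_{-t}|_F|| = e^{-2t}
# (the slowest expansion of F, inverted), so the product is e^{-5t}.
def domination_product(t):
    norm_E = math.exp(-3.0 * t)
    norm_F_inverse = math.exp(-2.0 * t)
    return norm_E * norm_F_inverse

for t in (0.1, 1.0, 5.0):
    assert abs(domination_product(t) - math.exp(-5.0 * t)) < 1e-15
print(domination_product(1.0))  # e^{-5}, about 0.0067
```

Since $E$ is moreover uniformly contracted here, this toy example is also partially hyperbolic in the sense just defined.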
A compact invariant set $\Lambda$ is said to be \emph{singular-hyperbolic} if it is partially hyperbolic and the action of the tangent cocycle expands volume along the central subbundle, i.e., there are constants $C,\la>0$ such that \begin{align}\label{eq:def-vol-exp} \vert \det (DX_t\vert_{F_x}) \vert > C e^{\la t}, \forall t>0, \ \forall \ x \in \Lambda. \end{align} The following definition was given as a particular case of singular hyperbolicity. \begin{definition}\label{def:sec-exp} A \emph{sectional hyperbolic set} is a singular hyperbolic one such that for every two-dimensional linear subspace $L_x \subset F_x$ one has \begin{align}\label{eq:def-sec-exp} \vert \det (DX_t \vert_{L_x})\vert > C e^{\la t}, \forall t>0. \end{align} \end{definition} \subsection{Singular hyperbolicity of various orders}\label{sec:p-sing-hyp} \hfill Given a vector space $E$, we denote by $\wedge^p E$ the exterior power of order $p$ of $E$, defined as follows. If $v_1,\dots, v_n$ is a basis of $E$ then $\wedge^p E$ is generated by $\{v_{i_1}\wedge \cdots \wedge v_{i_p}\}_{1 \leq i_1 < \cdots < i_p \leq n}$. Any linear transformation $A:E\to F$ induces a transformation $\wedge^p A:\wedge^p E\to\wedge^p F$. Moreover, $v_{i_1}\wedge \cdots \wedge v_{i_p}$ can be viewed as the $p$-plane generated by $\{v_{i_1}, \cdots, v_{i_p}\}$ if $i_j \neq i_k$ for $j \neq k$. For more information about exterior powers we recommend \cite{A}, for instance. We may now define a new kind of singular hyperbolicity. \begin{definition}\label{def:p-sing-hyp} A compact invariant set $\Lambda$ is $p$-singular hyperbolic (or $p$-sectionally hyperbolic) for a $C^1$ flow $X$ if there exists a partially hyperbolic splitting $T_{\Lambda}M = E \oplus F$ such that $E$ is uniformly contracting and the central subbundle $F$ is $p$-sectionally expanding, with $2 \leq p \leq \dim(F)$. \end{definition} If $L_x \subset F_x$ is a $p$-plane, we can represent it by a norm-one $\widetilde{v}\in \wedge^p F_x\setminus \{0\}$. 
Hence, to obtain the $p$-sectional expansion we just need to show that, for some constants $C,\lambda>0$ and every $t>0$, the following inequality holds: $$\|\wedge^p DX_t(x).\widetilde{v}\|>Ce^{\lambda t}.$$ Our first main result concerns a characterization of singular hyperbolicity of any order via infinitesimal Lyapunov functions, following \cite{ArSal2012}, \cite{ArSal2015}, \cite{BurnKatok94}, \cite{Pota79}, \cite{Wojtk85}, \cite{Wojtk01}. Recall that, if $T: Z \to Z$ is a measurable map, we say that a probability measure $\mu$ is an invariant measure for $T$ if $\mu(T^{-1}(A)) = \mu(A)$ for every measurable set $A \subset Z$. We say that $\mu$ is an invariant measure of $X$ if it is an invariant measure of $X_t$ for every $t \in \mathbb{R}$. We will denote by $\mathcal{M}_X$ the set of all invariant measures of $X$. A subset $Y\subset Z$ has \emph{total probability} if for every $\mu\in \mathcal{M}_X$ we have $\mu(Y)=1$ (see \cite{Man82}). \begin{theorem} \label{mthm:sec-hyp-equiv} A compact invariant set $\Lambda$ whose singularities are hyperbolic (with index $\indi(\sigma) \geq \indi(\J)$) for $X \in \Mundo$ is a $p$-singular hyperbolic set if, and only if, there exist a neighborhood $U$ of $\Lambda$ and a field of non-degenerate quadratic forms $\J$ on $U$ with index $1 \leq \indi(\J) \leq n-2$ such that $X$ is non-negative strictly $\J$-separated and the spectrum of the diagonalized operator $DX_t$ satisfies the properties: \begin{enumerate} \item{} $r_1^- < 1$; and \item{} $\prod_{i=1}^{p} r_i^+ > 1$, \ \textrm{where} \ $2 \leq p \leq \dim(M) - \indi(\J)$, \end{enumerate} in a total probability subset of $\Lambda$. Moreover, if $r_i^+ \cdot r_j^+ > 1$ for all $1 \leq i, j \leq p$, $i \neq j$, in a total probability set, then $\Lambda$ is a sectional-hyperbolic set. \end{theorem} In \cite{ArSal2012}, the authors proved the next result about sectional hyperbolicity. 
As a direct application of Theorem \ref{mthm:sec-hyp-equiv} and Theorem~\ref{thm:lyap-exp-sing-val} in Section $2$, we reobtain the next result, without the a priori assumption on the singularities. \begin{corollary}\cite[Theorem D]{ArSal2012} \label{mcor:2-sec-exp-J-monot} Suppose that all singularities of the attracting set $\Lambda$ of $U$ are sectional-hyperbolic with index $\indi(\sigma) \geq \indi(\Lambda)$. Then, $\Lambda$ is a sectional-hyperbolic set for $X_t$ if, and only if, there is a field of quadratic forms $\J$ with index equal to $\indi(\Lambda)$ such that $X_t$ is a non-negative strictly $\J$-separated flow on $U$ and for each compact invariant subset $\Gamma$ in $\Lambda^*=\Lambda \setminus \sing(X)$ the linear Poincar\'e flow is strictly $\J_0$-monotonous for some field of quadratic forms $\J_0$ equivalent to $\J$. \end{corollary} Thus, Theorem \ref{mthm:sec-hyp-equiv} improves this result, since it does not require a priori sectional hyperbolicity of the singularities. \vspace{0.1in} In \cite{AraArbSal}, the author, together with V. Araujo and A. Arbieto, proved that the requirements in the definition of sectional hyperbolicity can be weakened, demanding the domination property only over the singularities, because in this setting the splitting is in fact dominated. More precisely, we proved the next result. \begin{theorem}\cite[Theorem A]{AraArbSal}\label{mthm:domination-ararbsal} Let $\Lambda$ be a compact invariant set of $X$ such that every singularity in this set is hyperbolic. Suppose that there exists a continuous invariant splitting of the tangent bundle of $\Lambda$, $T_{\Lambda}M = E \oplus F$, where $E$ is uniformly contracted, $F$ is sectionally expanding and for some constants $C,\lambda > 0$ we have \begin{align} \|DX_t\vert_{E_\sigma}\|\cdot\|DX_{-t}\vert_{F_\sigma}\| &< \label{eq:sing-domin} Ce^{-\lambda t} \quad\text{for all}\quad \sigma\in\Lambda\cap\sing(X)\textrm{ and $t\geq 0$}. 
\end{align} Then $T_{\Lambda}M = E \oplus F$ is a dominated splitting. \end{theorem} The study of conditions for a given splitting of the tangent bundle to have the domination property is an important research line in the area of Dynamical Systems; see \cite{ArbSal}, \cite{AraPac2010}, \cite{BDV2004}. Some progress in this context has been obtained, for instance, in \cite[Theorem A]{ArSal2015}, jointly with V. Araujo, where we give a characterization of dominated splittings based on $k$-th exterior powers, where $k = \dim F$. We note that if $E\oplus F$ is a $DX_t$-invariant splitting of $T_\Gamma M$, with $\{e_1,\dots,e_\ell\}$ a basis for $E$ and $\{f_1,\dots,f_h\}$ a basis for $F$, then $\widetilde F=\wedge^kF$, generated by $\{f_{i_1}\wedge\dots\wedge f_{i_k}\}_{1\le i_1<\dots<i_k\le h}$, is naturally $\wedge^kDX_t$-invariant by construction. In addition, $\widetilde E$, generated by $\{e_{i_1}\wedge\dots\wedge e_{i_k}\}_{1\le i_1<\dots<i_k\le \ell}$ together with all the exterior products of $i$ basis elements of $E$ with $j$ basis elements of $F$, where $i+j=k$ and $i,j\ge1$, is also $\wedge^kDX_t$-invariant and, moreover, $\widetilde E\oplus \widetilde F$ gives a splitting of the $k$th exterior power $\wedge^k T_\Gamma M$ of the subbundle $T_\Gamma M$. \begin{theorem}\cite[Theorem A]{ArSal2015}\label{mthm:bivectparthyp} Let $T_\Gamma M=E_\Gamma\oplus F_\Gamma$ be a $DX_t$-invariant splitting over the compact $X_t$-invariant subset $\Gamma$ such that $\dim F=k\ge2$. Let $\widetilde F=\wedge^k F$ be the $\wedge^k DX_t$-invariant subspace generated by the vectors of $F$ and $\widetilde E$ be the $\wedge^k DX_t$-invariant subspace such that $\widetilde E\oplus\widetilde F$ is a splitting of the $k$th exterior power $\wedge^k T_\Gamma M$ of the subbundle $T_\Gamma M$. Then $E\oplus F$ is a dominated splitting if, and only if, $\widetilde E\oplus \widetilde F$ is a dominated splitting for $\wedge^k DX_t$. 
\end{theorem} We note that the equivalence is only valid if $k = \dim F$. Here, we prove a result similar to \cite[Theorem A]{AraArbSal}, but now under a $p$-sectional expansion hypothesis. Note that, in this case, the statement is no longer true without some further requirements on the combinations of the Lyapunov exponents of the subbundles (due to Theorem \ref{mthm:bivectparthyp}), since for $p > 2$ we can have uniform contraction on $E$, $p$-sectional expansion on $F$ and no dominated splitting, as exemplified below. In \cite[Example 3]{ArSal2015}, we have an example where, even though $E \oplus F$ is dominated, we do not obtain $\widetilde{E} \oplus \widetilde{F}$ dominated, for $k < \dim F$. The next example is a similar one. \begin{example}\label{ex:ThmA} Theorem~\ref{mthm:bivectparthyp} does not hold if we take $k < \dim F$: consider $\sigma$ a hyperbolic fixed point for a vector field $X$ in a $4$-manifold such that $DX(\sigma)=\diag\{-3, 2, 4, 10\}$. The splitting $E=\RR\times\{0^3\}, F=\{0\}\times\RR^3$ is dominated and hyperbolic but, for $k=2<3=\dim F$, the splitting $\widetilde E\oplus \widetilde F$ of the exterior square is not dominated. Indeed, the eigenvalues for $\widetilde F$ are $2+4 = 6, 2+10 = 12, 4+10 =14$, and for $\widetilde E$ the eigenvalues are $-3+2 = -1, -3+4 = 1, -3+10 = 7$, so we have an eigenvalue $7$ in $\widetilde E$ strictly bigger than the eigenvalue $6$ along $\widetilde F$. \end{example} We can see that, even under the domination assumption over the singularities, we no longer have the same result as in Theorem \ref{mthm:domination-ararbsal}: it is enough to take the union of an isolated hyperbolic singularity with a periodic orbit displaying the features of the above example. However, we might ask how this assumption worked in \cite{ArSal2015} and \cite{AraArbSal}. In fact, in the proofs of the results contained therein, domination is obtained from the $2$-sectional expansion together with the uniform contraction. 
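The arithmetic in Example \ref{ex:ThmA} can be checked mechanically: on the exterior square, the basis bivector $e_i\wedge e_j$ grows under $\wedge^2 DX_t$ at the rate given by the sum of the corresponding eigenvalues of $DX(\sigma)$. The sketch below (illustrative code, not from the paper) lists the rates on $\widetilde E$ and $\widetilde F$ and confirms that domination fails:

```python
from itertools import combinations

# Eigenvalues of DX(sigma) from the example; E = axis 0, F = axes 1, 2, 3.
rates = [-3, 2, 4, 10]
F_axes = {1, 2, 3}

# e_i ^ e_j grows at rate rates[i] + rates[j] under the induced flow.
pair_rates = {pair: rates[pair[0]] + rates[pair[1]]
              for pair in combinations(range(4), 2)}

F_tilde = sorted(r for pair, r in pair_rates.items() if set(pair) <= F_axes)
E_tilde = sorted(r for pair, r in pair_rates.items() if 0 in pair)

print(E_tilde, F_tilde)             # [-1, 1, 7] [6, 12, 14]
# Domination of E~ (+) F~ would need every E~ rate below every F~ rate:
print(max(E_tilde) < min(F_tilde))  # False: the rate 7 in E~ beats 6 in F~
```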
The singular case requires domination at the singularities, since it is necessary to match the splittings. Observing these results, we can get a characterization of the domination property based on the Lyapunov spectrum, without any other assumption on the singularities. This is the content of our next result. \begin{theorem}\label{mthm:Lyapunov-domination} Let $\Lambda$ be a compact invariant set of $X$. Suppose that there exists a continuous invariant splitting of the tangent bundle of $\Lambda$, $T_{\Lambda}M = E \oplus F$. Then $T_{\Lambda}M = E \oplus F$ is a dominated splitting if, and only if, there exists $\eta < 0$ for which \begin{align*} \liminf\limits_{t \to +\infty} \frac{1}{t} \log \| DX_t\vert_{E_x}\| - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x}) < \eta, \end{align*} in a total probability set of $\Lambda$. \end{theorem} Combining this with Theorem \ref{mthm:bivectparthyp}, we obtain the next corollary. \begin{corollary}\label{mcor:equiv-domina-ext-expon} Suppose the assumptions of Theorem \ref{mthm:bivectparthyp}. Then, $\widetilde E\oplus \widetilde F$ is a dominated splitting for $\wedge^k DX_t$ if, and only if, there exists $\eta < 0$ for which \begin{align*} \liminf\limits_{t \to +\infty} \frac{1}{t} \log \| DX_t\vert_{E_x}\| - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x}) < \eta, \end{align*} in a total probability set of $\Lambda$. \end{corollary} \subsection{$p$-sectional Lyapunov exponents} \hfill The next definition recalls a previous one from Arbieto \cite{arbieto2010}, which deals with, in his terminology, the sectional Lyapunov exponents. Based on the same ideas, we can state an analogous notion for general singular sets. 
Inspired by \cite{arbieto2010}, we finally define: \begin{definition}\label{def:sing-lyap-exp} The \emph{$p$-sectional Lyapunov exponents} (or \emph{Lyapunov exponents of order $p$}) of $x$ along $F$ are the limits $$\lim_{t\to+\infty}\frac{1}{t}\log\| \wedge^p DX_t(x).\widetilde{v}\|$$ whenever they exist, where $\widetilde{v}\in \wedge^p F_x\setminus\{0\}$. \end{definition} Following the corresponding results from \cite[Theorem B]{AraArbSal} and \cite[Theorem 2.3]{arbieto2010}, with some modifications in the computations and hypotheses (replacing $\|\wedge^2 DX_t(x).\widetilde{v}\|$ by $\|\wedge^p DX_t(x).\widetilde{v}\|$), we obtain, via Theorem \ref{thm:lyap-exp-sing-val}, the analogous results for singular hyperbolic and partially hyperbolic sets of the main result of this paper. \begin{corollary}\label{mcor:equiv-partial} Let $\Lambda$ be a compact invariant set of $X$ such that every singularity in this set is hyperbolic, and let $T_{\Lambda}M = E \oplus F$ be a continuous invariant splitting of the tangent bundle of $\Lambda$. Then: \begin{enumerate} \item the Lyapunov exponents on $E$ are negative (or positive on $F$), and \item $\liminf\limits_{t \to +\infty} \frac{1}{t} \log \| DX_t\vert_{E_x}\| - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x}) < 0$, \end{enumerate} in a total probability set of $\Lambda$, if and only if, $T_{\Lambda}M = E \oplus F$ is a partially hyperbolic splitting. \end{corollary} This way, we can extend and improve \cite[Theorem B]{AraArbSal} and \cite[Theorem 2.3]{arbieto2010}, as follows. \begin{corollary} \label{mcor:p-sing-lyap-exp} Let $\Lambda$ be a compact invariant set for a flow $X_t$ such that every singularity $\sigma \in \Lambda$ is hyperbolic. Suppose that there is a continuous invariant splitting $T_{\Lambda}M=E\oplus F$. 
The set $\Lambda$ is $p$-singular hyperbolic for the flow if, and only if, on a set of total probability in $\Lambda$, \begin{enumerate} \item $\liminf\limits_{t \to +\infty} \frac{1}{t} \log \| DX_t\vert_{E_x}\| - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x}) < 0$, \item the Lyapunov exponents in the $E$ direction are negative and \item the $p$-sectional Lyapunov exponents in the $F$ direction are positive. \end{enumerate} \end{corollary} Hence, the definition of singular hyperbolicity (of any order, including the classical one) can be rewritten based on the Lyapunov exponents. \begin{definition}\label{new-def-singhyp} A compact invariant set $\Lambda \subset M$ is \emph{$p$-singular hyperbolic} for $X$ if all singularities in $\Lambda$ are hyperbolic and there is a continuous invariant splitting of the tangent bundle $T_{\Lambda}M = E \oplus F$ such that \begin{enumerate} \item the Lyapunov exponents in the $E$ direction are negative, \item the $p$-sectional Lyapunov exponents in the $F$ direction are positive, \item $\liminf\limits_{t \to +\infty} \frac{1}{t} \log \| DX_t\vert_{E_x}\| - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x}) < 0$ \end{enumerate} in a total probability set of $\Lambda$. \end{definition} The last item guarantees that the dominated splitting at the singularities matches the one over the remainder of $\Lambda$. \vspace{0.1in} \begin{remark} \label{rmk:sec-exp-discrete} The properties of $p$-singular hyperbolicity can be expressed in the following equivalent forms; see \cite{AraPac2010} for the classical one. There exists $T>0$ such that \begin{itemize} \item $\|DX^T\vert_{E_x}\|<\frac12$ for all $x\in\Lambda$ (uniform contraction); and \item $\|\wedge^p DX^T(x)\,\widetilde{v}\|> 2$ for every norm-one $\widetilde{v}\in\wedge^p F_x$ and all $x\in\Lambda$ ($p$-sectional expansion). 
\end{itemize} \end{remark} From now on, we consider $M$ a connected compact finite dimensional Riemannian manifold and assume that all singularities of $X$ (if they exist) are hyperbolic. \section{Fields of quadratic forms} \label{sec:fields-quadrat-forms} \hfill In this section, we introduce the quadratic forms and their properties. Let $\J:E_U\to\RR$ be a continuous family of quadratic forms $\J_x:E_x\to\RR$ which are non-degenerate and have index $0<q<\dim(E)=n$, where $U\subset M$ is an open set such that $X_t(U) \subset \overline{U}$ for a vector field $X$. We also assume that $(\J_x)_{x\in U}$ is continuously differentiable along the flow. The continuity assumption on $\J$ just means that for every continuous section $Z$ of $E_U$ the map $U\to\RR$ given by $x\mapsto \J(Z(x))$ is continuous. The $C^1$ assumption on $\J$ along the flow means that the map $x\mapsto \J_{X_t(x)} (Z(X_t(x)))$ is continuously differentiable for all $x\in U$ and each $C^1$ section $Z$ of $E_U$. The assumption that $M$ is a compact manifold enables us to globally define an inner product on $E$ with respect to which we can find an orthonormal basis associated to $\J_x$ for each $x$, as follows. Fixing an orthonormal basis on $E_x$ we can define the linear operator \begin{align*} J_x:E_x\to E_x \quad\text{such that}\quad \J_x(v)=<J_x v,v> \quad \text{for all}\quad v\in E_x, \end{align*} where $<,>=<,>_x$ is the inner product at $E_x$. Since we can always replace $J_x$ by $(J_x+J_x^*)/2$ without changing the last identity, where $J_x^*$ is the adjoint of $J_x$ with respect to $<,>$, we can assume that $J_x$ is self-adjoint without loss of generality. Hence, we represent $\J(v)$ by a non-degenerate symmetric bilinear form $<J_x v,v>_x$. 
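The symmetrization step is elementary but worth seeing once: replacing $J$ by $(J+J^*)/2$ does not change the quadratic form $<Jv,v>$, since the antisymmetric part contributes nothing. A minimal sketch with plain lists, using an arbitrary made-up operator (an illustration, not data from the paper):

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def quad(A, v):
    """The quadratic form <A v, v>."""
    return dot(matvec(A, v), v)

def symmetrize(A):
    """(A + A^T) / 2 -- the self-adjoint part of A."""
    n = len(A)
    return [[(A[i][j] + A[j][i]) / 2.0 for j in range(n)] for i in range(n)]

J = [[-1.0, 2.0, 0.0],     # an arbitrary non-symmetric operator
     [0.0, 1.0, -3.0],
     [4.0, 1.0, 1.0]]
Js = symmetrize(J)
v = [1.0, -2.0, 0.5]
print(quad(J, v), quad(Js, v))   # 3.25 3.25 -- identical values
```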
Now we use Lagrange's method to diagonalize this bilinear form, obtaining a basis $\{u_1,\dots,u_n\}$ of $E_x$ such that \begin{align*} \J_x(\sum_{i}\alpha_iu_i)=\sum_{i=1}^q -\lambda_i\alpha_i^2 + \sum_{j=q+1}^n \lambda_j\alpha_j^2, \quad (\alpha_1,\dots,\alpha_n)\in\RR^n. \end{align*} Replacing each element of this basis according to $v_i=|\lambda_i|^{-1/2}u_i$ we deduce that \begin{align*} \J_x(\sum_{i}\alpha_iv_i)=\sum_{i=1}^q -\alpha_i^2 + \sum_{j=q+1}^n \alpha_j^2, \quad (\alpha_1,\dots,\alpha_n)\in\RR^n. \end{align*} Finally, we can redefine $<,>$ so that the basis $\{v_1,\dots, v_n\}$ is orthonormal. This can be done smoothly in a neighborhood of $x$ in $M$ since we are assuming that the quadratic forms are non-degenerate; the reader can check the method of Lagrange in a standard Linear Algebra textbook, for instance \cite{Maltsev63}, and observe that the steps can be performed with small perturbations. In this adapted inner product, $J_x$ is diagonal with entries from $\{-1,1\}$ only, $J_x^*=J_x$, and $J_x^2$ is the identity. Having fixed the orthonormal frame as above, the \emph{standard negative subspace} at $x$ is the one spanned by $v_{1},\dots, v_{q}$ and the \emph{standard positive subspace} at $x$ is the one spanned by $v_{q+1},\dots,v_n$. \subsubsection{Positive and negative cones} \label{sec:positive-negative-co} Let $\cC_\pm=\{C_\pm(x)\}_{x\in U}$ be the family of positive and negative cones \begin{align*} C_\pm(x):=\{0\}\cup\{v\in E_x: \pm\J_x(v)>0\} \quad x\in U \end{align*} and also let $\cC_0=\{C_0(x)\}_{x\in U}$ be the corresponding family of zero vectors $C_0(x)=\J_x^{-1}(\{0\})$ for all $x\in U$. In the adapted coordinates obtained above we have \begin{align*} C_0(x)=\{v=\sum_{i}\alpha_iv_i\in E_x : \sum_{j=q+1}^n \alpha_j^2 = \sum_{i=1}^q \alpha_i^2\}, \end{align*} which is the set of \emph{extreme points} of $C_\pm(x)$. The following definitions are fundamental to state our main result. 
\begin{definition} \label{def:J-separated} Given a continuous field of non-degenerate quadratic forms $\J$ with constant index on the trapping region $U$ for the flow $X_t$, we say that the flow is \begin{itemize} \item $\J$-\emph{separated} if $DX_t(x)(C_+(x))\subset C_+(X_t(x))$, for all $t>0$ and $x\in U$; \item \emph{strictly $\J$-separated} if $DX_t(x)(C_+(x)\cup C_0(x))\subset C_+(X_t(x))$, for all $t>0$ and $x\in U$; \item $\J$-\emph{monotone} if $\J_{X_t(x)}(DX_t(x)v)\ge \J_x(v)$, for each $v\in T_xM\setminus\{0\}$, $t>0$ and $x\in U$; \item \emph{strictly $\J$-monotone} if $\partial_t\big(\J_{X_t(x)}(DX_t(x)v)\big)\mid_{t=0}>0$, for all $v\in T_xM\setminus\{0\}$ and $x\in U$; \item a $\J$-\emph{isometry} if $\J_{X_t(x)}(DX_t(x)v) = \J_x(v)$, for each $v\in T_xM$, $t\in\RR$ and $x\in U$. \end{itemize} \end{definition} Thus, $\J$-separation corresponds to simple cone invariance and strict $\J$-separation corresponds to strict cone invariance under the action of $DX_t(x)$. \begin{remark}\label{rmk:J-separated-C-} If a flow is strictly $\J$-separated, then for $v\in T_xM$ such that $\J_x(v)\le0$ we have $$ \J_{X_{-t}(x)}(DX_{-t}(v))<0, $$ for all $t>0$ and all $x$ such that $X_{-s}(x)\in U$ for every $s\in[0,t]$. Indeed, otherwise $\J_{X_{-t}(x)}(DX_{-t}(v))\ge0$ would imply $\J_x(v)=\J_x\big(DX_t(DX_{-t}(v))\big)>0$, contradicting the assumption that $v$ is a non-positive vector. This means that a flow $X_t$ is strictly $\J$-separated if, and only if, its time reversal $X_{-t}$ is strictly $(-\J)$-separated. \end{remark} A vector field $X$ is $\J$-\emph{non-negative} on $U$ if $\J(X(x))\ge0$ for all $x\in U$, and $\J$-\emph{non-positive} on $U$ if $\J(X(x))\leq 0$ for all $x\in U$. When the quadratic form in use is clear from the context, we will simply say that $X$ is non-negative or non-positive. We apply this notion to the linear Poincar\'e flow defined on regular orbits of $X_t$ as follows. We assume that the vector field $X$ is non-negative on $U$.
Then, the span $E^X_x$ of $X(x)\neq 0$ is a $\J$-non-degenerate subspace. According to item (1) of Proposition~\ref{pr:propbilinear}, this means that $T_xM=E_x^X\oplus N_x$, where $N_x$ is the pseudo-orthogonal complement of $E^X_x$ with respect to the bilinear form $\J$, and $N_x$ is also non-degenerate. Moreover, by definition, the index of $\J$ restricted to $N_x$ is the same as the index of $\J$. Thus, we can define on $N_x$ the cones of positive and negative vectors, respectively $N_x^+$ and $N_x^-$, just as before. Now we define the \emph{linear Poincar\'e flow} $P^{\, t}$ of $X_t$ along the orbit of $x$, by projecting $DX_t$ orthogonally (with respect to $\J$) onto $N_{X_t(x)}$ for each $t\in\RR$: \begin{align*} P^{\, t} v := \Pi_{X_t(x)}DX_t v , \quad v\in T_x M, \ t\in\RR, \ X(x)\neq 0, \end{align*} where $\Pi_{X_t(x)}:T_{X_t(x)}M\to N_{X_t(x)}$ is the projection onto $N_{X_t(x)}$ parallel to $X(X_t(x))$. We remark that the definition of $\Pi_x$ depends only on $X(x)$ and $\J_x$. The linear Poincar\'e flow $P^{\,t}$ is a linear multiplicative cocycle over $X_t$ on the set $U$ with the exclusion of the singularities of $X$. In this setting we can say that the linear Poincar\'e flow is $\J$-separated or $\J$-monotone, using the non-degenerate bilinear form $\J$ restricted to $N_x$ for a regular $x\in U$. More precisely: $P^t$ is $\J$-monotone if $\partial_t\J(P^tv)\mid_{t=0}\ge0$ for each $x\in U$ and $v\in T_xM\setminus\{0\}$, and strictly $\J$-monotone if $\partial_t\J(P^tv)\mid_{t=0}>0$ for all $v\in T_xM\setminus\{0\}$ and $x\in U$. \begin{proposition} \label{pr:J-separated-spectrum} Let $L:V\to V$ be a $\J$-separated linear operator. Then \begin{enumerate} \item $L$ can be uniquely represented as $L=RU$, where $U$ is a $\J$-isometry and $R$ is $\J$-symmetric (or $\J$-pseudo-adjoint; see Proposition~\ref{pr:propbilinear}) with positive spectrum. \item The operator $R$ can be diagonalized by a $\J$-isometry.
Moreover, the eigenvalues of $R$ satisfy \begin{align*} 0<r_-^q\le\dots\le r_-^1=r_-\le r_+=r_+^1\le\dots\le r_+^p. \end{align*} \item The operator $L$ is (strictly) $\J$-monotone if, and only if, $r_-\le 1$ and $r_+\ge 1$ (resp.\ $r_-<1$ and $r_+>1$). \end{enumerate} \end{proposition} \subsection{$\J$-separated linear maps} \label{sec:j-separat-linear} \subsubsection{$\J$-symmetric matrices and $\J$-self-adjoint operators} \label{sec:j-symmetr-matrix} The symmetric bilinear form defined by $$ (v,w)=\langle J_x v,w\rangle, $$ for $v,w\in E_x$ and $x\in M$, endows $E_x$ with a pseudo-Euclidean structure. Since $\J_x$ is non-degenerate, the form $(\cdot,\cdot)$ is likewise non-degenerate, and many properties of inner products are shared by non-degenerate symmetric bilinear forms. We state some of them below. \begin{proposition} \label{pr:propbilinear} Let $(\cdot,\cdot):V\times V \to\RR$ be a real symmetric non-degenerate bilinear form on the real finite dimensional vector space $V$. \begin{enumerate} \item $E$ is a subspace of $V$ on which $(\cdot,\cdot)$ is non-degenerate if, and only if, $V=E\oplus E^\perp$. We recall that $E^\perp:=\{v\in V: (v,w)=0 \quad\text{for all}\quad w\in E\}$, the pseudo-orthogonal complement of $E$, is defined using the bilinear form. \item Every basis $\{v_1,\dots,v_n\}$ of $V$ can be orthogonalized by the usual Gram-Schmidt process of Euclidean spaces, that is, there are linear combinations $\{w_1,\dots, w_n\}$ of the basis vectors such that they form a basis of $V$ and $(w_i,w_j)=0$ for $i\neq j$. This last basis can then be pseudo-normalized: letting $u_i=|(w_i,w_i)|^{-1/2}w_i$ we get $(u_i,u_j)=\pm\delta_{ij}$, $i,j=1,\dots,n$. \item There exists a maximal dimension $p$ for a subspace $P_+$ of $\J$-positive vectors and a maximal dimension $q$ for a subspace $P_-$ of $\J$-negative vectors; we have $p+q=\dim V$ and $q$ is known as the \emph{index} of $\J$.
\item For every linear map $L:V\to\RR$ there exists a unique $v\in V$ such that $L(w)=(v,w)$ for each $w\in V$. \item For each linear $L:V\to V$ there exists a unique linear operator $L^+:V\to V$ (the pseudo-adjoint) such that $(L(v),w)=(v,L^+(w))$ for every $v,w\in V$. \item Every pseudo-self-adjoint $L:V\to V$, that is, such that $L=L^+$, satisfies \begin{enumerate} \item eigenspaces corresponding to distinct eigenvalues are pseudo-orthogonal; \item if a subspace $E$ is $L$-invariant, then $E^\perp$ is also $L$-invariant. \end{enumerate} \end{enumerate} \end{proposition} The proofs are rather standard and can be found in \cite{Maltsev63}. The following simple result will be very useful in what follows. \begin{lemma} \label{le:kuhne} Let $V$ be a real finite dimensional vector space endowed with an indefinite and non-degenerate quadratic form $\J:V\to\RR$. If a symmetric bilinear form $F:V\times V\to\RR$ is non-negative on $C_0$ then \begin{align*} r_+=\inf_{v\in C_+} \frac{F(v,v)}{\langle Jv,v\rangle} \ge \sup_{u\in C_-}\frac{F(u,u)}{\langle Ju,u\rangle}=r_- \end{align*} and for every $r$ in $[r_-,r_+]$ we have $F(v,v)\ge r\langle Jv,v\rangle$ for each vector $v$. In addition, if $F(\cdot,\cdot)$ is positive on $C_0\setminus\{0\}$, then $r_-<r_+$ and $F(v,v)> r\langle Jv,v\rangle$ for all vectors $v\neq0$ and $r\in(r_-,r_+)$. \end{lemma} \begin{remark} \label{rmk:Jseparated} Lemma~\ref{le:kuhne} shows that if $F(v,w)=\langle \tilde J v,w\rangle$ for some self-adjoint operator $\tilde J$ and $F(v,v)\ge0$ for all $v$ such that $\langle J v, v\rangle=0$, then we can find $a\in\RR$ such that $\tilde J \ge a J$. This means precisely that $\langle \tilde J v,v\rangle\ge a\langle Jv, v\rangle$ for all $v$. If, in addition, we have $F(v,v)>0$ for all $v\neq0$ such that $\langle J v, v\rangle=0$, then we obtain a strict inequality $\tilde J > a J$ for some $a\in\RR$, since the infimum in the statement of Lemma~\ref{le:kuhne} is strictly greater than the supremum.
\end{remark} The (longer) proofs of the following results can be found in~\cite{Wojtk01} or in~\cite{Pota79}; see also~\cite{Wojtk09}. For a $\J$-separated operator $L:V\to V$ and a $d$-dimensional subspace $F_+\subset C_+$, the subspaces $F_+$ and $L(F_+)\subset C_+$ carry an inner product given by $\J$. Thus both subspaces are endowed with volume elements. Let $\alpha_d(L;F_+)$ be the rate of expansion of volume of $L\mid_{F_+}$ and $\sigma_d(L)$ be the infimum of $\alpha_d(L;F_+)$ over all $d$-dimensional subspaces $F_+$ of $C_+$. \begin{proposition} \label{pr:product-vol-exp} We have $\sigma_d(L)=r_+^1 \cdots r_+^d$, where the $r^i_+$ are given by Proposition~\ref{pr:J-separated-spectrum}(2). Moreover, if $L_1,L_2$ are $\J$-separated, then $\sigma_d(L_1L_2)\ge\sigma_d(L_1)\sigma_d(L_2)$. \end{proposition} The following corollary is very useful. \begin{corollary} \label{cor:compos-max-exp} For $\J$-separated operators $L_1,L_2:V\to V$ we have \begin{align*} r_+^1(L_1L_2)\ge r_+^1(L_1) r_+^1(L_2) \quad\text{and}\quad r_-^1(L_1L_2)\le r_-^1(L_1)r_-^1(L_2). \end{align*} Moreover, if the operators are strictly $\J$-separated, then the inequalities are strict. \end{corollary} \begin{remark}\label{rmk:J-mon-spec} Another important property of the singular values of a $\J$-separated operator $L$ is that $$ r_+^1 = r_+ \ge 1 \ (\text{resp.\ } > 1) \quad\text{and}\quad r_-^1 = r_- \le 1 \ (\text{resp.\ } < 1)$$ if, and only if, $L$ is (strictly) $\J$-monotone. This property will be used many times in our proofs.
\end{remark} \subsection{Lyapunov exponents} \hfill It is well known that, under standard measurability conditions, by Oseledec's Ergodic Theorem there exists a full probability set $Y$ such that for every $x \in Y$ there is an invariant decomposition \begin{align*} T_xM = \langle X\rangle \oplus E_{1}(x) \oplus \cdots \oplus E_{l(x)}(x) \end{align*} and numbers $\chi_1 < \cdots < \chi_{l(x)}$ corresponding to the limits \begin{align*} \chi_j = \lim\limits_{t \to +\infty} \frac{1}{t} \log \Vert DX_t(x) \cdot v\Vert, \end{align*} for every $v \in E_j(x)\setminus \{0\}$, $j = 1, \dots, l(x)$. In this setting, Wojtkowski \cite{Wojtk01} proved that the logarithms of the pseudo-Euclidean singular values $0 < r_q^- \leq \cdots \leq r_1^- \leq r_1^+ \leq \cdots \leq r_p^+$ of $DX_t$ are $\mu$-integrable for each invariant probability measure $\mu$, and obtained estimates of the Lyapunov exponents related to the singular values of strictly $\J$-separated maps. \begin{theorem}\cite[Corollary 3.7]{Wojtk01} \label{thm:lyap-exp-sing-val} For $1 \leq k_1 \leq q$ and $1 \leq k_2 \leq p$ \begin{align*} \chi^-_1 + \cdots + \chi^-_{k_1} \leq \sum_{i=1}^{k_1} \int \log r^-_i d\mu \ \textrm{and} \ \chi^+_1 + \cdots + \chi^+_{k_2} \geq \sum_{i=1}^{k_2} \int \log r^+_i d\mu. \end{align*} \end{theorem} Observe that, if $X_t$ is a $\J$-separated flow on $\Lambda$, then for each fixed $t > 0$ the last theorem holds for the diffeomorphism $DX_t$ with $r^{\pm, t}_i$, where $r^{\pm, t}_i$ are the singular $\J$-values of $DX_t$. \section{Proof of Theorems} \hfill In this section, we prove our main results. First, we prove Theorem \ref{mthm:sec-hyp-equiv}, using Corollary \ref{mcor:p-sing-lyap-exp}, which is proved below. \begin{proof}[Proof of Theorem \ref{mthm:sec-hyp-equiv}] Suppose $\Lambda$ is a $p$-singular hyperbolic set of index $\indi$. Then, $1 \leq \indi \leq n-2$ and there is a dominated splitting $T_{\Lambda}M = E \oplus F$, where $E$ is uniformly contracting and $F$ is uniformly $p$-sectionally expanding.
Moreover, $\langle X \rangle \subset F$, by Lemma \ref{le:flow-center}. By using an adapted metric \cite{Goum07}, we construct a field of quadratic forms $\J$ such that $X$ is non-negative and strictly $\J$-separated. By Proposition \ref{pr:J-separated-spectrum} and Corollary \ref{cor:compos-max-exp}, there is a $\J$-diagonalization of $DX_t$ by a $\J$-isometry, which we also denote by $DX_t$, such that its spectrum has the required properties. In fact, for each singular value $r_i^-$ corresponding to the contracting subspace, we must have $r_i^- < 1$. Analogously, since $F$ is a $p$-sectionally expanding subbundle, the product of any $p$ corresponding singular values $r_{i_1}^+ \cdots r_{i_p}^+$ must be greater than one, even including the flow direction. Conversely, suppose that on a total probability subset of $\Lambda$ we have $r_1^-<1$ and $\prod_{j=1}^{p} r_{i_j}^+ > 1$, where $2 \leq p \leq \dim(M) - \indi(\J)$. Moreover, strict $\J$-separation guarantees that there exists a dominated splitting. Let $T_{\Lambda}M = E \oplus F$ be the corresponding splitting and consider the decompositions into direct sums of Lyapunov subspaces \begin{align*} E_x = \oplus_{j=0}^{r} E_j(x), \qquad F_x = \oplus_{j=0}^{s(x)-1} F_j(x). \end{align*} By Theorem \ref{thm:lyap-exp-sing-val}, \begin{align*} \chi^-_1 + \cdots + \chi^-_{r} \leq \sum_{i=1}^{r} \int \log r^-_i d\mu \ \textrm{and} \ \chi^+_{i_0} + \cdots + \chi^+_{i_p} \geq \sum_{j=1}^{p} \int \log r^+_{i_j} d\mu. \end{align*} So, we obtain that the Lyapunov exponents along $E$ are all negative and the $p$-sectional Lyapunov exponents along $F$ are all positive, on a total probability subset. Now, Theorem \ref{mthm:Lyapunov-domination} and Corollary \ref{mcor:p-sing-lyap-exp} imply that $\Lambda$ is a $p$-singular hyperbolic set for $X$.
\end{proof} We recall that, for a fixed compact $X_t$-invariant subset $\Lambda$, a family of functions $\{f_t:\Lambda\to \RR\}_{t\in \RR}$ is said to be subadditive if for every $x\in \Lambda$ and $t,s\in \RR$ we have $f_{t+s}(x)\leq f_s(x)+f_t(X_s(x))$. \begin{proof}[Proof of Theorem \ref{mthm:Lyapunov-domination}] Note that, once $T_{\Lambda}M = E \oplus F$ is a dominated splitting, there is an indefinite $C^1$ field of quadratic forms $\J$ such that $X$ is strictly $\J$-separated and, by Proposition \ref{pr:J-separated-spectrum}, \begin{align*} 0<r_-^q\le\dots\le r_-^1=r_- < r_+=r_+^1\le\dots\le r_+^p. \end{align*} Moreover, by Theorem \ref{thm:lyap-exp-sing-val}, \begin{align*} \chi^-_1 + \cdots + \chi^-_{k_1} \leq \sum_{i=1}^{k_1} \int \log r^-_i d\mu \ \textrm{and} \ \chi^+_1 + \cdots + \chi^+_{k_2} \geq \sum_{i=1}^{k_2} \int \log r^+_i d\mu. \end{align*} Since $r_- - r_+ < 0$, we obtain \begin{align*} \liminf\limits_{t \to +\infty} \frac{1}{t} \log \Vert DX_t\vert_{E_x}\Vert - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x}) =\\ = \max\{\chi_i^E(x),1\le i\le r(x)\} - \min\{\chi_i^F(x),1\le i\le s(x)\} \le \eta <0, \end{align*} for all $x \in \Lambda$, in particular, in a total probability set. Conversely, suppose that there exist a continuous invariant decomposition $T_\Lambda M = E \oplus F$ and $\eta < 0$ such that \begin{align*} \liminf\limits_{t \to +\infty} \frac{1}{t} \log \Vert DX_t\vert_{E_x}\Vert - \limsup\limits_{t \to +\infty} \frac{1}{t}\log m(DX_t\vert_{F_x})\le \eta <0, \end{align*} in a total probability set in $\Lambda$. Consider $f_t(x)=\log \frac{\Vert DX_t\vert_{E_x}\Vert}{m(DX_t\vert_{F_x})}$, which is a subadditive family of continuous functions and satisfies \begin{align*} \overline{f}(x) &= \liminf_{t\to+\infty}\frac{f_t(x)}{t}\le \liminf_{t\to+\infty}\frac1t\log\Vert DX_t\vert_{E_x}\Vert -\limsup_{t\to+\infty}\frac1t\log m( DX_t\vert_{F_x})\le \eta < 0.
\end{align*} By the Subadditive Ergodic Theorem \cite{Ki}, the function $\overline{f}(x)=\liminf\limits_{t\to+\infty}\frac{f_t(x)}{t}$ coincides with $\widetilde{f}(x)=\lim\limits_{t\to+\infty}\frac{1}{t}f_t(x)$ on a set of total probability. Moreover, for any invariant measure $\mu$ we have $\int \widetilde{f}d\mu=\lim\limits_{t\to+\infty}\int \frac{f_t}{t}d\mu$. Thus, we can use the following result from \cite{ArbSal}. \begin{proposition}\cite[Corollary 4.2]{ArbSal}\label{prop:subadd} Let $\{t\mapsto f_t:S\to \RR\}_{t\in \RR}$ be a continuous family of continuous functions which is subadditive and suppose that $\int \widetilde{f}(x) d\mu < 0$ for every $\mu\in \mathcal{M}_X$, with $\widetilde{f}(x):=\lim\limits_{t\to+\infty}\frac{1}{t}f_t(x)$. Then there exist $T > 0$ and a constant $\eta < 0$ such that for every $x\in S$ and every $t \geq T$: $$f_t(x) \leq \eta t.$$ \end{proposition} Note that all of the above holds regardless of whether $x$ is a regular or a singular point. Hence, we obtain $f_t(x) \leq k + \eta t$, $t \geq 0$, $x \in \Lambda$, for some constant $k > 0$, and this gives us the domination property on $\Lambda$. \end{proof} Now we prove Corollary \ref{mcor:equiv-partial}. \begin{proof}[Proof of Corollary \ref{mcor:equiv-partial}] Suppose that the hypotheses of the corollary hold. By Theorem \ref{mthm:Lyapunov-domination}, $E \oplus F$ is a dominated splitting on $\Lambda$. Since $E$ is an invariant subbundle, consider $f_t(x) = \log \Vert DX_t\vert_{E_{x}}\Vert$, $t\in \RR$, as our subadditive family. As in the proof of Theorem \ref{mthm:Lyapunov-domination}, we obtain $f_t(x) \leq k + \eta t$, $t \geq 0$, $x \in \Lambda$, for some constant $k > 0$. This means that $E$ is uniformly contracting under the action of $DX_t$. The case of positive Lyapunov exponents along $F$ is analogous, taking $f_t(x) = \log \Vert DX_{-t}\vert_{F_x}\Vert$. (See also the proof of \cite[Theorem B]{AraArbSal}.)
For the converse, by using adapted metrics (as in the proof of \cite[Theorem A]{ArSal2012}) we obtain a $C^1$ field $\J$ of non-degenerate quadratic forms for which $X$ is non-negative and strictly $\J$-separated. Now, Proposition \ref{pr:J-separated-spectrum} and Theorem \ref{thm:lyap-exp-sing-val} complete the proof. \end{proof} Finally, we prove Corollary \ref{mcor:p-sing-lyap-exp}. We need the following lemma. Let $\Lambda$ be a compact invariant set for the flow $X_t$ of a $C^1$ vector field $X$ on $M$. \begin{lemma} \cite{AraArbSal} \label{le:flow-center} Given a continuous splitting $T_\Lambda M = E\oplus F$ such that $E$ is uniformly contracted, then $X(x)\in F_x$ for all $x\in \Lambda$. \end{lemma} \begin{proof}[Proof of Corollary \ref{mcor:p-sing-lyap-exp}] By Theorem \ref{mthm:Lyapunov-domination}, $T_\Lambda M = E \oplus F$ is a dominated splitting. If $x=\sigma \in \sing(X)$, we obtain the desired features by hyperbolicity. Following Corollary \ref{mcor:equiv-partial}, we obtain that this is a partially hyperbolic splitting as well, with the subbundle $E$ uniformly contracting. By Lemma \ref{le:flow-center}, if $x$ is a regular point, the flow direction $E^X(x)$ is contained in $F(x)$. Since $F$ is an invariant subbundle, consider $f_t(x) = \log \Vert \wedge^p DX_t\vert_{F_{x}}\Vert$, $t\in \RR$, and the decomposition into a direct sum of Lyapunov subspaces \begin{align*} F_x = \oplus_{j=0}^{s(x)-1} F_j(x). \end{align*} One of them, say $F_0(x)=E^X_x$, is generated by $X(x) \neq 0$. Denote by $\chi_j^F(x)$, $j=1, \dots, s(x)-1$, the corresponding Lyapunov exponents. Fixing $i_1, \dots, i_{p-1} \in \{1, \dots, s(x)-1\}$ and considering vectors $v_1 \in F_{i_1}(x)\setminus\{0\}, \dots, v_{p-1} \in F_{i_{p-1}}(x)\setminus\{0\}$, put $L = \operatorname{span} \{X(x), v_1, \dots, v_{p-1}\}$, the generated $p$-plane.
From the assumption, \begin{align*} 0 < \chi \leq \liminf\limits_{t \to +\infty} \frac{1}{t} \log \Vert \wedge^p DX_t\vert_L \Vert = \chi_0^F + \chi_{i_1}^F +\cdots +\chi_{i_{p-1}}^F, \end{align*} and we obtain \begin{align*} \chi_0^F + \sum_{j=1}^{p-1}\chi_{i_j}^F \geq \chi > 0, \quad \text{for all } i_j \in \{1, \dots, s(x)-1\}. \end{align*} For a singularity $\sigma \in \Lambda$, we must have $\overline{f}(\sigma) \leq - \chi$, as a consequence of domination. Now we apply the following proposition from \cite{arbieto2010}: \begin{proposition} \label{prop3.4-arbieto} Let $\{t\mapsto f_t:\Lambda\to \RR\}_{t\in \RR}$ be a continuous family of continuous functions which is subadditive and suppose that $\ov{f}(x)<0$ in a set of total probability. Then there exist constants $C>0$ and $\lambda<0$ such that for every $x\in \Lambda$ and every $t>0$ we have $\exp(f_t(x))\leq C \exp(\frac{\lambda t}{2}).$ \end{proposition} Applying it to the function $f_t(x)$ gives us constants $D > 0$ and $\eta < 0$ for which $\|\wedge^p DX_{-t}|_{\wedge^p F_{X_t(x)}}\|\le De^{\eta t}$, so $F$ is a $p$-sectionally expanding subbundle. The converse follows along the lines of the previous proof, using Proposition \ref{pr:J-separated-spectrum} and Theorem \ref{thm:lyap-exp-sing-val}. This finishes the proof. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} How different are $\mathbb{R}^m$ and $\mathbb{R}^n$? It is intuitively obvious that $\mathbb{R}^m$ and $\mathbb{R}^n$ are not homeomorphic whenever $m\not=n$. However, it is not as easy as it appears. Quite a few prominent mathematicians tried to solve this {\em invariance of dimension} problem, and nobody before Brouwer could succeed to provide a correct rigorous proof (see \cite[Section 5.1]{vanDalen} for the history of the invariance of dimension problem). In the early days of topology, Brouwer proved three important theorems: {\em The Brouwer fixed point theorem}, {\em the invariance of dimension theorem}, and {\em the invariance of domain theorem}. Modern proofs of these theorems make use of singular homology theory \cite{Hatcher} or its relative of the same nature, but even today, no direct proof (only using elementary topology) has been found. Brouwer's intuitionistic standpoint eventually led him to refuse his theorems, and even propose a ``counterexample'' to his fixed point theorem. As an alternative, Brouwer introduced an approximate version of the fixed point theorem (which follows from Sperner's lemma); however it does not provide us an approximation of an actual fixed point as already pointed out by Brouwer himself, cf.\ \cite[p.\ 503]{vanDalen}. (Indeed, there is {\bf no} computable algorithm which, given a sequence $(x_n)_{n\in\mathbb{N}}$ of points such that $x_n$ looks like a fixed point within precision $2^{-n}$, produces an approximation of an actual fixed point.) Then, how non-constructive are Brouwer's original theorems? We examine this problem from the perspective of reverse mathematics. Reverse mathematics is a program to determine the exact (set-existence) axioms which are needed to prove theorems of ordinary mathematics. 
We employ a subsystem ${\sf RCA}_0$ of second order arithmetic as our base system, which consists of Robinson arithmetic (or the theory of the non-negative parts of discretely ordered rings), $\Sigma^0_1$-induction schema, and $\Delta^0_1$-comprehension schema, cf.\ \cite{Simpson,Stillwell}. Roughly speaking, the system ${\sf RCA}_0$ corresponds to computable mathematics, which has enough power to show the approximate fixed point theorem (cf.\ \cite[Section IV.7]{Simpson}). On the other hand, Orevkov \cite{Orevkov} showed that the Brouwer fixed point theorem is invalid in computable mathematics in a rigorous sense; hence ${\sf RCA}_0$ is not enough for proving the actual fixed point theorem. In the Bishop-style constructive mathematics, it is claimed that a uniform continuous version of the invariance of dimension theorem has a constructive proof (cf.\ Beeson \cite[Section I.19]{Bee85}). Similarly, in the same constructive setting, Julian-Mines-Richman \cite{JMR83} studied the Alexander duality theorem and the Jordan-Brouwer separation theorem (which are basic tools to show the invariance of domain theorem in modern algebraic topology, cf.\ \cite{Hatcher}). However, these constructive versions are significantly different from original ones (from constructive and computable viewpoints). Concerning the original theorems, Shioji-Tanaka \cite{ShTa90} (see also \cite[Section IV.7]{Simpson}) utilized Orevkov's idea to show that, over ${\sf RCA}_0$, the Brouwer fixed point theorem is equivalent to {\em weak K\"onig's lemma} (${\sf WKL}$): Every infinite binary tree has an infinite path. Other examples equivalent to ${\sf WKL}$ include the Jordan curve theorem and the Sch\"onflies theorem \cite{SaYo07}. 
In his book \cite{Stillwell}, John Stillwell wrote ``finding the exact strength of the Brouwer invariance theorems seems to me one of the most interesting open problems in reverse mathematics.'' In this article, we solve this problem by showing that some forms of the Brouwer invariance theorems are equivalent to weak K\"onig's lemma over the base system ${\sf RCA}_0$. \begin{Theorem}\label{thm:main-theorem} The following are equivalent over ${\sf RCA}_0$: \begin{enumerate} \item Weak K\"onig's lemma. \item (Invariance of Domain) Let $U\subseteq\mathbb{R}^m$ be an open set, and $f\colon U\to\mathbb{R}^m$ be a continuous injection. Then, the image $f[U]$ is also open. \item (Invariance of Dimension I) If $m>n$ then there is no continuous injection from $\mathbb{R}^m$ into $\mathbb{R}^n$. \item (Invariance of Dimension II) If $m>n$ then there is no topological embedding of $\mathbb{R}^m$ into $\mathbb{R}^n$. \end{enumerate} \end{Theorem} \begin{proof} For (1)$\Rightarrow$(2), as mentioned in Stillwell \cite{Stillwell}, the usual algebraic topology machinery (cf.\ \cite{Hatcher}) is available in ${\sf WKL}_0$. A simpler proof of the invariance of domain theorem is presented in Tao \cite[Section 6.2]{Tao}, which can also be carried out in ${\sf WKL}_0$. For (2)$\Rightarrow$(3), suppose $m>n$ and that there is a continuous injection $f$ from $\mathbb{R}^m$ into $\mathbb{R}^n$. Define $g\colon\mathbb{R}^m\to\mathbb{R}^m$ by $g(x)=(f(x),0,0,\dots,0)$. Then, $g$ is also a continuous injection. Hence, by invariance of domain, the image of $g$ is open. However, if $m>n$, then $\{(z,0,0,\dots,0)\in\mathbb{R}^m:z\in\mathbb{R}^n\}$ does not contain a nonempty open set. This is a contradiction; thus $m\leq n$. The implication (3)$\Rightarrow$(4) is obvious. We devote the rest of the paper to proving the implication (4)$\Rightarrow$(1).
\end{proof} We first describe the outline of our strategy for (the contrapositive of) (4)$\Rightarrow$(1): First, we will show that several basic results in topological dimension theory are provable in ${\sf RCA}_0$. More explicitly, ${\sf RCA}_0$ proves that, whenever the $n$-sphere $\mathbb{S}^n$ is an absolute extensor for $X$, the covering dimension of $X$ is at most $n$. We also show that the N\"obeling imbedding theorem (stating that every $n$-dimensional Polish space is topologically embedded into a ``universal'' $n$-dimensional subspace of $\mathbb{R}^{2n+1}$) is provable in ${\sf RCA}_0$. Then, under ${\sf RCA}_0+\neg{\sf WKL}$, we will show that the $1$-sphere $\mathbb{S}^1$ is an absolute extensor (for all Polish spaces). This means that, under $\neg{\sf WKL}$, {\em every Polish space is at most one-dimensional}, and therefore, by the N\"obeling imbedding theorem, every Polish space is topologically embedded into $\mathbb{R}^3$. In particular, we will see that, assuming $\neg{\sf WKL}$, a topological embedding of $\mathbb{R}^4$ into $\mathbb{R}^3$ {\bf does} exist. However, the following two questions remain open. \begin{Question} Does ${\sf RCA}_0$ prove that there is no topological embedding of $\mathbb{R}^3$ into $\mathbb{R}^2$? \end{Question} \begin{Question} Does ${\sf RCA}_0$ prove that $\mathbb{R}^m$ is not homeomorphic to $\mathbb{R}^n$ whenever $m\not=n$? \end{Question} \subsection{Preliminaries} We assume that the reader is familiar with reverse mathematics (cf.~Stillwell \cite{Stillwell} and Simpson \cite{Simpson}). In particular, we use standard formulations of mathematical concepts in second order arithmetic: A real number is coded as a Cauchy sequence of rational numbers with modulus of convergence (\cite[Definition II.4.4]{Simpson}). 
A Polish space $X$ is coded as a pair of a countable set $A\subseteq\mathbb{N}$ (which represents a countable dense subset of a space $X$) and a function $d\colon A^2\to\mathbb{R}$ (\cite[Definition II.5.2]{Simpson}). A code of an open set $U\subseteq X$ is any sequence of rational open balls $B_n$ whose union is $U$ (\cite[Definition II.5.6]{Simpson}). A code of a partial continuous function $f\colon\!\!\!\subseteq X\to Y$ is any data $\Phi$ specifying a modulus of pointwise continuity for $f$; that is, if $(a,r,b,s)$ is enumerated into $\Phi$ at some round, then $x\in{\rm dom}(f)$ and $d_X(x,a)<r$ implies $d_Y(f(x),b)\leq s$ (\cite[Definition II.6.1]{Simpson}). A topological embedding $f$ of $X$ into $Y$ is coded as a pair of (codes of) continuous functions $(f,g)$ such that $g\circ f(x)=x$ for any $x\in X$. In particular, we note that a ``code'' of some mathematical object can always be considered as an element of $\mathbb{N}^\mathbb{N}$. In reverse mathematics, we often use sentences like ``for a given $x$ one can {\em effectively} find a $y$ such that $\dots$'' when there is a partial continuous function $f\colon\!\!\!\subseteq\mathbb{N}^\mathbb{N}\to\mathbb{N}^\mathbb{N}$ such that if $\dot{x}$ is a code of $x$ then $f(\dot{x})$ is defined and returns a code of such a $y$. \section{Proof of (4)$\Rightarrow$(1)} \subsection{Coincidence of dimension} In this section, we discuss a few basic results on topological dimension theory within ${\sf RCA}_0$. For basics on classical topological dimension theory, see Engelking \cite{Engelking} and Nagata \cite{Nagata}. It is not hard to see that the results we will discuss in this section are provable within ${\sf RCA}$ (i.e., ${\sf RCA}_0$ plus full induction); however, most basic results in topological dimension theory involve induction arguments (see Lemma \ref{lem:coceshrinking} and Lemma \ref{lem:lemma2}), so we will need a few tricks to make the proofs work only with $\Sigma^0_1$-induction.
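To fix ideas about the coding conventions recalled above, here is a trivial instance (purely illustrative): the continuous function $f(x)=2x$ on $X=Y=\mathbb{R}$ admits the code $\Phi=\{(a,r,2a,2r):a\in\mathbb{Q},\ r\in\mathbb{Q},\ r>0\}$, since $d(x,a)<r$ implies $d(2x,2a)=2d(x,a)\leq 2r$, so each tuple $(a,r,2a,2r)$ is a correct piece of continuity information for $f$; moreover, the map sending (a code of) $z\in\mathbb{R}$ to (a code of) $2z$ witnesses that one can effectively find $2z$ from $z$.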
\subsubsection{Normality} A space $X$ is {\em normal} if for any (negative codes of) disjoint closed sets $P_0,P_1\subseteq X$, one can find (positive codes of) disjoint open sets $S_0,S_1\subseteq X$ such that $P_0\subseteq S_0$ and $P_1\subseteq S_1$. A space $X$ is {\em perfectly normal} if for any disjoint closed sets $P_0,P_1\subseteq X$, one can effectively find a (code of) continuous function $g\colon X\to[0,1]$ such that for all $x\in X$ and $i<2$, $x\in P_i$ if and only if $g(x)=i$. Note that we require effectivity for all notions to reduce the complexity of induction involved in our proofs. It is known that the effective version of Urysohn's lemma is provable within ${\sf RCA}_0$ as follows: \begin{Fact}[cf.\ Simpson {\cite[Lemma II.7.3]{Simpson}}]\label{fact:perfectly-normal} Over ${\sf RCA}_0$, every Polish space is perfectly normal. \qed \end{Fact} Let $\mathcal{U}$ be a cover of a space $X$. A cover $\mathcal{V}$ of $X$ is a {\em refinement of} $\mathcal{U}$ if for any $B\in\mathcal{V}$ there is $A\in\mathcal{U}$ such that $B\subseteq A$. A {\em shrinking of a cover $\mathcal{U}=(U_i)_{i<s}$} of $X$ is a cover $\mathcal{V}=(V_i)_{i<s}$ of $X$ such that $V_i\subseteq U_i$ for any $i<s$. \begin{Lemma}[${\sf RCA}_0$]\label{lem:coceshrinking} Let $X$ be a perfectly normal space. Then, for every finite open cover $\mathcal{U}$ of $X$, one can effectively find a closed shrinking of $\mathcal{U}$. \end{Lemma} \begin{proof} Let $\mathcal{U}=\{U_i\}_{i<k}$ be a finite open cover. By perfect normality of $X$, for each $i<k$ one can effectively find a continuous function $g_i\colon X\to[0,1]$ such that $g_i(x)>0$ iff $x\in U_i$ for any $x\in X$. One can effectively construct (a code of) the following sequence $\langle g'_i,\tilde{g}_i\rangle_{i<k}$ of (possibly partial) continuous functions: \begin{align*} \tilde{g}_i(x)&=\frac{g_i(x)}{g_i(x)+\max\{g_s'(x),g_t(x):s<i<t<k\}},\\ g'_i(x)&=\max\left\{0,\tilde{g}_i(x)-\frac{1}{2}\right\}.
\end{align*} Fix $x\in X$. By $\Sigma^0_1$-induction, we show that the denominator in the definition of $\tilde{g}_i(x)$ is nonzero. Note that $g_i(x)>0$ for some $i<k$ since $(U_i)_{i<k}$ covers $X$. This verifies the base case. We inductively assume that the denominator of $\tilde{g}_i(x)$ is nonzero, that is, $g'_s(x)>0$ for some $s<i$ or $g_t(x)>0$ for some $t\geq i$. Suppose that the denominator of $\tilde{g}_{i+1}(x)$ is zero, that is, $g'_s(x)=0$ for every $s\leq i$ and $g_t(x)=0$ for every $t>i$. Note that $g_i'(x)=0$ implies $\tilde{g}_i(x)\leq 1/2$, and therefore, by definition of $\tilde{g}_i$, we have \[ g_i(x)\leq \max\{g_s'(x),g_t(x):s<i<t<k\}=0. \] However, this contradicts the induction hypothesis. Hence, $\langle g'_i,\tilde{g}_i\rangle_{i<k}$ defines a sequence of total continuous functions, and for any $x\in X$, we have $g_i'(x)>0$ for some $i<k$ as seen above. This means that the family $(W_i)_{i<k}$, where $W_i=\{x\in X:g_i'(x)>0\}=\{x\in X:\tilde{g}_i(x)>1/2\}$, covers $X$. Therefore, the closed sets $F_i=\{x\in X:\tilde{g}_i(x)\geq 1/2\}$ also cover $X$. Now, if $g_i(x)=0$ then clearly $\tilde{g}_i(x)=0<1/2$; hence we have $W_i\subseteq F_i\subseteq U_i$. This shows that $(F_i)_{i<k}$ is a closed shrinking of $(U_i)_{i<k}$. \end{proof} \subsubsection{Star refinement} Let $S\subseteq X$ and $\mathcal{U}$ be a cover of a space $X$. A {\em star of $S$ w.r.t.~$\mathcal{U}$} is defined as follows: \[{\rm st}(S,\mathcal{U})=\bigcup\{U\in\mathcal{U}:S\cap U\not=\emptyset\}.\] We define $\mathcal{U}^\star$ by $\{{\rm st}(U,\mathcal{U}):U\in\mathcal{U}\}$. A {\em star refinement} of a cover $\mathcal{U}$ of $X$ is a cover $\mathcal{V}$ of $X$ such that $\mathcal{V}^\star$ is a refinement of $\mathcal{U}$. It is known that a space is normal iff every finite open cover has a finite open star refinement. \begin{Lemma}[${\sf RCA}_0$]\label{lem:star-refinement} Let $X$ be a normal space.
Then, for every finite open cover $\mathcal{U}$ of $X$, one can effectively find a finite open star refinement of $\mathcal{U}$. \end{Lemma} \begin{proof} Given a finite open cover $\mathcal{U}=\{U_i\}_{i<k}$ of $X$, as in the proof of Lemma \ref{lem:coceshrinking}, one can effectively find a closed shrinking $\{F_i\}_{i<k}$ and an open shrinking $\mathcal{W}=\{W_i\}_{i<k}$ such that $W_i\subseteq F_i\subseteq U_i$ for each $i<k$. Then, $\mathcal{V}_i=\{X\setminus F_i,U_i\}$ is an open cover of $X$. We define $\mathcal{V}$ as the following open cover of $X$: \[\mathcal{V}=\mathcal{W}\wedge\bigwedge_{i<k}\mathcal{V}_i:=\left\{W\cap\bigcap_{i<k}V_i:W\in\mathcal{W},\;V_i\in\mathcal{V}_i\right\}.\] We claim that if $V\in\mathcal{V}$ is of the form $W_{\ell}\cap\bigcap_{i<k}V_i$, then ${\rm st}(V,\mathcal{V})\subseteq U_{\ell}$. For any $V^*\in\mathcal{V}$ of the form $W_m\cap\bigcap_{i<k}V^*_i$, if $V\cap V^*\not=\emptyset$, then $V^*_\ell\not=X\setminus F_\ell$ since $V\subseteq W_\ell\subseteq F_\ell$. Therefore, $V^*\subseteq V^*_\ell=U_\ell$. Consequently, $\mathcal{V}$ is an open star refinement of $\mathcal{U}$ as desired. \end{proof} We also define $\mathcal{U}^\triangle$ by $\{{\rm st}(\{x\},\mathcal{U}):x\in X\}$. A {\em point-star refinement} (or a {\em barycentric refinement}) of a cover $\mathcal{U}$ of $X$ is a cover $\mathcal{V}$ of $X$ such that $\mathcal{V}^\triangle$ is a refinement of $\mathcal{U}$. Clearly, every star refinement is a point-star refinement. \subsubsection{Absolute extensor} A space $K$ is called an {\em absolute extensor} for a space $X$ if for any continuous map $f\colon P\to K$ on a closed set $P\subseteq X$, one can find a continuous map $g\colon X\to K$ extending $f$, that is, $g\mathop{\upharpoonright} P=f\mathop{\upharpoonright} P$. It is known that the topological dimension (and the cohomological dimension) of a normal space can be characterized in terms of absolute extensors.
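To get a feel for how spheres can detect dimension, it may help to spell out the lowest case; the following warm-up is our own illustration and is not used in the proofs below.

```latex
% Warm-up (illustration only): the case of the 0-sphere.
% Here $\mathbb{S}^0=\{0,1\}$ carries the discrete topology.
Let $f\colon P\to\mathbb{S}^0$ be continuous on a closed set $P\subseteq X$,
and put $A=f^{-1}\{0\}$ and $B=f^{-1}\{1\}$, a pair of disjoint closed sets
with $A\cup B=P$. A continuous extension $g\colon X\to\mathbb{S}^0$ of $f$
is the same thing as a partition $X=G_0\cup G_1$ into disjoint open (hence
clopen) sets $G_i=g^{-1}\{i\}$ with $A\subseteq G_0$ and $B\subseteq G_1$.
Thus, $\mathbb{S}^0$ is an absolute extensor for $X$ exactly when any two
disjoint closed sets in $X$ can be separated by a clopen partition, which
matches the classical description of zero-dimensionality.
```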
Classically, it is known that the covering dimension of $X$ is at most $n$ if and only if the $n$-sphere $\mathbb{S}^n$ is an absolute extensor for $X$ (cf.\ \cite[Theorem 1.9.3]{Engelking} or \cite[Theorem III.2]{Nagata}). This equivalence is due to Eilenberg-Otto. To prove the equivalence, Eilenberg-Otto introduced the notion of an essential family. We will need effectivity for inessentiality to reduce the complexity of induction. Therefore, instead of considering essentiality of a family, we consider the following notion: A space $X$ is {\em $(n+1)$-inessential} if for any sequence $(A_i,B_i)_{i<n+1}$ of disjoint pairs of closed sets in $X$, one can effectively find a sequence $(U_i,V_i)_{i<n+1}$ of disjoint open sets in $X$ such that $A_i\subseteq U_i$ and $B_i\subseteq V_i$ for each $i\leq n$ and $(U_i\cup V_i)_{i<n+1}$ covers $X$. \begin{Lemma}[${\sf RCA}_0$]\label{lem:lemma1} Let $X$ be a Polish space. If the $n$-sphere $\mathbb{S}^n$ is an absolute extensor for $X$, then $X$ is $(n+1)$-inessential. \end{Lemma} \begin{proof} As the boundary $\partial {\mathbb{I}}^{n+1}$ of the $(n+1)$-hypercube $\mathbb{I}^{n+1}$ is homeomorphic to $\mathbb{S}^n$, we can assume that $\partial {\mathbb{I}}^{n+1}$ is an absolute extensor for $X$. Given a sequence $(A_i,B_i)_{i<n+1}$ of disjoint pairs of closed sets, one can define $f\colon\bigcup_{i<n+1}(A_i\cup B_i)\to\partial {\mathbb{I}}^{n+1}$ such that $(\pi_i\circ f)^{-1}\{0\}=A_i$ and $(\pi_i\circ f)^{-1}\{1\}=B_i$ by perfect normality (Fact \ref{fact:perfectly-normal}), where $\pi_i$ is the projection into the $i$th coordinate. Then, by our assumption, we have $g\colon X\to\partial {\mathbb{I}}^{n+1}$ which agrees with $f$ on $\bigcup_{i<n+1}(A_i\cup B_i)$. Define $U_i:=(\pi_i\circ g)^{-1}[0,1/2)$ and $V_i:=(\pi_i\circ g)^{-1}(1/2,1]$. Then, $(U_i,V_i)_{i<n+1}$ covers $X$ since the range of $g$ is contained in $\partial\mathbb{I}^{n+1}$. Hence, the sequence $(U_i,V_i)$ witnesses the condition of $(n+1)$-inessentiality. 
\end{proof} \subsubsection{Covering dimension} Let $\mathcal{U}$ be a cover of a space $X$. We say that the {\em order of $\mathcal{U}$ is at most $n$} if for any pairwise distinct $U_0,U_1,\dots,U_{n+1}\in\mathcal{U}$ we have $\bigcap_{i<n+2}U_i=\emptyset$. A space $X$ has the {\em covering dimension at most $n$} if for any finite open cover of $X$, one can effectively find a finite open refinement of order at most $n$. \begin{Lemma}[${\sf RCA}_0$]\label{lem:lemma2} Let $X$ be a Polish space. If $X$ is $(n+1)$-inessential, then the covering dimension of $X$ is at most $n$. \end{Lemma} \begin{proof} We first show the following claim. \begin{Claim}[${\sf RCA}_0$]\label{claim:inessential} If $X$ is $(n+1)$-inessential, then for any open cover $\mathcal{U}=(U_i)_{i<n+2}$ of $X$, one can effectively find an open shrinking $\mathcal{W}=(W_i)_{i<n+2}$ of $\mathcal{U}$ such that $\bigcap\mathcal{W}=\emptyset$. \end{Claim} \begin{proof} We follow the argument in Engelking \cite[Theorem 1.7.9]{Engelking}. Given an open cover $\mathcal{U}=(U_i)_{i<n+2}$ of $X$, pick a closed shrinking $(F_i)_{i<n+2}$ by Lemma \ref{lem:coceshrinking}. Then, consider the sequence $(F_i,X\setminus U_i)_{i<n+1}$ of disjoint pairs of closed sets. By $(n+1)$-inessentiality, one can find a sequence of disjoint open sets $(W_i,V_i)_{i<n+1}$ in $X$ such that $F_i\subseteq W_i$, $X\setminus U_i\subseteq V_i$ and $\bigcup_{i<n+1}(W_i\cup V_i)$ covers $X$; by disjointness, we moreover have $W_i\subseteq U_i$ and $V_i\subseteq X\setminus F_i$. Define $W_{n+1}:=U_{n+1}\cap\bigcup_{i<n+1}V_i$. As $F_{n+1}\subseteq U_{n+1}$, we have the following: \[\bigcup\mathcal{W}=\left[\bigcup_{i<n+1}W_i\cup U_{n+1}\right]\cap\left[\bigcup_{i<n+1}W_i\cup\bigcup_{i<n+1}V_i\right]\supseteq\bigcup_{i<n+2}F_i=X.\] Thus, $\mathcal{W}=(W_i)_{i<n+2}$ is an open cover of $X$.
Moreover, as $V_i$ and $W_i$ are disjoint, we have \[\bigcap_{i<n+2}W_i=\bigcap_{i<n+1}W_i\cap\left[U_{n+1}\cap\bigcup_{i<n+1}V_i\right]\subseteq\bigcap_{i<n+1}W_i\cap\bigcup_{i<n+1}V_i=\emptyset.\] Thus $\mathcal{W}$ is an open shrinking of $\mathcal{U}$ with $\bigcap\mathcal{W}=\emptyset$, as desired. \end{proof} We then follow the argument in Engelking \cite[Theorem 1.6.10]{Engelking}. Suppose that $\mathcal{U}=\{U_i\}_{i<s}$ is a finite open cover of $X$. Let $[s]^{n+2}$ be the collection of all sets $D\subseteq s$ such that $|D|=n+2$, and $D_e$ be the $e$-th element in $[s]^{n+2}$. Put $b:=|[s]^{n+2}|=\binom{s}{n+2}$. Set $U^{-1}_i=U_i$. We will construct a sequence $(F_i^e,U_i^e)_{i<s,e<b}$ of pairs of a closed set $F_i^e$ and an open set $U_i^e$ such that $(U^e_i)_{i<s}$ is an open shrinking of $\mathcal{U}$, and moreover, \begin{align*} (\forall i<s)\;U^{e}_i\subseteq F^{e}_i\subseteq U^{e-1}_i,\mbox{ and }\bigcap_{i\in D_e}U_i^e=\emptyset. \end{align*} Given a sequence $\mathcal{U}=(U_i)_{i<s}$ of open sets given as the cozero sets of $(u_i)_{i<s}$, by Claim \ref{claim:inessential}, one can effectively find a code of a sequence $(w_i)_{i\in D_e}$ of partial continuous functions such that, whenever $\mathcal{U}$ is a cover of $X$, $w_i$ is total, the cozero sets $\mathcal{W}=(W_i)_{i\in D_e}$ of $(w_i)_{i\in D_e}$ are an open shrinking of $(U_i)_{i\in D_e}$, and $\mathcal{U}':=(W_i,U_j:i\in D_e,\,j\not\in D_e)$ covers $X$. Put $u_i'=u_i$ for $i\not\in D_e$ and $u_i'=w_i$ for $i\in D_e$. Then, $\mathcal{U}'$ is given as a collection of cozero sets of $u_i'$'s. Then, by Lemma \ref{lem:coceshrinking}, one can effectively find a code of a sequence $(\tilde{v}_i)_{i<s}$ of partial continuous functions such that, whenever $\mathcal{U}$ is a cover of $X$, $\tilde{v}_i$ is total, $u'_i(x)=0$ implies $\tilde{v}_i(x)=0$, and $(V_i)_{i<s}$ with $V_i=\{x:\tilde{v}_i(x)>1/2\}$ covers $X$. Put $F_i=\{x:\tilde{v}_i(x)\geq 1/2\}$, and $v_i(x)=\max\{0,\tilde{v}_i(x)-1/2\}$.
It is clear that if $\mathcal{U}$ is an open cover of $X$, then $(V_i)_{i<s}$ is an open shrinking of $\mathcal{U}$, and moreover, \[V_i\subseteq F_i\subseteq U_i,\mbox{ and }\bigcap_{i\in D_e}V_i=\emptyset.\] To reduce the complexity of induction, we now note that the construction $(u_i)_{i<s}\mapsto(v_i,\tilde{v}_i)_{i<s}$ is effective, i.e., has an explicit $\Sigma^0_1$-description $\Phi$. Hence, one can effectively obtain (a code of) a sequence $(\tilde{g}_i^e,g_i^e)_{e,i}$ such that, for each $e<b$, the input $(g_i^{e-1})_{i<s}$ and the output $(\tilde{g}_i^e,g_i^{e})_{i<s}$ satisfy the $\Sigma^0_1$-condition $\Phi$ describing the above construction. Then, define $U^e_i=\{x:\tilde{g}^e_i(x)>1/2\}$ and $F^e_i=\{x:\tilde{g}^e_i(x)\geq 1/2\}$. We first check that $(U^e_i)_{i<s}$ forms an open cover for any $e<b$. Fix $x\in X$. By $\Sigma^0_1$-induction, one can easily show that for any $e$, $x\in U^e_i$ for some $i<s$. Next, we see that $U^d_i\subseteq U^e_i$ for any $e\leq d<b$. Fix $x\in X$. Note that $g^{e-1}_i(x)=0$ implies $\tilde{g}^e_i(x)<1/2$, and this condition is $\Sigma^0_1$. For $d>e$, inductively assume that $g^{e-1}_i(x)=0$ implies $\tilde{g}^d_i(x)<1/2$. Then $\tilde{g}^d_i(x)<1/2$ clearly implies $g^d_i(x)=0$, and therefore, $\tilde{g}^{d+1}_i(x)<1/2$. By $\Sigma^0_1$-induction, we obtain that $g^{e-1}_i(x)=0$ implies $\tilde{g}^d_i(x)<1/2$ for any $d>e$. Hence, $g^{e-1}_i(x)=0$ implies $g^d_i(x)=0$ for $d>e$, which implies that $U^d_i\subseteq U^e_i$ for any $e\leq d<b$. Finally, we put $V_i=U^{b-1}_i$. We have shown that $(V_i)_{i<s}$ is an open shrinking of $\mathcal{U}$. It remains to show that the order of $(V_i)_{i<s}$ is at most $n$. To see this, it suffices to show that for any $e$, $\bigcap_{i\in D_e}V_i=\emptyset$. As shown above, $\mathcal{U}^{e-1}=(U^{e-1}_i)_{i<s}$ forms an open cover. Therefore, $(U^{e}_i)_{i<s}$ is an open shrinking of $\mathcal{U}^{e-1}$ such that $\bigcap_{i\in D_e}U^e_i=\emptyset$.
Then, as seen before, we have $V_i=U^{b-1}_i\subseteq U^e_i$ for any $i<s$. Therefore, $\bigcap_{i\in D_e}V_i=\emptyset$ as desired. \end{proof} \subsection{N\"obeling's imbedding theorem} The {\em $n$-dimensional N\"obeling space} $N^n$ is a subspace of ${\mathbb{I}}^{2n+1}$ consisting of points with at most $n$ rational coordinates. The N\"obeling imbedding theorem says that an $n$-dimensional separable metrizable space can be topologically embedded into the $n$-dimensional N\"obeling space. We will see that the N\"obeling imbedding theorem is provable in ${\sf RCA}_0$ in the following sense: \begin{Theorem}[${\sf RCA}_0$]\label{thm:Nobeling} If the covering dimension of a Polish space $X$ is at most $n$, then $X$ can be topologically embedded into the $n$-dimensional N\"obeling space. More precisely, there is a topological embedding $f$ of $X$ into $\mathbb{I}^{2n+1}$ such that for any $x\in X$, at most $n$ coordinates of $f(x)$ are rational. \end{Theorem} \subsubsection{The modified Kuratowski mapping} We say that points $\{p_i\}_{i<\ell}$ in $\mathbb{I}^{d+1}$ are in {\em general position} if for any $0\leq m\leq d$, no $m+2$ points from $\{p_i\}_{i<\ell}$ lie in an $m$-dimensional hyperplane of $\mathbb{I}^{d+1}$. The following is an easy effectivization of a very basic observation (cf.\ Engelking \cite[Theorem 1.10.2]{Engelking}). \begin{Observation}[${\sf RCA}_0$]\label{obs:general-position} Given $\varepsilon>0$ and points $q_1,\dots,q_k\in \mathbb{R}^m$, one can effectively find $p_1,\dots,p_k\in \mathbb{R}^m$ in general position such that $d(p_i,q_i)<\varepsilon$ for any $i\leq k$. \qed \end{Observation} A {\em polyhedron} is a geometric realization $|\mathcal{K}|$ of a simplicial complex $\mathcal{K}$ in a Euclidean space. We approximate a given space by a polyhedron as follows: Let $\mathcal{U}=(U_i)_{i<k}$ be a finite open cover of $X$.
The {\em nerve of $\mathcal{U}$} is an abstract simplicial complex $\mathcal{N}(\mathcal{U})$ with $k$ many vertices $\{p_i\}_{i<k}$ such that an $m$-simplex $\{p_{j_0},\dots,p_{j_{m}}\}$ belongs to $\mathcal{N}(\mathcal{U})$ iff $U_{j_0}\cap\dots\cap U_{j_{m}}$ is nonempty. We define the function $\kappa:X\to |\mathcal{N}(\mathcal{U})|$ as follows: \[\kappa(x)=\frac{\sum_{i=0}^{k-1}d(x,X\setminus U_i)p_i}{\sum_{j=0}^{k-1}d(x,X\setminus U_j)}.\] The function $\kappa$ is called {\em the $\kappa$-mapping (or Kuratowski mapping) determined by $\mathcal{U}$ and $(p_i)_{i<k}$}. For basics on the $\kappa$-mapping, see also Engelking \cite[Definition 1.10.15]{Engelking}, and Nagata \cite[Section IV.5]{Nagata}. However, we cannot ensure the existence of the function $(x,i)\mapsto d(x,X\setminus U_i)$ within ${\sf RCA}_0$. Therefore, we introduce a replacement for the $\kappa$-mapping. Recall that, within ${\sf RCA}_0$, given an open set $U_i$, one can effectively find a continuous function $u_i\colon X\to[0,1]$ whose cozero set is exactly $U_i$. The {\em modified $\kappa$-mapping} $\kappa\colon X\to \mathbb{I}^{2n+1}$ determined by $(u_i)_{i<s}$ and points $(z_i)_{i<s}$ in $\mathbb{I}^{2n+1}$ is defined as follows: \[\kappa(x)= \frac{\sum_{i<s}u_i(x)z_i}{\sum_{j<s}u_j(x)}. \] The denominator of the above formula is nonzero whenever $\mathcal{U}$ is a cover of $X$. Given $x\in X$, let $\Lambda(x)$ be the set of all indices $e<s$ such that $x\in U_e$. Such sets exist by bounded $\Sigma^0_1$ comprehension within ${\sf RCA}_0$. Let $Z(x)$ be the hyperplane spanned by $(z_e:e\in\Lambda(x))$. \begin{Claim}[${\sf RCA}_0$]\label{claim:kappa-in-L} For any $x\in X$, $\kappa(x)$ is contained in the convex hull of $(z_e:e\in\Lambda(x))$, and in particular, $\kappa(x)\in Z(x)$. \end{Claim} \begin{proof} Fix $x\in X$. By definition of $u_i$, $x\not\in U_i$ (i.e., $i\not\in\Lambda(x)$) implies $u_i(x)=0$. Put $\lambda_i=u_i(x)/(\sum_{j\in\Lambda(x)}u_j(x))$.
Clearly, $\sum_{i\in\Lambda(x)}\lambda_i=1$, and $\kappa(x)=\sum_{i\in\Lambda(x)}\lambda_iz_i$. Hence, $\kappa(x)$ is contained in the convex hull of $(z_e:e\in\Lambda(x))$. \end{proof} \subsubsection{Proof of Theorem \ref{thm:Nobeling}} First note that, to work within ${\sf RCA}_0$, we need to avoid any use of compactness. Therefore, we cannot use the standard proof of N\"obeling's imbedding theorem. However, we will see that one can remove compactness arguments from some proof of N\"obeling's imbedding theorem, e.g., given in \cite[Theorem IV.8]{Nagata}, by working very carefully. \begin{proof}[Proof of Theorem \ref{thm:Nobeling}] For $n+1$ pairwise distinct coordinates $(c_i)_{i<n+1}\in (2n+1)^{n+1}$ and $n+1$ rationals $(r_i)_{i<n+1}$, consider the following hyperplane: \[L=\{(x_j)_{j<2n+1}\in\mathbb{I}^{2n+1}:(\forall i<n+1)\;x_{c_i}=r_i\}.\] Let $(L_t)_{t\in\omega}$ be the list of all such hyperplanes. For a list $(V_e)_{e\in\omega}$ of all basic open balls in $X$, let $\langle i,j\rangle$ be the $t$-th pair such that $\overline{V_i}\subseteq V_j$. Then, consider the open cover $\mathcal{V}_{t}=\{V_j,X\setminus\overline{V_i}\}$, where $\overline{V_i}$ is the formal closure of $V_i$; that is, the closed ball whose center and radius are the same as $V_i$. We first give an explicit construction of (a code of) a sequence $(f_t)_{t\in\mathbb{N}}$ of (possibly partial) continuous functions. We describe our construction at stage $t$. Suppose that a continuous function $f_t\colon X\to{\mathbb{I}}^{2n+1}$ and a positive rational $\delta_t>0$ have already been constructed. (To start the recursion, one may take $f_0$ to be a constant map and $\delta_0=1/2$.) Consider $L_t$ and $\mathcal{V}_t$. We construct a $\mathcal{V}_t$-mapping $f_{t+1}$ which avoids $L_t$. By total boundedness of $\mathbb{I}^{2n+1}$, one can easily find a collection $(x_j)_{j\leq m}$ of points in $\mathbb{I}^{2n+1}$ such that $(B(x_j;\delta_t))_{j\leq m}$ covers $\mathbb{I}^{2n+1}$, where $B(x;\delta)$ is the open ball centered at $x$ of radius $\delta$.
Consider $\mathcal{W}_t=\{f_t^{-1}[B(x_j;\delta_t)]:j\leq m\}$. Apply Lemma \ref{lem:star-refinement} to the open cover $\mathcal{V}_t\land\mathcal{W}_t$ of $X$ to get a finite open star refinement of $\mathcal{V}_t\land\mathcal{W}_t$. Since the covering dimension of $X$ is at most $n$, one can then effectively find an open refinement $\mathcal{U}_t=(U^t_i)_{i<s}$ of this star refinement of order at most $n$; as a refinement of a star refinement is again a star refinement, $\mathcal{U}_t$ is an open star refinement of $\mathcal{V}_t\land\mathcal{W}_t$ of order at most $n$. Then, one can effectively find a sequence of continuous functions $(u^t_i)_{i<s}$ such that $U^t_i$ is the cozero set of $u^t_i$. For each $i<s$, one can effectively choose $x_i\in U^t_i$, and then get the value $f_t(x_i)$. Then, by Observation \ref{obs:general-position}, we can effectively choose $z^t_i\in \mathbb{I}^{2n+1}$ and $p^t_j\in L_t$ such that \begin{align*} \mbox{$d(f_t(x_i),z^t_i)<\delta_t$, and $(z^t_i,p^t_j)_{i<s,j<n+1}$ are in a general position,} \end{align*} i.e., if $0\leq m\leq 2n$, then any $m+2$ vertices do not lie in an $m$-dimensional hyperplane of $\mathbb{I}^{2n+1}$. Let $\kappa\colon X\to\mathbb{I}^{2n+1}$ be the modified $\kappa$-mapping determined by $(u^t_i)_{i<s}$ and $(z^t_i)_{i<s}$. \begin{Claim}[${\sf RCA}_0$]\label{claim:uniform-limit} $d(f_{t}(x),\kappa(x))<3\delta_t$ for any $x\in X$. \end{Claim} \begin{proof} Let $x\in X$ be given. If $x\not\in U^t_i$, then $u^t_i(x)=0$. If $x\in U^t_i$, since $\mathcal{U}_t$ is a refinement of $\mathcal{W}_t$, we have $d(f_t(x),f_t(y))<2\delta_t$ for any $y\in U^t_i$. Therefore, $d(f_t(x),z^t_i)<3\delta_t$ since $d(f_t(x_i),z^t_i)<\delta_t$, where $x_i\in U^t_i$. Hence, by the definition of the modified $\kappa$-mapping, we get $d(f_t(x),\kappa(x))<3\delta_t$ for any $x\in X$, since \[d(f_t(x),\kappa(x))=d\left(\sum_{i<s}\lambda_i(x)f_t(x),\sum_{i<s}\lambda_i(x)z^t_i\right)\leq\sum_{i<s}\lambda_i(x)d(f_t(x),z^t_i)<3\delta_t\] where $\lambda_i(x)$ is defined as in Claim \ref{claim:kappa-in-L}. The first equality follows from $\sum_{i<s}\lambda_i=1$, and the middle inequality follows from the triangle inequality.
\end{proof} Let $[s]^{\leq n+1}$ denote the set of all finite subsets $D\subseteq s$ with $|D|\leq n+1$, and $Z^t_D$ be the hyperplane spanned by $(z^t_e:e\in D)$. Now, one can calculate the following value: \begin{align*} \eta_t:=\min\{d(Z^t_D,Z^t_E):D,E\in[s]^{\leq n+1},\;Z^t_D\cap Z^t_E=\emptyset\}>0. \end{align*} Recall that $(z^t_i)_{i<s}$ and $(p^t_j)_{j<n+1}$ are in general position, and $L_t$ is spanned by $(p^t_j)_{j<n+1}$, which implies that $d(Z^t_D,L_t)>0$ for any $D\in[s]^{\leq n+1}$. One can also calculate the following value: \begin{align*} \eta'_t:=\min\{d(Z^t_D,L_t):D\in[s]^{\leq n+1}\}>0. \end{align*} Now, define $f_{t+1}=\kappa$ (where $\kappa$ is the modified $\kappa$-mapping defined before Claim \ref{claim:uniform-limit}) and $\delta_{t+1}=\min\{\delta_t,\eta_t/8,\eta'_t/4\}/3$. To reduce the complexity of induction, we now note that the construction $(f_t,\delta_t)\mapsto(f_{t+1},\delta_{t+1},\eta_t,\eta'_t)$ is effective, i.e., has an explicit $\Sigma^0_1$-description. We then have a sequence $(f_t,\delta_t,\eta_t,\eta'_t)_{t\in\mathbb{N}}$ with auxiliary parameters $(z^t_i)_{t\in\mathbb{N},i<s}$ and $(p^t_j)_{t\in\mathbb{N},j<n+1}$. A simple induction shows $\delta_t<2^{-t}$. By $\Sigma^0_1$-induction with Claim \ref{claim:uniform-limit}, for any $t\leq t'$, one can also show that $d(f_t(x),f_{t'}(x))<\sum_{r\geq t}3\delta_{r}<2^{-t+3}$; hence this is classically a uniformly convergent sequence. Note that the uniform limit theorem is provable within ${\sf RCA}_0$ since a modulus of pointwise continuity of the uniform limit $f=\lim_{t\to\infty}f_t$ is effectively calculated from a sequence of moduli of pointwise continuity of $(f_t)_{t\in\mathbb{N}}$ and the modulus of uniform convergence $2^{-t+3}$. Hence, the uniform limit $f=\lim_{t\to\infty}f_t$ exists. By definition of $\delta_t$, we also get $d(f,f_{t+1})<\eta_t/4$ and $d(f,f_{t+1})<\eta'_t/2$.
\begin{Claim}[${\sf RCA}_0$]\label{claim-v-mapping} For any $t\in\mathbb{N}$ and $y\in\mathbb{I}^{2n+1}$ there is $V\in\mathcal{V}_t$ such that $f^{-1}[B(y;\eta_t/4)]\subseteq V$. \end{Claim} \begin{proof} Let $y\in\mathbb{I}^{2n+1}$ be given. For $x,x'\in f^{-1}[B(y;\eta_t/4)]$, we have $d(f(x),f(x'))<\eta_t/2$. As $d(f,f_{t+1})<\eta_t/4$, we have $d(f_{t+1}(x),f_{t+1}(x'))<\eta_t$. By Claim \ref{claim:kappa-in-L}, we have $f_{t+1}(x)=\kappa(x)\in Z^t(x)$ and $f_{t+1}(x')=\kappa(x')\in Z^t(x')$, where $Z^t(x)$ is defined in a similar manner as before. By our choice of $\eta_t$, we have $Z^t(x)\cap Z^t(x')\not=\emptyset$. Assume that $Z^t(x)$ is spanned by $(z^t_{i_\ell})_{\ell<a}$ and $Z^t(x')$ is spanned by $(z^t_{j_\ell})_{\ell<b}$. Since $Z^t(x)\cap Z^t(x')\not=\emptyset$, $(z^t_{i_{\ell}},z^t_{j_{m}})_{\ell<a,m<b}$ lie on a $((a-1)+(b-1))$-dimensional hyperplane. By our choice, the open cover $\mathcal{U}_t$ has order at most $n$, and therefore $a,b\leq n+1$; hence $a+b\leq 2n+2$. Since $\{z^t_{i_{\ell}},z^t_{j_{m}}\}_{\ell<a,m<b}$ are in a general position, $a+b$ pairwise distinct vertices cannot lie in an $(a+b-2)$-dimensional hyperplane. Hence, we must have $\ell$ and $m$ such that $z^t_{i_\ell}=z^t_{j_m}$. This implies that $x,x'\in U^t_{i_\ell}$. Consequently, if $x,x'\in f^{-1}[B(y;\eta_t/4)]$ then $x'$ belongs to the star of $\{x\}$ w.r.t.\ $\mathcal{U}_t$, that is, $x'\in{\rm st}(\{x\},\mathcal{U}_t)$. As $\mathcal{U}_t$ is a star refinement of $\mathcal{V}_t$, we obtain $V\in\mathcal{V}_t$ such that $f^{-1}[B(y;\eta_t/4)]\subseteq{\rm st}(\{x\},\mathcal{U}_t)\subseteq V$. \end{proof} \begin{Claim}\label{claim:line-avoiding} $d(f(x),p)>\eta_t'/2$ for any $x\in X$ and $p\in L_t$. \end{Claim} \begin{proof} By definition of $\eta_t'$, we have $d(Z^t(x),L_t)\geq \eta_t'$ for any $x\in X$. By Claim \ref{claim:kappa-in-L}, we also have $f_{t+1}(x)\in Z^t(x)$, and therefore $d(f_{t+1}(x),L_t)\geq \eta_t'$. Hence, $d(f(x),L_t)>\eta_t'/2$.
\end{proof} Claim \ref{claim:line-avoiding} ensures that the range of $f$ avoids $L_t$ for every $t$; hence $f$ is a continuous map from $X$ into the $n$-dimensional N\"obeling space $N^n\subseteq\mathbb{I}^{2n+1}$. \begin{Claim} $f$ is injective. \end{Claim} \begin{proof} Let $W$ be any open neighborhood of $x\in X$. Then, by perfect normality of $X$ (Fact \ref{fact:perfectly-normal}), there are basic open balls $V_i$ and $V_j$ such that $x\in V_i\subseteq\overline{V_i}\subseteq V_j\subseteq W$. By applying Claim \ref{claim-v-mapping} to the code $t$ of the pair $\langle i,j\rangle$ (i.e., $\mathcal{V}_t=\{V_j,X\setminus\overline{V_i}\}$), we get an open neighborhood $B$ of $f(x)$ such that either $f^{-1}[B]\subseteq V_j$ or $f^{-1}[B]\subseteq X\setminus\overline{V_i}$. However, as $x\in V_i$, we have $x\in f^{-1}[B]\cap V_i\not=\emptyset$; hence $f^{-1}[B]\subseteq V_j$. Therefore, if $x'\not\in W$ then, as $W\supseteq V_j$, we get $f(x')\not\in B$. This implies that $f$ is injective. \end{proof} It remains to show that $f^{-1}$ is continuous within ${\sf RCA}_0$. In the usual proof, by using the property that $f$ is an $\varepsilon$-mapping for all $\varepsilon>0$, we conclude that $f$ is a closed map. However, it is unclear how, from the property of being an $\varepsilon$-mapping, one can effectively obtain a code of the closed image $f[A]$ of a closed set $A\subseteq X$ (without using any compactness arguments). Fortunately, Claim \ref{claim-v-mapping} carries more information than just that $f$ is an $\varepsilon$-mapping, and this can be used to show that $f$ is effectively an open map. \begin{Claim}\label{claim-open-map} $f$ is an open map. \end{Claim} \begin{proof} We say that an open ball $B_X(x;q)$ in $X$ is formally (strictly, respectively)\ included in $B_X(y;p)$ if $d(x,y)\leq p-q$ ($d(x,y)<p-q$, respectively). Note that if $B_X(x;q)$ is strictly included in $B_X(y;p)$ then $\overline{B_X(x;q)}\subseteq B_X(y;p)$.
Let $U=\bigcup_{e}V_{u(e)}\subseteq X$ be an open set given as a union of open balls. Then, for each $e$, we make a new list $(V_{v(e,j)})_{j\in\mathbb{N}}$ of all open balls that are strictly included in $V_{u(e)}$. Let $t(e,j)$ be the code of the pair $\langle v(e,j),u(e)\rangle$ (i.e., $\mathcal{V}_{t(e,j)}=\{V_{u(e)},X\setminus\overline{V}_{v(e,j)}\}$). We now consider a list $(B_k^{e,j})_{k\in\mathbb{N}}$ of all open balls of radius $\leq\eta_{t(e,j)}/4$ in $\mathbb{I}^{2n+1}$. By Claim \ref{claim-v-mapping}, either $f^{-1}[B_k^{e,j}]\subseteq V_{u(e)}$ or $f^{-1}[B_k^{e,j}]\subseteq X\setminus \overline{V}_{v(e,j)}$ holds. As we have already seen that $f$ is continuous, we get a code of the open set $f^{-1}[B_k^{e,j}]=\bigcup_{m}V_{s(e,j,k,m)}$. If we see that $V_{s(e,j,k,m)}$ is formally included in $V_{v(e,j)}$ for some $m$, then we must have $f^{-1}[B_k^{e,j}]\subseteq V_{u(e)}$. Let $(J_i)_{i\in\mathbb{N}}$ be a list of all such open balls $B_k^{e,j}$, that is, \[\{J_i\}_{i\in\mathbb{N}}=\{B_k^{e,j}:\mbox{$V_{s(e,j,k,m)}$ is formally included in $V_{v(e,j)}$ for some $m$}\}.\] We claim that $f[U]=\bigcup_{i\in\mathbb{N}}J_i$. If $x\in U$, then $x\in V_{u(e)}$ for some $e$, and so $x\in V_{v(e,j)}$ for some $j$. By Claim \ref{claim-v-mapping}, if $B$ is a sufficiently small basic open ball containing $f(x)$, then $f^{-1}[B]\subseteq V_{v(e,j)}$. Hence, $f^{-1}[B]$ contains an open ball which is formally included in $V_{v(e,j)}$. Therefore, $f(x)\in B=J_i$ for some $i\in\mathbb{N}$. For the converse, if $J_i=B$ then $f^{-1}[B]\subseteq V_{u(e)}$ for the corresponding $e$ as mentioned above, and therefore, $f^{-1}[B]\subseteq V_{u(e)}\subseteq U$. Consequently, $B\subseteq f[U]$.
\end{proof} \subsection{Every Polish space is at most one dimensional} We say that $K$ is an {\em absolute extensor} if it is an absolute extensor for any Polish space. In other words, if $X$ is a Polish space, for any continuous map $f\colon P\to K$ on a closed set $P\subseteq X$, one can find a continuous map $g\colon X\to K$ extending $f$. The Tietze extension theorem states that the unit interval $\mathbb{I}$ is an absolute extensor. This clearly implies that $\mathbb{I}^n$ is also an absolute extensor by coordinatewisely extending $f=(f_i)_{i<n}\colon P\to\mathbb{I}^n$ to $g=(g_i)_{i< n}\colon X\to\mathbb{I}^n$. It is known that the effective version of the Tietze extension theorem is provable within ${\sf RCA}_0$ as follows: \begin{Fact}[see Simpson {\cite[Theorem II.7.5]{Simpson}}]\label{fact:Tietze} The Tietze extension theorem is provable in ${\sf RCA}_0$, that is, $\mathbb{I}^n$ is an absolute extensor. \qed \end{Fact} It is intuitively obvious that the topological dimension of the $n$-hypercube ${\mathbb{I}}^{n}$ is $n$ (but the proof is not so easy even in the classical world). Surprisingly, however, under $\neg{\sf WKL}$, {\em every Polish space is at most one-dimensional} in the following sense. \begin{Lemma}[${\sf RCA}_0+\neg{\sf WKL}$]\label{lem:extension-dimension} If $X$ is a Polish space, then the $1$-sphere $\mathbb{S}^1$ is an absolute extensor for $X$. \end{Lemma} \begin{proof} By Orevkov's construction \cite{Orevkov} (cf.\ Shioji-Tanaka \cite{ShTa90}), if weak K\"onig's lemma fails, then there is a continuous retraction $r\colon{\mathbb{I}}^2\to\partial {\mathbb{I}}^2$. Note that the $1$-dimensional sphere $\mathbb{S}^1$ is homeomorphic to $\partial {\mathbb{I}}^2$. Let $f\colon P\to\partial {\mathbb{I}}^2$ be a continuous map on a closed set $P\subseteq X$. 
Then, since ${\mathbb{I}}^2$ is an absolute extensor by Fact \ref{fact:Tietze}, one can effectively find a continuous extension $f^*\colon X\to {\mathbb{I}}^2$ of $f$ such that $f^*\mathop{\upharpoonright} P=f\mathop{\upharpoonright} P$. Then $g=r\circ f^*\colon X\to\partial\mathbb{I}^2$ is continuous and extends $f$ since $r$ is a continuous retraction. This shows that $\mathbb{S}^1$ is an absolute extensor for $X$, as $\mathbb{S}^1\simeq\partial\mathbb{I}^2$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main-theorem} (4)$\Rightarrow$(1)] Suppose $\neg{\sf WKL}$. Then, by Lemma \ref{lem:extension-dimension}, $\mathbb{S}^1$ is an absolute extensor for $\mathbb{R}^m$. By Lemmata \ref{lem:lemma1} and \ref{lem:lemma2}, the covering dimension of $\mathbb{R}^m$ is at most one. By Theorem \ref{thm:Nobeling}, there is a topological embedding $f$ of $\mathbb{R}^m$ into the one-dimensional N\"obeling space; that is, for any $x\in\mathbb{R}^m$, at most one coordinate of $f(x)\in\mathbb{R}^3$ is rational. Consequently, there is a topological embedding of $\mathbb{R}^m$ into $\mathbb{R}^3$. \end{proof} \section{Continuous degrees} In this section, we discuss the relationship between the reverse mathematics of topological dimension theory and J.\ Miller's work on continuous degrees \cite{Miller}. Classically, a space is countable dimensional if it is a countable union of zero dimensional subspaces. However, within ${\sf RCA}_0$, it is difficult to handle the notion of a subspace. Instead, we use the following definition. A {\em copy of a subspace of $Y$ in $X$} is a pair $S=(f,g)$ of (codes of) partial continuous functions $f\colon\!\!\!\subseteq X\to Y$ and $g\colon\!\!\!\subseteq Y\to X$. Then, we say that {\em $x\in X$ is a point in $S=(f,g)$} if $f(x)$ is defined, and $g\circ f(x)$ is defined and equal to $x$.
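As a concrete illustration of this definition (the example below is a standard fact and is not used in the arguments of this section), the irrational points of $\mathbb{I}$ form a copy of a subspace of $\mathbb{N}^\mathbb{N}$ in $\mathbb{I}$ via continued fractions.

```latex
% Illustration (standard fact, not used below): a copy of Baire space in I.
% Define a total continuous g : N^N -> I by the continued fraction
\[ g(a_0,a_1,a_2,\dots)=\cfrac{1}{a_0+1+\cfrac{1}{a_1+1+\cfrac{1}{a_2+1+\dotsb}}}, \]
% and a partial continuous f (defined on a subset of I, with values in N^N)
% sending an irrational x to its (shifted) continued fraction digits, so
% that f(x) is defined exactly when x is irrational. Then g(f(x))=x holds
% precisely for the irrational x in I, so the points in the copy S=(f,g)
% are exactly the irrationals: a zero-dimensional ``subspace'' of I
% presented by a pair of codes, in the sense of the definition above.
```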
A separable metric space $X$ is {\em countable dimensional} if $X$ is a union of countably many copies of subspaces of $\mathbb{N}^\mathbb{N}$; that is, there is a sequence $(S_e)_{e\in\mathbb{N}}$ of copies of subspaces of $\mathbb{N}^\mathbb{N}$ such that every $x\in X$ is a point in $S_e$ for some $e\in\mathbb{N}$. \begin{Theorem}\label{thm:countable-dimensional} The following are equivalent over ${\sf RCA}_0$: \begin{enumerate} \item Weak K\"onig's lemma. \item The Hilbert cube $\mathbb{I}^\mathbb{N}$ is not countable dimensional. \end{enumerate} \end{Theorem} \begin{proof} (1)$\Rightarrow$(2): The usual argument (cf.\ \cite[Theorem 1.8.20]{Engelking}) only uses the Brouwer fixed point theorem, which can be carried out in ${\sf WKL}_0$ \cite{ShTa90}. (2)$\Rightarrow$(1): As $\mathbb{I}^\mathbb{N}$ is Polish, if we assume $\neg{\sf WKL}$ then, by Lemma \ref{lem:extension-dimension}, $\mathbb{S}^1$ is an absolute extensor for $\mathbb{I}^\mathbb{N}$. Therefore, by Lemmata \ref{lem:lemma1} and \ref{lem:lemma2}, and Theorem \ref{thm:Nobeling}, $\mathbb{I}^\mathbb{N}$ can be embedded into the $1$-dimensional N\"obeling space $N^1$. Now, it is clear that $N^1$ is a finite union of zero dimensional subspaces. \end{proof} Indeed, the instance-wise version of Theorem \ref{thm:countable-dimensional} holds. We now consider the instance-wise version in an $\omega$-model $(\omega,\mathcal{S})$ of ${\sf RCA}_0$: For (1)$\Rightarrow$(2), if $(S_e)_{e\in\omega}\in\mathcal{S}$ is a sequence of copies of subspaces of $\omega^\omega$, then there is an infinite binary tree $T\in\mathcal{S}$ such that every infinite path through $T$ computes a point $x\in\mathbb{I}^\omega$ which is not a point of $S_e$ for any $e\in\omega$.
For (2)$\Rightarrow$(1), if $T\in\mathcal{S}$ is an infinite binary tree, then there is a sequence $(S_e)_{e\in\omega}\in\mathcal{S}$ of copies of subspaces of $\omega^\omega$ such that if $x\in\mathbb{I}^\omega$ is not a point in $S_e$ for any $e\in\omega$, then $x$ computes an infinite path through $T$. We now interpret this instance-wise $\omega$-model version of Theorem \ref{thm:countable-dimensional} in the context of continuous degrees. We say that {\em $\mathbf{b}$ is PA-above $\mathbf{a}$} (written $\mathbf{a}\ll\mathbf{b}$) if every $\mathbf{a}$-computable infinite binary tree has a $\mathbf{b}$-computable infinite path. Miller \cite{Miller} reduced the first-order definability of PA-aboveness to that of continuous degrees: Whenever $\mathbf{a}$ and $\mathbf{b}$ are total degrees, $\mathbf{a}\ll\mathbf{b}$ if and only if there is a non-total continuous degree $\mathbf{v}$ such that $\mathbf{a}<\mathbf{v}<\mathbf{b}$. For continuous and total degrees, see Miller \cite{Miller}. (1)$\Rightarrow$(2) implies Theorem 8.2 in \cite{Miller}: If $\mathbf{a}$ and $\mathbf{b}$ are total degrees and $\mathbf{b}\ll\mathbf{a}$, then there is a non-total continuous degree $\mathbf{v}$ with $\mathbf{b}<\mathbf{v}<\mathbf{a}$. To see this, consider the topped $\omega$-model of ${\sf RCA}_0$ consisting of all sets of Turing degree $\leq\mathbf{b}$. Then, as in Kihara-Pauly \cite{KiPa}, take the list $(f_e,g_e)$ of all pairs of Turing reductions (more precisely, all reductions in the sense of representation reducibility), which are considered as copies in $\mathbb{I}^\omega$ of subspaces of $\omega^\omega$. By (1)$\Rightarrow$(2), there is an infinite binary tree $T$ of Turing degree $\mathbf{b}$ such that any path computes $x\in\mathbb{I}^\omega$ which is not a point in $(f_e,g_e)$ for any $e$. Such an $x$ is non-total since for no $e$ is there $\alpha\in\omega^\omega$ such that $f_e(x)=\alpha$ and $g_e(\alpha)=x$. As $\mathbf{b}\ll\mathbf{a}$, such an $x$ is computable in $\mathbf{a}$.
If necessary, by adding a new coordinate to $x$ to code $\mathbf{b}$, we can conclude that there is a non-total degree $\mathbf{v}$ with $\mathbf{b}<\mathbf{v}<\mathbf{a}$. (2)$\Rightarrow$(1) implies Theorem 8.4 in \cite{Miller}: If $\mathbf{v}$ is a non-total continuous degree and $\mathbf{b}<\mathbf{v}$ is total, then there is a total degree $\mathbf{c}$ with $\mathbf{b}\ll\mathbf{c}<\mathbf{v}$. To see this, consider the same $\omega$-model $\mathcal{S}$ as above. As in Kihara-Pauly \cite{KiPa}, we consider a copy $S\in\mathcal{S}$ of a subspace of $\omega^\omega$ in $\mathbb{I}^\omega$ as a pair of $\mathbf{b}$-relative Turing reductions. As $\mathbf{v}$ is non-total, and $\mathbf{b}\leq\mathbf{v}$, a point $x\in\mathbb{I}^\omega$ of degree $\mathbf{v}$ avoids any sequence of copies $(S_e)_{e\in\omega}\in\mathcal{S}$ of subspaces of $\omega^\omega$ in $\mathbb{I}^\omega$. Hence, by (2)$\Rightarrow$(1), for any infinite binary tree $T\in\mathcal{S}$, $x$ computes an infinite path $c$ through $T$. Consequently, we have $\mathbf{b}\ll\mathbf{c}<\mathbf{v}$ for some $\mathbf{c}$. This argument indicates that (some of) J.\ Miller's work \cite{Miller} (on definability of PA-degrees via continuous degrees) can be considered as the computable instance-wise version of Theorem \ref{thm:countable-dimensional}. \begin{Acknowledgement} The author's research was partially supported by JSPS KAKENHI Grant 17H06738, 15H03634, the JSPS Core-to-Core Program (A. Advanced Research Networks), and the Young Scholars Overseas Visit Program in Nagoya University. The author would like to thank Keita Yokoyama for valuable discussions. \end{Acknowledgement} \bibliographystyle{plain}
\section*{Introduction} The most successful cheating strategy against non-relativistic bit commitment schemes is the entanglement attack (also known as the EPR attack) \cite{Mayers99thetrouble} \cite{PhysRevLett.78.3414}. In this strategy, one of the parties (Alice) entangles a system with the one she uses for commitment and keeps this second system secret. Then she is able to cheat before the opening phase through local operations on her own system. One approach to counter this cheating strategy is to determine a means of breaking the entanglements. This must be done either through local transforms performed by the other party (Bob), or through local noise applied to Bob's system (from the transmission channel). Entanglement breaking channels are a relatively new concept in quantum information, first introduced in \cite{15}. The characteristics of two-qubit entanglements are discussed in \cite{16} and \cite{17}. In particular, the local two-qubit entanglement-annihilating channel (2-LEA) is examined in \cite{16}. From \cite{15}, a local channel $c$ is called entanglement breaking if the output of the channel operating on an entangled state is separable, where separability for a density matrix $\rho$ means $\rho=\sum_{i} p_i \rho_a^i\otimes \rho_b^i$. In the next section, we describe through an example how an entanglement breaking channel can be used to secure the Bennett and Brassard bit commitment scheme \cite{1984-175-179} against an EPR attack. \section*{Depolarizing Channel Bit Commitment} As is typical, we assume Alice is working in a noise-free environment, i.e., a perfectly shielded and isolated lab. Therefore the joint noise which corrupts the entangled state $\rho_{AB}$ is $I \otimes \varepsilon_{c}[\rho_{AB}]$, where $\varepsilon_{c}$ is the channel noise.
This entanglement breaking operation must either be applied by Bob through some apparatus he possesses for adding noise, or by the quantum channel through which Alice sends the qubits to Bob, as shown in Figure 1. Here we use a depolarizing channel to apply the entanglement breaking operation. This channel is defined in \cite{17} as \[ \epsilon(X)=qX+(1-q)tr[X]\frac{1}{2}I \] The action of the depolarizing channel replaces the qubit with the completely mixed state, $\frac{I}{2}$, with probability $1-q$. It was shown in \cite{18}\cite{19} that the evolution of any entangled state in a channel (entanglement breaking channel in this case) is determined by the evolution of a maximally entangled state in the channel. Therefore we only consider the effect of the entanglement breaking channel on the maximally entangled state $\vert \psi^+\rangle=\frac{1}{\sqrt{2}}(\vert0_A0_B\rangle+\vert1_A1_B\rangle)$, where the subscripts $A$ and $B$ denote Alice and Bob, respectively. It was proven in \cite{19} for a local quantum channel $\mathbb{S}$ (which operates on a qubit), an entangled state $\vert X\rangle$ (which is a bipartite $N \otimes 2$ state), and concurrence $C$ as defined in \cite{19} (a measure of entanglement), that \[ C((I\otimes \mathbb{S})[\vert X\rangle\langle X\vert])=C[\vert X\rangle\langle X\vert]\,C[(I\otimes \mathbb{S})[\vert \psi^+\rangle \langle \psi^+ \vert]]. \] Since $C[\vert X\rangle\langle X\vert]=0$ for $\vert X\rangle$, which is a separable state, if a local channel $\mathbb{S}$ (or $I\otimes \mathbb{S}$ for the entire system) applied on the maximally entangled state $\vert \psi^+\rangle$ disentangles it, then we have $C((I\otimes \mathbb{S})[\vert X\rangle\langle X\vert])=0$, which disentangles any bipartite $N \otimes 2$ state.
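One can numerically check where the depolarizing channel breaks the entanglement of $\vert \psi^+\rangle$ using the Peres-Horodecki (positive partial transpose, PPT) criterion, which is necessary and sufficient for separability of two qubits. The following Python/NumPy sketch (an illustration added here, not taken from the cited papers) recovers the $q \leq \frac{1}{3}$ separability threshold of \cite{16}:

```python
import numpy as np

def depolarize_B(rho, q):
    """Apply the depolarizing channel to Bob's qubit of a two-qubit state:
    (I ⊗ ε)[ρ] = q ρ + (1-q) Tr_B[ρ] ⊗ I/2."""
    r = rho.reshape(2, 2, 2, 2)              # indices (a_row, b_row, a_col, b_col)
    tr_B = np.einsum('ikjk->ij', r)          # partial trace over Bob's qubit
    return q * rho + (1 - q) * np.kron(tr_B, np.eye(2) / 2)

def min_ppt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose over Bob's qubit.
    Negative => entangled; non-negative => separable (for two qubits)."""
    r = rho.reshape(2, 2, 2, 2)
    rho_TB = np.transpose(r, (0, 3, 2, 1)).reshape(4, 4)
    return np.linalg.eigvalsh(rho_TB).min()

# Maximally entangled state |ψ+> = (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)

for q in (0.2, 1/3, 0.5):
    lam = min_ppt_eigenvalue(depolarize_B(rho, q))
    print(f"q = {q:.3f}: min PPT eigenvalue = {lam:+.4f} "
          f"({'separable' if lam >= -1e-12 else 'entangled'})")
```

The smallest partial-transpose eigenvalue works out to $(1-3q)/4$, which is non-negative exactly when $q \leq \frac{1}{3}$.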
Having a maximally entangled state $\vert \psi^+\rangle$ and applying a depolarizing channel on Bob's state (which means the effect of the channel on the entire system is $(I\otimes\epsilon)[X]$) results in the state \[ q \vert \psi^+\rangle \langle \psi^+ \vert+\frac{(1-q)}{4}I_A\otimes I_B \] which has been shown to be separable for $q \leq \frac{1}{3}$ \cite{16}. Therefore, as discussed above, this channel will disentangle Bob's qubit from any other system (such as Alice's secret system). Now assume the two parties use the simple Bennett and Brassard bit commitment scheme in which Alice sends random selections of $\vert \uparrow \rangle$ or $\vert \rightarrow\rangle$ for 0, and $\vert \nearrow \rangle$ or $\vert \searrow \rangle$ for 1. Since $\rho_+=\rho_\times=\rho$, if $\rho$ passes through the channel Bob will expect to receive $q\rho+(1-q)tr[\rho]\frac{1}{2}I$. Thus if Alice attempts to cheat (i.e., entangle her secret system with Bob's qubit), he will receive a separable state after applying the depolarizing channel ($\rho_{channel}=\sum_{i} p_i \rho_a^i\otimes \rho_b^i$). Therefore the states of Alice will be disentangled from Bob's system, and more importantly he can determine if Alice has cheated or not. To show this, two cases must be considered: (i) Alice is honest and has not cheated, and (ii) Alice attempts to cheat. For the first case, the probability of Bob receiving the same state as Alice sent is $q$. Therefore the probability of Bob measuring the correct state is $\frac{q}{2}$ since the probability of choosing the correct basis is $\frac{1}{2}$. Bob should then expect to measure at least $\frac{q}{2}$ of the states correctly. In the second case, Alice cheats and entangles her secret states with the committed qubits. After Bob performs the depolarizing operation, the state will be disentangled as described previously, and therefore Alice cannot change the state that Bob measures.
There is also no guarantee that the probability of Bob measuring the correct state is $\frac{q}{2}$, so Alice may be exposed regarding the entanglements even if she does not change the committed bit. A simple security analysis regarding our example using the Bennett and Brassard scheme is given below. Alice prepares an entangled state $\vert a_0\rangle_A \vert 0\rangle_B+\vert a_1\rangle_A\vert 1\rangle_B$, where the subscript $A$ means the state is controlled by Alice and $B$ controlled by Bob. Alice then sends Bob's states to him, and when she wants to change her mind about the committed bit, she performs a unitary transform followed by a projective measurement on her own state (we can just assume a projective measurement). The effect of the entanglement breaking channel on the system after Alice's measurement is $\rho_b = \sum_{i} p_i \langle a_j \vert \rho_a^i \vert a_j\rangle \rho_b^i$, where $a_j$ is the basis for the projective measurement. Thus in order for Alice to determine $\rho_b$ she needs to know the decomposition caused by the channel, i.e., $\rho_{channel}=\sum_{i} p_i \rho_a^i\otimes \rho_b^i$, and therefore she needs to know the value of $q$ (which we assume is controlled by Bob). Chailloux and Kerenidis \cite{20} provided lower bounds on an optimal quantum bit commitment (the bounds are tight and the upper bounds are close to the lower bounds); however, they assumed that the operations in both the commitment and revealing phases are \textbf{unitary transforms} on Alice and Bob's quantum spaces. Here we take the same approach towards analysing the security of our system. For the Hiding property (i.e., the ability of Bob to guess the committed bit assuming an honest Alice), we know that without considering the channel Bob can guess the bit with probability $\frac{1}{2}+ \frac{\Delta(\sigma_0,\sigma_1)}{2}$ (where $\sigma_b$ is the density matrix assigned to 0 or 1).
Taking the effect of the channel into account, Bob's cheating probability is then simply the maximum of his ability to distinguish the states with or without them having passed through the quantum channel, which is given by\\ $P_{Bcheat} = \frac{1}{2}+ Max(\frac{\Delta(\sigma_0,\sigma_1)}{2},\frac{\Delta(\mathbb{S}[\sigma_0],\mathbb{S}[\sigma_1])}{2})$ \\ where $\mathbb{S}[.]$ is the effect of the channel. For Alice's cheating probability, consider the following.\\ We assume a cheating Alice prepares a state $\rho_{AB}$ and sends it to Bob so that just before Alice opens the bit, the state of that part of the system which Bob possesses is $\sigma_{B}=Tr_A(\alpha[I\otimes \mathbb{S}[\rho_{AB}]])$ where $\alpha[.]$ is Alice's operation on her own part of the system (i.e., a unitary transform followed by a measurement). Now assuming that $\mathbb{S}[.]$ is an entanglement breaking channel, we have $\sigma_{B}= \sum_{i} p_i Tr_A(\alpha[\rho_a^i])\otimes \rho_b^i$. Assuming Alice wants Bob to measure 0, she should maximize the probability of Bob detecting $\sigma_0$, which is $F^2(\sigma_{B},\sigma_0)$, where $F$ is the fidelity. This means Alice must know the $p_i$ and properly choose the value of $Tr_A(\alpha[\rho_a^i])$ (i.e., she also needs to know the $\rho_a^i$). This requires Alice to know the channel characteristics, but these are controlled by Bob and kept secret by him. \section*{Conclusions} In this letter we have shown that by using an entanglement breaking channel, the simple Bennett and Brassard bit commitment scheme can be made secure against EPR attacks. We also presented an example of a depolarizing channel which is practically conceivable. Only entanglement attacks were discussed; we leave the unconditional security of these noise-based systems as a topic for future research.
\section{Introduction} Glioblastoma (GBM), and diffuse astrocytic glioma with molecular features of GBM (WHO Grade 4 astrocytoma), are the most common and aggressive malignant primary tumor of the central nervous system (CNS) in adults, with extreme intrinsic heterogeneity in appearance, shape, and histology \cite{louis2019cimpact,cimpact_1,cimpact_2,cimpact_3,cimpact_4,cimpact_5,cimpact_6}. GBM patients have an average prognosis of 14 months, following standard of care treatment (comprising surgical resection followed by radiotherapy and chemotherapy), and 4 months if left untreated \cite{OS_SB}. Although various experimental treatment options have been proposed during the past 20 years, there has been no substantial improvement in patient prognosis. Accurate identification of brain tumor sub-region boundaries in MRI is of profound importance in many clinical applications, such as surgical treatment planning, image-guided interventions, monitoring tumor growth, and the generation of radiotherapy maps. However, manual detection and tracing of tumor sub-regions is tedious, time-consuming, and subjective. In a clinical setup, this manual process is carried out by radiologists in a qualitative visual manner, and hence becomes impractical when dealing with numerous patients. This highlights the unmet need for automated deterministic segmentation solutions that could contribute to expediting this process. The release of the current revised World Health Organization (WHO) classification of CNS tumors \cite{WHO_louis20162016} highlighted the appreciation of integrated diagnostics, and transitioned the clinical tumor diagnosis from a purely morphologic-histopathologic classification to integrating molecular-cytogenetic characteristics. O$^6$-methylguanine-DNA methyltransferase (MGMT) is a DNA repair enzyme; methylation of its promoter in newly diagnosed GBM has been identified as a favorable prognostic factor and a predictor of chemotherapy response \cite{MGMT}.
Thus, determination of MGMT promoter methylation status in newly diagnosed GBM can influence treatment decision-making. The RSNA ASNR MICCAI Brain Tumor Segmentation (BraTS) 2021 challenge utilizes multi-institutional multi-parametric Magnetic Resonance Imaging (mpMRI) scans to address both the automated tumor sub-region segmentation and the prediction of one of the genetic characteristics of glioblastoma (MGMT promoter methylation status) from pre-operative baseline MRI scans. Specifically, BraTS 2021 focuses on the evaluation of state-of-the-art methods for the accurate segmentation of intrinsically heterogeneous brain glioma sub-regions and on the evaluation of classification methods distinguishing between MGMT methylated (MGMT+) and unmethylated (MGMT-) tumors. This manuscript describes the characteristics of the data included in the BraTS 2021 challenge, along with the annotation protocol followed to prepare the challenge data, an elaborate description of the challenge's tasks, and the performance evaluation of all participating methods (in Section 2), and then discusses the limitations and currently considered future directions (in Section 3). \subsection{Data} \label{sec:data} The BraTS dataset describes a retrospective collection of brain tumor mpMRI scans acquired from multiple different institutions under standard clinical conditions, but with different equipment and imaging protocols, resulting in a vastly heterogeneous image quality reflecting diverse clinical practice across different institutions. Inclusion criteria comprised pathologically confirmed diagnosis and available MGMT promoter methylation status. These data have been updated since BraTS 2020 \cite{menze2014multimodal, bakas2017advancing, bakas2018identifying, bakas2017segmentation_1, bakas2017segmentation_2}, increasing the total number of cases from 660 to 2,000.
Ground truth annotations of every tumor sub-region for Task 1 were approved by expert neuroradiologists, whereas the MGMT methylation status was based on the laboratory assessment of the surgical brain tumor specimen. Following the paradigm of algorithmic evaluation in machine learning, the data included in the BraTS 2021 challenge are divided into training, validation, and testing datasets. The challenge participants are provided with the ground truth labels only for the training data. The validation data are then provided to the participants without any associated ground truth and the testing data are kept hidden from the participants at all times. Participants are not allowed to use additional public and/or private data (from their own institutions) for extending the provided BraTS data, for the training of the algorithm chosen to be ranked. Similarly, using models that were pretrained on such datasets is not allowed. This is due to our intention to provide a fair comparison among the participating methods. However, participants are allowed to use additional public and/or private data (from their own institutions), only for scientific publication purposes and if they explicitly mention this in their submitted manuscripts. Importantly, participants that decide to proceed with this scientific analysis must also report results using only the BraTS'21 data to discuss potential result differences.
Specifically, the applied pre-processing routines include conversion of the DICOM files to the NIFTI file format \cite{nifti}, re-orientation to a common orientation system (i.e., RAI), co-registration to the same anatomical template (SRI24) \cite{SRI_rohlfing2010sri24}, resampling to a uniform isotropic resolution ($1mm^{3}$), and finally skull-stripping. The preprocessing pipeline is publicly available through the Cancer Imaging Phenomics Toolkit (CaPTk) \cite{captk} and Federated Tumor Segmentation (FeTS) tool \footnote{https://fets-ai.github.io/Front-End/}. Conversion to NIFTI strips the DICOM metadata from the images and essentially removes all Protected Health Information (PHI) from the DICOM headers. Furthermore, skull stripping mitigates potential facial reconstruction/recognition of the patient \cite{NEJMc1908881,NEJMc1915674}. The specific approach we have used for skull stripping is based on a novel DL approach that accounts for the brain shape prior and is agnostic to the MRI sequence input \cite{thakur2020brain}. Specifically for Task 1 (Tumor sub-region segmentation), all imaging volumes have then been segmented using the STAPLE \cite{warfield2004simultaneous} fusion of previous top-ranked BraTS algorithms namely, DeepScan \cite{mckinley2018ensembles}, DeepMedic \cite{kamnitsas2017efficient} and nnU-Net \cite{isensee2020nnu} and then refined manually by volunteer neuroradiology experts of varying rank and experience, following the same annotation protocol. Annotations were finally approved by experienced board-certified neuro-radiologists with more than 15 years of experience working with glioma. The exact annotated regions are based upon known observations visible to the trained radiologist (VASARI features) and comprise the Gd-enhancing tumor (ET — label 4), the peritumoral edematous/invaded tissue (ED — label 2), and the necrotic tumor core (NCR — label 1). 
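The resampling step of the pre-processing pipeline above can be illustrated with a minimal NumPy sketch. This is a toy nearest-neighbour (floor-index) version with made-up array sizes, added here for illustration only; the actual BraTS pipeline uses CaPTk/FeTS with proper registration and interpolation:

```python
import numpy as np

def resample_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a 3-D volume from `spacing` (mm per voxel along each axis)
    to `new_spacing`, picking the nearest (floor) source voxel per axis."""
    spacing = np.asarray(spacing, dtype=float)
    new_spacing = np.asarray(new_spacing, dtype=float)
    new_shape = np.round(np.array(volume.shape) * spacing / new_spacing).astype(int)
    # Map each output index back to a source index (a real pipeline would interpolate).
    idx = [np.minimum((np.arange(n) * new_spacing[d] / spacing[d]).astype(int),
                      volume.shape[d] - 1)
           for d, n in enumerate(new_shape)]
    return volume[np.ix_(idx[0], idx[1], idx[2])]

# Toy volume: 1 mm in-plane resolution, 2 mm slice thickness along z.
vol = np.random.rand(120, 120, 60)
iso = resample_isotropic(vol, spacing=(1.0, 1.0, 2.0))
print(iso.shape)   # the 2 mm axis is upsampled to a 1 mm grid
```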
ET is the enhancing portion of the tumor, described by areas with both visually avid, as well as faint, enhancement on T1Gd MRI. NCR is the necrotic core of the tumor, the appearance of which is hypointense on T1Gd MRI. ED is the peritumoral edematous and infiltrated tissue, defined by the abnormal hyperintense signal envelope on the T2 FLAIR volumes, which includes the infiltrative non-enhancing tumor, as well as vasogenic edema in the peritumoral region. The tumor sub-regions are shown in Fig. \ref{annotations}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/Annotations.png} \caption{\textbf{Glioma sub-regions considered in the RSNA-ASNR-MICCAI BraTS 2021 challenge.} Image panels with the tumor sub-regions annotated in the different mpMRI scans. The image panels A-C denote the regions considered for the performance evaluation of the participating algorithms and specifically highlight (from left to right): the enhancing tumor (ET - yellow) visible in a T1Gd scan, surrounding the cystic/necrotic components of the core (panel A), the tumor core (TC – magenta) and the whole tumor (WT - cyan) visible in the corresponding T2 (panel B) and T2-FLAIR (panel C) scans, respectively. Panel D depicts the combined segmentations generating the final tumor sub-region labels, as provided to the BraTS 2021 participants: enhancing core (yellow), necrotic/cystic core (red), and edema/invasion (green).} \label{annotations} \end{figure} For Task 2 (Radiogenomic Classification), all the imaging volumes were converted from NIFTI to DICOM files, while ensuring that the original patient space is preserved. To make this conversion, both the skull-stripped brain volume in NIFTI format of each MRI sequence and its corresponding original DICOM scan in the patient space are required.
The DICOM volume is read as an ITK image \cite{ITK} and the skull-stripped volume is rigidly registered to it, providing a transformation matrix that defines the spatial mapping between the two volumes. This transformation matrix is applied to the skull-stripped volume and to the corresponding segmentation labels, in order to translate them both to the patient space. These transformed volumes are then passed through CaPTk's NIFTI to DICOM conversion engine to generate DICOM image volumes for the skull-stripped image. Once all MRI sequences were converted back to the DICOM file format, further de-identification took place based on a two-step process. The first step used the RSNA CTP (Clinical Trials Processor) Anonymizer \footnote{http://mirc.rsna.org/download/Anonymizer-installer.jar} with the standard built-in script. Step two then consisted of whitelisting the DICOM files from step 1. The whitelisting process removes all non-essential tags from the DICOM header. This last process ensures there are no protected health information (PHI) entries left in the DICOM header. \subsubsection{MGMT Promoter Methylation Data Description} The MGMT promoter methylation status data is defined as a binary label (0: unmethylated, 1: methylated), and provided to the participants as a comma-separated value (.csv) file with the corresponding pseudo-identifiers of the mpMRI volumes (study-level label). The MGMT promoter methylation status of the BraTS 2021 dataset was determined at each of the host institutions based on various techniques, including pyrosequencing and next-generation quantitative bisulfite sequencing of promoter CpG sites. Sufficient tumor tissue collected at time of surgery was required for both approaches. For the pyrosequencing approach, the genomic DNA was initially extracted from 5\,$\mu$m tissue sections of formalin-fixed paraffin-embedded (FFPE) tissue samples. DNA was further cleaned and purified.
The DNA concentration, protein to nucleic acid ratio, and DNA to RNA ratio for purity were assessed by spectrophotometer. Approximately 500–1000ng total DNA was subjected to bisulfite conversion using the EPiTect Bisulfite Kit. A total of 50–100 ng bisulfite-treated DNA was carried on for PCR using F-primer and R-primer. Pyrosequencing methylation assay was then conducted using the sequencing primer on the PyroMark Q96ID pyrosequencer. The Pyromark CpG MGMT kit detected the average level of methylation on CpG 74–81 sites located in the MGMT gene. A cytosine not followed by a guanine served as an internal control for completion of bisulfite conversion. The percent methylation above 10\% was interpreted as positive. A sample below 10\% methylation was interpreted as negative. For the latter approach, a total of 17 MGMT promoter CpG sites were amplified by nested polymerase chain reaction (PCR) using a bisulfite treated DNA template. Quantitative PCR was performed for each CpG site to determine its methylation status. A result of 2\% or more methylated CpG sites in the MGMT promoter (out of 17 total sites) was considered a positive result. \subsubsection{Comparison with Previous BraTS datasets} \begin{table}[t] \caption{Summary of distribution of BraTS Challenge data across training, validation and test cohorts since the inception of BraTS initiative. 
(TBA: To Be Announced)} \label{Table1} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \textbf{Year} & \textbf{\begin{tabular}[c]{@{}l@{}}Total\\ Data\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Training\\ Data\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Validation\\ Data\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Testing\\ Data\end{tabular}} & \textbf{Tasks} & \textbf{Timepoint} \\ \hline 2012 & 50 & 35 & NA & 15 & Segmentation & Pre-operative \\ \hline 2013 & 60 & 35 & NA & 25 & Segmentation & Pre-operative \\ \hline 2014 & 238 & 200 & NA & 38 & Segmentation & Longitudinal \\ \hline 2015 & 253 & 200 & NA & 53 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Disease progression\end{tabular} & Longitudinal \\ \hline 2016 & 391 & 200 & NA & 191 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Disease progression\end{tabular} & Longitudinal \\ \hline 2017 & 477 & 285 & 46 & 146 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline 2018 & 542 & 285 & 66 & 191 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline 2019 & 626 & 335 & 125 & 166 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline 2020 & 660 & 369 & 125 & 166 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline \begin{tabular}[c]{@{}l@{}}2021 \\ (expected)\end{tabular} & 2000 & TBA & TBA & TBA & \begin{tabular}[c]{@{}l@{}}Segmentation\\ MGMT classification\end{tabular} & Pre-operative \\ \hline \end{tabular} \end{table} The first BraTS challenge was organized in 2012 in conjunction with the MICCAI conference, and was making available a total of 50 mpMRI glioma cases (Table \ref{Table1}). The BraTS'12-'13 dataset was manually annotated by clinical experts, and the task at hand was the segmentation of the glioma sub-regions (ET, NCR, ED). 
In BraTS'14-'16 the dataset provided to the participants included a large contribution of data from The Cancer Imaging Archive (TCIA) \cite{TCIA}, and specifically from the TCGA-GBM \cite{scarpace2016radiology} and the TCGA-LGG \cite{pedano2016radiology} collections. Both pre- and post-operative scans were included from these collections, and the ground truth segmentations were annotated by the fusion of previous algorithms that ranked highly during BraTS'12 and '13. During the BraTS'17 challenge all the data were revised by board-certified neuroradiologists, who assessed the complete TCIA collections (TCGA-GBM, n=262 and TCGA-LGG, n=199) and categorized each scan as pre- or post-operative, and only the scans without any prior instrumentation were included as a part of the BraTS challenge from that year onwards \cite{bakas2017segmentation_1,bakas2017segmentation_2,bakas2017advancing}. In BraTS'17-'20, the challenge was extended to the prediction of patient overall survival for the glioblastoma cases that underwent gross-total resection. This year, the BraTS 2021 challenge continues its focus on the segmentation of glioma sub-regions, with a substantially larger dataset (2,000 glioma cases = 8,000 mpMRI scans), and extends to the clinically relevant task of identifying the tumor's MGMT promoter methylation status (methylated/unmethylated). These additional exams were obtained as a collection of the pre-operative cases of the TCIA public collections of TCGA-GBM, TCGA-LGG, IvyGAP \cite{ivygap1_puchalski2018anatomic,ivygap2_shah2016data}, CPTAC-GBM \cite{CPTAC_GBM, wang2021proteogenomic}, and ACRIN-FMISO-Brain (ACRIN 6684) \cite{ACRIN_FMISO1, ACRIN_FMISO2}, as well as contributions from private institutional collections. The name mapping between the previous and the current challenge, as well as all the TCIA collections, will be provided to further facilitate research beyond the tasks directly related to BraTS.
\subsubsection{Tumor Annotation Protocol} We designed the following tumor annotation protocol, in order to make it possible to create consistent ground truth delineations across various annotators. For the tasks related to BraTS, only structural mpMRI volumes were considered (T1, T1Gd, T2, T2-FLAIR), all of them co-registered to a common anatomical template (SRI24 \cite{SRI_rohlfing2010sri24}) and resampled to 1mm$^3$. The end-to-end pipeline is available through CaPTk \cite{captk} and the FeTS tool. We note that radiologic definition of tumor boundaries, especially in such infiltrative tumors as gliomas, is a well-known problem. In an attempt to offer a standardized approach to assess and evaluate various tumor sub-regions, the BraTS initiative, after consultation with internationally recognized expert neuroradiologists, defined the various tumor sub-regions. However, we note that other criteria for delineation could be set, resulting in slightly different tumor sub-regions. For the BraTS 2021 challenge, the regions considered are: i) the ``enhancing tumor'' (ET), ii) the ``tumor core'' (TC) and iii) the complete tumor extent also referred to as the ``whole tumor'' (WT). The ET is described by areas that show hyper-intensity in T1Gd when compared to T1, but also when compared to ``healthy'' white matter in T1Gd. The TC describes the bulk of the tumor, which is what is typically considered for surgical excision. The TC entails the ET, as well as the necrotic (NCR) parts of the tumor, the appearance of which is typically hypo-intense in T1Gd when compared to T1. The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edematous/invaded tissue (ED), which is typically depicted by the abnormal hyper-intense signal in the T2-FLAIR volume. BraTS tumor visual features (sub-regions) are image-based and do not reflect strict biologic entities. For example, the ET regions may be defined as hyper-intense signal on T1Gd images.
However, in high grade tumors, non-necrotic, non-cystic regions are present that do not enhance and can be separable from the surrounding vasogenic edema, representing non-enhancing infiltrative tumor. Another issue is defining the tumor center in low grade gliomas as it is difficult to differentiate tumor from vasogenic edema, particularly in the absence of enhancement. In previous BraTS challenges, annotators would start from the manual delineation of the abnormal signal in the T2-weighted images, primarily defining the WT, then address the TC, and finally the enhancing and non-enhancing/necrotic core, possibly using semi-automatic tools. To facilitate the annotation process for BraTS 2021, initial automated segmentations were generated by fusing previously top-performing BraTS methods. The specific methods fused were the DeepMedic \cite{kamnitsas2017efficient}, DeepScan \cite{mckinley2018ensembles} and nnU-Net \cite{isensee2020nnu}, all trained on the BraTS 2020 dataset \cite{menze2014multimodal,bakas2017advancing, bakas2018identifying}. The STAPLE label fusion \cite{warfield2004simultaneous} was used to aggregate the segmentation produced by each of the individual methods, and account for systematic errors generated by each of them separately. All these segmentation methods and the exact pipeline used to generate the fused automated segmentation have been made publicly available through the Federated Tumor Segmentation (FeTS) platform\footnote{\url{https://www.med.upenn.edu/cbica/fets/}} \cite{sheller2020federated}. The volunteer neuroradiology expert annotators were provided with four mpMRI scans along with the fused automated segmentation volume to initiate the manual refinements. The ITK-SNAP \cite{itksnap} software was used for making these refinements. Once the automated segmentations were refined by the annotators, two senior attending board-certified neuroradiologists with more than 15 years of experience each, reviewed the segmentations.
Depending upon correctness, these segmentations were either approved or returned to the individual annotator for further refinements. This process was followed iteratively until the approvers found the refined tumor sub-region segmentations acceptable for public release and the conduct of the challenge. \subsubsection{Common errors of automated segmentations} Building upon observations during all previous BraTS instances, we note some common errors in the automated segmentations. The most typical such errors observed are: \begin{enumerate} \item The choroid plexus and areas of T1 bright blood products (when they can be discriminated by comparing with the pre-contrast T1 images) have erroneously been labelled as ED (Fig. \ref{a}). \item Vessels within the peritumoral T2 FLAIR edematous area have been marked as ET (Fig. \ref{b}). \item Vessels within the peritumoral T2 FLAIR edematous area have been marked as ED (Fig. \ref{c}). \item Periventricular white matter hyperintensities have been confused with, and segmented as, tumor/peritumoral regions (Fig. \ref{fig:d}). \end{enumerate} \begin{figure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture1.png} \caption{Choroid plexus erroneously marked as ED.} \label{a} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture2.png} \caption{Vessels in ED marked as ET.} \label{b} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture3.png} \caption{Vessels in ED marked as ED.} \label{c} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture4.png} \caption{Periventricular white matter hyperintensities. Figure taken from \cite{10.3389/fncom.2019.00084}.} \label{fig:d} \end{subfigure} \caption{Common errors expected from the automatic segmentations.
} \label{fig:fig} \end{figure} \subsection{Challenge Tasks} The BraTS 2021 challenge utilizes multi-institutional mpMRI scans, and focuses on (Task 1) the evaluation of state-of-the-art methods for the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans. Furthermore, to highlight the clinical relevance of this segmentation task, BraTS 2021 also focuses on (Task 2) the evaluation of methods to predict the MGMT promoter methylation status from the pre-operative baseline scans, via integrative analyses of quantitative imaging phenomic features and machine learning algorithms. Participants are free to choose whether they want to focus on only one of the tasks or on both. \subsubsection{Task 1: Brain Tumor Sub-region Segmentation} The participants are called to address this task by using the provided clinically-acquired training data to develop their method and produce segmentation labels of the glioma sub-regions. The sub-regions considered for evaluation are the ``enhancing tumor" (ET), the ``tumor core" (TC), and the ``whole tumor" (WT). The provided segmentation labels have values of 1 for NCR, 2 for ED, 4 for ET, and 0 for everything else. For this task, this year's BraTS challenge makes available a dataset of 8,000 mpMRI scans from 2,000 glioma patients. These cases are distributed across training, validation, and testing datasets following a machine learning paradigm. \subsubsection{Task 2: Radiogenomic Classification} Participants are provided with mpMRI data and the MGMT promoter methylation status associated with each case. The methylated cases are marked as `1' and unmethylated as `0' in the csv file which is provided with the data. Researchers have proposed methods to predict the MGMT promoter methylation status by extracting appropriate imaging/radiomic features and analysing them through machine learning algorithms. 
The participants do not need to be limited to volumetric parameters, but can also consider intensity, morphologic, histogram-based, and textural features, as well as spatial information, and glioma diffusion properties extracted from glioma growth models. Participants will be evaluated on the predicted MGMT status of the subjects indicated in the accompanying spreadsheet. \subsection{Performance Evaluation} Participants are called to submit their results to the online evaluation platform for the training and validation datasets. The test dataset will never be shared with the participants; instead, they will upload their proposed methods in a containerized form for the final testing phase. To evaluate the generalizability of the proposed methods, we will evaluate performance on a cohort that is not part of either the training or validation cohorts, also termed the out-of-distribution testing cohort. The distribution of methylated and unmethylated cases across the training, validation, and testing cohorts is given in Table \ref{Table1}. \subsubsection{Task 1: Tumor Sub-region Segmentation} Consistent with the configuration of previous BraTS challenges, we intend to use the ``Dice similarity coefficient" and the ``Hausdorff distance (95\%)" as performance evaluation metrics. Expanding upon this evaluation scheme, we will also provide the metrics of ``Sensitivity" and ``Specificity", allowing the determination of potential over- or under-segmentations of the tumor sub-regions by participating methods. The ranking scheme followed during BraTS 2017-2020 comprised the ranking of each team relative to its competitors for each of the testing subjects, for each evaluated region (i.e., ET, TC, WT), and for each measure (i.e., Dice and Hausdorff). For example, in BraTS 2020, each team was ranked for 166 subjects, for 3 regions, and for 2 metrics, which resulted in $166\times3\times2=996$ individual rankings. 
The final ranking score (FRS) for each team was then calculated by first averaging across all these individual rankings for each patient (i.e., the Cumulative Rank), and then averaging these cumulative ranks across all patients for each participating team. This ranking scheme has also been adopted in other challenges with satisfactory results, such as the Ischemic Stroke Lesion Segmentation challenge\footnote{\url{http://www.isles-challenge.org/}} \cite{maier2017isles}. We then conducted further permutation testing to determine the statistical significance of the relative rankings between each pair of teams. This permutation testing would reflect differences in performance that exceeded those that might be expected by chance. Specifically, for each team we started with a list of observed subject-level Cumulative Ranks, i.e., the actual ranking described above. For each pair of teams, we repeatedly (i.e., 100,000 times) randomly permuted the Cumulative Ranks for each subject. For each permutation, we calculated the difference in the FRS between this pair of teams. The proportion of times the difference in FRS calculated using randomly permuted data exceeded the observed difference in FRS (i.e., using the actual data) indicated the statistical significance of their relative rankings as a p-value. These values were reported in an upper triangular matrix providing insight into statistically significant differences between each pair of participating teams. The top-ranked methods in the validation phase will be invited to MICCAI 2021 for presentation of their methods and results. The final top three ranked participating teams, according to their evaluation against the testing data, will be invited to RSNA 2021 for presentation and to receive their monetary awards. 
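As a rough sketch of this ranking and permutation scheme, the snippet below computes subject-level Cumulative Ranks, the FRS, and a pairwise permutation p-value on synthetic scores. The team count, the single higher-is-better score matrix, and the reduced permutation count are illustrative assumptions; this is not the official evaluation code, which ranks Dice (higher is better) and Hausdorff distance (lower is better) separately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic scores: (teams, subjects, regions x metrics).
# Every score is treated as higher-is-better for simplicity.
n_teams, n_subjects, n_scores = 4, 166, 6
scores = rng.random((n_teams, n_subjects, n_scores))

# Rank the teams (1 = best) independently per subject and per score
# via the double-argsort trick (valid here since ties have probability 0).
ranks = (-scores).argsort(axis=0).argsort(axis=0) + 1

# Cumulative Rank per subject, then the Final Ranking Score per team.
cum_rank = ranks.mean(axis=2)   # shape: (teams, subjects)
frs = cum_rank.mean(axis=1)     # lower FRS = better team

def perm_pvalue(cr_a, cr_b, n_perm=10_000):
    """Pairwise permutation test: randomly swap the two teams'
    subject-level Cumulative Ranks and compare FRS differences."""
    observed = abs(cr_a.mean() - cr_b.mean())
    exceed = 0
    for _ in range(n_perm):
        swap = rng.random(cr_a.size) < 0.5
        diff = abs(np.where(swap, cr_b, cr_a).mean()
                   - np.where(swap, cr_a, cr_b).mean())
        exceed += diff >= observed
    return exceed / n_perm

p01 = perm_pvalue(cum_rank[0], cum_rank[1])
```

With random scores as here, the p-value for any pair of teams should be large, reflecting that no team genuinely outperforms another.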
\subsubsection{Task 2: Radiogenomic Classification} The methods submitted by the participating teams for Task 2 will be evaluated based on the area under the ROC curve (AUC), accuracy, F-score (beta), and the Matthews correlation coefficient of the classification of the MGMT status as methylated or unmethylated. The AUC is a metric that measures the overall discriminatory capacity of a model over all possible thresholds and allows for comparing the performance of the entries by each participant, even though it has no straightforward clinical meaning and does not guarantee that the model is calibrated. The AUC will be used as the reference metric to rank the participants in the leaderboard of Task 2. \subsection{Participation Timeline} The challenge will commence with the release of the training dataset, which will consist of imaging data and the corresponding ground-truth labels. Participants can start designing and training their methods using this training dataset. The validation data will then be released within three weeks after the training data. This will allow participants to obtain preliminary results on unseen data and also report these in their submitted short MICCAI LNCS papers, in addition to their cross-validated results on the training data. The ground truth of the validation data will not be provided to the participants, but multiple submissions to the online evaluation platforms will be allowed. The top-ranked participating teams in the validation phase will be invited to prepare their slides for a short oral presentation of their method during the BraTS challenge at MICCAI 2021. Finally, all participants will be evaluated and ranked on the same unseen testing data, which will not be made available to the participants, after uploading their containerized method to the evaluation platforms. The final top-ranked participating teams will be announced at the 2021 RSNA Annual Meeting. 
The top-ranked participating teams of both tasks will receive monetary prizes with a total value of \$60,000, sponsored by Intel, RSNA, and NeoSoma Inc. \section{Results} tbd \section{Discussion} In this paper we presented the design of the $10^{th}$ BraTS challenge, jointly organized by the RSNA, ASNR, and MICCAI societies, and offering what can possibly be considered the largest curated multi-label annotated dataset of mpMRI scans for a single disease. Members of the RSNA and ASNR communities graciously volunteered to refine the tumor sub-region annotations for all 2,000 cases included in the BraTS 2021 challenge, until the quality was satisfactory for releasing the data. Considering the size of this year's challenge and also its potential continuation after the announcement of this year's winners, the testing data will be kept hidden at all times and the performance evaluation will be based on the challenge evaluation platforms of Sage Bionetworks Synapse (Task 1) and Kaggle (Task 2), concluding with the distribution of monetary awards of \$60,000 collectively to the top-ranked participants. We hope that the well-labelled multi-institutional data of BraTS 2021 will provide an optimized community benchmark and a common dataset to the research community focusing on computational neuro-oncology, even beyond the specific BraTS 2021 tasks. Although we designed the BraTS 2021 challenge with the utmost care, there are still some limitations that need further consideration. Firstly, the tumor sub-region segmentations of each case are refined by a single annotator in an iterative process with a group of approvers, until approval by the latter, and hence inter-rater agreement cannot be assessed. 
Secondly, since the provided MGMT promoter methylation status was determined based on varying methods across the multiple institutions that contributed data, with each institution following its own methodology (e.g., pyrosequencing vs quantitative PCR) and thresholds, only a binary classification of the methylation status was made available to the participants instead of a continuous value. Lastly, we note that some of the MRI scans included in the challenge harbor more abnormalities than just gliomas. Since the focus of the challenge was on gliomas, all other abnormalities (such as white matter hyperintensities that are typically secondary to small vessel ischemic disease) were not considered in the annotation process. This was made particularly apparent from previous efforts that attempted to perform a multi-disease segmentation \cite{10.3389/fncom.2019.00084}. With this multi-disease segmentation in mind, one of the main future directions for the BraTS challenge would be to expand beyond its current focus on glial tumors towards general brain abnormalities. Furthermore, the extension from solely pre-operative baseline scans to post-operative scans, and the inclusion of an additional label for the resection cavity, would be a very interesting and clinically appealing direction, as it would speak directly to the assessment of treatment response and disease progression. To ensure robustness and generalizability of the computational algorithms, ample patient data from multiple sites, capturing diverse patient populations, are desired. A major hindrance to accessing such datasets is data siloing due to tedious bureaucratic processes, data ownership concerns, and legal considerations reflected in patient privacy regulations, such as the American HIPAA \cite{hippa} and the European GDPR \cite{gdpr}. 
In the future, we aim to move from the current centralized data approach to a federated approach, which would enable researchers to access data of potentially unprecedented size and hence design more robust and generalizable algorithms \cite{sheller2020federated, rieke2020future, pati2021federated}. \section{Introduction} Glioblastoma (GBM), and diffuse astrocytic glioma with molecular features of GBM (WHO Grade 4 astrocytoma), are the most common and aggressive malignant primary tumors of the central nervous system (CNS) in adults, with extreme intrinsic heterogeneity in appearance, shape, and histology \cite{louis2019cimpact,cimpact_1,cimpact_2,cimpact_3,cimpact_4,cimpact_5,cimpact_6}. GBM patients have an average prognosis of 14 months following standard of care treatment (comprising surgical resection followed by radiotherapy and chemotherapy), and of 4 months if left untreated \cite{OS_SB}. Although various experimental treatment options have been proposed during the past 20 years, there have not been any substantial improvements in patient prognosis. Accurate identification of brain tumor sub-region boundaries in MRI is of profound importance in many clinical applications, such as surgical treatment planning, image-guided interventions, monitoring tumor growth, and the generation of radiotherapy maps. However, manual detection and tracing of tumor sub-regions is tedious, time-consuming, and subjective. In a clinical setup, this manual process is carried out by radiologists in a qualitative visual manner, and hence becomes impractical when dealing with numerous patients. This highlights the unmet need for automated, deterministic segmentation solutions that could contribute to expediting this process. 
The release of the current revised World Health Organization (WHO) classification of CNS tumors \cite{WHO_louis20162016} highlighted the appreciation of integrated diagnostics, and transitioned the clinical tumor diagnosis from a purely morphologic-histopathologic classification to one integrating molecular-cytogenetic characteristics. O\textsuperscript{6}-methylguanine-DNA methyltransferase (MGMT) is a DNA repair enzyme; the methylation of its promoter in newly diagnosed GBM has been identified as a favorable prognostic factor and a predictor of chemotherapy response \cite{MGMT}. Thus, determination of the MGMT promoter methylation status in newly diagnosed GBM can influence treatment decision making. The RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) 2021 challenge utilizes multi-institutional multi-parametric Magnetic Resonance Imaging (mpMRI) scans to address both the automated tumor sub-region segmentation and the prediction of one of the genetic characteristics of glioblastoma (the MGMT promoter methylation status) from pre-operative baseline MRI scans. Specifically, BraTS 2021 focuses on the evaluation of state-of-the-art methods for the accurate segmentation of intrinsically heterogeneous brain glioma sub-regions, and on the evaluation of classification methods distinguishing between MGMT methylated (MGMT+) and unmethylated (MGMT-) tumors. This manuscript describes the characteristics of the data included in the BraTS 2021 challenge, along with the annotation protocol followed to prepare the challenge data, an elaborate description of the challenge's tasks, and the performance evaluation of all participating methods (in Section 2), and then discusses the limitations and currently considered future directions (in Section 3). 
\subsection{Data} \label{sec:data} The BraTS dataset describes a retrospective collection of brain tumor mpMRI scans acquired from multiple different institutions under standard clinical conditions, but with different equipment and imaging protocols, resulting in a vastly heterogeneous image quality reflecting diverse clinical practice across institutions. Inclusion criteria comprised a pathologically confirmed diagnosis and an available MGMT promoter methylation status. These data have been updated since BraTS 2020 \cite{menze2014multimodal, bakas2017advancing, bakas2018identifying, bakas2017segmentation_1, bakas2017segmentation_2}, increasing the total number of cases from 660 to 2,000. Ground truth annotations of every tumor sub-region for Task 1 were approved by expert neuroradiologists, whereas the MGMT methylation status was based on the laboratory assessment of the surgical brain tumor specimen. Following the paradigm of algorithmic evaluation in machine learning, the data included in the BraTS 2021 challenge are divided into training, validation, and testing datasets. The challenge participants are provided with the ground truth labels only for the training data. The validation data are then provided to the participants without any associated ground truth, and the testing data are kept hidden from the participants at all times. Participants are not allowed to use additional public and/or private data (from their own institutions) for extending the provided BraTS data for the training of the algorithm chosen to be ranked. Similarly, using models that were pretrained on such datasets is not allowed. This is due to our intention to provide a fair comparison among the participating methods. However, participants are allowed to use additional public and/or private data (from their own institutions) for scientific publication purposes only, and if they explicitly mention this in their submitted manuscripts. 
Importantly, participants that decide to proceed with this scientific analysis must also report results using only the BraTS'21 data, to discuss potential result differences. \subsubsection{Imaging Data Description} The mpMRI scans included in the BraTS 2021 challenge describe a) native (T1), b) post-contrast T1-weighted (T1Gd (Gadolinium)), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, acquired with different protocols and various scanners from multiple institutions. Standardized pre-processing has been applied to all the BraTS mpMRI scans. Specifically, the applied pre-processing routines include conversion of the DICOM files to the NIFTI file format \cite{nifti}, re-orientation to a common orientation system (i.e., RAI), co-registration to the same anatomical template (SRI24) \cite{SRI_rohlfing2010sri24}, resampling to a uniform isotropic resolution ($1mm^{3}$), and finally skull-stripping. The pre-processing pipeline is publicly available through the Cancer Imaging Phenomics Toolkit (CaPTk) \cite{captk} and the Federated Tumor Segmentation (FeTS) tool\footnote{\url{https://fets-ai.github.io/Front-End/}}. Conversion to NIFTI strips the DICOM metadata from the images and essentially removes all Protected Health Information (PHI) from the DICOM headers. Furthermore, skull-stripping mitigates potential facial reconstruction/recognition of the patient \cite{NEJMc1908881,NEJMc1915674}. The specific approach we have used for skull-stripping is based on a novel deep learning (DL) approach that accounts for the brain shape prior and is agnostic to the MRI sequence input \cite{thakur2020brain}. 
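To illustrate just the resampling step of this pipeline conceptually (the actual end-to-end pre-processing, including reorientation, co-registration to SRI24, and skull-stripping, is provided through CaPTk and FeTS), a minimal sketch using `scipy.ndimage.zoom` might look as follows; the function name and the toy voxel spacing are assumptions made purely for illustration:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing_mm, target_mm=1.0, order=1):
    """Resample a 3-D volume to isotropic voxels of size target_mm.

    spacing_mm gives the input voxel spacing per axis in mm;
    order=1 selects trilinear interpolation.
    """
    factors = [s / target_mm for s in spacing_mm]
    return zoom(volume, factors, order=order)

# Toy volume: 10 x 20 x 20 voxels with 2 x 1 x 1 mm spacing
vol = np.random.default_rng(0).random((10, 20, 20)).astype(np.float32)
iso = resample_isotropic(vol, spacing_mm=(2.0, 1.0, 1.0))  # 1 mm^3 voxels
```

In practice this operation would be performed with a medical-imaging library that also tracks image orientation and origin metadata, which a bare array-based sketch like this one ignores.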
Specifically for Task 1 (tumor sub-region segmentation), all imaging volumes have been segmented using the STAPLE \cite{warfield2004simultaneous} fusion of previous top-ranked BraTS algorithms, namely DeepScan \cite{mckinley2018ensembles}, DeepMedic \cite{kamnitsas2017efficient}, and nnU-Net \cite{isensee2020nnu}, and then refined manually by volunteer neuroradiology experts of varying rank and experience, following the same annotation protocol. Annotations were finally approved by experienced board-certified neuroradiologists with more than 15 years of experience working with gliomas. The exact annotated regions are based upon known observations visible to the trained radiologist (VASARI features) and comprise the Gd-enhancing tumor (ET — label 4), the peritumoral edematous/invaded tissue (ED — label 2), and the necrotic tumor core (NCR — label 1). ET is the enhancing portion of the tumor, described by areas with both visually avid, as well as faint, enhancement on T1Gd MRI. NCR is the necrotic core of the tumor, the appearance of which is hypointense on T1Gd MRI. ED is the peritumoral edematous and infiltrated tissue, defined by the abnormal hyperintense signal envelope on the T2-FLAIR volumes, which includes the infiltrative non-enhancing tumor, as well as vasogenic edema in the peritumoral region. The tumor sub-regions are shown in Fig. \ref{annotations}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/Annotations.png} \caption{\textbf{Glioma sub-regions considered in the RSNA-ASNR-MICCAI BraTS 2021 challenge.} Image panels with the tumor sub-regions annotated in the different mpMRI scans. 
The image panels A-C denote the regions considered for the performance evaluation of the participating algorithms and specifically highlight (from left to right): the enhancing tumor (ET - yellow) visible in a T1Gd scan, surrounding the cystic/necrotic components of the core (panel A), the tumor core (TC - magenta) and the whole tumor (WT - cyan) visible in the corresponding T2 (panel B) and T2-FLAIR (panel C) scans, respectively. Panel D depicts the combined segmentations generating the final tumor sub-region labels, as provided to the BraTS 2021 participants: enhancing core (yellow), necrotic/cystic core (red), and edema/invasion (green).} \label{annotations} \end{figure} For Task 2 (Radiogenomic Classification), all the imaging volumes were converted from NIFTI to DICOM files, while ensuring that the original patient space is preserved. To make this conversion, both the skull-stripped brain volume in NIFTI format of each MRI sequence and its corresponding original DICOM scan in the patient space are required. The DICOM volume is read as an ITK image \cite{ITK} and the skull-stripped volume is rigidly registered to it, providing a transformation matrix that defines the spatial mapping between the two volumes. This transformation matrix is applied to the skull-stripped volume and to the corresponding segmentation labels, in order to translate them both to the patient space. These transformed volumes are then passed through CaPTk's NIFTI-to-DICOM conversion engine to generate DICOM image volumes for the skull-stripped image. Once all MRI sequences were converted back to the DICOM file format, further de-identification took place based on a two-step process. The first step used the RSNA CTP (Clinical Trials Processor) Anonymizer\footnote{\url{http://mirc.rsna.org/download/Anonymizer-installer.jar}} with the standard built-in script. Step two then consisted of whitelisting the DICOM files from step 1. 
The whitelisting process removes all non-essential tags from the DICOM header. This last process ensures that no Protected Health Information (PHI) entries are left in the DICOM header. \subsubsection{MGMT Promoter Methylation Data Description} The MGMT promoter methylation status data is defined as a binary label (0: unmethylated, 1: methylated), and provided to the participants as a comma-separated value (.csv) file with the corresponding pseudo-identifiers of the mpMRI volumes (study-level label). The MGMT promoter methylation status of the BraTS 2021 dataset was determined at each of the host institutions based on various techniques, including pyrosequencing, and next generation quantitative bisulfite sequencing of promoter CpG sites. Sufficient tumor tissue collected at the time of surgery was required for both approaches. For the pyrosequencing approach, the genomic DNA was initially extracted from 5\,$\mu$m tissue sections of formalin-fixed paraffin-embedded (FFPE) tissue samples. The DNA was further cleaned and purified. The DNA concentration, protein to nucleic acid ratio, and DNA to RNA ratio for purity were assessed by spectrophotometer. Approximately 500--1000\,ng of total DNA was subjected to bisulfite conversion using the EpiTect Bisulfite Kit. A total of 50--100\,ng of bisulfite-treated DNA was carried on for PCR using the F-primer and R-primer. The pyrosequencing methylation assay was then conducted using the sequencing primer on the PyroMark Q96 ID pyrosequencer. The PyroMark CpG MGMT kit detected the average level of methylation on the CpG 74--81 sites located in the MGMT gene. A cytosine not followed by a guanine served as an internal control for the completion of bisulfite conversion. Percent methylation above 10\% was interpreted as positive, and a sample below 10\% methylation was interpreted as negative. For the latter approach, a total of 17 MGMT promoter CpG sites were amplified by nested polymerase chain reaction (PCR) using a bisulfite-treated DNA template. 
Quantitative PCR was performed for each CpG site to determine its methylation status. A result of 2\% or more methylated CpG sites in the MGMT promoter (out of 17 total sites) was considered a positive result. \subsubsection{Comparison with Previous BraTS datasets} \begin{table}[t] \caption{Summary of distribution of BraTS Challenge data across training, validation and test cohorts since the inception of BraTS initiative. (TBA: To Be Announced)} \label{Table1} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \textbf{Year} & \textbf{\begin{tabular}[c]{@{}l@{}}Total\\ Data\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Training\\ Data\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Validation\\ Data\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Testing\\ Data\end{tabular}} & \textbf{Tasks} & \textbf{Timepoint} \\ \hline 2012 & 50 & 35 & NA & 15 & Segmentation & Pre-operative \\ \hline 2013 & 60 & 35 & NA & 25 & Segmentation & Pre-operative \\ \hline 2014 & 238 & 200 & NA & 38 & Segmentation & Longitudinal \\ \hline 2015 & 253 & 200 & NA & 53 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Disease progression\end{tabular} & Longitudinal \\ \hline 2016 & 391 & 200 & NA & 191 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Disease progression\end{tabular} & Longitudinal \\ \hline 2017 & 477 & 285 & 46 & 146 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline 2018 & 542 & 285 & 66 & 191 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline 2019 & 626 & 335 & 125 & 166 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline 2020 & 660 & 369 & 125 & 166 & \begin{tabular}[c]{@{}l@{}}Segmentation\\ Survival prediction\end{tabular} & Pre-operative \\ \hline \begin{tabular}[c]{@{}l@{}}2021 \\ (expected)\end{tabular} & 2000 & TBA & TBA & TBA & \begin{tabular}[c]{@{}l@{}}Segmentation\\ MGMT classification\end{tabular} & Pre-operative \\ 
\hline \end{tabular} \end{table} The first BraTS challenge was organized in 2012 in conjunction with the MICCAI conference, and made available a total of 50 mpMRI glioma cases (Table \ref{Table1}). The BraTS'12-'13 dataset was manually annotated by clinical experts, and the task at hand was the segmentation of the glioma sub-regions (ET, NCR, ED). In BraTS'14-'16 the dataset provided to the participants included a large contribution of data from The Cancer Imaging Archive (TCIA) \cite{TCIA}, and specifically from the TCGA-GBM \cite{scarpace2016radiology} and the TCGA-LGG \cite{pedano2016radiology} collections. Both pre- and post-operative scans were included from these collections, and the ground truth segmentations were annotated by the fusion of previous algorithms that ranked highly during BraTS'12 and '13. During the BraTS'17 challenge all the data were revised by board-certified neuroradiologists, who assessed the complete TCIA collections (TCGA-GBM, n=262 and TCGA-LGG, n=199) and categorized each scan as pre- or post-operative; only the scans without any prior instrumentation have been included as part of the BraTS challenge from that year onwards \cite{bakas2017segmentation_1,bakas2017segmentation_2,bakas2017advancing}. In BraTS'17-'20 the challenge was extended to the prediction of patient overall survival for the glioblastoma cases that underwent gross-total resection. This year, the BraTS 2021 challenge continues its focus on the segmentation of glioma sub-regions, with a substantially larger dataset (2,000 glioma cases = 8,000 mpMRI scans), and extends to the clinically relevant task of identifying the tumor's MGMT promoter methylation status (methylated/unmethylated). 
These additional exams were obtained as a collection of the pre-operative cases of the TCIA public collections of TCGA-GBM, TCGA-LGG, IvyGAP \cite{ivygap1_puchalski2018anatomic,ivygap2_shah2016data}, CPTAC-GBM \cite{CPTAC_GBM, wang2021proteogenomic}, and ACRIN-FMISO-Brain (ACRIN 6684) \cite{ACRIN_FMISO1, ACRIN_FMISO2}, as well as contributions from private institutional collections. The name mapping between the previous and the current challenge, as well as with all the TCIA collections, will be provided to further facilitate research beyond the BraTS-related tasks. \subsubsection{Tumor Annotation Protocol} We designed the following tumor annotation protocol in order to make it possible to create consistent ground truth delineations across the various annotators. For the tasks related to BraTS, only structural mpMRI volumes were considered (T1, T1Gd, T2, T2-FLAIR), all of them co-registered to a common anatomical template (SRI24 \cite{SRI_rohlfing2010sri24}) and resampled to 1mm$^3$. The end-to-end pipeline for these steps is available through CaPTk \cite{captk} and the FeTS tool. We note that the radiologic definition of tumor boundaries, especially in such infiltrative tumors as gliomas, is a well-known problem. In an attempt to offer a standardized approach to assess and evaluate the various tumor sub-regions, the BraTS initiative, after consultation with internationally recognized expert neuroradiologists, defined the various tumor sub-regions. However, we note that other criteria for delineation could be set, resulting in slightly different tumor sub-regions. For the BraTS 2021 challenge the regions considered are: i) the ``enhancing tumor" (ET), ii) the ``tumor core" (TC), and iii) the complete tumor extent, also referred to as the ``whole tumor" (WT). The ET is described by areas that show hyper-intensity in T1Gd when compared to T1, but also when compared to ``healthy" white matter in T1Gd. 
The TC describes the bulk of the tumor, which is what is typically considered for surgical excision. The TC entails the ET, as well as the necrotic (NCR) parts of the tumor, the appearance of which is typically hypo-intense in T1Gd when compared to T1. The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edematous/invaded tissue (ED), which is typically depicted by the abnormal hyper-intense signal in the T2-FLAIR volume. BraTS tumor visual features (sub-regions) are image based and do not reflect strict biologic entities. For example, the ET regions may be defined as hyper-intense signal on T1Gd images. However, in high grade tumors, non-necrotic, non-cystic regions are present that do not enhance and can be separable from the surrounding vasogenic edema, representing non-enhancing infiltrative tumor. Another issue is defining the tumor center in low grade gliomas as it is difficult to differentiate tumor from vasogenic edema, particularly in the absence of enhancement. In the previous BraTS challenges annotators would start from the manual delineation of the abnormal signal in the T2-weighted images, primarily defining the WT, then address the TC, and finally the enhancing and non-enhancing/necrotic core, possibly using semi-automatic tools. To facilitate the annotation process for BraTS 2021, initial automated segmentations were generated by fusing previously top-performing BraTS methods. The specific methods fused were the DeepMedic \cite{kamnitsas2017efficient}, DeepScan \cite{mckinley2018ensembles} and nnU-Net \cite{isensee2020nnu}, all trained on the BraTS 2020 dataset \cite{menze2014multimodal,bakas2017advancing, bakas2018identifying}. The STAPLE label fusion \cite{warfield2004simultaneous} was used to aggregate the segmentation produced by each of the individual methods, and account for systematic errors generated by each of them separately. 
All these segmentation methods and the exact pipeline used to generate the fused automated segmentation has been made publicly available through the Federated Tumor Segmentation (FeTS) platform\footnote{\url{https://www.med.upenn.edu/cbica/fets/}} \cite{sheller2020federated}. The volunteer neuroradiology expert annotators were provided with four mpMRI scans along with the fused automated segmentation volume to initiate the manual refinements. The ITK-SNAP \cite{itksnap} software was used for making these refinements. Once the automated segmentations were refined by the annotators, two senior attending board-certified neuroradiologists with more than 15 years of experience each, reviewed the segmentations. Depending upon correctness, these segmentations were either approved or returned to the individual annotator for further refinements. This process was followed iteratively until the approvers found the refined tumor sub-region segmentations acceptable for public release and the challenge conduction. \subsubsection{Common errors of automated segmentations} Building upon observations during all previous BraTS instances, we note some common errors in the automated segmentations. The most typical such errors observed are: \begin{enumerate} \item The choroid plexus and areas of T1 bright blood products (when they can be discriminated by comparing with the pre contrast T1 images), have erroneously been labelled as ED (Fig. \ref{a}). \item Vessels within the peritumoral T2 FLAIR edematous area, have been marked as ET (Fig. \ref{b}). \item Vessels within the peritumoral T2 FLAIR edematous area, have been marked as ED (Fig. \ref{c}). \item Periventricular white matter hyperintensities being confused and segmented as tumor/peritumoral regions (Fig. \ref{fig:d}). 
\end{enumerate}
\begin{figure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture1.png} \caption{Choroid plexus erroneously marked as ED.} \label{a} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture2.png} \caption{Vessels in ED erroneously marked as ET.} \label{b} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture3.png} \caption{Vessels in ED erroneously marked as ED.} \label{c} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/Picture4.png} \caption{Periventricular white matter hyperintensities. Figure taken from \cite{10.3389/fncom.2019.00084}.} \label{fig:d} \end{subfigure} \caption{Common errors expected from the automatic segmentations.} \label{fig:fig} \end{figure}
\subsection{Challenge Tasks} The BraTS 2021 challenge utilizes multi-institutional mpMRI scans, and focuses on (Task 1) the evaluation of state-of-the-art methods for the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS 2021 also focuses on (Task 2) the evaluation of methods to predict the MGMT promoter methylation status at the pre-operative baseline scans, via integrative analyses of quantitative imaging phenomic features and machine learning algorithms. Participants are free to choose whether they want to focus on only one or on both tasks.
\subsubsection{Task 1: Brain Tumor Sub-region Segmentation} The participants are called to address this task by using the provided clinically-acquired training data to develop their method and produce segmentation labels of the glioma sub-regions. The sub-regions considered for evaluation are the ``enhancing tumor" (ET), the ``tumor core" (TC), and the ``whole tumor" (WT).
The provided segmentation labels have values of 1 for NCR, 2 for ED, 4 for ET, and 0 for everything else. For this task, this year's BraTS challenge makes available a dataset of 8,000 MRI scans from 2,000 glioma patients. These cases are distributed across training, validation, and testing datasets following a machine learning paradigm.
\subsubsection{Task 2: Radiogenomic Classification} Participants are provided with mpMRI data and the MGMT promoter methylation status associated with each case. The methylated cases are marked as `1' and the unmethylated ones as `0' in the csv file which is provided with the data. Researchers have proposed methods to predict the MGMT promoter methylation status by extracting appropriate imaging/radiomic features and analysing them through machine learning algorithms. The participants do not need to be limited to volumetric parameters, but can also consider intensity, morphologic, histogram-based, and textural features, as well as spatial information, and glioma diffusion properties extracted from glioma growth models. Participants will be evaluated for the predicted MGMT status of the subjects indicated in the accompanying spreadsheet.
\subsection{Performance Evaluation} Participants are called to submit their results on the online evaluation platform for the training and validation datasets. The test dataset will never be shared with the participants; instead, they will upload their proposed methods in a containerized way for the final testing phase. To evaluate the generalizability of the proposed methods, we will also evaluate the performance on a cohort which is not part of either the training or the validation cohort, also termed the out-of-distribution testing cohort. The distribution of methylated and unmethylated cases across the training, validation, and testing cohorts is given in Table \ref{Table1}.
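As a concrete illustration of the label convention above (0 = background, 1 = NCR, 2 = ED, 4 = ET) and of the three evaluated sub-regions, the following numpy sketch derives the ET/TC/WT binary masks from a label volume and computes a plain Dice similarity coefficient. Function names are illustrative and not part of the official evaluation pipeline.

```python
import numpy as np

# BraTS label convention from the text: 0 = background, 1 = NCR, 2 = ED, 4 = ET.
LBL_NCR, LBL_ED, LBL_ET = 1, 2, 4

def regions_from_labels(seg):
    """Map a BraTS label volume to the three evaluated binary regions."""
    et = seg == LBL_ET                       # enhancing tumor
    tc = (seg == LBL_ET) | (seg == LBL_NCR)  # tumor core = ET + NCR
    wt = tc | (seg == LBL_ED)                # whole tumor = TC + ED
    return {"ET": et, "TC": tc, "WT": wt}

def dice(pred, ref, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)
```

The same mapping applies voxel-wise to 3D volumes, since the operations are element-wise.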
\subsubsection{Task 1: Tumor Sub-region Segmentation} Consistent with the configuration of previous BraTS challenges, we intend to use the ``Dice similarity coefficient" and the ``Hausdorff distance (95\%)" as performance evaluation metrics. Expanding upon this evaluation scheme, we will also provide the metrics of ``Sensitivity" and ``Specificity", allowing the determination of potential over- or under-segmentations of the tumor sub-regions by participating methods. The ranking scheme followed during BraTS 2017-2020 comprised the ranking of each team relative to its competitors for each of the testing subjects, for each evaluated region (i.e., ET, TC, WT), and for each measure (i.e., Dice and Hausdorff). For example, in BraTS 2020, each team was ranked for 166 subjects, for 3 regions, and for 2 metrics, which resulted in $166\times3\times2=996$ individual rankings. The final ranking score (FRS) for each team was then calculated by first averaging across all these individual rankings for each patient (i.e., the Cumulative Rank), and then averaging these cumulative ranks across all patients for each participating team. This ranking scheme has also been adopted in other challenges with satisfactory results, such as the Ischemic Stroke Lesion Segmentation challenge\footnote{\url{http://www.isles-challenge.org/}} \cite{maier2017isles}. We then conducted further permutation testing to determine the statistical significance of the relative rankings between each pair of teams. This permutation testing would reflect differences in performance that exceeded those that might be expected by chance. Specifically, for each team we started with a list of observed subject-level Cumulative Ranks, i.e., the actual ranking described above. For each pair of teams, we repeatedly (i.e., 100,000 times) randomly permuted the Cumulative Ranks for each subject. For each permutation, we calculated the difference in the FRS between this pair of teams.
The proportion of times the difference in FRS calculated using randomly permuted data exceeded the observed difference in FRS (i.e., using the actual data) indicated the statistical significance of their relative rankings as a p-value. These values were reported in an upper triangular matrix providing insights into statistically significant differences across each pair of participating teams. Top-ranked methods in the validation phase will be invited to MICCAI 2021 for presentation of their methods and results. The final top three ranked participating teams according to their evaluation against the testing data will be invited to RSNA 2021 for presentation and to receive their monetary awards.
\subsubsection{Task 2: Radiogenomic Classification} The methods submitted by the participating teams for Task 2 will be evaluated based on the area under the ROC curve (AUC), accuracy, F-score (beta), and the Matthews correlation coefficient of the classification of the MGMT status as methylated or unmethylated. The AUC is a metric that measures the overall discriminatory capacity of a model across all possible thresholds and allows for comparing the performance of the entries by each participant, even though it has no straightforward clinical meaning and does not guarantee that the model is calibrated. The AUC will be used as the reference metric to rank the participants in the leaderboard of Task 2.
\subsection{Participation Timeline} The challenge will commence with the release of the training dataset, which will consist of imaging data and the corresponding ground-truth labels. Participants can start designing and training their methods using this training dataset. The validation data will then be released within three weeks after the training data. This will allow participants to obtain preliminary results on unseen data and also report these in their submitted short MICCAI LNCS papers, in addition to their cross-validated results on the training data.
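The FRS computation and the pairwise permutation test described above can be sketched in numpy as follows. The function name and the subject-wise swapping details are our own assumptions for illustration, not the challenge's actual evaluation code.

```python
import numpy as np

def frs_permutation_pvalue(ranks_a, ranks_b, n_perm=100_000, rng=None):
    """p-value for the observed FRS difference between two teams.

    ranks_a, ranks_b: per-subject Cumulative Ranks (one entry per test case).
    Each permutation swaps the two teams' ranks independently per subject;
    the p-value is the proportion of permuted FRS differences that reach
    or exceed the observed one.
    """
    rng = np.random.default_rng(rng)
    ranks_a = np.asarray(ranks_a, dtype=float)
    ranks_b = np.asarray(ranks_b, dtype=float)
    # FRS = mean Cumulative Rank over all subjects
    observed = abs(ranks_a.mean() - ranks_b.mean())
    hits = 0
    for _ in range(n_perm):
        swap = rng.random(ranks_a.size) < 0.5      # subject-wise team swap
        perm_a = np.where(swap, ranks_b, ranks_a)
        perm_b = np.where(swap, ranks_a, ranks_b)
        hits += abs(perm_a.mean() - perm_b.mean()) >= observed
    return hits / n_perm
```

With identical rank lists the p-value is 1 by construction, while clearly separated rank lists drive it towards 0.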
The ground truth of the validation data will not be provided to the participants, but multiple submissions to the online evaluation platforms will be allowed. The top-ranked participating teams in the validation phase will be invited to prepare slides for a short oral presentation of their method during the BraTS challenge at MICCAI 2021. Finally, all participants will be evaluated and ranked on the same unseen testing data, which will not be made available to the participants, after uploading their containerized method to the evaluation platforms. The final top-ranked participating teams will be announced at the 2021 RSNA Annual Meeting. The top-ranked participating teams of both tasks will receive monetary prizes of a total value of \$60,000, sponsored by Intel, RSNA, and NeoSoma Inc.
\section{Results} tbd
\section{Discussion} In this paper we presented the design of the $10^{th}$ BraTS challenge, jointly organised by the RSNA, ASNR, and MICCAI societies, and offering what can possibly be considered the largest curated multi-label annotated dataset of mpMRI scans for a single disease. Members of the RSNA and ASNR communities graciously volunteered to refine the tumor sub-region annotations for all 2,000 cases included in the BraTS 2021 challenge, until the annotations were of satisfactory quality for data release. Considering the size of this year's challenge and also its potential continuation after the announcement of this year's winners, the testing data will be kept hidden at all times and the performance evaluation will be based on the challenge evaluation platforms of Sage Bionetworks Synapse (Task 1) and Kaggle (Task 2), concluding with the distribution of monetary awards totaling \$60,000 to the top-ranked participants. We hope that the well-labelled multi-institutional data of BraTS 2021 will provide an optimized community benchmark and a common dataset to the research community focusing on computational neuro-oncology, even beyond the specific BraTS 2021 tasks.
Although we designed the BraTS 2021 challenge with utmost care, there are still some limitations that need further consideration. Firstly, the tumor feature segmentations of each case are refined by a single annotator through an iterative process with a group of approvers, until approval from the latter, and hence inter-rater agreement cannot be assessed. Secondly, since the provided MGMT promoter methylation status was determined based on varying methods across the multiple institutions that contributed data, and each institute follows its own methodology (e.g., pyrosequencing vs quantitative PCR) and thresholds, only a binary classification of the methylation status was made available to the participants instead of a continuous value. Lastly, we note that some of the MRI data included in the challenge harbor more abnormalities than just gliomas. Since the focus of the challenge was on gliomas, all other abnormalities (such as white matter hyperintensities that are typically secondary to small vessel ischemic disease) were not considered in the annotation process. This was made particularly apparent from previous efforts that attempted to perform a multi-disease segmentation \cite{10.3389/fncom.2019.00084}. With this multi-disease segmentation in mind, one of the main future directions for the BraTS challenge would be to expand beyond its current focus on glial tumors towards general brain abnormalities. Furthermore, the extension from solely pre-operative baseline scans to post-operative scans, and the inclusion of an additional label for the resection cavity, would be a very interesting and clinically appealing direction, as it would speak directly to the assessment of treatment response and disease progression. To ensure robustness and generalizability of the computational algorithms, ample patient data from multiple sites, capturing diverse patient populations, are desired.
A major hindrance for accessing these datasets is data siloing due to tedious bureaucratic processes, data ownership concerns, and legal considerations reflected in patient privacy regulations, such as the American HIPAA \cite{hippa} and the European GDPR \cite{gdpr}. In the future, we aim to move from the current centralised data approach to a federated approach, which would enable researchers to access a potentially unprecedented size of data and hence design more robust and generalizable algorithms \cite{sheller2020federated, rieke2020future, pati2021federated}.
\section{Materials \& Methods} \input{2_1_Material_and_Methods.tex} \input{4_discussion} \iffalse \section{Conclusion} Text related to the discussion goes here. \fi \iffalse \section*{Author Contributions} Study conception and design: Software development used in the study: Wrote the paper: Data analysis and interpretation: Reviewed / edited the paper: \fi
\section*{Acknowledgments} The success of any challenge in the medical domain depends upon the quality of well-annotated multi-institutional datasets. We are grateful to all the data contributors, annotators, and approvers for their time and efforts.
\section*{Funding} Research reported in this publication was partly supported by the National Cancer Institute (NCI) Informatics Technology for Cancer Research (ITCR) program and the National Institute of Neurological Disorders and Stroke (NINDS) of the National Institutes of Health (NIH), under award numbers NCI:U01CA242871, NCI:U24CA189523, NINDS:R01NS042645, Contract No. HHSN261200800001E, and Ruth L. Kirschstein Institutional National Research Service Award number T32 EB001631. Research reported in this publication was also partly supported by the RSNA Research \& Education Foundation grant number RR2011, and by the ASNR Foundation Grant in Artificial Intelligence (JDR). Sage Bionetworks' support of challenge organization and infrastructure was supported by the NCI ITCR program under award number U24CA248265.
The content of this publication is solely the responsibility of the authors and does not represent the official views of the NIH or of the RSNA R\&E Foundation, or the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. \bibliographystyle{ieeetr}
\section{Introduction} \IEEEPARstart{V}{oice} activity detection (VAD; also called speech activity detection, SAD, in some literature), whose main objective is to detect voiced speech segments and distinguish them from unvoiced ones, is crucial as a pre-processing step for tasks such as speech recognition and speaker recognition. VAD can be performed via either unsupervised feature-based or supervised model-based approaches. For feature-based VAD, simple features such as energy~\cite{woo2000robust,povey2011kaldi} and zero-crossing rate~\cite{rabiner1975algorithm,junqua1991study} and more complex ones such as the spectral shape~\cite{rabiner1977application} and pitch~\cite{morales2011pitch,Tan2020} have been investigated. Model-based approaches require speech and non-speech labels for the training data to build statistical models that discriminate between speech and non-speech signals~\cite{sohn1999statistical}. Contrary to supervised frameworks, unsupervised methods do not require extensive amounts of labeled data. Therefore, unsupervised approaches are cheaper to train and often faster (due to simpler architectures) than their supervised counterparts. Unsupervised methods are thus a popular research direction in VAD~\cite{Sharma2019,Sadjadi2013UnsupervisedSA,Ying2011VoiceAD,Tao_2016,Zhang2013a,Zhang2013b}. However, despite the simplicity of unsupervised methods, they suffer from not scaling well with large amounts of data. On the other hand, supervised model-based VAD can obtain better performance when the training data size scales up, due to a more accurate estimation of the model parameters. The choice of backbone models is essential for supervised VAD approaches. Before the era of deep learning, statistical models such as the Gaussian mixture model (GMM)~\cite{fukuda2010long} and the hidden Markov model (HMM)~\cite{sohn1999statistical,varela2011combining} were used to model the distribution of speech and non-speech signals.
Deep learning techniques have contributed to the recent success in VAD~\cite{Hughes2013,ryant2013speech,thomas2014analyzing,Kim2018,Lavechin2020,Lee2020}. Deep neural networks (DNN)~\cite{Segbroeck2013} and specifically convolutional neural networks (CNN)~\cite{Lin2019,Vafeiadis2019} offer improved modeling capabilities compared to traditional methods~\cite{ryant2013speech}, while recurrent neural networks (RNN) and long short-term memory (LSTM) networks can better model long-term dependencies between sequential inputs~\cite{Hughes2013,eyben2013real,Tong2016,Kim2018}. Lastly, semi-supervised VAD, which incorporates labeled and unlabeled data, has also been investigated in~\cite{SHOLOKHOV2018132}. However, despite the recent success of deep learning models in VAD, supervised frame-level labels are required for training. Most methods currently acquire those labels via an automatic speech recognition (ASR) pipeline, where frame-level speech activation is estimated via an HMM model trained on transcribed, clean data. Accordingly, the prerequisites include both prior knowledge about the spoken language (phonemes) and clean training data, and therefore such methods cannot easily scale with arbitrary data. Thus, training data is usually recorded under a controlled environment with or without additional synthetic noise~\cite{hirsch2000aurora,Tong2016}, with additional work aiming at de-noising~\cite{Jung2018,Zhang2013b,Ghosh2018}. However, only having access to synthetic noise inevitably prevents VAD from generalizing to real-world applications, where speech in the wild is often accompanied by countless unseen sounds, each with its unique features. Moreover, real-world data is likely to contain copious amounts of spoken language mixed with arbitrary noise, which is challenging to use in traditional supervised VAD frameworks.
Recent work in~\cite{Dinkel2020a} proposed general-purpose VAD (GPVAD), a framework using weak (clip-level) labeled supervision, as an alternative to common supervised VAD approaches. However, while the proposed GPVAD framework in~\cite{Dinkel2020a} outperforms strongly supervised VAD when evaluating on real-world data, GPVAD's performance on clean and synthetic-noise data is inferior to traditional supervised VAD approaches. We believe the inferior GPVAD performance stems mainly from two factors: \begin{enumerate} \item Strongly supervised VAD models have access to frame-level labels, enhancing their capability to estimate speech duration. \item Language/phonetic unit match between training and evaluation datasets (e.g., English). \end{enumerate} One possible advantage of GPVAD over traditional supervised VAD methods is that data collection is comparatively cheap, since real-world publicly available datasets can be used and only clip-level labels are required. This work aims to address the two problems stated above by extending the GPVAD framework towards a generalized data setting. We adopt a teacher-student approach and estimate frame-level labels for the student model from weakly-labeled teacher training. Therefore, this study aims to provide insight into whether VAD models can improve noise robustness by utilizing large amounts of data, without requiring manual frame-level annotation or exclusive reliance on clean data. The paper is organized as follows: In \Cref{sec:approach}, we introduce our method. Further, in \Cref{sec:experiments}, the experimental setup, training details, and evaluation schemes are provided. Then, in \Cref{sec:vad_results}, our results are provided and analyzed with regard to noise robustness in VAD. Finally, a summary is provided in \Cref{sec:conclusion}.
\section{VAD in the wild} \label{sec:approach} \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/DataDrivenVAD.pdf} \caption{The proposed data-driven VAD framework.
A convolution block refers to an initial batch normalization, then a $3\times3$ convolution, and lastly, a LeakyReLU (slope $-0.1$) activation. All convolutions use padding to preserve the input size. The framework consists of three distinct stages: 1. Clip-level training of a teacher model on source data (Audioset). 2. Using the teacher to estimate soft labels for a student model on a target dataset. 3. Evaluation of the student model by only keeping the Speech class.} \label{fig:data_driven_vad} \end{figure*} Traditionally, VAD for noisy scenarios is modeled as in \Cref{eq:noisy_eq}. The assumption is that additive noise $\mathbf{u}$ can be filtered out from an observed speech signal $\mathbf{x}$ to obtain clean speech $\mathbf{s}$. \begin{equation} \label{eq:noisy_eq} \mathbf{x} = \mathbf{s} + \mathbf{u} \end{equation} Conventional approaches tackle the problem from a signal processing perspective, where the noisy signal $\mathbf{x}$ is filtered by a multitude of low- and high-pass filters, as well as other noise suppression techniques, to remove $\mathbf{u}$~\cite{povey2011kaldi,Tan2020,Tan2010}. However, VAD systems trained with this framework cannot scale easily with real-world data, since directly modeling $\mathbf{u}$ with various noise types is difficult. Therefore, we aim at learning the properties of $\mathbf{s}$ accompanied by potentially $L$ different non-speech events $\mathbf{U} = \left( \mathbf{u}_0, \mathbf{u}_1,\ldots,\mathbf{u}_L \right)$, where $\mathbf{u}_0 = 0$.
\begin{align} \label{eq:model} \begin{split} \mathcal{X} &= \{ \mathbf{x}_1,\ldots, \mathbf{x}_{l}, \ldots , \mathbf{x}_{L} \}\\ \mathbf{x}_l &= \left( \mathbf{s}, \mathbf{u}_l \right) \end{split} \end{align} Here, we model our observed speech data $\mathcal{X}$ as a ``bag'', containing all co-occurrences of \textit{Speech} in conjunction with another, possibly noisy background/foreground event label $l \in \{0, \ldots, L\}$ from a set of $L < E$ possible event labels (\Cref{eq:model}), where $E$ is the total number of event labels observed. Since our approach stems from weakly supervised sound event detection (WSSED), we do not restrict our approach to only model $L$ events; instead, we aim at modeling all $E$ events. This potentially enhances our model's robustness since it not only has access to speech-only data, commonly seen in traditional VAD approaches, but also to data in the wild.
\subsection{Teacher-student approach} This work proposes a data-driven teacher-student VAD approach, which only requires weak clip labels during training. The approach is based on WSSED, which detects and localizes different sounds, including speech, via clip-level supervision. Specifically, the approach estimates, from a given input audio-clip spectrogram $\mathbf{S} \in \mathbb{R}^{T\times D}$ with duration $T$ (here the number of frames) and $D$ frequency bins, a clip-level label $y$ as: \begin{align} \begin{split} \left[y_1, \ldots, y_T\right] &= F\left(\mathbf{S}\right)\\ y &= \Gamma\left[y_1, \ldots, y_T\right] \end{split} \end{align} where $F$ is modeled via a neural network. Note that the temporal pooling function $\Gamma$, which removes all time-variability, is the only direct connection between the observed, weakly supervised signal $y$ and the per-frame estimate $y_t$. Therefore, the estimate $y_t$ is only indirectly learned via back-propagation from the loss between the prediction $y$ and the ground truth $\hat{y}$.
Our approach is located within a teacher-student framework, wherein a teacher $\mathcal{T}$ is first trained to estimate $y$. After training, $\mathcal{T}$ then predicts soft labels $\hat{y}_t$ on a known or unknown dataset, providing frame-level supervision to a student $\mathcal{S}$. Note that in our work, the teacher is trained to predict $E$ (here $E = 527$, including ``Speech'' and 526 ``non-Speech'' events) different events, whereas the student $\mathcal{S}$ is trained as a binary classifier between speech and non-speech. Therefore, the soft training labels $\hat{y}_{t}^{\mathcal{S}}$ for student $\mathcal{S}$ given the predictions $y_{t}^{\mathcal{T}}$ of teacher $\mathcal{T}$ are defined as: \begin{align}\label{eq:label_pooling_student} \begin{split} \hat{y}_{t}^{\mathcal{S}}(\text{Speech}) & = y_{t}^{\mathcal{T}}(\text{Speech})\\ \hat{y}_{t}^{\mathcal{S}}(\text{non-Speech}) &= \max_{e \ne \text{Speech}} y_{t}^{\mathcal{T}}(e) \end{split} \end{align} Since the goal is to best discriminate between speech and non-speech events, we utilize the maximal value across all events not labeled as ``Speech'' (see \Cref{eq:label_pooling_student}) as the negative class (non-Speech) representation. For the positive ``Speech'' class, we use the naive approach of directly transferring the teacher's predictions to the student. Please note that $\hat{y}_t(\text{Speech}) + \hat{y}_t(\text{non-Speech}) \neq 1$, which enables our model to simultaneously predict speech as well as possible foreground or background noises. Also, during inference, we only consider the outputs of $y_t(\text{Speech})$ as valid and neglect $y_t(\text{non-Speech})$.
\section{Experiments} \label{sec:experiments} In this section, we introduce the experimental setup, including the datasets utilized for training and evaluation, and provide insights into the used framework. All neural networks were implemented in PyTorch~\cite{PaszkePytorch}.
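The soft-label construction in the equation above can be sketched in a few lines of numpy. Shapes and names are illustrative (the actual teacher and student are PyTorch networks); the key point is that the non-Speech target is the per-frame maximum over all teacher posteriors except the ``Speech'' class.

```python
import numpy as np

def student_soft_labels(teacher_probs, speech_idx):
    """Build binary soft labels for the student from teacher frame posteriors.

    teacher_probs: array of shape (T, E) with per-frame event probabilities.
    Returns an array of shape (T, 2): column 0 = Speech, column 1 = non-Speech.
    """
    speech = teacher_probs[:, speech_idx]
    # non-Speech = max over all events other than "Speech"
    others = np.delete(teacher_probs, speech_idx, axis=1)
    non_speech = others.max(axis=1)
    return np.stack([speech, non_speech], axis=1)
```

Note that the two columns need not sum to one, mirroring the remark in the text that speech and a background noise can be active simultaneously.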
\subsection{Datasets} We first provide details on the training and evaluation datasets. The duration and data condition (clean, real) of all datasets can be seen in \Cref{tab:datasets}. \paragraph*{Training Data} It should be noted that since we adopt a teacher-student approach, the training data utilized in this work is split into two categories: \begin{enumerate*} \item Source data, which is used to train a teacher model. The source data is labeled on the clip-level. \item Target data, which is unlabeled. The teacher estimates frame-level soft labels on a target dataset. Then a student model is trained from scratch on this dataset and evaluated. \end{enumerate*} \paragraph{Source data} In this work, we utilize the publicly available Audioset~\cite{Gemmeke2017} dataset for our backbone teacher training. The commonly available Audioset is split into a ``balanced'' (further $\mathcal{A}_1$) and an ``unbalanced'' (further $\mathcal{A}_2$) subset. The ``balanced'' $\mathcal{A}_1$ dataset was collected by first taking examples for the rarest classes, then moving on to less-rare classes, ultimately leading to at least 59 examples for each event (but 5000+ for the most frequently seen ``Music'' event). The main difference between the $\mathcal{A}_1$ and $\mathcal{A}_2$ datasets is the amount of available data. Due to difficulties obtaining the entire dataset, our $\mathcal{A}_1$ subset contains 21k, and $\mathcal{A}_2$ contains 1.85M, at most 10-second-long Youtube audio clips. The data can be considered unconstrained, since the dataset is taken from the globally utilized Youtube platform; thus, parameters such as recording devices, environment, and data quality are unknown. Audioset is annotated at clip-level, with 527 possible event classes, where it should be noted that label noise (e.g., incorrect labels) is present. Within these 527 events, our focus lies on the ``Speech'' event class.
The ``Speech'' event according to the Audioset ontology contains: ``Male speech'', ``Female speech'', ``Child speech'', ``Conversation'', ``Monologue'', ``Babbling'' and ``Synthesized speech''. Unlike other datasets, Audioset is not restricted to one specific language, meaning that the teacher model can be considered language-agnostic. The $\mathcal{A}_1$ subset contains 5452 clips ($\approx$ 15h), and the $\mathcal{A}_2$ subset 905721 clips ($\approx$ 2500h), labeled as ``Speech''. Note that $\mathcal{A}_1$ only contains samples where ``Speech'' is seen in tandem with other events ($\mathbf{U}=\left(\mathbf{u}_1,\ldots,\mathbf{u}_L \right)$), whereas $\mathcal{A}_2$ also contains individual ``Speech''-only samples ($\mathbf{U}=\left(\mathbf{u}_0, \mathbf{u}_1,\ldots,\mathbf{u}_L \right)$). The number of events co-occurring with ``Speech'' in $\mathcal{A}_1$ is $L=405$, while for $\mathcal{A}_2$ it is $L=498$. Therefore, a teacher trained on $\mathcal{A}_2$ is potentially more noise-robust than one trained on $\mathcal{A}_1$. The most common events co-occurring with ``Speech'' for each respective dataset are provided in \Cref{fig:occurance}. \begin{figure}[htbp] \centering \includegraphics[width=0.95\linewidth]{figs/co_occurance.pdf} \caption{Top 10 most common ``non-Speech'' events co-occurring with ``Speech'' within the $\mathcal{A}_{1}$ (left) and $\mathcal{A}_2$ (right) datasets.} \label{fig:occurance} \end{figure} \paragraph{Target data} The target data consists of the two datasets utilized for teacher training ($\mathcal{A}_{1/2}$), as well as three other datasets. These three datasets are: VoxCeleb1 ($\mathcal{V}_1$)~\cite{Nagraniy2017}, VoxCeleb2 ($\mathcal{V}_2$)~\cite{Chung2018,Nagrani2020}, as well as $\mathcal{V}_3$, which is a combination of the SRE~\cite{sadjadi20172016} and Switchboard~\cite{godfrey1992switchboard} datasets.
$\mathcal{V}_{1/2}$ are collected from Youtube; thus, the data can contain real-world noises but is likely to contain spoken language as its primary sound source. $\mathcal{V}_1$ contains about 150,000 audio clips from more than 1200 speakers. The average clip length is 8.2s, and the whole corpus contains approximately 352 hours of audio. The collection of $\mathcal{V}_2$ follows the same procedure as $\mathcal{V}_1$, but with many more speakers involved. About 1.13M audio clips from about 6000 speakers are contained in $\mathcal{V}_2$, with an average duration of 7.8s and a total duration of 2442 hours. Unlike $\mathcal{A}_{1/2}$ and $\mathcal{V}_{1/2}$, which are collected from open-source Youtube videos, $\mathcal{V}_3$ was carefully planned and constructed by asking users to record phone calls. $\mathcal{V}_3$ consists of a Switchboard (SWBD) portion and an SRE portion, where the former contains SWBD phases 2, 3 and Cellular 1, 2, and the latter contains NIST SRE04-10. $\mathcal{V}_3$ is commonly used for the SRE challenges and contains long-duration recordings, with an average duration of 5 minutes. Overall, $\mathcal{V}_3$ contains more than 60000 recordings, leading to a total duration of 5213 hours. \paragraph*{Evaluation Data} Three different evaluation scenarios are proposed. First, we validate our model on the clean Aurora 4 test set (test A)~\cite{hirsch2000aurora}. Test A contains 330 utterances with a total duration of 40 minutes. Second, we synthesize a noisy test set based on the clean Aurora 4 test set by randomly adding noise from a database of 100 noise files encompassing 20 noise types (e.g., Machine, Crowd, Traffic, Animal, Water, Cry, Laugh, Yawn), using an SNR ranging from 5 dB to 15 dB in steps of 1 dB (test B).
Lastly, we merge the development and evaluation tracks of the Challenge on Detection and Classification of Acoustic Scenes and Events 2018 (DCASE18)~\cite{Serizel2018}, itself a subset of Audioset, to create our real-world evaluation data (test C). The DCASE18 data provides ten domestic environment event labels, of which we neglect all labels other than ``Speech'', but report the number of instances where non-speech labels were present. The DCASE18 dataset contains manually re-annotated samples from Audioset, where similar event labels are generally merged (e.g., ``Cat'' + ``Meow'' from Audioset $\rightarrow$ ``Cat'' in DCASE18). An important difference between DCASE18 and Audioset is that the manually annotated events in DCASE18 are comparatively noise-free, meaning that wrong (incorrect) or absent (incomplete) event labels are rarely seen, whereas Audioset contains label-noise. Our DCASE18 evaluation set encompasses 596 utterances labeled as ``Speech'': 414 utterances (69\%) contain one additional non-speech label, 114 (20\%) contain only speech, and 68 (11\%) contain two or more non-speech labels. This makes test set C the most challenging of the three evaluation sets. We summarize the differences between the evaluation datasets as follows. \begin{enumerate*}[label=\arabic*)] \item Evaluation sets A and B are annotated using an automatic HMM alignment, whereas test C is manually annotated using human labor. \item Tests A and B contain exclusively English speech, whereas test C contains an unknown number of languages. \item Test C contains sporadic speech (e.g., random shouts or greetings), whereas tests A and B only contain well-pronounced (e.g., news broadcast) sentences in English.
\end{enumerate*} \begin{table}[htbp] \centering \begin{tabular}{ll|r|r|r|r} \toprule \multicolumn{2}{c}{Datatype} & Name & Condition & Label & Duration \\ \midrule \multirow{5}{*}{\rotatebox[origin=c]{90}{Target}} & \multirow{2}{*}{Source} & Balanced ($\mathcal{A}_1$) & Real & Clip & 60 h \\ & & Unbalanced ($\mathcal{A}_2$) & Real & Clip & 5000 h \\ \cline{2-6} & & VoxCeleb1 ($\mathcal{V}_1$) & Real & - & 352 h\\ & & VoxCeleb2 ($\mathcal{V}_2$) & Real & - & 2442 h\\ & & SRE ($\mathcal{V}_3$) & Clean & - & 5213 h \\ \hline \multicolumn{2}{c|}{\multirow{3}{*}{Evaluation}} & Aurora 4 (A) & Clean & Frame & 40 min \\ & & Aurora 4 (B) & Syn & Frame & 8.7 h \\ & & DCASE18 (C) & Real & Frame & 100 min \\ \bottomrule \end{tabular} \caption{Training datasets for teachers (source) and students (target) as well as the three proposed evaluation protocols for clean, synthetic-noise and real-world scenarios. Duration represents the overall duration of all signals in the corpus.} \label{tab:datasets} \end{table} \subsection{Setup} Regarding feature extraction, our VAD experiments use $64$-dimensional log-Mel power spectrograms (LMS). Each audio clip is resampled to $22050$ Hz. Each LMS sample is extracted by a $2048$ point Fourier transform every $20$ ms with a window size of $40$ ms using a Hann window. \begin{equation} \label{eq:bce} \mathcal{L}(\hat{y}, y) = -\left(\hat{y} \log(y) + (1-\hat{y})\log(1-y)\right) \end{equation} The training criterion for all experiments between the ground truth $\hat{y}$ and prediction $y$ is the binary cross-entropy (BCE, see \Cref{eq:bce}). For teacher training, the BCE is computed on clip-level, while student training computes the BCE per frame. Linear softmax~\cite{Wang2018,dinkel2019duration} (\Cref{eq:linear_softmax}) is utilized as the temporal pooling layer ($\Gamma$), merging frame-level probabilities $y_t(e) \in \left[ 0,1 \right]$ into a single clip-level representation $y(e) \in \left[ 0,1 \right]^E$.
\begin{equation}\label{eq:linear_softmax} y(e) = \frac{\sum_{t=1}^{T} y_t(e)^2}{\sum_{t=1}^{T} y_t(e)} \end{equation} Linear softmax is only utilized during teacher training and removed during student training. \subsection{Evaluation metrics} \label{ssec:metrics} Our models are evaluated on two distinct levels: frame-level and segment-level. All binary metrics used in this work require: \begin{itemize} \item True positive (TP): Both reference and system prediction indicate speech to be present. \item False positive (FP): The system prediction indicates speech to be present, but the reference indicates non-speech. \item False negative (FN): The reference indicates speech to be present, but the system prediction indicates non-speech. \item True negative (TN): Both reference and system prediction indicate non-speech. \end{itemize} \paragraph{Frame-level} For frame-level evaluation, we utilize macro averaged (instance-independent) precision (P), recall (R), and their corresponding F1 score. Moreover, we also report the frame error rate (FER). \begin{align} \begin{split} \text{P} &= \frac{\text{TP}}{\text{TP}+\text{FP}}, \quad \text{R} = \frac{\text{TP}}{\text{TP}+\text{FN}}\\ \text{F1} &= 2\frac{\text{PR}}{\text{P}+\text{R}}\\ \text{FER} &= \frac{\text{FP} + \text{FN}}{\text{TP}+\text{FP}+\text{FN}+\text{TN}} \label{eq:eval_metrics} \end{split} \end{align} The threshold-based metrics (P, R, F1, FER) are given in \Cref{eq:eval_metrics}. Moreover, to compare different approaches with each other independently of the post-processing or thresholds used, we also report the Area Under the Curve (AUC)~\cite{ROC_AUC}. Note that the AUC is computed directly on the estimated speech probability sequence $y_t(\text{Speech}) \in [0,1]$. \paragraph{Segment-level} For segment-level evaluation we utilize the event-based F1-Score (Event-F1)~\cite{Mesaros2016,Bilen2019}. Event-F1 measures whether onset, offset, and the predicted label overlap with the ground truth, making it a measure of temporal consistency.
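As a concrete sketch, the frame-level metrics of \Cref{eq:eval_metrics} reduce to a few lines given binary reference and prediction sequences (illustrative code, not our evaluation toolkit; macro averaging over classes is omitted for brevity):

```python
import numpy as np

def frame_metrics(pred: np.ndarray, ref: np.ndarray):
    """Frame-level precision, recall, F1 and FER for binary speech labels."""
    tp = int(np.sum((pred == 1) & (ref == 1)))
    fp = int(np.sum((pred == 1) & (ref == 0)))
    fn = int(np.sum((pred == 0) & (ref == 1)))
    tn = int(np.sum((pred == 0) & (ref == 0)))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fer = (fp + fn) / (tp + fp + fn + tn)
    return precision, recall, f1, fer

ref = np.array([1, 1, 0, 0, 1, 0])    # toy reference frames
pred = np.array([1, 0, 0, 1, 1, 0])   # toy system decisions
p, r, f1, fer = frame_metrics(pred, ref)
```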
Following WSSED research~\cite{Serizel2018}, we set the t-collar value to 200 ms to allow a tolerance for onset predictions, and further permit a duration discrepancy of 20\% between the reference and prediction. \subsection{Models} \label{ssec:models} Both teacher and student models utilize the same convolutional recurrent neural network (CRNN) back-end. The architecture consists of a five-layer CNN (utilizing $3\times3$ convolutions), organized into three blocks with L4-Norm pooling after each block~\cite{dinkel2019duration,Dinkel2020a}, identical to the CDur framework from~\cite{Dinkel2021a}. A bidirectional gated recurrent unit (BGRU) is attached after the last CNN output, enhancing our models' temporal consistency. The framework and specific parameters can be seen in \Cref{fig:data_driven_vad}. The model has 679k parameters, making it comparatively lightweight, requiring only 2.7 MB on disk. \paragraph*{Teacher model} This work uses two teacher models, $\mathcal{T}_{1/2}$. $\mathcal{T}_{1}$ represents our baseline teacher approach, utilizing only the smaller $\mathcal{A}_1$ dataset with no augmentation, identical to the CRNN in~\cite{Dinkel2020a}. Further, we propose $\mathcal{T}_2$, which is trained on the large $\mathcal{A}_2$ dataset with additional augmentation as described in \Cref{ssec:augment} (SpecAug, time shifting). To provide insight into our teacher models' potential performance implications, we evaluated them on a subset ($\approx$ 36h) of the official Audioset evaluation data. Please note that these results are computed on clip-level, meaning they have little bearing on frame-level performance and should be viewed as a measure of our models' capability to estimate non-speech sound events. The results in \Cref{tab:sourcemodel_performance} show that the additional training data for $\mathcal{T}_2$ leads to better outcomes regarding mean average precision (mAP), AUC, and d-prime ($d^{'}$) compared to $\mathcal{T}_1$.
However, the performance lags behind large CNN models~\cite{Gemmeke2017,Kong2018a,Kong2019a,Xu2018}. The main reason for this performance discrepancy is that our approach aims at modeling speech, which requires a high time-resolution, ultimately leading to poor clip-level performance. The high time-resolution requirement also partially limits our network's depth and width, since our approach cannot arbitrarily diminish the time dimension. Lastly, since VAD is a pre-processing step for other tasks, fast inference is generally preferred, meaning large models should be avoided. \begin{table}[htbp] \centering \begin{tabular}{ll|rrrr} \toprule Source & Teacher & Aug & mAP & AUC & $d^{'}$ \\ \midrule $\mathcal{A}_1$ & $\mathcal{T}_1$ & \xmark & 10.9 & 88.5 & 1.698 \\ $\mathcal{A}_2$ & $\mathcal{T}_2$ & \cmark & 22.6 & 92.9 & 2.080 \\ \bottomrule \end{tabular} \caption{Teacher models and their respective performance on the Audioset evaluation data. Only $\mathcal{T}_2$ utilized augmentation during training.} \label{tab:sourcemodel_performance} \end{table} \paragraph*{Student model} The student model is structurally identical to the teacher model (see \Cref{fig:data_driven_vad}). Unlike the teacher model, the student is trained on the teacher's frame-level predictions and does not require the temporal pooling function $\Gamma$. BCE is utilized as the frame-level loss function (\Cref{eq:bce}). \subsection{Training} Teacher training mainly differs from student training in its data sampling strategy. We utilize a balanced data sampling strategy that oversamples minority sound events such that, ideally, each batch contains one sample per sound event. Note that since this is a multi-label classification problem, perfect label balance is impossible, as a minority-event sample might also contain a majority event. All student models are trained on 90\% of the available training data and cross-validated using the remaining 10\%.
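The balanced sampling strategy described above can be sketched as follows (a simplified illustration; the function and the toy label map are ours and not part of the actual pipeline):

```python
import random

def balanced_batch(labels_per_clip, batch_size, rng=None):
    """Cycle through the sound events, drawing one clip per event, so that
    minority events are oversampled. Perfect balance is impossible in the
    multi-label case: a minority-event clip may also carry a majority event."""
    rng = rng or random.Random(0)
    by_event = {}
    for clip, events in labels_per_clip.items():
        for event in events:
            by_event.setdefault(event, []).append(clip)
    events = list(by_event)
    return [rng.choice(by_event[events[i % len(events)]]) for i in range(batch_size)]

# Toy example: clip 2 carries the minority event "Dog".
clips = {0: ["Speech"], 1: ["Speech", "Music"], 2: ["Dog"]}
batch = balanced_batch(clips, batch_size=6)
```

Because the sampler cycles over events rather than clips, the rare ``Dog'' clip appears in every cycle even though it makes up only a third of the toy corpus.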
Training of $\mathcal{T}_2$ slightly differs from this train/cross-validation paradigm: here, the $\mathcal{A}_1$ dataset is utilized for cross-validation. VAD training uses Adam optimization with an initial learning rate of $0.001$, where the learning rate is reduced by a factor of $10$ if no improvement on the held-out cross-validation set has been seen for at least $5$ cross-validation steps. The batch size for all experiments was set to $64$. Cross-validation is done after each epoch or every 5000 batches. For all utilized datasets, training is run for 15 epochs. The model obtaining the lowest loss on the held-out cross-validation dataset is kept for inference/evaluation. During training, each audio clip is zero-padded to the length of the longest clip within a batch. Since our student models observe frame-level labels, we mask our loss such that padded frames do not influence the final back-propagation step. Code and pretrained models are available online\footnote{Available at github.com/richermans/datadriven-GPVAD}. \subsection{Augmentation} \label{ssec:augment} Training utilizes the following data augmentation schemes. \paragraph{SpecAug} Recently, a cheap yet effective augmentation method named SpecAugment (SpecAug) has been introduced~\cite{park2019specaugment}. SpecAug randomly sets time-frequency regions to zero within an input log-Mel spectrogram. Time modification is applied by masking $\gamma_{t}$ times $\eta_{t}$ consecutive time frames $\left[t_0, t_0 + \eta_{t}\right)$, where $\eta_{t}$ is randomly chosen from a uniform distribution in the range of $\left[0, \eta_{t0}\right]$ and $t_{0}$ is uniformly chosen from the range $\left[0, T-\eta_{t} \right)$.
Frequency modification is applied by masking $\gamma_f$ times $\eta_{f}$ consecutive frequency bins $\left[f_0, f_0 + \eta_{f}\right)$, where $\eta_{f}$ is randomly chosen from a uniform distribution in the range of $\left[0, \eta_{f0}\right]$ and $f_0$ is uniformly chosen from the range $\left[0, D-\eta_{f}\right)$. When using SpecAug, we set $\gamma_t = 2, \eta_{t0} = 60, \gamma_{f} = 2, \eta_{f0} = 8$. Note that SpecAug is utilized during teacher ($\mathcal{T}_2$) as well as during any student training. \paragraph{Time Shifting} Time shifting is utilized only during teacher training, since it is not applicable to the frame-level labels used in student training. As only clip-level labels are present during teacher training, time shifting encourages the model to learn time-coherent predictions. For each audio clip, we draw $\eta_{sh}$ from a normal distribution $\mathcal{N}(0, 10)$ and randomly shift the audio clip forward or backward by $\eta_{sh}$ frames. \subsection{Post-processing} \label{ssec:post-processing} During evaluation, post-processing is required to obtain hard labels from the class-wise probability sequences ($y_t(e)$). We use double-threshold~\cite{dinkel2019duration,Kong2018b} post-processing with the two thresholds $\phi_{\text{low}}=0.1, \phi_{\text{hi}}=0.5$. Note that double thresholding aims to enhance temporal consistency and is therefore beneficial in terms of Event-F1. \section{Results And Analysis} \label{sec:vad_results} In this section, we provide our experimental results and insight into the possible limits of our method. Please note that we consider FER, Event-F1, and AUC as our primary metrics, whereas P, R, and F1 are considered secondary metrics. \subsection{Baseline} \label{ssec:base} We first introduce our baseline approaches and compare our clip-level trained teachers to a frame-level trained VAD-C (CRNN) model from~\cite{Dinkel2020a}.
The VAD-C model back-end is identical to our CRNN framework; only the training data (Aurora 4) and supervision (frame-level) differ, and artificial noise is added during training. Therefore, our VAD-C baseline is an example of a traditional supervised VAD approach with clean training data. \begin{table} \centering \begin{tabular}{ll||rrrrrr} \toprule Test & Model & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{3}{*}{A} & VAD-C & \textbf{97.97} & \textbf{95.32} & \textbf{96.55} & \textbf{99.78} & \textbf{2.57} & \textbf{78.90} \\ & $\mathcal{T}_1$ & 95.69 & 95.47 & {95.58} & {99.07} & {4.01} & {73.70} \\ & $\mathcal{T}_2$ & 96.13 & 94.97 & {95.52} & {97.75} & {3.38} & {70.10} \\ \hline \multirow{3}{*}{B} & VAD-C & \textbf{91.37} & {82.82} & {85.96} & \textbf{97.07} & \textbf{9.71} & \textbf{47.50} \\ & $\mathcal{T}_1$ & 80.34 & 87.58 & {81.99} & {94.63} & {15.74} & {35.40} \\ & $\mathcal{T}_2$ & 85.28 & \textbf{90.25} & \textbf{87.17} & {95.22} & {10.58} & {42.50} \\ \hline \multirow{3}{*}{C} & VAD-C & 78.17 & 79.08 & 77.93 & 87.87 & 21.92 & {34.40} \\ & $\mathcal{T}_1$ & 85.36 & 82.70 & {83.50} & {91.80} & {15.47} & {44.80} \\ & $\mathcal{T}_2$ & \textbf{88.15} & \textbf{86.21} & \textbf{86.89} & \textbf{94.58} & \textbf{12.74} & \textbf{46.30} \\ \bottomrule \end{tabular} \caption{A baseline comparison between the traditional supervised (CRNN) VAD-C approach, trained on Aurora 4 in a frame-supervised manner, and our proposed teacher models.} \label{tab:baseline_results} \end{table} The results can be seen in \Cref{tab:baseline_results}. Unsurprisingly, VAD-C outperforms our proposed clip-level trained teachers on the clean (A) and synthetic (B) test sets. However, the performance difference between our teachers and VAD-C is acceptable, given that our approach has no strong frame-level supervision. Leveraging large data ($\mathcal{A}_2$) for $\mathcal{T}_2$ yields promising performance relative to $\mathcal{T}_1$ when noise is present.
In real-world scenarios (test C), both $\mathcal{T}_1$ and $\mathcal{T}_2$ significantly outperform the standard VAD model on all shown metrics. Specifically, we observe a significant drop in FER (21.92 $\rightarrow$ 12.74) and an increase in AUC (87.87 $\rightarrow$ 94.58). Further, note that performance is less affected by noise (compare results B and C), indicating noise robustness for both our teacher models. \subsection{Difference in label types} \label{ssec:label_type_train} Naturally, since the teacher model outputs probabilities (\textit{soft labels}), an interesting question is whether \textit{hard labels} (zero-one) are helpful during training. We believe that soft- and hard-label approaches mutually benefit each other: hard labels are likely beneficial for detecting onset and offset boundaries, whereas soft labels can more effectively provide duration estimates, since speech-to-non-speech transitions are smooth. We conduct two experiments, one for each teacher model $\mathcal{T}_{1/2}$. Three types of labels are utilized: \begin{itemize} \item Soft labels, i.e., probabilities, $\hat{y}_t^{\mathcal{S}} \in [0, 1]$ (soft). \item Hard labels obtained by thresholding all soft labels with $\phi=0.5$, $\hat{y}^{\mathcal{S}}_t \in \{0, 1\}$ (hard). \item Randomly (hard) thresholding at most $25\%$ of the speech samples within an audio clip using $\phi=0.5$ (dynamic). During our model selection phase, we investigated the ratios $10\%, 25\%, 50\%$ and concluded that $25\%$ works best.
\end{itemize} \begin{table} \begin{tabular}{p{0.23cm}l||rrrrrr} \toprule Test & Label & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{4}{*}{A} & clip ($\mathcal{T}_1$) & 95.69 & 95.47 & {95.58} & {99.07} & {4.01} & {73.70}\\ \cline{2-8} & hard & 94.18 & \textbf{96.31} & 95.17 & \textbf{99.31} & 3.79 & \textbf{76.34} \\ & soft & \textbf{96.07} & 95.03 & \textbf{95.53} & 98.93 & \textbf{3.38} & 65.22 \\ & dynamic & 94.06 & 94.98 & 94.50 & 98.56 & 4.25 & 61.19 \\ \hline \multirow{4}{*}{B} & clip ($\mathcal{T}_1$) & 80.34 & 87.58 & {81.99} & {94.63} & {15.74} & {35.40}\\ \cline{2-8} & hard & 79.84 & 87.81 & 81.07 & 96.10 & 16.91 & 36.20 \\ & soft & \textbf{86.24} & \textbf{90.80} & \textbf{88.04} & \textbf{96.70} & \textbf{9.78} & \textbf{48.79} \\ & dynamic & 81.95 & 88.75 & 83.85 & 95.67 & 13.88 & 34.34 \\ \hline \multirow{4}{*}{C} & clip ($\mathcal{T}_1$) & 85.36 & 82.70 & {83.50} & {91.80} & {15.47} & {44.80} \\ \cline{2-8} & hard & \textbf{85.39} & 80.97 & 82.00 & \textbf{93.12} & 16.55 & 48.75 \\ & soft & 84.96 & \textbf{83.86} & \textbf{84.28} & 92.70 & \textbf{15.01} & \textbf{51.29} \\ & dynamic & 83.70 & 81.86 & 82.47 & 90.63 & 16.58 & 44.84 \\ \bottomrule \end{tabular} \caption{Results using teacher $\mathcal{T}_1$ and student models trained on $\mathcal{A}_1$ using different label types. We compare the teacher $\mathcal{T}_1$ baseline using clip-level training to the student models. Best results are highlighted in bold.} \label{tab:soft_vs_hard_frame_level_t1} \end{table} Our initial results using the baseline teacher $\mathcal{T}_1$ and a student model trained on $\mathcal{A}_1$ can be seen in \Cref{tab:soft_vs_hard_frame_level_t1}. Here, we compare the students trained on frame-level using the three proposed label types against the clip-level teacher $\mathcal{T}_1$. First and foremost, our proposed teacher-student approach improves performance in all test scenarios (A, B, C) over the teacher $\mathcal{T}_1$ (clip).
For example, the AUC from $\mathcal{T}_1$ increases on test A (99.07 $\rightarrow$ 99.31), B (94.63 $\rightarrow$ 96.70) and C (91.80 $\rightarrow$ 93.12) when using teacher-student training. Second, our results indicate that hard-label training is preferable in clean data scenarios for obtaining consistent temporal predictions. Here, the hard-label approach improves the Event-F1 score on test set A from 73.70 to 76.34. This observation is in line with our baseline VAD-C method, which is also trained on hard labels, as is common for traditional VAD approaches. \begin{table} \begin{tabular}{p{0.23cm}l|rrrrrr} \toprule Test & Label & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{4}{*}{A} & clip ($\mathcal{T}_2$) & 96.13 & \textbf{94.97} & \textbf{95.52} & {97.75} & {3.38} & \textbf{70.10}\\ \cline{2-8} & hard & 96.70 & 93.57 & 95.00 & \textbf{98.65} & 3.71 & 66.39 \\ & soft & \textbf{97.23} & {94.06} & {95.51} & 98.26 & \textbf{3.33} & {69.19} \\ & dynamic & 97.05 & 93.60 & 95.16 & 98.50 & 3.57 & 66.25 \\ \hline \multirow{4}{*}{B} & clip ($\mathcal{T}_2$) & 85.28 & 90.25 & 87.17 & {95.22} & {10.58} & {42.50} \\ \cline{2-8} & hard & 88.42 & 90.09 & 89.20 & 96.86 & 8.45 & 52.84 \\ & soft & \textbf{90.60} & 90.81 & 90.70 & 97.23 & 7.14 & \textbf{57.78} \\ & dynamic & 90.44 & \textbf{91.32} & \textbf{90.87} & \textbf{97.37} & \textbf{7.08} & 55.29\\ \hline \multirow{4}{*}{C} & clip ($\mathcal{T}_2$) & {88.15} & {86.21} & {86.89} & {94.58} & {12.74} & {46.30}\\ \cline{2-8} & hard & 87.55 & 85.63 & 86.30 & 94.99 & 12.98 & \textbf{55.35} \\ & soft & 86.78 & 85.46 & 85.96 & \textbf{95.14} & 13.38 & 54.72 \\ & dynamic & \textbf{88.19} & \textbf{86.79} & \textbf{87.33} & 95.02 & \textbf{12.08} & 54.91 \\ \bottomrule \end{tabular} \caption{Results using teacher $\mathcal{T}_2$ and student model trained on $\mathcal{A}_1$ using different label types. We compare the teacher $\mathcal{T}_2$ baseline using clip-level training to the student models.
Best result per test in bold.} \label{tab:soft_vs_har_frame_level_t2} \end{table} Further, we provide our results using teacher $\mathcal{T}_2$ in \Cref{tab:soft_vs_har_frame_level_t2}; the student models are again trained on $\mathcal{A}_1$. The performance increase from using the more potent teacher $\mathcal{T}_2$ is evident in the noisy test cases (B, C). All frame-level and segment-level metrics improve significantly compared to the teacher model, e.g., on test B, FER $10.58 \rightarrow 7.08$ and Event-F1 $42.50 \rightarrow 57.78$. Moreover, while both teachers perform worse than our baseline VAD-C approach on test B, the students of teacher $\mathcal{T}_2$ now outperform VAD-C in both noisy test conditions (B and C) regarding AUC, FER, and Event-F1. In contrast to the previous observations in \Cref{tab:soft_vs_hard_frame_level_t1}, our dynamic labeling method is consistently superior to the soft- and hard-label approaches on tests B and C with regard to FER and F1. Lastly, our performance gap between the synthetic trial B and the real noise trial C is significantly smaller than that of the VAD-C baseline. Notably, in all tested conditions, our AUC is higher than $95$, indicating our approach's noise robustness. Based on the results in \Cref{tab:soft_vs_har_frame_level_t2}, all further experiments use teacher $\mathcal{T}_2$ and the dynamic label scheme by default (for further information, see \Cref{sec:soft_vs_dynamic_appendix}). \subsection{Teacher-student VAD using unlabeled out-of-domain data} \label{ssec:mixed_data_vad} One of our approach's significant advantages is that it can potentially scale to other, out-of-domain datasets. Since the teachers are trained on real-world data, they can provide frame-level supervision on any dataset without being constrained to a specific data type (clean, real) or other conditions such as language.
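Our reading of the dynamic labeling scheme can be sketched as follows (illustrative code; the exact frame-selection rule used in our experiments may differ in detail):

```python
import numpy as np

def dynamic_labels(soft, frac=0.25, phi=0.5, rng=None):
    """Harden at most `frac` of a clip's frames at threshold `phi`;
    the remaining frames keep the teacher's soft posterior."""
    rng = rng or np.random.default_rng(0)
    out = np.asarray(soft, dtype=float).copy()
    n_hard = int(frac * len(out))
    idx = rng.choice(len(out), size=n_hard, replace=False)
    out[idx] = (out[idx] >= phi).astype(float)
    return out

teacher_soft = np.array([0.1, 0.7, 0.9, 0.4, 0.6, 0.2, 0.8, 0.3])
targets = dynamic_labels(teacher_soft)   # frame-level targets for the student
```

The student then minimizes the frame-level BCE against these partly hardened, partly soft targets.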
So far, our work utilized Audioset ($\mathcal{A}_1$) to achieve substantial improvements in noisy environments, but slightly lags behind on the clean test A. This performance gap could stem from the large number of non-speech events within Audioset. We hypothesize that adding data that mainly contains speech (e.g., $\mathcal{V}_1$) might be beneficial. Thus, this experiment mainly focuses on comparing $\mathcal{V}_1$ and $\mathcal{A}_1$ as target datasets. \begin{table} \begin{tabular}{p{0.75cm}l|rrrrrr} \toprule Target & Test & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{3}{*}{$\mathcal{A}_1$} & A & 97.05 & 93.60 & 95.16 & {98.50} & 3.57 & 66.25 \\ & B & 90.44 & 91.32 & 90.87 & 97.37 & 7.08 & 55.29 \\ & C & 88.19 & 86.79 & 87.33 & 95.02 & 12.08 & 54.91\\ \hline \multirow{3}{*}{$\mathcal{V}_1$} & A & 96.64 & 94.36 & 95.43 & 98.07 & 3.42 & 70.62 \\ & B & 91.77 & 90.20 & 90.94 & 96.56 & 6.81 & 55.60 \\ & C & 87.28 & 86.94 & 87.10 & 94.41 & 12.45 & 53.47 \\ \hline \multirow{3}{*}{$\mathcal{V}_1+\mathcal{A}_1$} & A & 96.72 & {94.74} & {95.67} & 98.45 & {3.24} & 69.43 \\ & B & 90.49 & 91.36 & 90.91 & 97.24 & 7.04 & 54.16 \\ & C & 85.98 & 85.41 & 85.66 & 94.41 & 13.79 & 54.57 \\ \bottomrule \end{tabular} \caption{Student training using the largely speech-only dataset $\mathcal{V}_1$ in conjunction with the noisy $\mathcal{A}_1$. Teacher $\mathcal{T}_2$ is utilized to predict dynamic labels on each respective dataset.} \label{tab:mixed_data_results} \end{table} Our results in \Cref{tab:mixed_data_results} show that our approach can achieve competitive performance even when trained on other datasets (here $\mathcal{V}_1$). Performance on the clean (A) test set improves over the $\mathcal{T}_2$ baseline. We believe that the performance improvement on evaluation set (B) stems from the possible language match (English) when training on $\mathcal{V}_{1}$.
Interestingly, when training on largely speech-only datasets ($\mathcal{V}_{1}$), performance in noisy test scenarios does not drop compared to training on real-world datasets ($\mathcal{A}_{1}$, see \Cref{tab:soft_vs_har_frame_level_t2}). We assume that this is due to the teacher's noise-robust soft labels, indicating a knowledge transfer from teacher to student. Combining real-world and clean data ($\mathcal{V}_1 + \mathcal{A}_1$) seems to perform worse on test set C than training on either dataset individually. \subsection{Scaling with large data} As already seen in \Cref{ssec:base} and \Cref{ssec:label_type_train}, $\mathcal{T}_2$ substantially outperforms $\mathcal{T}_1$, likely due to the much larger amount of teacher training data. We further investigate the implications of utilizing large target data ($\mathcal{A}_2,\mathcal{V}_2$, $\mathcal{V}_3$) for student training. \begin{table} \begin{tabular}{p{0.75cm}l|rrrrrr} \toprule Target & Test & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{3}{*}{$\mathcal{A}_2$}& A & 96.96 & 94.28 & 95.52 & 98.13 & 3.34 & 69.10 \\ & B & 89.65 & 90.96 & 90.27 & 96.82 & 7.58 & 54.82\\ & C & 87.71 & 87.66 & 87.68 & 94.89 & 11.92 & 54.09\\ \hline \multirow{3}{*}{$\mathcal{V}_2$} & A & 96.80 & 94.34 & 95.48 & 98.26 & 3.37 & 70.95 \\ & B & 91.71 & 89.79 & 90.69 & 96.65 & 6.97 & 55.43 \\ & C & 86.32 & 86.92 & 86.56 & 94.20 & 13.15 & 53.16 \\ \hline \multirow{3}{*}{$\mathcal{V}_2+\mathcal{A}_2$} & A & 96.53 & 94.77 & 95.61 & 98.31 & 3.30 & 73.00 \\ & B & 89.62 & 91.47 & 90.48 & 97.19 & 7.47 & 53.96 \\ & C & 87.96 & 87.27 & 87.56 & 94.98 & 11.94 & 55.26 \\ \hline \multirow{3}{*}{$\mathcal{V}_3$} & A & 96.84 & 95.10 & 95.93 & 98.66 & 3.06 & 74.80 \\ & B & 90.44 & 92.87 & 91.54 & 97.63 & 6.68 & 54.45 \\ & C & \textbf{89.20} & \textbf{88.36} & \textbf{88.72} & \textbf{95.20} & \textbf{10.82} & \textbf{57.85} \\ \bottomrule \end{tabular} \caption{Large data training on labels generated from
teacher $\mathcal{T}_2$ for each respective dataset using the dynamic labeling scheme. Our best (most noise-robust across noisy evaluation scenarios) model is highlighted in bold.} \label{tab:scaling_with_data} \end{table} Our results in \Cref{tab:scaling_with_data} demonstrate that our teacher-student approach scales with data and can be extended to any dataset, although some differences are notable between the target datasets. The best-performing model we observed is trained on the $\mathcal{V}_3$ dataset, achieving the lowest FER (3.06) as well as the highest Event-F1 (74.80) of all proposed teacher-student approaches on the clean test A scenario. More importantly, on both noisy tests (B, C), this model outperforms the models trained on $\mathcal{V}_{1/2}$ as well as $\mathcal{A}_2$. Most importantly, this model achieves the highest performance on our difficult trial C. Compared to our strong teacher $\mathcal{T}_2$ baseline, we observe an absolute decrease in FER by 3.9\%, an increase in AUC by 2.41, and an increase in Event-F1 by 11.95\% on test B. While the improvement on test C is smaller than on test B (likely due to its higher difficulty), the model still decreases FER by 1.92\%, increases AUC by 0.62, and increases Event-F1 by 11.55\%. All improvements are reported in absolute terms. Lastly, we also provide the receiver operating characteristic (ROC) curves for the results in \Cref{tab:scaling_with_data} in \Cref{fig:roc_plots_teacher_vs_student}. Specifically, the ROC curves for the teacher $\mathcal{T}_2$ and its students are displayed. We limit our visualization to tests B and C, since the performance on those tests differs the most.
\begin{figure} \centering \subfloat[Aurora 4 Noisy (B)]{% \includegraphics[width=0.9\linewidth]{figs/roc_plot_aurora_noisy.pdf}} \\ \subfloat[DCASE18 (C)]{% \includegraphics[width=0.9\linewidth]{figs/roc_plot_dcase18.pdf}} \caption{Receiver operating characteristic (ROC) curves for the Aurora 4 Noisy (B) and DCASE18 (C) evaluation sets. The teacher $\mathcal{T}_2$ is compared to its students $\mathcal{V}_2,\mathcal{A}_2,\mathcal{V}_2 + \mathcal{A}_2,\mathcal{V}_3$. Best viewed in color.} \label{fig:roc_plots_teacher_vs_student} \end{figure} \subsection{Data size vs. target data characteristics} Another essential question worth investigating is whether the previous results of the $\mathcal{V}_3$ model stem exclusively from the increased data size compared to $\mathcal{V}_2$, or from the characteristics of the $\mathcal{V}_3$ dataset. For this reason, we subsampled the previously used $\mathcal{V}_3$ dataset to be of equal size to the $\mathcal{V}_2$ dataset, i.e., around 2400 hours. Then, we trained a new student ($\mathcal{V}_3$ (2.4k)) on this subset; the results can be observed in \Cref{tab:v3_vs_subsampled_v3}. From the results, it can be noted that: \begin{enumerate*} \item The new $\mathcal{V}_3$ (2.4k) model performs well against the other approaches on the clean test A and obtains the highest Event-F1, AUC, and recall results. \item Both $\mathcal{V}_3$ and $\mathcal{V}_3$ (2.4k) outperform $\mathcal{V}_2$ on test C.
\end{enumerate*} \begin{table} \centering \begin{tabular}{p{0.85cm}l|rrrrrr} \toprule Target & Test & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{3}{*}{$\mathcal{V}_2$} & A & 96.80 & 94.34 & 95.48 & 98.26 & 3.37 & 70.95 \\ & B & 91.71 & 89.79 & 90.69 & 96.65 & 6.97 & 55.43 \\ & C & 86.32 & 86.92 & 86.56 & 94.20 & 13.15 & 53.16 \\ \hline \multirow{3}{*}{$\mathcal{V}_3$} & A & 96.84 & 95.10 & 95.93 & 98.66 & 3.06 & 74.80 \\ & B & 90.44 & 92.87 & 91.54 & 97.63 & 6.68 & 54.45 \\ & C & \textbf{89.20} & \textbf{88.36} & \textbf{88.72} & \textbf{95.20} & \textbf{10.82} & \textbf{57.85} \\ \hline \multirow{3}{*}{$\mathcal{V}_3$ (2.4k)} & A & 96.46 & 95.46 & 95.46 & 98.94 & 3.07 & 77.10 \\ & B & 88.79 & 93.06 & 90.55 & 97.54 & 7.66 & 46.90 \\ & C & {89.11} & {87.18} & {87.87} & {94.88} & 11.49 & {55.40} \\ \bottomrule \end{tabular} \caption{Comparison of the 2400 hour long $\mathcal{V}_2$ data against the subsampled $\mathcal{V}_3$ (2.4k). Best results for the hard C trial are highlighted in bold.} \label{tab:v3_vs_subsampled_v3} \end{table} We conclude from these results that the size of the dataset used for student training is less important than its characteristics. Comparing the results using medium-sized datasets in \Cref{tab:mixed_data_results} to the large-scale ones in \Cref{tab:v3_vs_subsampled_v3} leads to the conclusion that, while larger datasets possibly contain more content-rich data, the performance benefits are marginal. Instead, our approach seems to be well suited for cross-domain adaptation. \subsection{Performance under different SNRs} \label{ssec:snr_performance} Here we further analyze the performance of our approach under synthetic noise scenarios. A new noise-controlled test set is generated by mixing the clean audio of test set A with noise from Musan~\cite{musan2015} at SNRs ranging from 20 to -5 dB (in steps of 5 dB). Musan contains three categories of noise: speech, music, and background noise.
For each sample in test set A, we independently add speech, music, and background noise, resulting in a test set three times the size of A for each SNR value. As the results in \Cref{tab:snr_analysis} indicate, our proposed approach is robust to noise, providing adequate performance (FER 12.30, Event-F1 36.75) even at SNR = 0 dB. However, our model's performance degrades noticeably even under light noise conditions (SNR = 20 dB). We hypothesize that this degradation stems from the additive speech noise, which inevitably leads to false activations of our VAD. We provide insight into our approach's limits by visualizing our model's probabilities for a comparatively hard sample, in which speech occurs six times within a span of 12 s. The individual samples with added music (\Cref{fig:best_model_snr_plot_music}), speech (\Cref{fig:best_model_snr_plot_speech}), and background noise (\Cref{fig:best_model_snr_plot_background_noise}) can be observed in the respective figures. \begin{table} \centering \begin{tabular}{l|rrrrrr} \toprule SNR & P & R & F1 & AUC & FER & Event-F1 \\ \midrule -5 & 76.50 & 78.83 & 77.48 & 81.63 & 18.04 & 28.10 \\ 0 & 84.54 & 82.57 & 83.47 & 84.17 & 12.30 & 36.75 \\ 5 & 84.36 & 86.55 & 85.90 & 85.90 & 9.64 & 45.21 \\ 10 & 91.91 & 85.14 & 87.82 & 87.27 & 8.61 & 51.58 \\ 15 & 92.93 & 85.67 & 88.52 & 88.63 & 8.09 & 55.94 \\ 20 & 93.49 & 86.16 & 89.04 & 90.37 & 7.72 & 56.65 \\ Clean & 96.84 & 95.10 & 95.93 & 98.66 & 3.06 & 74.80 \\ \bottomrule \end{tabular} \caption{Our best model ($\mathcal{V}_3$) evaluated on the Aurora 4 corpus with additive noise (music, speech, background) from Musan.} \label{tab:snr_analysis} \end{table} \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/snr_samples_music.pdf} \caption{Our best model ($\mathcal{V}_3$) predicting speech under different SNRs ranging from -5 (A) to 20 (F) dB in steps of 5 dB. Each plot title is a respective sample name from the Aurora4 dataset.
Each graph represents a log-Mel spectrogram (top), ground truth (center) and probability output (bottom). Noise is exclusively music. } \label{fig:best_model_snr_plot_music} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/snr_samples_speech.pdf} \caption{Our best model ($\mathcal{V}_3$) predicting speech under different SNRs ranging from -5 (A) to 20 (F) dB in steps of 5 dB. Each plot title is a respective sample name from the Aurora4 dataset. Each graph represents a log-Mel spectrogram (top), ground truth (center) and probability output (bottom). Noise is exclusively speech.} \label{fig:best_model_snr_plot_speech} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/snr_samples_freesound.pdf} \caption{Our best model ($\mathcal{V}_3$) predicting speech under different SNRs ranging from -5 (A) to 20 (F) dB in steps of 5 dB. Each plot title is a respective sample name from the Aurora4 dataset. Each graph represents a log-Mel spectrogram (top), ground truth (center) and probability output (bottom). Noise is exclusively background noises.} \label{fig:best_model_snr_plot_background_noise} \end{figure*} First, as can be seen in \Cref{fig:best_model_snr_plot_music}, our approach excels in noisy background scenarios such as music. For SNR values \textgreater~5 dB, the model is capable of effectively predicting speech boundaries and the presence of speech. Notably, at high SNR values the AUC of our approach reaches up to 95\%, and it decreases as the SNR decreases. However, for the hard SNR = -5 dB case, our approach, though capable of sensing speech, only outputs probabilities below 50\%, meaning that a change in post-processing (e.g., a lower threshold) would be useful. Second, when faced with additional speech in \Cref{fig:best_model_snr_plot_speech}, our approach's potential drawbacks are observed.
Our model now consistently outputs high confidence whenever speech is present. The prediction patterns produced appear to be very similar regardless of SNR. Even at SNR = 20 dB, speech is predicted with high confidence throughout the entire utterance, indicating our model's high sensitivity towards speech. This high sensitivity attests that our method is fully capable of detecting the presence of any speech; however, it currently cannot distinguish between, e.g., multiple speakers or different sound sources. Since our work can be easily extended with speaker-dependent VAD approaches, future work can focus on utilizing methods such as~\cite{Ding2020}. Third, when confronted with common background noises in \Cref{fig:best_model_snr_plot_background_noise}, our approach shows little to no influence even under heavy noise (SNR = -5 dB) scenarios, indicated by high probability values. For all samples, it can also be seen that our model excels at estimating short, spontaneous bursts of speech, with accurate onset and offset prediction capabilities. \subsection{Comparison with other approaches} To prove our approach's effectiveness and the difficulty of VAD in real-world scenarios, we compare our results with previous successful frameworks. Note that we use the default configuration of each proposed method; thus, input feature types (e.g., MFCC) and hyper-parameters (e.g., frameshift) for all other approaches differ from ours. Further, all other approaches were not retrained on our Aurora4 dataset and were taken as-is from their respective public repositories. First, we compare our method to the naive energy-thresholding method used in the Kaldi~\cite{povey2011kaldi} toolkit. Second, we utilize rVAD~\cite{Tan2020} (the rVAD-fast implementation), an unsupervised VAD approach, which has been seen to perform well in the presence of substantial noise. Third, we also compare to traditional supervised VAD approaches using deep neural networks (DNN) from~\cite{Segbroeck2013}.
Lastly, we compare against a more modern attention-based approach (ACAM)~\cite{Kim2018}. Note that our goal in this comparison is to show that previous approaches trained on their respective datasets cannot generalize to unseen noise types. However, back-end models such as ACAM could be used in the future in conjunction with our proposed GPVAD approach to enhance performance further. Additionally, since all other competitors' outputs are hard labels $y_t \in \{0,1\}$, we refrain from calculating the AUC score, denoted as ``--''. \begin{table} \centering \begin{tabular}{p{0.23cm}l||rrrrrr} \toprule Test & Model & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{6}{*}{A} & VAD-C & \textbf{97.97} & \textbf{95.32} & \textbf{96.55} & \textbf{99.78} & \textbf{2.57} & \textbf{78.90} \\ & Kaldi & 90.14 & 94.42 & 91.93 & - & 6.48 & 2.30 \\ & rVAD & 95.75 & 95.27 & 95.50 & - & 3.40 & 76.10 \\ & DNN & 87.24 & 93.75 & 89.52 & - & 8.74 & 27.10 \\ & ACAM & 96.38 & 89.96 & 92.61 & - & 5.26 & 55.20 \\ & Ours & 96.84 & 95.10 & 95.93 & 98.66 & 3.06 & 74.80 \\ \hline \multirow{6}{*}{B} & VAD-C & \textbf{91.37} & {82.82} & {85.96} & 97.07 & 9.71 & 47.50 \\ & Kaldi & 76.40 & 54.29 & 51.56 & - & 23.82 & 1.60 \\ & rVAD & 89.31 & 77.77 & {81.41} & - & 12.23 & 36.60 \\ & DNN & 79.69 & 87.36 & 81.21 & - & 16.50 & 11.40 \\ & ACAM & 89.25 & 84.50 & 86.50 & - & 9.71 & 35.80 \\ & Ours & 90.44 & \textbf{92.87} & \textbf{91.54} & \textbf{97.63} & \textbf{6.68} & \textbf{54.45} \\ \hline \multirow{6}{*}{C} & VAD-C & 78.17 & 79.08 & 77.93 & 87.87 & 21.92 & {34.40} \\ & Kaldi & 66.86 & 52.79 & 35.88 & - & 55.30 & 9.20 \\ & rVAD & 74.73 & 73.87 & 70.88 & - & 29.07 & 39.80 \\ & DNN & 72.38 & 72.89 & 71.35 & - & 28.59 & 24.00\\ & ACAM & 73.19 & 70.96 & 66.99 & - & 32.75 & 12.30 \\ & Ours & \textbf{89.20} & \textbf{88.36} & \textbf{88.72} & \textbf{95.20} & \textbf{10.82} & \textbf{57.85} \\ \bottomrule \end{tabular} \caption{Comparison between traditional energy (Kaldi), unsupervised (rVAD),
supervised (DNN, ACAM), and our baseline (VAD-C) approaches with our student model trained on $\mathcal{V}_3$ (Ours). The best result per test is highlighted in bold.} \label{tab:compare_other_approaches} \end{table} The results in \Cref{tab:compare_other_approaches} show that our chosen VAD-C baseline is indeed more potent than the other approaches on clean data. Standard Kaldi energy-thresholding offers a comparatively well-rounded performance on test A in terms of FER (6.48) and F1 (91.93) while profoundly lacking temporal consistency (Event-F1 2.30). However, when noise increases (B, C), the naive Kaldi approach degenerates to random-guessing levels (FER 55.30, F1 35.88). Further, we observe that rVAD performs well in clean and synthetic noise scenarios, as seen in its original work~\cite{Tan2020}. However, when faced with real-world, unconstrained evaluation, its performance decreases significantly on test C. Our proposed method shows signs of noise robustness between trials B and C, obtaining a lower FER and higher F1 in test C than rVAD does in test B. Traditional supervised VAD models using only a shallow two-layer DNN structure from~\cite{Segbroeck2013} are unable to perform well even against the unsupervised rVAD approach. More modern attention-based approaches from~\cite{Kim2018} are seen to perform better than the traditional shallow DNN model. However, both supervised approaches perform consistently worse than our VAD-C baseline, suggesting that our model architecture (CRNN) is indeed suited for supervised VAD. In this comparison, our method performs best in noisy scenarios. More importantly, its performance across multiple test scenarios is also the most stable (e.g., FER increases from 3.06 in test A to 10.82 in test C, and AUC drops from 98.66 to 95.20).
\begin{figure} \centering \subfloat{% \includegraphics[width=0.97\linewidth]{figs/samples_ours_vs_competitive_seed1.pdf}} \\ \subfloat{% \includegraphics[width=0.97\linewidth]{figs/samples_ours_vs_competitive_seed5.pdf}} \\ \subfloat{% \includegraphics[width=0.97\linewidth]{figs/samples_ours_vs_competitive_seed3.pdf}}\\ \subfloat{% \includegraphics[width=0.97\linewidth]{figs/samples_ours_vs_competitive_seed4.pdf}} \caption{Eight sample predictions of our best student model ($\mathcal{V}_3$) using default post-processing against previous methods on test C. For each graph: (Top) LMS. (Center) Ground truth label. (Bottom) speech presence predictions in color for each respective model. Each plot title is a respective sample from the DCASE18 dataset (formatted as Y[Youtubeid\_start\_end]). Viewers are encouraged to visit each respective YouTube link for a better experience.} \label{fig:sample_comparison_1} \end{figure} We also visualize sample predictions of all models on trial C in \Cref{fig:sample_comparison_1}. Note that since the test C labels are human-annotated, incorrect labeling can occur (e.g., short pauses are not considered). Compared to other approaches, the visualizations demonstrate our model's superiority in terms of FER, since it is rarely seen to mispredict speech activity. Onsets (start of speech) and offsets (end of speech) are well estimated, even though our approach never had access to strong supervision (unlike the other supervised models) and thus had to learn duration estimation by itself. However, our model is also seen to occasionally miss speech activity, which leads us to investigate its sensitivity. \subsection{Sensitivity} Here we study the impact of post-processing on our model's sensitivity. By default, we used double thresholding (see \Cref{ssec:post-processing}), which can be seen as a conservative post-processing method.
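Double thresholding can be sketched as a hysteresis scheme: frames whose speech posterior exceeds a high threshold seed speech segments, which are then extended in both directions while the posterior stays above a low threshold. The exact implementation details below are our own assumption; the thresholds follow the defaults $\phi_{low}=0.1$, $\phi_{hi}=0.5$ used in this work.

```python
import numpy as np

def double_threshold(probs, phi_low=0.1, phi_hi=0.5):
    """Hysteresis-style double thresholding (illustrative sketch):
    frames above phi_hi seed speech segments, which are extended in both
    directions while the probability stays above phi_low."""
    hard = probs > phi_hi   # confident speech frames
    cand = probs > phi_low  # frames eligible for extension
    out = np.zeros_like(probs, dtype=bool)
    t, n = 0, len(probs)
    while t < n:
        if hard[t]:
            # Grow left and right within the candidate region.
            l = t
            while l > 0 and cand[l - 1]:
                l -= 1
            r = t
            while r + 1 < n and cand[r + 1]:
                r += 1
            out[l:r + 1] = True
            t = r + 1
        else:
            t += 1
    return out

probs = np.array([0.05, 0.2, 0.6, 0.7, 0.3, 0.05, 0.4, 0.2])
print(double_threshold(probs).astype(int))  # → [0 1 1 1 1 0 0 0]
```

Note that the isolated frame with probability 0.4 is suppressed because it never crosses $\phi_{hi}$, which is what makes the scheme conservative.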
For this experiment, we remove double thresholding as the post-processing method and replace it with traditional thresholding $y_t^{\mathcal{S}}(\text{Speech}) > \phi$, where we investigate $\phi \in \{ 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 0.7 \}$. Here, the false alarm rate ($P_{fa}$) is the percentage of ``non-Speech'' frames misclassified as ``Speech'', and the miss rate ($P_{miss}$) is the percentage of ``Speech'' frames misclassified as ``non-Speech''. We compare our findings with the other methods from \Cref{tab:compare_other_approaches}, utilizing their respective default configurations. The results can be seen in \Cref{tab:sensitivity}, reflecting our previous findings regarding our method's noise robustness. Even if a low threshold of $0.01$ is used, reducing $P_{miss}$ to as low as $3.36$\%, the false alarm rate $P_{fa}$ still outperforms that of all other comparable approaches. Also, note that for all investigated thresholds, our highest reported FER (20.58\%) remains lower than that of the other approaches (see \Cref{tab:sensitivity}). \begin{table}[htbp] \centering \begin{tabular}{ll|rrr} \toprule Method & Threshold ($\phi$) & $P_{fa}$\% & $P_{miss}$\% & FER\% \\ \midrule \multirow{8}{*}{Ours} & default & 7.02 & 16.26 & 10.82 \\ & 0.01 & 32.64 & 3.36 & 20.58 \\ & 0.02 & 23.07 & 5.13 & 15.68 \\ & 0.05 & 13.97 & 8.71 & 11.80 \\ & 0.1 & 9.28 & 12.53 & 10.62 \\ & 0.2 & 5.92 & 17.96 & 10.88 \\ & 0.5 & 2.33 & 31.04 & 14.15 \\ & 0.7 & 1.15 & 43.64 & 18.65 \\ \hline DNN~\cite{Segbroeck2013} & - & 35.62 & 18.59 & 28.59\\ ACAM~\cite{Kim2018} & - & 50.29 & 7.78 & 32.75\\ Kaldi~\cite{povey2011kaldi} & - & 93.52 & 0.91 & 55.30 \\ rVAD~\cite{Tan2020} & 0.7 & 31.40 & 19.88 & 26.65 \\ rVAD~\cite{Tan2020} & default (0.4) & 42.98 & 9.29 & 29.07 \\ rVAD~\cite{Tan2020} & 0.1 & 61.89 & 3.92 & 37.97\\ \bottomrule \end{tabular} \caption{Sensitivity in terms of $P_{fa}$ and $P_{miss}$ as well as FER on test C.
Here default represents double thresholding with $\phi_{low}=0.1,\phi_{hi}=0.5$.} \label{tab:sensitivity} \end{table} \begin{table*}[htbp] \centering \begin{tabular}{lll|rrrrrr} \toprule Target & Task & Label & P & R & F1 & AUC & FER & Event-F1 \\ \midrule \multirow{6}{*}{$\mathcal{V}_1$} & \multirow{2}{*}{A} & soft & 97.17 & 93.89 & 95.38 & 97.97 & 3.42 & 69.12 \\ & & dyn & 96.64 & 94.36 & 95.43 & 98.07 & 3.42 & 70.62 \\ & \multirow{2}{*}{B} & soft & 91.88 & 89.04 & 90.33 & 96.40 & 7.17 & 56.00 \\ & & dyn & 91.77 & 90.20 & 90.94 & 96.56 & 6.81 & 55.60 \\ & \multirow{2}{*}{C} & soft & 84.90 & 85.64 & 85.15 & 94.11 & 14.56 & 40.57 \\ & & dyn & 87.28 & 86.94 & 87.10 & 94.41 & 12.45 & 53.47 \\ \hline \multirow{6}{*}{$\mathcal{V}_2$} & \multirow{2}{*}{A} & soft & 97.20 & 94.24 & 95.60 & 98.50 & 3.27 & 71.33 \\ & & dyn & 96.80 & 94.34 & 95.48 & 98.26 & 3.37 & 70.95 \\ & \multirow{2}{*}{B} & soft & 91.62 & 89.73 & 90.62 & 96.64 & 7.03 & 55.84 \\ & & dyn & 91.71 & 89.79 & 90.69 & 96.65 & 6.97 & 55.43 \\ & \multirow{2}{*}{C} & soft & 85.30 & 85.44 & 85.36 & 94.45 & 14.21 & 42.37 \\ & & dyn & 86.32 & 86.92 & 86.56 & 94.20 & 13.15 & 53.16 \\ \hline \multirow{6}{*}{$\mathcal{V}_1+\mathcal{A}_1$} & \multirow{2}{*}{A} & soft & 97.19 & 93.64 & 95.24 & 98.42 & 3.51 & 61.62 \\ & & dyn & 96.72 & 94.74 & 95.67 & 98.45 & 3.24 & 69.43 \\ & \multirow{2}{*}{B} & soft & 92.39 & 90.43 & 91.35 & 97.40 & 6.48 & 57.02 \\ & & dyn & 90.49 & 91.36 & 90.91 & 97.24 & 7.04 & 54.16 \\ & \multirow{2}{*}{C} & soft & 84.85 & 84.92 & 84.88 & 94.59 & 14.66 & 39.77 \\ & & dyn & 85.98 & 85.41 & 85.66 & 94.41 & 13.79 & 54.57 \\ \bottomrule \end{tabular} \caption{Performance difference between soft and dynamic (dyn) labels on target data.} \label{tab:dynamic_vs_hard_appendix} \end{table*} \section{Conclusion} \label{sec:conclusion} This work proposes and investigates a novel data-driven teacher-student approach for voice activity detection to be trained with vast amounts of data. 
A teacher model is first trained using clip-wise labels on Audioset. The teacher is then used to predict probabilities (soft labels) for a student model. In our initial results, we show that teacher-student training on both source datasets ($\mathcal{A}_{1/2}$) significantly benefits VAD performance in noisy test conditions. Further, we investigate the influence of soft, hard, and dynamic labels on performance. Our proposed dynamic approach is seen to outperform both soft- and hard-label training in noisy scenarios. Student training on large out-of-domain data is also investigated, utilizing the VoxCeleb 1/2 datasets as well as NIST SRE. Our best student model significantly outperforms our supervised VAD-C baseline as well as our teachers ($\mathcal{T}_{1/2}$) on all noisy evaluation scenarios regarding the FER, F1, AUC, and Event-F1 metrics. Notably, Event-F1 scores of over 50\% are reported across all test cases, meaning that our model excels at segmentation by providing accurate speech on- and offsets. When comparing our method to traditional supervised and unsupervised approaches, noise robustness is observed in the difficult C trial. The noise robustness is validated by our model's performance in the Musan-corrupted A trial for low SNRs. Moreover, our model is sensitive to any speech, which could hinder its performance under speech-heavy scenarios. Lastly, we observe only small performance improvements when utilizing large data, most likely due to our model's small size, meaning that our future work would aim to improve the depth and complexity of our teacher/student models to utilize the available data better. \appendices \section{Soft vs. dynamic labels} \label{sec:soft_vs_dynamic_appendix} In this paper, we utilized our dynamic method as the default label method without adequately providing results for the $\mathcal{V}_{1}$ and $\mathcal{A}_1$ target datasets. These missing results can be seen in \Cref{tab:dynamic_vs_hard_appendix}.
Even though dynamic labels do not always provide better performance (e.g., on the clean A test set), a significant difference in terms of FER and Event-F1 can be seen between the B and C test sets. Dynamic labels appear to be much less prone to overfitting and more capable of robustly estimating sound-event boundaries, as evidenced by similar Event-F1 scores in tests B and C. All provided results on the C test set consistently obtain an Event-F1 score of over 50\%, while their soft-label counterparts consistently remain around 40\%. \section*{Acknowledgment} This work has been supported by National Natural Science Foundation of China (No.61901265), Shanghai Pujiang Program (No.19PJ1406300), State Key Laboratory of Media Convergence Production Technology and Systems Project (No.SKLMCPTS2020003) and Startup Fund for Youngman Research at SJTU (No.19X100040009). Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section*{Acknowledgements} The authors thank Miles Padgett, Nathan Langford and Jian-Wei Pan for helpful discussions and suggestions. This work was supported by National Key R\&D Program of China (2017YFA0303700); National Natural Science Foundation of China (NSFC) (61734005, 11761141014, 11690033); Science and Technology Commission of Shanghai Municipality (STCSM) (15QA1402200, 16JC1400405, 17JC1400403); Shanghai Municipal Education Commission (SMEC)(16SG09, 2017-01-07-00-02-E00049); X.-M.J. acknowledges support from the National Young 1000 Talents Plan. \section*{Supplementary Material} \subsection{\textbf{3D Fabrication of ``doughnut'' waveguides}} The ``doughnut'' waveguide proposed to support OAM modes in a photonic chip is very different from a single-mode fiber. Its cylindrically symmetric structure requires 3D fabrication capability, which is very challenging to realize with conventional fabrication methods of silicon photonics. We employ the femtosecond laser direct writing technique to realize such 3D fabrication capability. Due to nonlinear absorption effects, the wafer material only absorbs energy within a micrometer-scale region and a short time slot of hundreds of femtoseconds, and therefore the refractive index inside the wafer can be modified on a very small scale \cite{Rafael2008,Szameit2010,Osellame2012} around the laser focal spot. Continuously scanning the wafer and/or the laser focal spot allows us to manufacture a very thin line in three dimensions. Writing multiple lines in this way can construct the proposed ``doughnut'' structure piece by piece, just like a ``surgery operation''. \renewcommand{\thefigure}{\arabic{figure}} \begin{figure}[htb!]
\centering \includegraphics[width=1\columnwidth]{FigureS1.png}\\ \caption{ \textbf{The measured ${\rm g}^{(2)}$ versus pump power.}} \label{Figure S1} \end{figure} Twelve written waveguides constitute the annular structure, in which the core positions of the twelve waveguides lie on a circle of 8 $\mu$m diameter. These twelve written waveguides overlap to form a continuous refractive index distribution. The central angle between two adjacent waveguides is 30 degrees, so the distance between two adjacent waveguides is 2.09 $\mu$m. The diameter of a single waveguide is about 2.5 $\mu$m; therefore, the overlap between two adjacent waveguides is estimated at 0.4 $\mu$m. The ``doughnut'' structure consists of 13 waveguides when we apply an additional scan through the middle. The femtosecond laser pulses, with a wavelength of 513 nm, a pulse duration of 290 fs and a repetition rate of 1 MHz, are focused into the volume of a borosilicate glass wafer (EAGLE 2000, Corning Inc.) by a 0.7 numerical aperture microscope objective. The wafer size is 1$\times$20$\times$20 mm. Under suitable irradiation conditions (60 nJ pulse energy and 5 mm/s translation speed), waveguides are produced at an average depth of 170 $\mu$m underneath the glass surface using 3-axis air-bearing stages (Aerotech Inc.). The refractive index contrast and birefringence are estimated to be of the order of $10^{-3}$ and $10^{-5}$, respectively. \subsection{\textbf{Classical and quantum twisted light preparation}} As shown in Fig. 1b, a Ti:Sapphire oscillator centred at 780 nm is divided into three beams by inserting two beamsplitters. One of the beams is relatively weak and serves as a reference to measure the interference patterns. A translation stage is added in order to tune the phase for high-contrast interference fringes. The second beam, also weak, serves as coherent light and is utilized to prepare classical twisted light.
The third beam, the strongest one, is employed to produce a 390 nm laser of up to 1.2 W via second harmonic generation (SHG). We feed the up-converted laser into a 2-mm-thick BBO crystal tuned for type-II, non-collinear down-conversion to prepare photon pairs. The obtained single-channel count rate and two-channel coincidence count rate are 1875000 and 237500, respectively. \renewcommand{\thefigure}{\arabic{figure}} \begin{figure}[htb!] \centering \includegraphics[width=1\columnwidth]{FigureS2.png} \caption{\textbf{The image in the far field for power spectrum measurement}. The first row is the hologram applied to the SLM for projection measurement; the second (third) row is the intensity profile obtained after the projection measurement corresponding to the first-row hologram for the OAM$_{-1}$ or OAM$_{+1}$ mode.} \label{Figure S2} \end{figure} In our experiment, we initialize (or spatially filter) the thermal states and heralded single-photon states first with a single-mode fiber before imprinting OAM. This means the photons are perfectly coherent in the transverse spatial domain. The concepts of thermal (in photon statistics) and coherent (in the transverse spatial domain) are very different and are sometimes quite confusing. The two-mode squeezed state generated by the SPDC source can be described by $\vert{\psi}\rangle=\sqrt{1-\lambda^2}\,[\vert{0_{A}}{0_{B}}\rangle +\lambda\vert{1_{A}}{1_{B}}\rangle+\lambda^2\vert{2_{A}}{2_{B}}\rangle+\cdots]$, where $\lambda$ is the nonlinear coefficient. When we trace out arm B, the reduced density operator can be expressed as $\rho_{A}=(1-\lambda^2)[\vert{0}\rangle_{A}\langle{0}\vert +\lambda^2\vert{1}\rangle_{A}\langle{1}\vert+\lambda^4\vert{2}\rangle_{A}\langle{2}\vert+\cdots]$. The photon-number statistics of the non-heralded single photons thus show a super-Poissonian distribution. One photon registered at an avalanche photodiode can herald the existence of a well-defined single photon.
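The super-Poissonian claim can be checked directly from $\rho_A$: the photon-number distribution $p(n)\propto\lambda^{2n}$ is geometric (thermal), whose variance always exceeds its mean. A minimal numerical check (the value of $\lambda$ below is purely illustrative, not a measured parameter of the experiment):

```python
import numpy as np

lam = 0.3                          # nonlinear coefficient (illustrative value)
n = np.arange(200)                 # photon numbers (series truncated at 200)
p = (1 - lam**2) * lam**(2 * n)    # geometric distribution read off rho_A

mean = np.sum(n * p)
var = np.sum((n - mean)**2 * p)
# For a geometric distribution var = mean * (1 + mean) > mean,
# i.e. super-Poissonian (a Poissonian source would have var = mean).
```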
We measured the single-photon purity ($g^{(2)}$) of our source, and the result is shown in Fig. 5. There is a tradeoff between $\lambda$ and the single-photon purity. In our experiment, we scan the pump power from 0.3 W to 1.2 W, and finally take our data at 1.2 W, with which we obtain reasonably pure single photons and a high photon-pair rate. The heralding signal is sent to the ICCD to open a window of 10 ns for taking the image of the heralded single-photon OAM state. We use a 6-m fiber as a delay line to image the single photons, and in the external trigger mode, the insertion delay of the ICCD is set to 35 ns. Considering the delay induced by the optical path in our setup, the delay of the 6-meter-long fiber is enough to open the gate beforehand. The single photons and coherent light can be switched easily by fiber flanges to an individual set-up for converting the single-mode beam into twisted light. The total efficiency of the prepared classical and quantum twisted light can reach 60\%. \begin{figure}[htb!] \centering \includegraphics[width=0.9\columnwidth]{FigureS3.png} \caption{\textbf{The result of high-dimensional superposition states}. The hologram and measured intensity profiles for two-dimensional and three-dimensional superposition states before and after the chip.} \label{Figure S3} \end{figure} \subsection{\textbf{Power spectrum measurement}} To analyze the output states quantitatively, we use the SLM for both twisted light generation and projection: half of the SLM is used to generate the input states, and the other half is used to make projection measurements on the output states. First, we employ the output Gaussian mode from the chip to align with the singularity of the hologram used for projection. Second, we change to the OAM modes and switch the holograms, and we observe the images (see Fig. 6) in the far field when the order of the hologram on the SLM is opposite to that of the output state, resulting in planar phase fronts.
Third, we employ the SLM to project onto and select the Gaussian component, and then couple the light into a single-mode fiber to measure the OAM spectra after transmission. This is a simple phase-flattening method, which is a good approximation. For the higher-order input OAM modes, when we switch the holograms, we can also observe the images in the far field. We find that when the output states are projected on OAM$_{+1}$ or OAM$_{-1}$, the resulting image in the far field is a Gaussian component. We then couple the light into a single-mode fiber to select only the Gaussian component. \begin{figure*}[htb!] \centering \includegraphics[width=2\columnwidth]{FigureS4.png} \caption{\textbf{The effect of different coupling systems on the results}. \textbf{a.} Measured transmission efficiency versus different input states with different coupling systems. \textbf{b.} Measured OAM power spectra after the chip with the 20X (16X) coupling system for second-order OAM modes. The output state is mainly weighted on the OAM$_{-1}$ or OAM$_{+1}$ mode depending on the chirality of the input state, whether the coupling system is 20X or 16X.} \label{Figure S4} \end{figure*} \subsection{\textbf{High-dimensional superposition state}} We further explore the performance on high-dimensional superposition states, here an equal-weighted superposition state consisting of OAM$_{0}$, OAM$_{-1}$, and OAM$_{+1}$. As shown in Fig. 7, compared with the two-dimensional superposition state of OAM$_{-1}$ and OAM$_{+1}$, the three-dimensional superposition states show two clearly uneven, joined lobes, which indicates that the output states can well preserve the intensity profile. The transmission efficiency of the three-dimensional superposition state decreases by 3.5\% compared with the two-dimensional superposition state. Therefore, we can see that the three-dimensional superposition state is well preserved, which indicates that our waveguide structure would support high-dimensional states of light.
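The lobe structure of these superpositions follows from the azimuthal phases alone: an equal superposition of OAM$_{-1}$ and OAM$_{+1}$ gives $|e^{-i\theta}+e^{i\theta}|^2=4\cos^2\theta$ (two equal lobes), while adding OAM$_{0}$ yields $(1+2\cos\theta)^2$ (two uneven lobes). A minimal sketch of this, ignoring the radial Laguerre-Gauss profiles:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)

# Azimuthal field of mode OAM_l is exp(i*l*theta); radial profiles omitted.
two_dim = np.exp(-1j * theta) + np.exp(1j * theta)   # OAM_-1 + OAM_+1
three_dim = np.ones_like(theta) + two_dim            # add the OAM_0 term

i2 = np.abs(two_dim) ** 2    # = 4*cos^2(theta): two equal lobes
i3 = np.abs(three_dim) ** 2  # = (1 + 2*cos(theta))^2: two uneven lobes
```

The pattern `i3` has a strong lobe around $\theta=0$ (intensity 9) and a weak one around $\theta=\pi$ (intensity 1), consistent with the uneven two-lobe profiles observed for the three-dimensional superposition state.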
\subsection{\textbf{The effect of different coupling systems on the results}} The chip and the current coupling system (16X) are optimized for the OAM$_{0}$, OAM$_{-1}$, and OAM$_{+1}$ modes. According to Ref.~\cite{Curtis2003}, the diameter of maximum optical intensity in the focal plane for OAM modes tends to scale linearly with the topological charge. We further consider whether a tighter focus of the coupling system makes a difference, trying to focus the higher-order OAM modes more tightly to spatially match the 8 $\mu$m diameter. The results are shown in Fig. 8. Compared with the 16X coupling system, the transmission efficiency for the first (second)-order OAM modes drops by about 16.1\% (4.6\%) on average with a 20X coupling system. With a 30X coupling system, the transmission efficiency drops sharply to less than 5\% for all the first (second)-order OAM modes. The parameters of the different coupling systems are shown in Table I. With the 20X coupling system, we also make projection measurements for the second-order input modes. We can see that the output state is still mainly weighted on the OAM$_{-1}$ or OAM$_{+1}$ mode. The results imply that the eigenmodes of this ``doughnut'' waveguide are the OAM$_{0}$, OAM$_{-1}$, and OAM$_{+1}$ modes, and higher-order input modes do not match the waveguide. Therefore, when a higher-order OAM mode is coupled, it dissipates and partially evolves into the eigenmodes, as the output is always mainly OAM$_{-1}$ (OAM$_{+1}$). Even with a tighter focus, the current ``doughnut'' waveguide with an 8 $\mu$m diameter does not support higher-order OAM modes. \begin{table}[htbp!] \centering \caption{The parameters of the different coupling systems.} \setlength{\tabcolsep}{5mm}{ \begin{tabular}{|l|c|c|} \hline Lens & Focal Length & N.A. \\ \hline 16X & 11.0 mm & 0.25 \\ \hline 20X & 8.0 mm & 0.50 \\ \hline 30X & 6.2 mm & 0.40 \\ \hline \end{tabular}} \end{table}
\section{Introduction}\label{sec:introduction} \input{sections/introduction.tex} \section{Background and related work}\label{sec:backgroundrelated} \input{sections/related_work.tex} \section{Motivation}\label{sec:motivation} \input{sections/motivation.tex} \section{Method}\label{sec:method} \input{sections/method.tex} \section{Experiments}\label{sec:experiments} \input{sections/experiments.tex} \section{Conclusion}\label{sec:conclusion} \input{sections/conclusion.tex} \section*{Acknowledgments}\label{sec:acknowledgments} We would like to thank Christos Louizos, Harris Teague, Jakub Tomczak, Mihir Jain and Pim de Haan for their helpful discussions and valuable feedback. \bibliographystyle{ieee_fullname} \subsection{Mean of Clipped Normal Distribution} Using the fact that $f(x)$ is constant for $x\not\in[a,b]$, we have: \begin{align} {\mu^c_{ab}} &= \int_{-\infty}^{\infty} f(x)p(x)dx\\ &= a\int_{-\infty}^a p(x)dx + \int_a^bxp(x)dx + b\int_b^{\infty}p(x)dx \end{align} The first and last terms can be computed as $a\Phi(\alpha)$ and $b(1-\Phi(\beta))$ respectively, where we define $\alpha=\frac{a-\mu}{\sigma}$, $\beta=\frac{b-\mu}{\sigma}$, and $\Phi(x)=\mathrm{CDF}(x \mid 0, 1)$, the normal CDF with zero mean and unit variance. The integral over the linear part of $f(\cdot)$ can be computed as: \begin{align} \int_a^bxp(x)dx &= C\int_a^bxe^{-\frac{1}{2\sigma^2}(x-\mu)^2}dx \\ &= \left.-C\sigma^2e^{-\frac{1}{2\sigma^2}(x-\mu)^2}\right\rvert_a^b + \mu(\Phi(\beta) - \Phi(\alpha))\\ &= \sigma\left(\phi(\alpha)-\phi(\beta)\right) + \mu(\Phi(\beta) - \Phi(\alpha)) \end{align} where we define $\phi(\cdot)=\mathcal{N}(\cdot\mid0, 1)$, i.e., the standard normal pdf, and $C=\frac{1}{\sigma\sqrt{2\pi}}$ is the normalization constant for a normal distribution with variance $\sigma^2$; thus \begin{equation} \begin{aligned} {\mu^c_{ab}} =& \sigma\left(\phi(\alpha)-\phi(\beta)\right) + \mu(\Phi(\beta) - \Phi(\alpha)) \\&+ a\Phi(\alpha) + b(1-\Phi(\beta)).
\end{aligned} \end{equation} \subsection{Variance of Clipped Normal Distribution} We again exploit the fact that $f(x)$ is constant for $x\not\in[a,b]$: \begin{align} {\sigma^c_{ab}}^2 &=\int_{-\infty}^{\infty} (f(x) -{\mu^c_{ab}})^2p(x)dx \\ &\begin{aligned} &=\int_{-\infty}^a(a-{\mu^c_{ab}})^2p(x)dx \\ &+ \int_a^b(x-{\mu^c_{ab}})^2p(x)dx \\ &+\int_b^\infty (b-{\mu^c_{ab}})^2p(x)dx \end{aligned} \end{align} The first and last terms can be computed as $(a-{\mu^c_{ab}})^2\Phi(\alpha)$ and $(b-{\mu^c_{ab}})^2(1-\Phi(\beta))$ respectively. The second term can be decomposed as follows: \begin{align} \int_a^b(x-{\mu^c_{ab}})^2p(x)dx &= \int_a^b(x^2-2x{\mu^c_{ab}}+{\mu^c_{ab}}^2)p(x)dx \\ &\begin{aligned} &= \int_a^b x^2p(x)dx \\ &+ Z({\mu^c_{ab}}^2 - 2{\mu^c_{ab}}{\mu^t_{ab}}) \end{aligned} \end{align} where we use the result from the previous subsection and define $Z=\Phi(\beta)-\Phi(\alpha)$, and where ${\mu^t_{ab}}=\frac{1}{Z}\int_a^b x\mathcal{N}(x\mid \mu, \sigma^2)\,dx=\mu+\sigma (\phi(\alpha) - \phi(\beta))/Z$ is the mean of the truncated normal distribution. Evaluating the first term yields: \begin{equation} \begin{aligned} \int_a^bx^2p(x)dx &= Z(\mu^2+\sigma^2) \\ &+ \sigma(a\phi(\alpha) - b\phi(\beta)) \\ &+ \sigma\mu(\phi(\alpha) - \phi(\beta)) \end{aligned} \end{equation} This results in: \begin{equation} \begin{aligned} {\sigma^c_{ab}}^2 &= Z(\mu^2+\sigma^2+ {\mu^c_{ab}}^2 - 2{\mu^c_{ab}}\mu) \\ &+\sigma(a\phi(\alpha)-b\phi(\beta)) \\ &+ \sigma(\mu-2{\mu^c_{ab}})(\phi(\alpha)-\phi(\beta)) \\ &+ (a-{\mu^c_{ab}})^2\Phi(\alpha) \\ &+ (b-{\mu^c_{ab}})^2(1-\Phi(\beta)) \end{aligned} \end{equation} \subsection{Derivation for piece-wise linear functions} More generally, this scaling invariance holds for all piece-wise linear activation functions if the splitting points and offsets are scaled according to $s$.
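As a quick numerical sanity check of the clipped-normal moments derived in the previous two subsections, the closed-form expressions can be compared against Monte Carlo estimates. The helper below is our own sketch; it transcribes the final mean and variance formulas directly.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def clipped_moments(mu, sigma, a, b):
    """Closed-form mean/variance of clip(X, a, b) for X ~ N(mu, sigma^2),
    following the derivation above."""
    phi = lambda x: exp(-0.5 * x * x) / sqrt(2 * pi)  # standard normal pdf
    Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))      # standard normal cdf
    al, be = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(be) - Phi(al)
    mean = (sigma * (phi(al) - phi(be)) + mu * Z
            + a * Phi(al) + b * (1 - Phi(be)))
    var = (Z * (mu**2 + sigma**2 + mean**2 - 2 * mean * mu)
           + sigma * (a * phi(al) - b * phi(be))
           + sigma * (mu - 2 * mean) * (phi(al) - phi(be))
           + (a - mean)**2 * Phi(al)
           + (b - mean)**2 * (1 - Phi(be)))
    return mean, var

# Monte Carlo comparison on an arbitrary example.
rng = np.random.default_rng(0)
x = np.clip(rng.normal(1.0, 2.0, 1_000_000), -1.0, 3.0)
m, v = clipped_moments(1.0, 2.0, -1.0, 3.0)
```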
\begin{equation} f(x)= \begin{cases} a_1 x + b_1 &\text{if } x \leq c_1\\ a_2 x + b_2 &\text{if } c_1 < x \leq c_2\\ &\vdots \\ a_n x + b_n &\text{if } c_{n-1} < x \end{cases} \end{equation} \begin{align} f(sx) &= \begin{cases} a_1 sx + b_1 &\text{if } sx \leq c_1\\ a_2 sx + b_2 &\text{if } c_1 < sx \leq c_2\\ &\vdots \\ a_n sx + b_n &\text{if } c_{n-1} < sx \end{cases}\\ &= s \begin{cases} a_1 x + b_1 / s &\text{if } x \leq c_1 / s\\ a_2 x + b_2 / s &\text{if } c_1 / s < x \leq c_2 / s\\ &\vdots \\ a_n x + b_n / s &\text{if } c_{n-1} / s < x \end{cases} \end{align} From this it follows that $f(sx) = s \Tilde{f}(x)$ for $s > 0$, where $\Tilde{b}_i = b_i/s$ and $\Tilde{c}_i = c_i/s$. \subsection{Scaling invariance} We make use of a linear scaling invariance between subsequent layers in a neural network. Given input $X$ and two linear layers with weights $W_1$, $W_2$, we can always move scaling factors $s$ from the columns of $W_1$ to the rows of $W_2$ while keeping the overall computation exactly the same.
\small \begin{align*} &r \left( \left[ \begin{array}{ccc} \rule[.5ex]{2.5ex}{0.5pt} & w_{1} & \rule[.5ex]{2.5ex}{0.5pt} \\ \rule[.5ex]{2.5ex}{0.5pt} & w_{2} & \rule[.5ex]{2.5ex}{0.5pt} \\ \rule[.5ex]{2.5ex}{0.5pt} & \vdots & \rule[.5ex]{2.5ex}{0.5pt} \\ \rule[.5ex]{2.5ex}{0.5pt} & w_{n} & \rule[.5ex]{2.5ex}{0.5pt} \\ \end{array} \right]_2 r \left( \left[ \begin{array}{cccc} \vrule & \vrule & & \vrule\\ w_{1} & w_{2} & \ldots & w_{n} \\ \vrule & \vrule & & \vrule \end{array} \right]_1 \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{m} \end{bmatrix} \right) \right) = \\ &r \left( \left[ \begin{array}{ccc} \rule[.5ex]{2.5ex}{0.5pt} & w_{1} & \rule[.5ex]{2.5ex}{0.5pt} \\ \rule[.5ex]{2.5ex}{0.5pt} & w_{2} \cdot s & \rule[.5ex]{2.5ex}{0.5pt} \\ \rule[.5ex]{2.5ex}{0.5pt} & \vdots & \rule[.5ex]{2.5ex}{0.5pt} \\ \rule[.5ex]{2.5ex}{0.5pt} & w_{n} & \rule[.5ex]{2.5ex}{0.5pt} \\ \end{array} \right]_2 r \left( \left[ \begin{array}{cccc} \vrule & \vrule & & \vrule\\ w_{1} & w_{2} \cdot \frac{1}{s} & \ldots & w_{n} \\ \vrule & \vrule & & \vrule \end{array} \right]_1 \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{m} \end{bmatrix} \right) \right) \\ \end{align*} \normalsize where $r$ is the ReLU function. This can also be done with any other piecewise linear activation function. By rescaling this way many times across layers, we can take the maximum element of one weight matrix, scale it by $1/s$, and absorb $s$ into any of its adjacent layers, effectively reducing the magnitude of the maximum weight in the original matrix while keeping the computation exactly the same. \subsection{Ablation study}\label{sec:ablation} In this section we investigate the effect of our methods on a pre-trained MobileNetV2 \cite{mobilenetv2} model\footnote{We use the Pytorch implementation of MobileNetV2 provided by \url{https://github.com/tonylins/pytorch-mobilenet-v2}.}. We validate the performance of the model on the ImageNet \cite{ILSVRC15} validation set.
We first investigate the effects of different parts of our approach through a set of ablation studies. \subsubsection{Cross-layer equalization}\label{sec:exp_equalize} In this section we investigate the effects of cross-layer equalization and high-bias folding. We compare these methods to two baselines: the original quantized model and the less hardware-friendly per-channel quantization scheme. The models considered in this section employ residual connections \cite{heresidual}. For these networks we apply cross-layer equalization only to the layers within each residual block. MobileNetV2 uses ReLU6 activation functions, which clip the activation range to $[0, 6]$. To avoid ReLU6 requiring a different cut-off per channel after applying the equalization procedure, we replace ReLU6 with regular ReLU. The results of the equalization experiments are shown in Table \ref{tbl:exp1_scaling}. Similar to \cite{krishnamoorthi}, we observe that the model performance is close to random when quantizing the original model to INT8. Furthermore, we note that replacing ReLU6 by ReLU does not significantly degrade the model performance. Applying equalization brings us to within 2\% of FP32 performance, close to the performance of per-channel quantization. We note that absorbing high biases results in a small drop in FP32 performance, but it boosts quantized performance by 1\% due to more precise activation quantization. Combining both methods improves performance over per-channel quantization, indicating that the more efficient per-tensor quantization could be used instead. \input{tables/exp1_scaling.tex} To illustrate the effect of cross-layer equalization, we show the weight distributions per output channel of the depthwise-separable layer in the model’s first inverted residual block after applying the equalization in Figure \ref{fig:mobilenet_channel_scales_fixed}.
We observe that most channel ranges are now similar and that the strong outliers from Figure \ref{fig:mobilenet_channel_scales} have been equalized. Note that there are still several channels which have all weight values close to zero. These channels convey little information and can be pruned from the network with hardly any loss in accuracy. \begin{figure}[t] \includegraphics[width=8cm, trim={.2cm .2cm .2cm .7cm}, clip]{figures/channel_range_boxplot_rescaled.pdf} \centering \caption{Per (output) channel weight ranges of the first depthwise-separable layer in MobileNetV2 after equalization. In the boxplot the min and max value, the 2nd and 3rd quartile and the median are plotted for each channel. Most channels in this layer are now within similar ranges.} \label{fig:mobilenet_channel_scales_fixed} \end{figure} \subsubsection{Bias correction} In this section we present results on bias correction for a quantized MobileNetV2 model. We furthermore present results of bias correction in combination with a naive weight-clipping baseline, and combined with the cross-layer equalization approach. The weight-clipping baseline serves two functions: 1) as a naive baseline to the cross-layer equalization approach, and 2) to show that bias correction can be employed in any setting where biased noise is introduced. Weight clipping solves the problem of large differences in ranges between channels by clipping large ranges to smaller ranges, but it introduces a strongly biased error. Weight clipping is applied by first folding the batch normalization parameters into a layer's weights, and then clipping all values to a certain range, in this case $[-15, 15]$. We tried multiple symmetric ranges; all provided similar results. For residual connections we calculate $\mathbb{E}[\xx]$ and $\text{Var}[\xx]$ as the sums of the expectations and variances of all inputs, taking each input to be zero mean and unit variance.
To illustrate the effect of bias correction, Figure \ref{fig:mobilenet_bias_fixed} shows the per output channel biased error introduced by weight quantization. The per-channel biases are obtained as described in eq.\ \ref{eq:compute_output_bias}. This figure shows that applying bias correction reduces the bias in the error on the output of a layer to very close to zero for most output channels. \input{tables/exp1_bias.tex} Results for the experiments described above for MobileNetV2 on the ImageNet validation set are shown in Table \ref{tbl:exp1_bias}. Applying bias correction improves quantized model performance, indicating that a part of the problem of quantizing this model lies in the biased error that is introduced. However, bias correction on its own does not achieve near-floating point performance. The reason for this is most likely that the problem described in section \ref{sec:motivation}.1 is more severe for this model. The experiments on weight clipping show that bias correction can mitigate performance degradation due to biased error in non-quantized models as well as quantized models. Clipping without correction in the FP32 model introduces a 4.66\% loss in accuracy; bias correction reduces that loss to a mere 0.57\%. Furthermore, it shows that weight clipping combined with bias correction is a fairly strong baseline for quantizing MobileNetV2. Lastly, we show that bias correction improves results when combined with the cross-layer equalization and bias folding procedures. The combination of all methods is our data-free quantization (DFQ) method. The full DFQ approach achieves near-floating point performance with a reduction of 0.53\% top-1 accuracy relative to the FP32 baseline.
\subsection{Comparison to other methods and models}\label{sec:other_models} In this section we show how DFQ generalizes to other popular computer vision tasks, namely semantic segmentation and object detection, and to other model architectures such as MobileNetV1 \cite{mobilenetv1} and Resnet18 \cite{heresidual}. Afterwards we compare DFQ to methods in the literature, including more complex level 3 and 4 approaches. This set of models was chosen as they are efficient and likely to be used in mobile applications where 8-bit quantization is frequently used for power efficiency. \subsubsection{Other tasks} \paragraph{Semantic segmentation} \input{tables/exp2_semseg.tex} To demonstrate the generalization of our method to semantic segmentation we apply DFQ for DeeplabV3+ with a MobileNetV2 backend \cite{chen2018deeplab, mobilenetv2}; performance is evaluated on the Pascal VOC segmentation challenge \cite{everingham2015pascal}. For our experiments we use the publicly available Pytorch implementation\footnote{\url{https://github.com/jfzhang95/pytorch-deeplab-xception}}. We show the results of this experiment in Table \ref{tbl:exp2_semseg}. As observed earlier for classification, we notice a significant drop in performance when quantizing the original model, which makes it almost unusable in practice. Applying DFQ recovers almost all performance degradation and achieves less than 1\% drop in mIOU compared to the full precision model. DFQ also outperforms the less hardware-friendly per-channel quantization. To the best of our knowledge, we are the first to publish quantization results for DeeplabV3+ and for semantic segmentation. \paragraph{Object detection} \input{tables/exp2_objdet.tex} \input{tables/exp2_literature.tex} To demonstrate the applicability of our method to object detection we apply DFQ for MobileNetV2 SSDLite \cite{mobilenetv2, liu2016ssd}, evaluated on the Pascal VOC object detection challenge \cite{everingham2015pascal}.
In our experiments we use the publicly available Pytorch implementation of SSD\footnote{\url{https://github.com/qfgaohao/pytorch-ssd}}. The results are listed in Table \ref{tbl:exp2_objdet}. As with semantic segmentation, we observe a significant drop in performance when quantizing the SSDLite model. Applying DFQ recovers almost all of the performance drop and achieves less than 1\% drop in mAP compared to the full precision model, again outperforming per-channel quantization. \subsubsection{Comparison to other approaches} In this section we compare DFQ to other approaches in the literature. We compare our results to two other level 1 approaches, direct per-layer quantization as well as per-channel quantization \cite{krishnamoorthi}. In addition, we compare to multiple higher-level approaches, namely quantization-aware training \cite{jacob2018cvpr} as well as stochastic rounding and dynamic ranges \cite{Gupta2015, Gysel2016}, which are both level 3 approaches. We also compare to two level 4 approaches: relaxed quantization \cite{louizos2018relaxed}, which involves training a model from scratch, and quantization friendly separable convolutions \cite{sheng2018}, which require a rework of the original MobileNet architecture. The results are summarized in Table \ref{tbl:exp2_literature}. For both MobileNetV1 and MobileNetV2, per-layer quantization results in an unusable model whereas DFQ stays close to full precision performance. DFQ also outperforms per-channel quantization as well as most level 3 and 4 approaches which require significant fine-tuning, training or even architecture changes. On Resnet18 we maintain full precision performance for 8-bit fixed point quantization using DFQ. Some higher-level approaches \cite{jacob2018cvpr, louizos2018relaxed} report slightly higher results than our baseline model, likely due to a better training procedure than used in the standard Pytorch Resnet18 model. Since 8-bit quantization is lossless, we also compare 6-bit results.
DFQ clearly outperforms traditional per-layer quantization but stays slightly below per-channel quantization and higher-level approaches such as QT and RQ \cite{jacob2018cvpr, louizos2018relaxed}. Overall, DFQ sets a new state-of-the-art for 8-bit fixed point quantization on several models and computer vision tasks. It is especially strong for mobile-friendly architectures such as MobileNetV1 and MobileNetV2, which were previously hard to quantize. Even though DFQ is an easy-to-use level 1 approach, we generally show competitive performance when comparing to more complex level 2--4 approaches. \subsection{Cross-layer range equalization}\label{sec:equalization} \paragraph{Positive scaling equivariance}\label{sec:positivescalingequivariance} We observe that for a ReLU \cite{nair2010} activation function $f(\cdot)$ the following scaling equivariance property holds: \begin{equation} f(sx) = sf(x) \label{eq:equivariance} \end{equation} for any non-negative real number $s$. This follows from the definition of the ReLU: \begin{equation} \text{ReLU}(x)= \begin{cases} x&\text{if } x > 0\\ 0&\text{if } x \leq 0. \end{cases} \end{equation} This equivariance also holds for the PReLU \cite{he2015} activation function. More generally, the positive scaling equivariance can be relaxed to $f(sx) = s \hat{f}(x)$ for any piece-wise linear activation function: \begin{equation} f(x)= \begin{cases} a_1 x + b_1 &\text{if } x \leq c_1\\ a_2 x + b_2 &\text{if } c_1 < x \leq c_2\\ &\vdots \\ a_n x + b_n &\text{if } c_{n-1} < x \end{cases} \end{equation} where $\hat{f}(\cdot)$ is parameterized as $\hat{a}_i = a_i$, $\hat{b}_i = b_i/s$ and $\hat{c}_i = c_i/s$. Note that contrary to the equivariance defined in eq.\ \ref{eq:equivariance} we now also change the function $f(\cdot)$ into $\hat{f}(\cdot)$.
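As a quick numerical sanity check (a minimal sketch, not from the paper, with ReLU6 written as a three-piece linear function), the relation $f(sx) = s\hat{f}(x)$ with $\hat{b}_i = b_i/s$ and $\hat{c}_i = c_i/s$ can be verified directly:

```python
def pwl(x, slopes, offsets, knots):
    # piece-wise linear function: segment i applies slopes[i] * x + offsets[i],
    # with breakpoints knots[0] < knots[1] < ...
    i = sum(1 for c in knots if x > c)
    return slopes[i] * x + offsets[i]

# ReLU6 written as three linear pieces: 0 for x <= 0, x on (0, 6], 6 for x > 6
slopes, offsets, knots = [0.0, 1.0, 0.0], [0.0, 0.0, 6.0], [0.0, 6.0]

s = 2.5                                   # any positive scaling factor
offsets_hat = [b / s for b in offsets]    # b_i / s
knots_hat = [c / s for c in knots]        # c_i / s

# f(s x) should equal s * f_hat(x) for every test point
max_err = max(
    abs(pwl(s * x, slopes, offsets, knots) - s * pwl(x, slopes, offsets_hat, knots_hat))
    for x in [-3.0, 0.5, 2.0, 4.0, 10.0]
)
```

The same check works for any positive $s$ and any choice of slopes, offsets and breakpoints, since a positive scale never changes which segment is active once the breakpoints are divided by $s$.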
\begin{figure}[t] \includegraphics[width=7cm]{figures/flow_diagram.png}\vspace{-0.1cm} \centering \caption{Flow diagram of the proposed DFQ algorithm.} \label{fig:flow_diagram}\vspace{-0.1cm} \end{figure} \subsubsection{Scaling equivariance in neural networks} The positive scaling equivariance can be exploited in consecutive layers in neural networks. Given two layers, $\vec{h} = f(\mati{W}{1} \vec{x} + \veci{b}{1})$ and $\vec{y} = f(\mati{W}{2}\vec{h} + \veci{b}{2})$, through scaling equivariance we have that: \begin{align} \vec{y} &= f(\mati{W}{2} f(\mati{W}{1}\vec{x}+\veci{b}{1}) + \veci{b}{2}) \\ &= f(\mati{W}{2} \mat{S} \hat{f}(\mat{S^{-1}}\mati{W}{1} \vec{x} + \mat{S^{-1}} \veci{b}{1}) + \veci{b}{2}) \\ &= f(\mati{\widehat{W}}{2} \hat{f}(\mati{\widehat{W}}{1} \vec{x} + \veci{\widehat{b}}{1}) + \veci{b}{2}) \end{align} where $\mat{S}=diag(\vec{s})$ is a diagonal matrix with value $\mat{S}_{ii}$ denoting the scaling factor $\vec{s}_i$ for neuron $i$. This allows us to reparameterize our model with $\mati{\widehat{W}}{2} = \mati{W}{2} \mat{S}$, $\mati{\widehat{W}}{1} = \mat{S^{-1}} \mati{W}{1}$ and $\veci{\widehat{b}}{1} = \mat{S^{-1}} \veci{b}{1}$. In case of CNNs the scaling will be per channel and broadcast accordingly over the spatial dimensions. The rescaling procedure is illustrated in Figure \ref{fig:layer_wise_rescaling}. \begin{figure}[t] \includegraphics[width=8cm]{figures/layer_wise_rescaling2.pdf} \centering \caption{Illustration of the rescaling for a single channel. If scaling factor $s_i$ scales $c_i$ in layer 1; we can instead factor it out and multiply $d_i$ in layer 2.} \label{fig:layer_wise_rescaling}\vspace{-0.1cm} \end{figure} \subsubsection{Equalizing ranges over multiple layers} We can exploit the rescaling and reparameterization of the model to make the model more robust to quantization. 
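The reparameterization $\mati{\widehat{W}}{1} = \mat{S^{-1}}\mati{W}{1}$, $\veci{\widehat{b}}{1} = \mat{S^{-1}}\veci{b}{1}$, $\mati{\widehat{W}}{2} = \mati{W}{2}\mat{S}$ can be checked numerically. The sketch below (not from the paper; toy plain-Python matrices, with the second layer's bias and outer activation omitted since they are untouched by the rescaling) confirms that the second-layer pre-activations are unchanged for positive scales:

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

random.seed(1)
n, m, k = 4, 3, 2
W1 = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
b1 = [random.uniform(-1, 1) for _ in range(n)]
W2 = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(k)]
x = [random.uniform(-1, 1) for _ in range(m)]
s = [random.uniform(0.5, 2.0) for _ in range(n)]     # positive per-neuron scales (diagonal of S)

# original second-layer pre-activation: W2 relu(W1 x + b1)
y = matvec(W2, relu([h + b for h, b in zip(matvec(W1, x), b1)]))

# reparameterized: W1_hat = S^{-1} W1, b1_hat = S^{-1} b1, W2_hat = W2 S
W1_hat = [[w / s[i] for w in W1[i]] for i in range(n)]
b1_hat = [b1[i] / s[i] for i in range(n)]
W2_hat = [[W2[j][i] * s[i] for i in range(n)] for j in range(k)]
y_hat = matvec(W2_hat, relu([h + b for h, b in zip(matvec(W1_hat, x), b1_hat)]))

max_err = max(abs(a - b) for a, b in zip(y, y_hat))
```

The agreement relies on $s_i > 0$: a negative scale would flip the sign of the hidden unit and the ReLU would no longer commute with the scaling.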
Ideally the ranges of each channel $i$ are equal to the total range of the weight tensor, meaning we use the best possible representational power per channel. We define the precision of a channel as: \begin{equation} \vec{\hat{p}}_i^{(1)} = \frac{\vec{\hat{r}}_i^{(1)}}{\hat{R}^{(1)}} \end{equation} where $\vec{\hat{r}}_i^{(1)}$ is the quantization range of channel $i$ in $\mati{\widehat{W}}{1}$ and $\hat{R}^{(1)}$ is the total range of $\mati{\widehat{W}}{1}$. We want to find $\mat{S}$ such that the total precision per channel is maximized: \begin{equation} \max_{\mat{S}} \sum_i \vec{\hat{p}}_i^{(1)} \vec{\hat{p}}_i^{(2)} \label{eq:optimal_rescaling} \end{equation} In the case of symmetric quantization we have $\vec{\hat{r}}_i^{(1)} = 2\cdot \max_j |\mat{\widehat{W}_{ij}^{(1)}}|$ and $\hat{R}^{(1)}=2\cdot\max_{ij} |\mat{\widehat{W}_{ij}^{(1)}} |$. Solving eq.\ \ref{eq:optimal_rescaling} (see appendix \ref{app:equalization}) leads to the necessary condition: \begin{equation} \argmax_{j} \frac{1}{\vec{s}_j}\vec{r}_j^{(1)} = \argmax_{k} \vec{s}_k \vec{r}_k^{(2)} \end{equation} meaning the limiting channel defining the quantization range is given by $\argmax_{i} \vec{r}_i^{(1)} \vec{r}_i^{(2)}$. We can satisfy this condition by setting $\mat{S}$ such that: \begin{equation} \vec{s}_i = \frac{1}{\vec{r}_i^{(2)}}\sqrt{\vec{r}_i^{(1)} \vec{r}_i^{(2)}} \end{equation} which results in $\forall i: \vec{\hat{r}}_i^{(1)} = \vec{\hat{r}}_i^{(2)}$. Thus the channels' ranges in both tensors are matched as closely as possible. When equalizing multiple layers at the same time, we iterate this process for pairs of layers that are connected to each other without input or output splits in between, until convergence. \subsubsection{Absorbing high biases} If $\vec{s}_i < 1$, the equalization procedure increases the bias $\vec{b}^{(1)}_i$. This could in turn increase the range of the activation quantization.
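The equalization rule above can be checked on toy weight matrices. The following sketch (not from the paper; rows of the first matrix and matching columns of the second play the role of the paired channels) applies $\vec{s}_i = \frac{1}{\vec{r}_i^{(2)}}\sqrt{\vec{r}_i^{(1)}\vec{r}_i^{(2)}}$ and confirms that the rescaled per-channel ranges match:

```python
import math
import random

random.seed(2)
n, m, k = 5, 4, 3
# rows of W1 are its output channels; columns of W2 are the matching input channels
W1 = [[random.uniform(-2, 2) for _ in range(m)] for _ in range(n)]
W2 = [[random.uniform(-2, 2) for _ in range(n)] for _ in range(k)]

r1 = [2.0 * max(abs(w) for w in W1[i]) for i in range(n)]             # r_i^(1)
r2 = [2.0 * max(abs(W2[j][i]) for j in range(k)) for i in range(n)]   # r_i^(2)
s = [math.sqrt(r1[i] * r2[i]) / r2[i] for i in range(n)]              # equalization scales

W1_hat = [[w / s[i] for w in W1[i]] for i in range(n)]                # S^{-1} W1
W2_hat = [[W2[j][i] * s[i] for i in range(n)] for j in range(k)]      # W2 S

r1_hat = [2.0 * max(abs(w) for w in W1_hat[i]) for i in range(n)]
r2_hat = [2.0 * max(abs(W2_hat[j][i]) for j in range(k)) for i in range(n)]
max_gap = max(abs(a - b) for a, b in zip(r1_hat, r2_hat))             # should be ~0
```

After rescaling, both matched ranges equal $\sqrt{r_i^{(1)} r_i^{(2)}}$, which is why the gap vanishes for every channel.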
To avoid large differences between per-channel activation ranges, we introduce a procedure that absorbs high biases into the subsequent layer. For a layer with ReLU function $r$, there is a non-negative vector $\vec{c}$ such that $r(\mat{W}\vec{x}+\vec{b}-\vec{c}) = r(\mat{W}\vec{x}+\vec{b}) - \vec{c}$. The trivial solution $\vec{c}=\vec{0}$ holds for all $\vec{x}$. However, depending on the distribution of $\vec{x}$ and the values of $\vec{W}$ and $\vec{b}$, there can be some values $\vec{c}_i>0$ for which this equality holds for (almost) all $\vec{x}$. Following the previous two layer example, these $\vec{c}_i$ can be absorbed from layer $1$ into layer $2$ as: \begin{align} \vec{y} &= \mati{W}{2} \vec{h} + \veci{b}{2} \\ &= \mati{W}{2} (r(\mati{W}{1} \vec{x} + \veci{b}{1}) + \vec{c}-\vec{c}) + \veci{b}{2} \\ &= \mati{W}{2} (r(\mati{W}{1} \vec{x} + \veci{\hat{b}}{1}) + \vec{c}) + \veci{b}{2} \\ &= \mati{W}{2} \vec{\hat{h}} + \veci{\hat{b}}{2} \end{align} where $\veci{\hat{b}}{2} = \mati{W}{2} \vec{c} + \veci{b}{2}$, $\vec{\hat{h}}=\vec{h} - \vec{c}$, and $\veci{\hat{b}}{1} = \veci{b}{1}-\vec{c}$. To find $\vec{c}$ without violating our data-free assumption we assume that the pre-bias activations are distributed normally with the batch normalization shift and scale parameters $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ as its mean and standard deviation. We set $\vec{c}=\max(\vec{0},\boldsymbol{\beta}-3\boldsymbol{\gamma})$. If $\vec{c}>0$, the equality introduced above will hold for $99.865\%$ of the values of $\vec{x}$ (those greater than $\vec{c}$) under the Gaussian assumption. As we will show in section \ref{sec:exp_equalize}, this approximation does not harm the full precision performance significantly but helps for activation quantization. Note that, in case data is available, the pre-bias distribution of $\vec{x}$ can be found empirically and used to set $\vec{c}$.
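A small simulation (not from the paper; illustrative batch-norm parameters for a single channel) of the absorption rule $\vec{c}=\max(\vec{0},\boldsymbol{\beta}-3\boldsymbol{\gamma})$, checking how often $r(h - c) = r(h) - c$ actually holds for $h \sim \mathcal{N}(\boldsymbol{\beta}, \boldsymbol{\gamma}^2)$:

```python
import random

random.seed(3)
beta, gamma = 4.0, 1.0                    # illustrative BN shift and scale for one channel
c = max(0.0, beta - 3.0 * gamma)          # c = max(0, beta - 3*gamma) = 1.0 here

def relu(v):
    return max(0.0, v)

N = 100_000
holds = 0
for _ in range(N):
    h = random.gauss(beta, gamma)         # pre-activation sample
    if abs(relu(h - c) - (relu(h) - c)) < 1e-12:
        holds += 1
frac = holds / N
# frac should come out close to 0.99865, the Gaussian mass above beta - 3*gamma
```

The equality fails exactly on the samples below $c$, so the observed fraction matches the $99.865\%$ figure quoted in the text.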
\subsection{Quantization bias correction}\label{sec:bias_correction} As shown empirically in the motivation, quantization can introduce a biased error in the activations. In this section we show how to correct for the bias in the error on the layer's output, and how we can use the network's batch normalization parameters to compute this bias without using data. For a fully connected layer with weight tensor $\W$, quantized weights $\Wt$, and input activations $\xx$, we have $\yt = \Wt\xx$ and therefore $\yt = \y + \beps\xx$, where we define the quantization error $\beps=\Wt-\W$, $\y$ as the layer pre-activations of the FP32 model, and $\yt$ as the same pre-activations with quantization error added. If the expectation of the error for output $i$, $\mathbb{E}[\beps \xx]_i \neq 0$, then the mean of output $i$ will change. This shift in distribution may lead to detrimental behavior in the following layers. We can correct for this change by seeing that: \begin{align} \mathbb{E}[\y]&=\mathbb{E}[\y]+\mathbb{E}[\beps\xx]-\mathbb{E}[\beps\xx]\\ &=\mathbb{E}[\yt]-\mathbb{E}[\beps\xx]. \end{align} Thus, subtracting the expected error on the output $\mathbb{E}\left[\beps\xx\right] = \beps \mathbb{E}\left[\xx\right]$ from the biased output $\yt$ ensures that the mean of each output unit is preserved. For implementation, the expected error can be subtracted from the layer's bias parameter, since the expected error vector has the same shape as the layer's output. This method easily extends to convolutional layers as described in Appendix \ref{app:biascorr}. \subsubsection{Computing the expected input} To compute the expected error of the output of a layer, the expected input to the layer $\mathbb{E}[\xx]$ is required. If a model does not use batch normalization, or there are no data-usage restrictions, $\mathbb{E}[\beps \xx]$ can be computed by comparing the activations before and after quantization. Appendix \ref{app:databias} explains this procedure in more detail.
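A minimal sketch of the correction (not from the paper; a crude uniform-rounding stand-in for the quantizer on a toy fully connected layer), showing that subtracting $\beps\,\mathbb{E}[\xx]$ restores the per-output mean:

```python
import random

random.seed(4)
n, m, N = 3, 6, 5_000

W = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
step = 0.25                                              # crude rounding grid as a stand-in quantizer
W_q = [[round(w / step) * step for w in row] for row in W]
eps = [[q - w for q, w in zip(rq, rw)] for rq, rw in zip(W_q, W)]    # eps = W_q - W

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

xs = [[abs(random.gauss(1.0, 0.5)) for _ in range(m)] for _ in range(N)]   # nonnegative "post-ReLU" inputs
E_x = [sum(x[j] for x in xs) / N for j in range(m)]                        # empirical E[x]
correction = [sum(e * mu for e, mu in zip(eps[i], E_x)) for i in range(n)] # eps E[x]

diffs = [[yq - y for yq, y in zip(matvec(W_q, x), matvec(W, x))] for x in xs]
bias_before = [sum(d[i] for d in diffs) / N for i in range(n)]             # empirical E[y_q - y]
bias_after = [bias_before[i] - correction[i] for i in range(n)]            # after subtracting eps E[x]
```

Because the error is linear in the input, $\mathbb{E}[\beps\xx] = \beps\,\mathbb{E}[\xx]$ holds exactly on the empirical distribution, so `bias_after` is zero up to floating-point error.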
\paragraph{Clipped normal distribution} When the network includes batch normalization before a layer, we can use it to calculate $\mathbb{E}[\xx]$ for that layer without using data. We assume the pre-activation outputs of a layer are normally distributed, that batch normalization is applied before the activation function, and that the activation function is some form of the class of clipped linear activation functions (e.g.\ ReLU, ReLU6), which clips its input to the range $[a, b]$ where $a < b$, and $b$ can be $\infty$. Due to the centralization and normalization applied by batch normalization, the mean and standard deviation of the pre-activations are known: these are the batch normalization scale and shift parameters (henceforth referred to as $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$ respectively). To compute $\mathbb{E}[\xx]$ from the previous layer's batch normalization parameters, the mean and variance need to be adjusted to account for the activation function that follows the batch normalization layer. For this purpose we introduce the clipped normal distribution. A clipped-normally distributed random variable $X$ is a normally distributed random variable with mean $\mu$ and variance $\sigma^2$, whose values are clipped to the range $[a, b]$. The mean and variance of the clipped normal distribution can be computed in closed form from $\mu$, $\sigma$, $a$ and $b$. We present the mean of the clipped normal distribution for the ReLU activation function, i.e.\ $a=0$ and $b=\infty$, in this section, and refer the reader to Appendix \ref{app:clippednormal} for the closed form solution for the general clipped normal distribution.
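The closed-form moments referenced above can be cross-checked numerically. The sketch below (not from the paper) implements the clipped-normal mean and the variance derived in the appendix, and compares them, plus the ReLU special case, against Monte Carlo estimates:

```python
import math
import random

SQRT2PI = math.sqrt(2.0 * math.pi)

def pdf(x):                                # standard normal PDF phi
    return math.exp(-0.5 * x * x) / SQRT2PI

def cdf(x):                                # standard normal CDF Phi
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clipped_stats(mu, sigma, a, b):
    """Closed-form mean/variance of min(max(X, a), b) for X ~ N(mu, sigma^2)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = cdf(beta) - cdf(alpha)
    mean = a * cdf(alpha) + b * (1.0 - cdf(beta)) + Z * mu + sigma * (pdf(alpha) - pdf(beta))
    var = (Z * (mu**2 + sigma**2 + mean**2 - 2.0 * mean * mu)
           + sigma * (a * pdf(alpha) - b * pdf(beta))
           + sigma * (mu - 2.0 * mean) * (pdf(alpha) - pdf(beta))
           + (a - mean) ** 2 * cdf(alpha)
           + (b - mean) ** 2 * (1.0 - cdf(beta)))
    return mean, var

mu, sigma, a, b = 1.0, 2.0, 0.0, 6.0       # ReLU6-style clipping of N(1, 4)
cf_mean, cf_var = clipped_stats(mu, sigma, a, b)

random.seed(5)
N = 200_000
clipped = [min(max(random.gauss(mu, sigma), a), b) for _ in range(N)]
mc_mean = sum(clipped) / N
mc_var = sum((v - mc_mean) ** 2 for v in clipped) / N

# ReLU special case (a = 0, b -> infinity):
# E[ReLU(X)] = sigma * phi(-mu/sigma) + mu * (1 - Phi(-mu/sigma))
relu_mean = sigma * pdf(-mu / sigma) + mu * (1.0 - cdf(-mu / sigma))
relu_mean_limit, _ = clipped_stats(mu, sigma, 0.0, 1e9)   # large b approximates b = infinity
```

The Monte Carlo estimates agree with the closed forms to within sampling error, and the general formula reduces to the ReLU expression as $b$ grows large.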
The expected value for channel $c$ in $\xx$, $\mathbb{E}[\xx_c]$, which is the output of a layer with batch normalization parameters $\boldsymbol{\beta}_c$ and $\boldsymbol{\gamma}_c$, followed by a ReLU activation function is: \begin{align} \mathbb{E}[\xx_c] &= \mathbb{E}\left[\text{ReLU}\left( \xx_c^{pre}\right)\right] \\ &= \boldsymbol{\gamma}_c\mathcal{N}\left( \frac{-\boldsymbol{\beta}_c}{\boldsymbol{\gamma}_c} \right) + \boldsymbol{\beta}_c\left[1-\Phi\left( \frac{-\boldsymbol{\beta}_c}{\boldsymbol{\gamma}_c} \right)\right] \end{align} where $\xx_c^{pre}$ is the pre-activation output for channel $c$, which is assumed to be normally distributed with mean $\boldsymbol{\beta}_c$ and variance $\boldsymbol{\gamma}_c^2$, $\Phi(\cdot)$ is the normal CDF, and the notation $\mathcal{N}(x)$ is used to denote the normal $\mathcal{N}(x | 0,1)$ PDF. \subsection{Weight tensor channel ranges}\label{sec:weighttensorchannelranges} The fact that per-channel quantization yields much better performance on MobileNetV2 than per-tensor quantization suggests that, in some layers, the weight distributions differ so strongly between output channels that the same set of quantization parameters cannot be used to quantize the full weight tensor effectively. For example, in the case where one channel has weights in the range $[-128, 128]$ and another channel has weights in the range $(-0.5, 0.5)$, the weights in the latter channel will all be quantized to $0$ when quantizing to 8-bits. Figure \ref{fig:mobilenet_channel_scales} shows that large differences in output channel weight ranges do indeed occur in a (trained) MobileNetV2 model. This figure shows the weight distribution of the output channel weights of the depthwise-separable layer in the model's first inverted residual block. Due to the strong differences between channel weight ranges that this layer exhibits, it cannot be quantized with reasonable accuracy for each channel. 
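To make the range problem concrete, here is a toy sketch (not from the paper; hand-picked weights) of symmetric per-tensor 8-bit quantization on two channels with ranges like those mentioned above: the wide channel forces a large quantization scale, and every weight in the narrow channel rounds to zero:

```python
# two output channels of 9 weights each (as in a 3x3 depthwise kernel)
wide = [128.0, -96.0, 64.0, -32.0, 16.0, -8.0, 4.0, -2.0, 1.0]     # range ~ [-128, 128]
narrow = [0.45, -0.3, 0.2, -0.1, 0.05, 0.4, -0.25, 0.15, -0.05]    # range ~ (-0.5, 0.5)

# symmetric per-tensor 8-bit quantization: one scale shared by the whole tensor
scale = max(abs(w) for w in wide + narrow) / 127.0

def quantize(w):
    q = max(-128, min(127, round(w / scale)))   # integer grid point
    return q * scale                            # dequantized value

wide_q = [quantize(w) for w in wide]
narrow_q = [quantize(w) for w in narrow]

narrow_all_zero = all(w == 0.0 for w in narrow_q)        # the narrow channel is wiped out
wide_max_err = max(abs(q - w) for q, w in zip(wide_q, wide))
```

With a per-channel scale the narrow channel would get its own fine grid instead; this is exactly the gap that cross-layer equalization closes while staying per-tensor.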
Several layers in the network suffer from this problem, making the overall model difficult to quantize. \begin{figure}[t] \includegraphics[width=8cm, trim={.2cm .2cm .2cm .7cm}, clip]{figures/channel_range_boxplot.pdf} \centering \caption{Per (output) channel weight ranges of the first depthwise-separable layer in MobileNetV2. In the boxplot the min and max value, the 2nd and 3rd quartile and the median are plotted for each channel. This layer exhibits strong differences between channel weight ranges.} \label{fig:mobilenet_channel_scales} \end{figure} We conjecture that performance of trained models after quantization can be improved by adjusting the weights for each output channel such that their ranges are more similar. We provide a level 1 method to achieve this without changing the FP32 model output in section \ref{sec:equalization}. \subsection{Biased quantization error} A common assumption in the literature (e.g.\ \cite{alvarez2016}) is that quantization error is unbiased and thus cancels out in a layer's output, ensuring that the mean of a layer's output does not change as a result of quantization. However, as we will show in this section, the quantization error on the weights might introduce biased error on the corresponding outputs. This shifts the input distribution of the next layer, which may cause unpredictable effects. The biased error in a quantized layer's output unit $j$ can be computed empirically using $N$ input data points as: \begin{align} \mathbb{E}[\yt_j - \y_j] &\approx \frac{1}{N}\sum_n (\Wt\xx_{n})_j - (\W\xx_{n})_j \label{eq:compute_output_bias} \end{align} where $\y_j$ and $\yt_j$ are the original outputs and the outputs generated using the quantized weight matrix, respectively. Figure \ref{fig:mobilenet_bias_fixed} shows the biased error per channel of a depthwise-separable convolution layer in a trained MobileNetV2 model.
From this plot it is clear that for many channels in the layer's output, the error introduced by weight quantization is biased, and influences the output statistics. Depthwise-separable layers are especially susceptible to this biased error effect as each output channel has only 9 corresponding weights. Such a biased error on the outputs can be introduced in many settings, e.g.\ when weights or activations are clipped \cite{mishra2017}, or in non-quantization approaches, such as weight tensor factorization or channel pruning \cite{he2017, zhang2015}. In section \ref{sec:bias_correction} we introduce a method to correct for this bias. Furthermore, we show that a model's batch normalization parameters can be used to compute the expected biased error on the output, yielding a level 1 method to fix the biased error introduced by quantization. \begin{figure}[t] \includegraphics[width=8cm, trim={.2cm .2cm .2cm .7cm}, clip]{figures/bias_per_channel-fixed.pdf} \centering \caption{Per-channel biased output error introduced by weight quantization of the second depthwise-separable layer in MobileNetV2, before (blue) and after (orange) bias correction.} \label{fig:mobilenet_bias_fixed} \end{figure}
\section{Introduction} In December 2019, clusters of pneumonia cases caused by the novel Coronavirus (COVID-19) were identified in Wuhan, Hubei province, China \cite{huang2020clinical, guan2020clinical}, almost a hundred years after the 1918 Spanish flu \cite{trilla20081918}. Soon after the emergence of the novel beta coronavirus, the World Health Organization (WHO) characterized this contagious disease as a ``global pandemic'' due to its rapid spread worldwide \cite{roosa2020real}. Many scientists have attempted to make forecasts about its impact. However, despite involving many excellent modelers, best intentions, and highly sophisticated tools, forecasting the COVID-19 pandemic is hard \cite{ioannidis2020forecasting}, and this is primarily due to the following major factors: \begin{itemize} \item Very little data is available; \item There is limited understanding of the factors that contribute to it; \item Model accuracy is constrained by our knowledge of the virus; with an emerging disease such as COVID-19, many transmission-related biologic features are hard to measure and remain unknown; \item The most obvious source of uncertainty affecting all models is that we don't know how many people are or have been infected; \item Ongoing issues with virologic testing mean that we are certainly missing a substantial number of cases, so models fitted to confirmed cases are likely to be highly uncertain \cite{holmdahl2020wrong}; \item The problem of using confirmed cases to fit models is further complicated because the fraction of confirmed cases is spatially heterogeneous and time-varying \cite{weinberger2020estimating}; \item Finally, many parameters associated with COVID-19 transmission are poorly understood.
\end{itemize} Amid enormous uncertainty about the future of the COVID-19 pandemic, statistical, machine learning, and epidemiological models are critical forecasting tools for policymakers, clinicians, and public health practitioners \cite{chakraborty2020real, li2020trend, wu2020nowcasting, fanelli2020analysis, kucharski2020early, zhuang2020estimation}. COVID-19 modeling studies generally follow one of two general approaches that we will refer to as forecasting models and mechanistic models. Although there are hybrid approaches, these two model types tend to address different questions on different time scales, and they deal differently with uncertainty \cite{chakraborty2020integrated}. Compartmental epidemiological models have been developed over nearly a century and are well tested on data from past epidemics. These models are based on modeling the actual infection process and are useful for predicting long-term trajectories of the epidemic curves \cite{chakraborty2020integrated}. Short-term forecasting models are often statistical, fitting a line or curve to data and extrapolating from there -- like seeing a pattern in a sequence of numbers and guessing the next number, without incorporating the process that produces the pattern \cite{chakraborty2020theta, chakraborty2019forecasting, chakraborty2020real}. Well-constructed statistical frameworks can be used for short-term forecasts, using machine learning or regression. In statistical models, the uncertainty of the prediction is generally presented as statistically computed prediction intervals around an estimate \cite{hastie2009elements, james2013introduction}. Given that what happens a month from now will depend on what happens in the interim, the estimated uncertainty should increase as one looks further into the future. These models yield quantitative projections that policymakers may need to allocate resources or plan interventions in the short term.
Forecasting time series has been a traditional research topic for decades, and various models have been developed to improve forecasting accuracy \cite{chatfield2000time, armstrong2001principles, hanke2001business}. There are numerous methods available to forecast time series, including traditional statistical models and machine learning algorithms, providing many options for modelers working on epidemiological forecasting \cite{chakraborty2019forecasting, chakraborty2020integrated, brady2012refining, chakraborty2020real, messina2014global, buczak2018ensemble, ribeiro2020short}. Many research efforts have focused on developing a universal forecasting model but have failed, which is also evident from the ``No Free Lunch Theorem'' \cite{wolpert1997no}. This chapter focuses on assessing popularly used short-term forecasting (nowcasting) models for COVID-19 from an empirical perspective. The findings of this chapter will fill a gap in the literature on nowcasting of COVID-19 by comparing various forecasting methods, understanding the global characteristics of pandemic data, and discovering real challenges for pandemic forecasters. The upcoming sections present a collection of recent findings on COVID-19 forecasting. Additionally, twenty nowcasting (statistical, machine learning, and hybrid) models are assessed for five countries: the United States of America (USA), India, Brazil, Russia, and Peru. Finally, some recommendations for policy-making decisions and limitations of these forecasting tools are discussed. \section{Related works} Researchers face unprecedented challenges during this global pandemic to forecast future real-time cases with traditional mathematical, statistical, forecasting, and machine learning tools \cite{li2020trend, wu2020nowcasting, fanelli2020analysis, kucharski2020early, zhuang2020estimation}.
Studies in March with simple yet powerful forecasting methods, such as the exponential smoothing model, predicted cases ten days ahead that, despite the positive bias, had reasonable forecast error \cite{petropoulos2020forecasting}. Early linear and exponential model forecasts were conducted for better preparation regarding hospital beds, ICU admission estimation, resource allocation, emergency funding, and proposing strong containment measures \cite{grasselli2020critical}; these projected between about 869 (linear growth) and 14,542 (exponential growth) ICU admissions in Italy by March 20, 2020. Health-care workers went through immense mental stress, left with a formidable choice of prioritizing young and healthy adults over the elderly for the allocation of life support and, most unwanted, setting aside those who are extremely unlikely to survive \cite{emanuel2020fair,rosenbaum2020facing}. Real estimates of mortality accounting for the 14-day delay between symptom onset and death demonstrated that the severity of the COVID-19 outbreak was being underestimated and indicated a grave future, with a global case fatality rate (CFR) of 5.7\% in March \cite{baud2020real}. Contact tracing, quarantine, and isolation efforts have a differential effect on COVID-19 mortality among countries. Even though the CFR of COVID-19 seems low compared to other deadly epidemics, there are concerns that it may eventually return like the seasonal flu, causing a second wave or a future pandemic \cite{petersen2020comparing, rajgor2020many}. Mechanistic models, like the Susceptible–Exposed–Infectious–Recovered (SEIR) frameworks, try to mimic the way COVID-19 spreads and are used to forecast or simulate future transmission scenarios under various assumptions about parameters governing the transmission, disease, and immunity \cite{hou2020effectiveness, he2020seir, annas2020stability, chen2020time, lopez2020end}. Mechanistic modeling is one of the only ways to explore possible long-term epidemiologic outcomes \cite{anderson1992infectious}. For example, the model from Ferguson et al.
\cite{ferguson2020report} that has been used to guide policy responses in the United States and Britain examines how many COVID-19 deaths may occur over the next two years under various social distancing measures. Kissler et al. \cite{kissler2020projecting} ask whether we can expect seasonal, recurrent epidemics if immunity against the novel coronavirus functions similarly to immunity against the milder coronaviruses that we transmit seasonally. In a detailed mechanistic model of Boston-area transmission, Aleta et al. \cite{aleta2020modeling} simulate various lockdown ``exit strategies". These models are a way to formalize what we know about viral transmission and explore possible futures of a system that involves nonlinear interactions, which is almost impossible to do using intuition alone \cite{hellewell2020feasibility, mossong2008social}. Although these epidemiological models are useful for estimating the dynamics of transmission, targeting resources, and evaluating the impact of intervention strategies, they require parameter values and depend on many assumptions. Several statistical and machine learning methods for real-time forecasting of the new and cumulative confirmed cases of COVID-19 have been developed to overcome the limitations of the epidemiological model approaches and assist public health planning and policy-making \cite{chakraborty2020integrated, petropoulos2020forecasting, anastassopoulou2020data, chakraborty2020real, chakraborty2020theta}. Real-time forecasts with reliable predictions are required to reach statistically validated conjectures in the current health crisis. Some of the leading-edge research concerning real-time projections of COVID-19 confirmed cases, recovered cases, and mortality using statistical, machine learning, and mathematical time series modeling is given in Table \ref{Table1}.
\begin{table}[htbp] \caption{Related works on nowcasting and forecasting of COVID-19 pandemic} \centering \normalsize \resizebox{\columnwidth}{!}{ \begin{tabular}{|p{3.5cm}|p{3.0cm}|p{2.5cm}|p{4cm}|p{4cm}|p{4cm}|} \hline \textbf{Research Topic} & \textbf{Date} & \textbf{Countries} &\textbf{Model} & \textbf{Results} & \textbf{Main Conclusion} \\\hline Forecasting and risk assessment \cite{chakraborty2020real} &January 30-31, 2020, to April 4, 2020 &Canada, France, India, South Korea, UK &ARIMA, Wavelet ARIMA (WBF), Hybrid ARIMA-WBF &MAE and RMSE least for Hybrid ARIMA-WBF &Hybrid ARIMA-WBF performs better than traditional methods, and important factors that impact case fatality rates are estimated using a regression tree. \\\hline Forecasting the confirmed and recovered cases \cite{maleki2020time} & Jan 22, 2020 to April 30, 2020 & World data & TP–SMN–AR time series (autoregressive series based on two-piece scale mixture normal distributions) & MAPE = 0.22 for confirmed cases; MAPE = 1.6 for recovered cases & Provided reasonable forecasts in terms of error and model selection. \\\hline Short-term forecasting of cumulative confirmed cases \cite{ribeiro2020short} &Inception - April 18-19, 2020 &Brazil &ARIMA, Random forest, Ridge regression, Support vector regression, Ensemble learning &Forecast errors lower than 6.9 percent &SVR and stacking-ensemble learning models are suitable tools for forecasting COVID-19. \\\hline Modelling and forecasting daily cases \cite{anastassopoulou2020data} & January 11 to February 10, 2020 &China &Susceptible-Infectious-Recovered-Dead (SIRD) model &Estimated average reproduction number $(R_0)\sim 2.6$ and $CFR\sim 0.15\%$ &Simulations predicted a decline of the outbreak at the end of February.
\\\hline Forecasting COVID-19 \cite{petropoulos2020forecasting} & January 22, 2020 to March 11, 2020 & Global data & Exponential smoothing models & Ten-days-ahead forecasts contain the actual cases within the $90\%$ CI &Forecasts reflect the significant increase in the trend of global cases with growing uncertainty. \\\hline Real-time forecasting \cite{roosa2020real} &February 5 to February 24, 2020 & China &Generalized logistic growth model (GLM) and Sub-epidemic wave model &Mean case estimates and 95\% prediction intervals capture the global picture 15 days ahead &All methods perform similarly, and an increase in data inclusion decreases the width of prediction intervals. \\\hline Predictions and role of interventions \cite{ray2020predictions} &Live forecast &India &Extended state-space SIR epidemiological models &Live forecasts with broad confidence intervals & Lockdown has a high chance of reducing the number of COVID-19 cases. \\\hline Forecasting and nowcasting COVID-19 \cite{wu2020nowcasting} &Dec 31, 2019, to Jan 28, 2020 &China &Susceptible-exposed-infectious-recovered (SEIR) model & $R_0$ = 2.68 (95\% CI 2.47, 2.86); Epidemic doubling time = 6.4 days (95\% CI 5.8, 7.1) &COVID-19 is no longer contained within China, and human-human transmission became evident. \\\hline Forecast \cite{fanelli2020analysis} &Jan 22 - March 15, 2020 &China, Italy and France &Susceptible, infected, recovered, dead (SIRD) model &The recovery rate is the same for Italy and China, while the infection and death rates appear to be different. &There is a certain universality in the time evolution of COVID-19. \\\hline AI-based forecasts \cite{hu2020artificial} &Jan 11 - February 27, 2020 &China &Data-driven AI-based methods &Using multiple-step forecasting, forecasts are given till April 19, 2020 for 34 provinces/cities. &The accuracy of the AI-based methods for forecasting the trajectory of COVID-19 was high.
\\\hline Machine learning-based forecasts \cite{sujath2020machine} &January 22, 2020, to April 10, 2020 &India &Multi-layered perceptron (MLP) model &Forecasts of confirmed, death, and recovered cases for 69 days &The MLP method gives better prediction results than the other methods. \\\hline Long-term trajectories of COVID-19 \cite{chakraborty2020integrated} &Starting - June 17, 2020 &Spain and Italy &Integrated stochastic-deterministic (ISA) approach &Basic reproduction number and estimated future cases are computed. &The ISA model shows significant improvement in the long-term forecasting of COVID-19 cases. \\\hline \end{tabular}} \label{Table1} \end{table} \section{Global characteristics of pandemic time series}\label{global} A univariate time series is the simplest form of temporal data and is a sequence of real numbers collected regularly over time, where each number represents an observation at a point in time \cite{chatfield2016analysis, box2015time}. There are broadly two major steps involved in univariate time series forecasting \cite{hyndman2018forecasting}: \begin{itemize} \item Studying the global characteristics of the time series data; \item Analysis of data with the `best-fitted' forecasting model. \end{itemize} Understanding the global characteristics of pandemic confirmed cases data can help forecasters determine what kind of forecasting method will be appropriate for the given situation \cite{tsay2000time}. As such, we aim to perform a meaningful data analysis, including the study of time series characteristics, to provide a suitable and comprehensive knowledge foundation for the future step of selecting an apt forecasting method. Thus, we take the path of using statistical measures to understand pandemic time series characteristics to assist method selection and data analysis. These characteristics will carry summarized information of the time series, capturing the `global picture' of the datasets.
Based on the recommendations of \cite{de200625, wang2009rule, lemke2010meta, lemke2015metalearning}, we study several classical and advanced time series characteristics of COVID-19 data. This study considers eight global characteristics of the time series: periodicity, stationarity, serial correlation, skewness, kurtosis, nonlinearity, long-term dependence, and chaos. This collection of measures provides quantified descriptions and gives a rich portrait of the pandemic time-series' nature. A brief description of these statistical and advanced time-series measures is given below. \subsection{Periodicity} A seasonal pattern exists when a time series is influenced by seasonal factors, such as the month of the year or the day of the week. The seasonality of a time series is defined as a pattern that repeats itself over fixed intervals of time \cite{box2015time}. In general, the seasonality can be found by identifying a large autocorrelation coefficient or a large partial autocorrelation coefficient at the seasonal lag. Since the periodicity is very important for determining the seasonality and examining the cyclic pattern of the time series, periodicity feature extraction becomes a necessity. Unfortunately, many time series from different domains do not have a known frequency or regular periodicity. Seasonal time series are sometimes also called cyclic series, although there is a significant distinction between them: cyclic data have varying frequency lengths, whereas seasonality is of a fixed length over each period. For time series with no seasonal pattern, the frequency is set to 1. The seasonality is tested using the `stl' function within the ``stats" package in R statistical software \cite{hyndman2018forecasting}.
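Although the chapter relies on R's `stl' decomposition, the idea of spotting a seasonal lag through a large autocorrelation coefficient can be sketched in a few lines. The following pure-Python example (synthetic data, hypothetical function names) picks the lag with the largest sample autocorrelation as a crude periodicity indicator:

```python
import math

def acf(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean) for t in range(lag, n))
    return cov / var

def detect_period(series, max_lag):
    """Return the lag (>= 2) with the largest autocorrelation,
    a crude indicator of periodicity."""
    return max(range(2, max_lag + 1), key=lambda k: acf(series, k))

# A weekly pattern (period 7) with a mild trend, mimicking day-of-week
# reporting effects in daily case counts (synthetic data).
y = [10 + 5 * math.sin(2 * math.pi * t / 7) + 0.1 * t for t in range(70)]
print(detect_period(y, 14))  # → 7, the weekly lag dominates
```

In practice the `stl' decomposition is far more robust, since it separates trend and seasonal components before judging the strength of the seasonality.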
\subsection{Stationarity} Stationarity is the foremost fundamental statistical property tested for in time series analysis because most statistical models require that the underlying generating processes be stationary \cite{chatfield2000time}. Stationarity means that a time series (or rather the process rendering it) does not change its statistical properties over time. In statistics, a unit root test determines whether a time series variable is non-stationary and possesses a unit root \cite{phillips1988testing}. The null hypothesis is generally defined as the presence of a unit root, and the alternative hypothesis is either stationarity, trend stationarity, or an explosive root, depending on the test used. In econometrics, the Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test is used for testing the null hypothesis that an observable time series is stationary around a deterministic trend (that is, trend-stationary) against the alternative of a unit root \cite{shin1992kpss}. The KPSS test is done using the `kpss.test' function within the ``tseries" package in R statistical software \cite{trapletti2007tseries}. \subsection{Serial correlation} Serial correlation is the relationship between a variable and a lagged version of itself over various time intervals. Serial correlation occurs in time-series studies when the errors associated with a given time period carry over into future time periods \cite{box2015time}. We have used the Box-Pierce statistic \cite{box1970distribution} in our approach to estimate the serial correlation measure and extract the measures from COVID-19 data. The Box-Pierce statistic was designed by Box and Pierce in 1970 for testing residuals from a forecast model \cite{wang2009rule}. It is a common portmanteau test for computing the measure.
The mathematical formula of the Box-Pierce statistic is as follows: $$ Q_h = n \displaystyle \sum_{k=1}^{h} r_{k}^{2},$$ where $n$ is the length of the time series, $h$ is the maximum lag being considered (usually $h$ is chosen as 20), and $r_k$ is the autocorrelation function at lag $k$. The portmanteau test is done using the `Box.test' function within the ``stats" package in R statistical software \cite{hyndman2007automatic}. \subsection{Nonlinearity} Nonlinear time series models have been used extensively to model complex dynamics not adequately represented by linear models \cite{kantz2004nonlinear}. Nonlinearity is one important time series characteristic for determining the selection of an appropriate forecasting method \cite{tong2002nonlinear}. There are many approaches to testing nonlinearity in time series models, including a nonparametric kernel test and a neural network test \cite{tsay1986nonlinearity}. In comparative studies between these two approaches, the neural network test has been reported to have better reliability \cite{wang2009rule}. In this research, we used Teräsvirta's neural network test \cite{terasvirta1993power} for measuring time series data nonlinearity. It has been widely accepted and reported that it can correctly model the nonlinear structure of the data \cite{terasvirta2005linear}. It is a test for neglected nonlinearity, likely to have power against a range of alternatives based on the NN model (augmented single-hidden-layer feedforward neural network model). The test statistic takes large values when the series is nonlinear and values near zero when the series is linear. The test is done using the `nonlinearityTest' function within the ``nonlinearTseries" package in R statistical software \cite{garcia2015nonlineartseries}. \subsection{Skewness} Skewness is a measure of symmetry, or more precisely, the lack of symmetry. A distribution, or dataset, is symmetric if it looks the same to the left and the right of the center point \cite{wang2009rule}.
A skewness measure is used to characterize the degree of asymmetry of values around the mean value \cite{mood1950introduction}. For univariate data $Y_t$, the skewness coefficient is $$ S = \frac{1}{n \sigma^3} \sum_{t=1}^{n} \left( Y_t - \bar{Y} \right)^3, $$ where $\bar{Y}$ is the mean, $\sigma$ is the standard deviation, and $n$ is the number of data points. The skewness for a normal distribution is zero, and any symmetric data should have the skewness near zero. Negative values for the skewness indicate data that are skewed left, and positive values for the skewness indicate data that are skewed right. In other words, left skewness means that the left tail is heavier than the right tail. Similarly, right skewness means the right tail is heavier than the left tail \cite{kim2013statistical}. Skewness is calculated using the `skewness' function within the ``e1071" package in R statistical software \cite{meyer2019package}. \subsection{Kurtosis (heavy-tails)} Kurtosis is a measure of whether the data are peaked or flat, relative to a normal distribution \cite{mood1950introduction}. A dataset with high kurtosis tends to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. Datasets with low kurtosis tend to have a flat top near the mean rather than a sharp peak. For a univariate time series $Y_t$, the kurtosis coefficient is $\frac{1}{n \sigma^4} \sum_{t=1}^{n} \left( Y_t - \bar{Y} \right)^4$. The kurtosis for a standard normal distribution is 3. Therefore, the excess kurtosis is defined as $$ K = \frac{1}{n \sigma^4} \sum_{t=1}^{n} \left( Y_t - \bar{Y} \right)^4 - 3. $$ So, the standard normal distribution has an excess kurtosis of zero. Positive kurtosis indicates a `peaked' distribution and negative kurtosis indicates a `flat' distribution \cite{groeneveld1984measuring}. Kurtosis is calculated using the `kurtosis' function within the ``PerformanceAnalytics" package in R statistical software \cite{peterson2018package}. 
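For concreteness, the Box-Pierce statistic, skewness, and excess kurtosis defined above can be computed directly from their formulas. The sketch below is pure Python with made-up toy data; the chapter itself uses the corresponding R packages, and the function names here are illustrative only:

```python
import math

def moments(y):
    """Skewness and excess kurtosis, matching the 1/n definitions above."""
    n = len(y)
    mean = sum(y) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in y) / n)  # population sd
    skew = sum((v - mean) ** 3 for v in y) / (n * sigma ** 3)
    excess_kurt = sum((v - mean) ** 4 for v in y) / (n * sigma ** 4) - 3
    return skew, excess_kurt

def box_pierce(y, h=20):
    """Q_h = n * sum of squared autocorrelations up to lag h."""
    n = len(y)
    mean = sum(y) / n
    denom = sum((v - mean) ** 2 for v in y)
    q = 0.0
    for k in range(1, h + 1):
        r_k = sum((y[t] - mean) * (y[t - k] - mean) for t in range(k, n)) / denom
        q += r_k ** 2
    return n * q

# Right-skewed, heavy-tailed toy data (e.g. a short run of daily counts)
y = [1, 2, 2, 3, 3, 3, 4, 5, 8, 20]
skew, kurt = moments(y)
print(round(skew, 2), round(kurt, 2))  # positive skewness and excess kurtosis
print(round(box_pierce(y, h=5), 2))    # portmanteau statistic at h = 5
```

The single large value 20 pulls the right tail out, which is exactly what positive skewness and positive excess kurtosis quantify.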
\subsection{Long-range Dependence} Processes with long-range dependence have attracted a good deal of attention from a probabilistic perspective in time series analysis \cite{robinson1995log}. With the increasing importance of `self-similarity' or `long-range dependence' as one of the time series characteristics, we include this feature in the group of pandemic data characteristics. The definition of self-similarity is most related to the self-similarity parameter, also called the Hurst exponent (H) \cite{black1965long}. The class of autoregressive fractionally integrated moving average (ARFIMA) processes \cite{granger1980introduction} provides a good estimation method for computing H. In an ARIMA$(p,d,q)$ model, $p$ is the order of AR, $d$ is the degree of first differencing involved, and $q$ is the order of MA. If the time series is suspected of exhibiting long-range dependency, the parameter $d$ may be replaced by certain non-integer values in the ARFIMA model \cite{brockwell1991time}. We fit an ARFIMA$(0,d,0)$ model by maximum likelihood, which is approximated by using the fast and accurate method of Haslett and Raftery \cite{haslett1989space}. We then estimate the Hurst parameter using the relation $H = d + 0.5$. The self-similarity feature can only be detected in the raw data of the time series. The value of H can be obtained using the `hurstexp' function within the ``pracma" package in R statistical software \cite{borchers2019package}. \subsection{Chaos (dynamic systems)} Many systems in nature that were previously considered random processes are now categorized as chaotic systems. For several years, Lyapunov Characteristic Exponents have been of interest in the study of dynamical systems as a way to characterize quantitatively their stochasticity properties, related essentially to the exponential divergence of nearby orbits \cite{farmer1987predicting}.
Nonlinear dynamical systems often exhibit chaos, characterized by sensitive dependence on initial values, or more precisely by a positive Lyapunov Exponent (LE) \cite{farmer1982chaotic}. Recognizing and quantifying chaos in time series are essential steps toward understanding the nature of random behavior and revealing the extent to which short-term forecasts may be improved \cite{hegger1999practical}. The LE, as a measure of the divergence of nearby trajectories, has been used to characterize chaos by giving a quantitative value \cite{benettin1980lyapunov}. The algorithm for computing the LE from a time series is applied to continuous dynamical systems in an $n$-dimensional phase space \cite{rosenstein1993practical}. The LE is calculated using the `Lyapunov exponent' function within the ``tseriesChaos" package in R statistical software \cite{antonio2013package}. \section{Popular forecasting methods for pandemic nowcasting} Time series forecasting models work by taking a series of historical observations and extrapolating future patterns. These work well when the data are accurate and the future resembles the past. Forecasting tools are designed to predict possible future alternatives and help current planning and decision making \cite{armstrong2001principles}. There are essentially three general approaches to forecasting a time series \cite{montero2020fforma}: \begin{enumerate} \item Generating forecasts from an individual model; \item Combining forecasts from many models (forecast model averaging); \item Hybrid experts for time series forecasting. \end{enumerate} Single (individual) forecasting models are either traditional statistical methods or modern machine learning tools. We study ten popularly used single forecasting models from the classical time series, advanced statistics, and machine learning literature.
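Before turning to the individual models, the Lyapunov exponent from Section \ref{global} can be made concrete on a textbook dynamical system. The sketch below is a pure-Python illustration on the logistic map (a synthetic example, not the algorithm used by the ``tseriesChaos" package): the LE is estimated by averaging $\log|f'(x_t)|$ along an orbit, and a positive value signals chaos.

```python
import math

def lyapunov_logistic(r, x0=0.1, n=10000, transient=100):
    """Estimate the Lyapunov exponent of the logistic map
    x_{t+1} = r * x_t * (1 - x_t) by averaging log|f'(x_t)| = log|r(1 - 2x_t)|."""
    x = x0
    for _ in range(transient):            # discard transient iterations
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

# For r = 4 the map is fully chaotic and the exact exponent is log(2) ~ 0.693
print(lyapunov_logistic(4.0))
# For r = 3.2 the orbit settles on a stable 2-cycle: the exponent is negative
print(lyapunov_logistic(3.2))
```

A positive estimate near $\log 2$ in the first case, against a clearly negative one in the second, is the quantitative distinction between chaotic and regular dynamics that the LE provides.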
There is a vast literature on forecast combinations, motivated by the seminal work of Bates \& Granger \cite{bates1969combination} and followed by a plethora of empirical applications showing that combination forecasts are often superior to their counterparts (see, for example, \cite{bordley1982combination, timmermann2006forecast}). Combining forecasts using a weighted average is considered a successful way of hedging against the risk of selecting a misspecified model \cite{clemen1989combining}. A significant challenge is in choosing an appropriate set of weights, and many attempts to do this have been worse than simply using equal weights -- something that has become known as the ``forecast combination puzzle" (see, for example, \cite{smith2009simple}). To overcome this, hybrid models became popular with the seminal work of Zhang \cite{zhang2003time} and were further extended for epidemic forecasting in \cite{chakraborty2019forecasting, chakraborty2020real, chakraborty2020theta}. The forecasting methods can be briefly reviewed and organized in the architecture shown in Figure \ref{fig:tsf_tools}.
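The weighted-average combination just described takes only a few lines. The example below is a pure-Python sketch with made-up forecasts; the model names and weights are hypothetical, and the equal-weight default is the simple benchmark behind the ``forecast combination puzzle":

```python
def combine_forecasts(forecasts, weights=None):
    """Combine point forecasts from several models with a weighted average.
    With weights=None, equal weights are used."""
    m = len(forecasts)
    if weights is None:
        weights = [1.0 / m] * m
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    horizon = len(forecasts[0])
    return [sum(w * f[h] for w, f in zip(weights, forecasts))
            for h in range(horizon)]

# Hypothetical 3-step-ahead forecasts from three models (e.g. ARIMA, ETS, Theta)
arima = [100.0, 110.0, 121.0]
ets   = [ 98.0, 106.0, 115.0]
theta = [102.0, 111.0, 120.0]

print(combine_forecasts([arima, ets, theta]))          # equal weights
print(combine_forecasts([arima, ets, theta],
                        weights=[0.5, 0.25, 0.25]))    # unequal weights
```

The puzzle is that data-driven weights, estimated from past forecast errors, often fail to beat the equal-weight line above out of sample.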
\tikzset{ basic/.style = {draw, text width=2cm, drop shadow, font=\sffamily, rectangle}, root/.style = {basic, rounded corners=2pt, thin, align=center, fill=white}, level-2/.style = {basic, rounded corners=5pt, thin,align=center, fill=white, text width=2cm}, level-3/.style = {basic, thin, align=center, fill=white, text width=1.8cm} } \begin{figure} \centering \begin{tikzpicture}[ level 1/.style={sibling distance=8em, level distance=6em}, edge from parent/.style={->,solid,black,thick,sloped,draw}, edge from parent path={(\tikzparentnode.south) -- (\tikzchildnode.north)}, >=latex, node distance=1.2cm, edge from parent fork down] \node[root] {\textbf{Time series forecasting methods}} child {node[level-2] (c1) {\textbf{Classical}}} child {node[level-2] (c2) {\textbf{Smoothing}}} child {node[level-2] (c3) {\textbf{Advanced}}} child {node[level-2] (c4) {\textbf{ML}}} child {node[level-2] (c5) {\textbf{Hybrid}}} child {node[level-2] (c6) {\textbf{Ensemble}}}; \begin{scope}[every node/.style={level-3}] \node [below of = c1, xshift=7pt] (c11) {ARIMA}; \node [below of = c11] (c12) {SETAR}; \node [below of = c12] (c13) {ARFIMA}; \node [below of = c2, xshift=7pt] (c21) {ETS}; \node [below of = c21] (c22) {TBATS}; \node [below of = c22] (c23) {Theta}; \node [below of = c3, xshift=7pt] (c31) {WARIMA}; \node [below of = c31] (c32) {BSTS}; \node [below of = c4, xshift=7pt] (c41) {ANN}; \node [below of = c41] (c42) {ARNN}; \node [below of = c5, xshift=7pt] (c51) {ARIMA-ANN}; \node [below of = c51] (c52) {ARIMA-ARNN}; \node [below of = c52] (c53) {ARIMA-WARIMA}; \node [below of = c53] (c54) {WARIMA-ANN}; \node [below of = c54] (c55) {WARIMA-ARNN}; \node [below of = c6, xshift=7pt] (c61) {ARIMA-ETS-Theta}; \node [below of = c61] (c62) {ARIMA-ETS-ARNN}; \node [below of = c62] (c63) {ARIMA-Theta-ARNN}; \node [below of = c63] (c64) {ETS-Theta-ARNN}; \node [below of = c64] (c65) {ANN-ARNN-WARIMA}; \end{scope} \foreach \value in {1,2,3} \draw[->] (c1.195) |- (c1\value.west); \foreach 
\value in {1,2,3} \draw[->] (c2.195) |- (c2\value.west); \foreach \value in {1,2} \draw[->] (c3.195) |- (c3\value.west); \foreach \value in {1,2} \draw[->] (c4.195) |- (c4\value.west); \foreach \value in {1,...,5} \draw[->] (c5.195) |- (c5\value.west); \foreach \value in {1,...,5} \draw[->] (c6.195) |- (c6\value.west); \end{tikzpicture} \caption{A systemic view of the various forecasting methods to be used in this study} \label{fig:tsf_tools} \end{figure} \subsection{Autoregressive integrated moving average (ARIMA) model} The autoregressive integrated moving average (ARIMA) is one of the well-known linear models in time-series forecasting, developed in the early 1970s \cite{box2015time}. It is widely used to track linear tendencies in stationary time-series data. It is denoted by ARIMA($p,d,q$), where the three components have significant meanings. The parameters $p$ and $q$ represent the order of AR and MA models, respectively, and $d$ denotes the level of differencing to convert nonstationary data into stationary time series \cite{makridakis1997arma}. ARIMA model can be mathematically expressed as follows: $$ y_t = \alpha_{0} + \sum_{i=1}^{p} \beta_i y_{t-i} + \epsilon_t - \sum_{j=1}^q \alpha_j \epsilon_{t-j}, $$ where $y_t$ denotes the actual value of the variable at time $t$, $\epsilon_t$ denotes the random error at time $t$, $\beta_i$ and $\alpha_j$ are the coefficients of the model. Some necessary steps to be followed for any given time-series dataset to build an ARIMA model are as follows: \begin{itemize} \item Identification of the model (achieving stationarity). \item Use autocorrelation function (ACF) and partial ACF plots to select the AR and MA model parameters, respectively, and finally estimate model parameters for the ARIMA model. \item The `best-fitted' forecasting model can be found using the Akaike Information Criteria (AIC) or the Bayesian Information Criteria (BIC). Finally, one checks the model diagnostics to measure its performance. 
\end{itemize} An implementation in R statistical software is available using the `auto.arima' function under the ``forecast" package, which returns the `best' ARIMA model according to either AIC or BIC values \cite{hyndman2020package}. \subsection{Wavelet-based ARIMA (WARIMA) model} Wavelet analysis is a mathematical tool that can reveal information within the signals in both the time and scale (frequency) domains. This property overcomes the primary drawback of Fourier analysis: the wavelet transform carries the original signal data (especially from the time domain) into a different domain for data analysis and processing. Wavelet-based models are most suitable for nonstationary data, unlike standard ARIMA. Most epidemic time-series datasets are nonstationary; therefore, wavelet transforms are used as a forecasting model for these datasets \cite{chakraborty2020real}. When conducting wavelet analysis in the context of time series analysis \cite{aminghafari2007forecasting}, the selection of the optimal number of decomposition levels is vital to determine the performance of the model in the wavelet domain. The formula $WL = \lfloor \log(n) \rfloor$ is used to select the number of decomposition levels, where $n$ is the time-series length. The wavelet-based ARIMA (WARIMA) model transforms the time series data by using a hybrid maximal overlap discrete wavelet transform (MODWT) algorithm with a `haar' filter \cite{percival2000wavelet}. Daubechies wavelets can capture events in the observed time series in ways that most other time series prediction models cannot recognize. The necessary steps of a wavelet-based forecasting model, defined by \cite{aminghafari2007forecasting}, are as follows. Firstly, the Daubechies wavelet transformation and a decomposition level are applied to the nonstationary time series data.
Secondly, the series is reconstructed by removing the high-frequency component, using the wavelet denoising method. Lastly, an appropriate ARIMA model is applied to the reconstructed series to generate out-of-sample forecasts of the given time series data. Wavelets were first considered as a family of functions by Morlet \cite{wang2002multiple}, constructed from the translations and dilations of a single function, which is called the ``mother wavelet". These wavelets are defined as follows: $$ \phi_{m,n}(t) = \frac{1}{\sqrt{|m|}} \phi\left(\frac{t-n}{m}\right); \; \; m, n \in \mathcal{R},$$ where the parameter $m \; (\neq 0)$ is denoted as the scaling parameter or scale, and it measures the degree of compression. The parameter $n$ is used to determine the time location of the wavelet, and it is called the translation parameter. If $|m| < 1$, then the wavelet in $m$ is a compressed version (smaller support in the time domain) of the mother wavelet and primarily corresponds to higher frequencies, and when $|m| > 1$, then $\phi_{m,n}(t)$ has a larger time width than $\phi(t)$ and corresponds to lower frequencies. Hence wavelets have time widths adapted to their frequencies, which is the main reason behind the success of the Morlet wavelets in signal processing and time-frequency signal analysis \cite{nury2017comparative}. An implementation of the WARIMA model is available using the `WaveletFittingarma' function under the ``WaveletArima" package in R statistical software \cite{paul2017package}. \subsection{Autoregressive fractionally integrated moving average (ARFIMA) model} Fractionally autoregressive integrated moving average or autoregressive fractionally integrated moving average models are a generalized version of the ARIMA model in time series forecasting that allows non-integer values of the differencing parameter \cite{granger1980introduction}.
It may sometimes happen that our time-series data are not stationary, but differencing with an integer value of the parameter $d$ would over-difference them. To overcome this problem, it is necessary to difference the time series data using a fractional value. These models are useful in modeling time series whose deviations from the long-run mean decay more slowly than an exponential decay; such models can deal with time-series data having long memory \cite{pumi2019beta}. ARFIMA models can be mathematically expressed as follows: $$ \left( 1 - \sum_{i=1}^{p} \Phi_i B^i \right) (1 - B)^d X_t = \left( 1 + \sum_{i=1}^{q} \theta_i B^i \right)\epsilon_t,$$ where $B$ is the backshift operator, $p, q$ are the ARIMA parameters, and $d$ is the differencing term (allowed to take non-integer values). An R implementation of the ARFIMA model is available with the `arfima' function under the ``forecast" package \cite{hyndman2020package}. An ARFIMA$(p,d,q)$ model is selected and estimated automatically using the Hyndman-Khandakar (2008) \cite{hyndman2008forecasting} algorithm to select $p$ and $q$ and the Haslett and Raftery (1989) \cite{haslett1989space} algorithm to estimate the parameters, including $d$. \subsection{Exponential smoothing state space (ETS) model} Exponential smoothing state space methods are very effective for time series forecasting. Exponential smoothing was proposed in the late 1950s \cite{winters1960forecasting} and has motivated some of the most successful forecasting methods. Forecasts produced using exponential smoothing methods are weighted averages of past observations, with the weights decaying exponentially as the observations get older. The ETS models belong to the family of state-space models, consisting of three components: an error component (E), a trend component (T), and a seasonal component (S). This method is used to forecast univariate time series data.
Each model consists of a measurement equation that describes the observed data, and some state equations that describe how the unobserved components or states (level, trend, seasonal) change over time \cite{hyndman2018forecasting}. Hence, these are referred to as state-space models. The flexibility of the ETS model lies in its ability to combine trend and seasonal components of different traits. Errors can be of two types: additive and multiplicative. The trend component can be any of the following: none, additive, additive damped, multiplicative, and multiplicative damped. The seasonal component can be of three types: none, additive, and multiplicative. Thus, there are 15 models with additive errors and 15 models with multiplicative errors. To determine the best of the 30 ETS models, several criteria such as Akaike's Information Criterion (AIC), the corrected Akaike's Information Criterion (AICc), and the Bayesian Information Criterion (BIC) can be used \cite{hyndman2008forecasting}. An R implementation of the model is available in the `ets' function under the ``forecast" package \cite{hyndman2020package}. \subsection{Self-exciting threshold autoregressive (SETAR) model} As an extension of the autoregressive model, the self-exciting threshold autoregressive (SETAR) model is used to model time series data in order to allow for a higher degree of flexibility in model parameters through a regime-switching behaviour \cite{tong1990non}. Given a time-series data $y_t$, the SETAR model is used to predict future values, assuming that the behavior of the time series changes once the series enters a different regime. This switch from one regime to another depends on the past values of the series. The model consists of $k+1$ autoregressive (AR) parts, each for a different regime. The model is usually denoted as SETAR $(k,p)$, where $k$ is the number of thresholds, so that there are $k+1$ regimes in the model, and $p$ is the order of the autoregressive part.
For example, suppose an AR(1) model is assumed in both regimes; then a two-regime SETAR model is given by \cite{franses2000non}: \begin{equation} \begin{split} y_t & = \phi_{0,1} + \phi_{1,1}y_{t-1} + \epsilon_t \; \; \text{if} \; \; y_{t-1} \leq c, \\ & = \phi_{0,2} + \phi_{1,2}y_{t-1} + \epsilon_t \; \; \text{if} \; \; y_{t-1} > c, \end{split} \end{equation} where, for the moment, the $\epsilon_t$ are assumed to be an i.i.d. white noise sequence conditional upon the history of the time series, and $c$ is the threshold value. The SETAR model assumes that the border between the two regimes is given by a specific value of the threshold variable $y_{t-1}$. The model can be implemented using the `setar' function under the ``tsDyn" package in R \cite{di2020package}. \subsection{Bayesian structural time series (BSTS) model} Bayesian statistics underpins many statistical techniques, such as regression, classification, clustering, and time series analysis. Scott and Varian \cite{scott2014predicting} used structural time series models to show how Google search data can be used to improve short-term forecasts of economic time series. In the structural time series model, the observation at time $t$, $y_t$, is defined as follows: $$ y_t = X_{t}^{T}\beta_t + \epsilon_t,$$ where $\beta_t$ is the vector of latent variables, $X_t$ is the vector of model parameters, and the $\epsilon_t$ are assumed to follow Normal distributions with zero mean and variance $H_t$. In addition, $\beta_t$ evolves as follows: $$ \beta_{t+1} = S_t \beta_t + R_t \delta_t, $$ where the $\delta_t$ are assumed to follow Normal distributions with zero mean and variance $Q_t$. A Gaussian distribution is selected as the prior of the BSTS model since the observed frequency values range from 0 to $\infty$ \cite{jammalamadaka2018multivariate}.
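To make the state-space recursion above concrete, the following sketch (our own minimal illustration, not the ``bsts" implementation) simulates the simplest special case, a local-level model, in which $X_t = 1$ and $S_t = R_t = 1$:

```python
import random

def simulate_local_level(n, sigma_obs, sigma_state, seed=0):
    """Sketch of the structural time series recursion specialized to a
    local-level model (X_t = 1, S_t = R_t = 1):
        y_t       = beta_t + eps_t,      eps_t   ~ N(0, sigma_obs^2)
        beta_{t+1} = beta_t + delta_t,   delta_t ~ N(0, sigma_state^2)
    Returns the simulated observations y_1, ..., y_n."""
    rng = random.Random(seed)
    beta, ys = 0.0, []
    for _ in range(n):
        ys.append(beta + rng.gauss(0.0, sigma_obs))   # measurement equation
        beta = beta + rng.gauss(0.0, sigma_state)     # state equation
    return ys
```

With both variances set to zero the recursion collapses to a constant series, which is a quick sanity check on the two equations.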
An R implementation is available under the ``bsts" package \cite{scott2020package}, where one can add local linear trend and seasonal components as required. The state specification is passed as an argument to the `bsts' function, along with the data and the desired number of Markov chain Monte Carlo (MCMC) iterations, and the model is fit using an MCMC algorithm \cite{scott2013bayesian}. \subsection{Theta model} The `Theta method' or `Theta model' is a univariate time series forecasting technique that performed particularly well in the M3 forecasting competition and remains of interest to forecasters \cite{assimakopoulos2000theta}. The method decomposes the original data into two or more lines, called theta lines, and extrapolates them using forecasting models. Finally, the predictions are combined to obtain the final forecasts. The theta lines can be estimated by simply modifying the `curvatures' of the original time series \cite{spiliotis2020generalizing}. This change is obtained from a coefficient, called the $\theta$ coefficient, which is directly applied to the second differences of the time series: \begin{equation} Y^{"}_{new}(\theta) = \theta Y^{"}_{data}, \label{eqn1} \end{equation} where $ Y^{"}_{data}= Y_t - 2 Y_{t-1} + Y_{t-2}$ at time $t$ for $t=3,4,\cdots,n$ and $\{Y_1,Y_2,\cdots,Y_n\}$ denotes the observed univariate time series. In practice, the coefficient $\theta$ can be considered a transformation parameter which creates a series with the same mean and slope as the original data but a different variance. Now, Eqn. (\ref{eqn1}) is a second-order difference equation and has a solution of the following form \cite{hyndman2003unmasking}: \begin{equation} Y_{new}(\theta) = a_{\theta} + b_{\theta}(t-1) + \theta Y_{t}, \label{eqn2} \end{equation} where $a_{\theta}$ and $b_{\theta}$ are constants and $t=1,2,\cdots,n$. Thus, $Y_{new}(\theta)$ is equivalent to a linear function of $Y_t$ with a linear trend added.
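A theta line of the form in Eqn. (\ref{eqn2}) can be constructed by a simple least-squares fit: $a_{\theta}$ and $b_{\theta}$ are obtained by regressing $(1-\theta)Y_t$ on the linear trend $t-1$. The sketch below is our own illustration of this construction, not the ``forecast" package implementation:

```python
def theta_line(y, theta):
    """Construct the theta line a_theta + b_theta*(t-1) + theta*Y_t,
    fitting a_theta and b_theta by ordinary least squares of
    (1-theta)*Y_t on the trend t-1. Assumes len(y) >= 2."""
    n = len(y)
    t = list(range(n))                        # t-1 = 0, 1, ..., n-1
    z = [(1 - theta) * v for v in y]          # regression target
    tbar, zbar = sum(t) / n, sum(z) / n
    b = sum((ti - tbar) * (zi - zbar) for ti, zi in zip(t, z)) / \
        sum((ti - tbar) ** 2 for ti in t)
    a = zbar - b * tbar
    return [a + b * ti + theta * yi for ti, yi in zip(t, y)]
```

Two sanity checks follow directly from the text: $\theta = 1$ reproduces the original series, $\theta = 0$ gives the fitted linear trend, and every theta line has the same mean as the data.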
The values of $a_{\theta}$ and $b_{\theta}$ are computed by minimizing the sum of squared differences: \begin{equation} \displaystyle\sum_{t=1}^{n} \left[ Y_t - Y_{new}(\theta) \right]^2 = \displaystyle\sum_{t=1}^{n}\left[(1-\theta) Y_{t} - a_{\theta} - b_{\theta}(t-1) \right]^2. \label{eqn3} \end{equation} Forecasts from the Theta model are obtained as a weighted average of forecasts of $Y_{new}(\theta)$ for different values of $\theta$. Also, the prediction intervals and likelihood-based estimation of the parameters can be obtained based on a state-space model, as demonstrated in \cite{hyndman2003unmasking}. An R implementation of the Theta model is possible with the `thetaf' function in the ``forecast" package \cite{hyndman2020package}. \subsection{TBATS model} The main objective of the TBATS model is to deal with complex seasonal patterns using exponential smoothing \cite{de2011forecasting}. The name is an acronym for the key features of the model: Trigonometric seasonality (T), Box-Cox transformation (B), ARMA errors (A), Trend (T), and Seasonal (S) components. TBATS makes it easy to handle data with multiple seasonal patterns and is preferable when the seasonality changes over time \cite{hyndman2018forecasting}. TBATS models can be described as follows: $$ y_{t}^{(\mu)} = l_{t-1} + \phi b_{t-1} + \sum_{i=1}^{T} s_{t-m_i}^{(i)} + d_t $$ $$ l_{t} = l_{t-1} + \phi b_{t-1} + \alpha d_t $$ $$ b_{t} = \phi b_{t-1} + \beta d_t $$ $$ d_t = \sum_{i=1}^{p} \psi_i d_{t-i} + \sum_{j=1}^{q} \theta_j e_{t-j} + e_{t} ; $$ where $y_{t}^{(\mu)}$ is the time series at time point $t$ (Box-Cox transformed), $s_{t}^{(i)}$ is the $i$-th seasonal component, $l_t$ is the local level, $b_t$ is the trend with damping, $d_t$ is the ARMA$(p,q)$ process for residuals, and $e_t$ is Gaussian white noise. The TBATS model can be implemented using the `tbats' function under the ``forecast" package in R statistical software \cite{hyndman2020package}.
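The level and trend recursions above can be traced in code. The following sketch is our own stripped-down illustration of those two recursions only: it omits the Box-Cox transformation and the trigonometric seasonal components, and takes the ARMA error process $d_t$ as a given input sequence, so it is not the full TBATS model:

```python
def tbats_recursion(d_seq, alpha, beta, phi, l0=0.0, b0=0.0):
    """Trace the non-seasonal level/trend recursions of the text:
        y_t = l_{t-1} + phi*b_{t-1} + d_t
        l_t = l_{t-1} + phi*b_{t-1} + alpha*d_t
        b_t = phi*b_{t-1} + beta*d_t
    d_seq supplies the ARMA error process d_t directly."""
    l, b, ys = l0, b0, []
    for d in d_seq:
        y = l + phi * b + d
        l, b = l + phi * b + alpha * d, phi * b + beta * d
        ys.append(y)
    return ys
```

With all $d_t = 0$ the recursion reduces to a damped trend: each step adds $\phi b_{t-1}$ to the level, and the trend itself shrinks by the damping factor $\phi$.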
\subsection{Artificial neural networks (ANN) model} Forecasting with artificial neural networks (ANN) has received increasing interest in various research and applied domains since the late 1990s, and it has been given special attention in epidemiological forecasting \cite{philemon2019review}. Multi-layered feed-forward neural networks with back-propagation learning rules are the most widely used models, with applications in classification and prediction problems \cite{zhang1998forecasting}. In a simple feed-forward neural network, there is a single hidden layer between the input and output layers, with weights connecting the layers. Let $\omega_{ji}$ denote the weights between the input and hidden layers, and $\nu_{kj}$ the weights between the hidden and output layers. Given the inputs $x_i$, a neuron's net input is calculated as the weighted sum of its inputs. The output of the neuron, $y_j$, is obtained from a sigmoidal function of this net input \cite{goodfellow2016deep}. For the $j^{th}$ hidden neuron, the net input and output are: $$net_j^h= \displaystyle\sum_{i=1}^{n} \omega_{ji} x_i \; \; \; \text{and} \; \; \; y_j=f(net_j^h).$$ For the $k^{th}$ output neuron: $$net_k^o=\displaystyle\sum_{j=1}^{J+1} \nu_{kj} y_j \; \; \; \text{and} \; \; \; o_k=f(net_k^o), \; \; \; \text{where} \; f(net)= \frac{1}{1+e^{-\lambda net}}$$ and $\lambda \in (0,1)$ is a parameter used to control the gradient of the function and $J$ is the number of neurons in the hidden layer. The back-propagation \cite{rumelhart1985learning} learning algorithm is the most commonly used training technique in ANN. In the error back-propagation step, the weights in the ANN are updated by minimizing $$E=\frac{1}{2P} \displaystyle\sum_{p=1}^{P} \displaystyle\sum_{k=1}^{K} (d_{pk}-O_{pk})^2,$$ where $d_{pk}$ is the desired output of neuron $k$ for input pattern $p$ and $O_{pk}$ is the corresponding actual network output.
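The forward pass just described can be sketched as follows. This is an illustrative snippet with hypothetical weight matrices; biases are omitted, exactly as in the net-input formulas above:

```python
import math

def sigmoid(net, lam=1.0):
    """f(net) = 1 / (1 + exp(-lambda * net))."""
    return 1.0 / (1.0 + math.exp(-lam * net))

def forward(x, W_hidden, V_out, lam=1.0):
    """Forward pass of a single-hidden-layer feed-forward network:
    hidden outputs y_j = f(sum_i w_ji * x_i), then
    network outputs o_k = f(sum_j v_kj * y_j).
    W_hidden[j][i] plays the role of omega_ji, V_out[k][j] of nu_kj."""
    y = [sigmoid(sum(w * xi for w, xi in zip(row, x)), lam) for row in W_hidden]
    o = [sigmoid(sum(v * yj for v, yj in zip(row, y)), lam) for row in V_out]
    return o
```

For instance, with a zero input vector every hidden neuron outputs $f(0) = 0.5$ regardless of the weights, which makes the snippet easy to check by hand.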
A common heuristic for selecting the number of neurons in the hidden layer is $h=\frac{(i+j)}{2} + \sqrt{d}$, where $i$ is the number of inputs, $j$ is the number of outputs, and $d$ denotes the number of training patterns \cite{zhang2005neural}. The application of ANN to time series data is possible with the `mlp' function under the ``nnfor" package in R \cite{kourentzes2017nnfor}. \subsection{Autoregressive neural network (ARNN) model} The autoregressive neural network (ARNN) received attention in the time series literature in the late 1990s \cite{faraway1998time}. The architecture of a simple feedforward neural network can be described as a network of neurons arranged in an input layer, a hidden layer, and an output layer in a prescribed order. Each layer passes information to the next layer using weights that are obtained via a learning algorithm \cite{zhang2005neural}. The ARNN model is a modification of the simple ANN model especially designed for prediction problems on time series datasets \cite{faraway1998time}. The ARNN model uses a pre-specified number of lagged values of the time series as inputs, and the number of hidden neurons in its architecture is also fixed \cite{hyndman2018forecasting}. The ARNN($p,k$) model uses $p$ lagged inputs of the time series data in a one-hidden-layer feedforward neural network with $k$ hidden units. Let $\underline{x}$ denote the vector of $p$ lagged inputs and $f$ a neural network of the following architecture: \begin{equation} f(\underline{x}) = c_{0} + \displaystyle \sum_{j=1}^{k} w_j \phi \left( a_j + b_{j} '\underline{x} \right); \label{eqn4} \end{equation} where $c_0, a_j, w_j$ are connecting weights, the $b_j$ are $p$-dimensional weight vectors, and $\phi$ is a bounded nonlinear sigmoidal function (e.g., the logistic squasher or the hyperbolic tangent activation function). These weights are trained using gradient descent back-propagation \cite{rumelhart1985learning}.
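Eqn. (\ref{eqn4}) can be evaluated directly. The sketch below (our own illustration, not the ``forecast" package's `nnetar') computes $f(\underline{x})$ for one vector of lagged inputs, using the logistic squasher as $\phi$:

```python
import math

def arnn_forecast(lags, c0, w, a, b):
    """Evaluate f(x) = c0 + sum_j w_j * phi(a_j + b_j' x) for one vector
    of p lagged inputs, with phi the logistic squasher.
    w[j], a[j] are scalars and b[j] is a p-dimensional weight vector."""
    phi = lambda z: 1.0 / (1.0 + math.exp(-z))
    return c0 + sum(
        wj * phi(aj + sum(bji * xi for bji, xi in zip(bj, lags)))
        for wj, aj, bj in zip(w, a, b)
    )
```

With zero lagged inputs and $a_j = 0$, each hidden unit contributes $w_j \phi(0) = w_j / 2$, which gives a quick hand check of the formula.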
A standard ANN faces the dilemma of choosing the number of hidden neurons in the hidden layer, for which the optimal choice is unknown. For the ARNN model, we adopt the formula $k = [(p+1)/2]$ for non-seasonal time series data, where $p$ is the number of lagged inputs in an autoregressive model \cite{hyndman2018forecasting}. The ARNN model can be applied using the `nnetar' function available in the R statistical package ``forecast" \cite{hyndman2020package}. \subsection{Ensemble forecasting models} The idea of ensembling time series forecasts was introduced by Bates and Granger (1969) in their seminal work \cite{bates1969combination}. Forecasts generated from ARIMA, ETS, Theta, ARNN, and WARIMA can be combined with equal weights, weights based on in-sample errors, or cross-validated weights. In the ensemble framework, cross-validation for time series data with user-supplied models and forecasting functions is also possible to evaluate model accuracy \cite{shaub2020fast}. Combining several candidate models can hedge against an incorrect model specification. Bates and Granger (1969) \cite{bates1969combination} suggested such an approach and observed, somewhat surprisingly, that the combined forecast can even outperform the single best component forecast. While combination weights selected equally or proportionally to past model errors are possible approaches, many more sophisticated combination schemes have been suggested. For example, rather than normalizing weights to sum to unity, unconstrained and even negative weights are possible \cite{granger1984improved}. The simple equal-weights combination might appear woefully obsolete and non-competitive compared to the multitude of sophisticated combination approaches or advanced machine learning and neural network forecasting models, especially in the age of big data. However, such simple combinations can still be competitive, particularly for pandemic time series \cite{shaub2020fast}.
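The weighted combination scheme can be sketched in a few lines. This is our own illustrative snippet (the function name is ours); the component forecasts would in practice come from fitted models such as ARIMA, ETS, or ARNN:

```python
def ensemble_forecast(forecasts, weights=None):
    """Combine component forecasts f_m(i) into f(i) = sum_m w_m * f_m(i).
    `forecasts` is a list of n equal-length forecast paths; with
    weights=None the simple equal-weights combination is used."""
    n = len(forecasts)
    if weights is None:
        weights = [1.0 / n] * n          # equal weights
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to unity"
    horizon = len(forecasts[0])
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(horizon)]
```

The weights here are normalized to sum to one, matching the convention discussed above; unconstrained or negative weights would simply skip the normalization check.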
A flow diagram of the ensemble method is presented in Figure \ref{flow_chart_ensemble}. \begin{figure}[H] \centering \includegraphics[scale=0.75]{ensemble_diagram.eps} \caption{Flow diagram of the ensemble model, where M1, M2, and M3 are three different univariate time series models} \label{flow_chart_ensemble} \end{figure} The ensemble method of \cite{bates1969combination} produces forecasts out to a horizon $h$ by applying a weight $w_m$ to each of the $n$ model forecasts in the ensemble. The ensemble forecast $f(i)$ for time horizon $1 \leq i \leq h$, with individual component model forecasts $f_m(i)$, is then $$ f(i) = \displaystyle\sum_{m=1}^{n} w_m f_m(i). $$ The weights can be determined in several ways (for example, supplied by the user, set equally, determined by in-sample errors, or determined by cross-validation). The ``forecastHybrid" package in R includes these component models and enhances the ``forecast" package base models with easy ensembling (e.g., the `hybridModel' function in R statistical software) \cite{shaub4forecasthybrid}. \subsection{Hybrid forecasting models} The idea of hybridizing time series models and combining different forecasts was first introduced by Zhang \cite{zhang2003time} and further extended in \cite{khashei2010artificial, chakraborty2019forecasting, chakraborty2020real, chakraborty2020theta}. The hybrid forecasting models are based on an error re-modeling approach, and broadly two types of error models are popular in the literature, given below \cite{mosleh1986assessment, chowdhury2020multiplicative}: \begin{definition} In the additive error model, the forecaster treats the expert's estimate as a variable, $\hat{Y_t}$, and thinks of it as the sum of two terms: $$\hat{Y_t}=Y_t + e_t,$$ where $Y_t$ is the true value and $e_t$ is the additive error term.
\label{def1} \end{definition} \begin{definition} In the multiplicative error model, the forecaster treats the expert's estimate $\hat{Y_t}$ as the product of two terms: $$\hat{Y_t}=Y_t \times e_t,$$ where $Y_t$ is the true value and $e_t$ is the multiplicative error term. \label{def2} \end{definition} Now, even if the relationship is of product type, it becomes additive on the logarithmic scale. Hence, without loss of generality, we may assume the relationship to be additive and expect the (additive) errors of a forecasting model to be random shocks \cite{chakraborty2020theta}. These hybrid models are useful for complex correlation structures where little knowledge is available about the data-generating process. A simple example is the daily confirmed COVID-19 cases for various countries, where very little is known about the structural properties of the current pandemic. The mathematical formulation of the proposed hybrid model ($Z_t$) is as follows: \begin{eqnarray*} Z_t &=& L_t + N_t, \end{eqnarray*} where $L_t$ is the linear part and $N_t$ is the nonlinear part of the hybrid model. We can estimate both $L_t$ and $N_t$ from the available time series data. Let $\hat{L_t}$ be the forecast value of the linear model (e.g., ARIMA) at time $t$ and $\epsilon_{t}$ the error residuals at time $t$ obtained from the linear model. Then, we write \begin{eqnarray*} \epsilon_{t} & = & Z_t - \hat{L_t}. \end{eqnarray*} These left-out residuals are further modeled by a nonlinear model (e.g., ANN or ARNN) and can be represented as follows: \begin{eqnarray*} \epsilon_{t} &=& f(\epsilon_{t-1}, \epsilon_{t-2}, . . . , \epsilon_{t-p}) + \varepsilon_t, \end{eqnarray*} where $f$ is a nonlinear function modeled by the ANN or ARNN model as defined in Eqn. (\ref{eqn4}) and $\varepsilon_t$ represents the random shocks.
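The two-stage scheme can be sketched end to end. In the illustration below (our own, not the implementation used in the study), a least-squares AR(1) fit stands in for the linear stage, and an arbitrary user-supplied `residual_model` callable stands in for the nonlinear ANN/ARNN stage; the final forecast is the sum of the two stages:

```python
def fit_ar1(series):
    """Least-squares AR(1) fit z_t ~ c + phi * z_{t-1}; a stand-in
    for the linear stage (e.g., ARIMA) of the hybrid model."""
    x, y = series[:-1], series[1:]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    phi = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
          sum((a - xbar) ** 2 for a in x)
    return ybar - phi * xbar, phi        # (c, phi)

def hybrid_forecast(series, residual_model):
    """Two-stage hybrid forecast: fit the linear part, model its
    residuals with `residual_model` (a stand-in for the ANN/ARNN
    stage), and return the one-step-ahead sum of the two stages."""
    c, phi = fit_ar1(series)
    fitted = [c + phi * z for z in series[:-1]]
    residuals = [z - f for z, f in zip(series[1:], fitted)]
    L_hat = c + phi * series[-1]          # linear one-step forecast
    N_hat = residual_model(residuals)     # residual-stage forecast
    return L_hat + N_hat
```

When the linear stage already fits the data perfectly, the residuals vanish and the residual stage contributes nothing, which is the degenerate case in which the hybrid model coincides with the linear one.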
Therefore, the combined forecast is obtained as follows: \begin{eqnarray*} \hat{Z_t} &=& \hat{L_t} + \hat{N_t}, \end{eqnarray*} where $\hat{N_t}$ is the forecast value of the nonlinear time series model. An overall flow diagram of the proposed hybrid model is given in Figure \ref{flow_chart_hybrid}. In the hybrid model, a nonlinear model is applied in the second stage to re-model the left-over autocorrelations in the residuals, which the linear model could not capture. Thus, this can be considered an error re-modeling approach. This is important because, due to model misspecification and disturbances in the pandemic rate time series, the linear models may fail to generate white noise behavior in the forecast residuals. Thus, hybrid approaches can eventually improve the predictions for epidemiological forecasting problems, as shown in \cite{chakraborty2019forecasting, chakraborty2020real, chakraborty2020theta}. These hybrid models only assume that the linear and nonlinear components of the epidemic time series can be separated individually. The implementation of the hybrid models used in this study is available in \cite{github2020csf}. \begin{figure}[H] \centering \includegraphics[scale=0.75]{hybrid_diagram.eps} \caption{Flow diagram of the hybrid forecasting model} \label{flow_chart_hybrid} \end{figure} \section{Experimental analysis} Five COVID-19 time series datasets, for the USA, India, Russia, Brazil, and Peru, are considered for assessing twenty forecasting models (individual, ensemble, and hybrid). The datasets are mostly nonlinear, nonstationary, and non-Gaussian in nature. We have used the root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and symmetric MAPE (SMAPE) to evaluate the predictive performance of the models used in this study. Since the number of data points in these datasets is limited, advanced deep learning techniques would over-fit them \citep{hastie2009elements}.
\subsection{Datasets} We use publicly available datasets to compare various forecasting frameworks. COVID-19 cases of the five countries with the highest number of cases were collected \cite{owd2020, wom2020}. The datasets and their descriptions are presented in Table \ref{tab:data_descrip}. \begin{table} \centering \caption{Description of COVID-19 datasets} \begin{tabular}{|p{2cm}|p{3cm}|p{3cm}|p{2cm}|} \hline \textbf{Countries} & \textbf{Start date} & \textbf{End date} & \textbf{Length} \\ \hline USA & 20/01/2020 & 15/09/2020 & 240\\ \hline India & 29/01/2020 & 15/09/2020 & 231\\ \hline Brazil & 25/02/2020 & 15/09/2020 & 204\\ \hline Russia & 31/01/2020 & 15/09/2020 & 229 \\ \hline Peru & 06/03/2020 & 15/09/2020 & 194 \\ \hline \end{tabular} \label{tab:data_descrip} \end{table} \subsection{Global characteristics} Characteristics of these five time series were examined using the Hurst exponent, the KPSS test, the Terasvirta test, and the other measures described in Section \ref{global}. The Hurst exponent (denoted by H), which ranges between zero and one, is calculated to measure the long-range dependency in a time series and provides a measure of long-term nonlinearity. For values of H near zero, the time series under consideration is mean-reverting: an increase in the value will tend to be followed by a decrease in the series, and vice versa. When H is close to 0.5, the series has no autocorrelation with past values; such series are often called Brownian motion. When H is near one, an increase or decrease in the value is most likely to be followed by a similar movement in the future. All five COVID-19 datasets in this study have Hurst exponent values near one, indicating strongly trending behavior: an increase tends to be followed by another increase, and a decrease by another decrease. KPSS tests are performed to examine the stationarity of a given time series. The null hypothesis for the KPSS test is that the time series is stationary.
Thus, the series is nonstationary when the p-value is less than the chosen significance level. From Table \ref{tab:data_tests}, all five datasets can be characterized as nonstationary, as the p-value $<$ 0.01 in each instance. The Terasvirta test examines the linearity of a time series against the alternative that a nonlinear process has generated the series; its null hypothesis is linearity. At the 5\% level, the USA, India, and Brazil datasets show evidence of nonlinearity, whereas the evidence is weak for Russia (p-value = 0.0566) and absent for Peru (p-value = 0.8471), which are therefore more consistent with a linear trend. \begin{table} \centering \caption{Test results on COVID-19 datasets} \begin{tabular}{|p{2cm}|p{3cm}|p{3cm}|p{3cm}|} \hline \textbf{Countries} & \textbf{Hurst exponent} & \textbf{KPSS test} & \textbf{Terasvirta test} \\ \hline USA & 0.9996 & p-value $<$ 0.01 & p-value = 0.0181\\ \hline India & 0.9997 & p-value $<$ 0.01 & p-value $<$ 0.01 \\ \hline Brazil & 0.9974 & p-value $<$ 0.01 & p-value $<$ 0.01\\ \hline Russia & 0.9992 & p-value $<$ 0.01 & p-value = 0.0566 \\ \hline Peru & 0.9983 & p-value $<$ 0.01 & p-value = 0.8471 \\ \hline \end{tabular} \label{tab:data_tests} \end{table} Further, we examine the serial correlation, skewness, kurtosis, and maximum Lyapunov exponent of the five COVID-19 datasets. The results are reported in Table \ref{tab:data_chars}. The serial correlation of the datasets is assessed using the Box-Pierce test statistic for the null hypothesis of independence in a given time series. The p-values for each of the datasets were found to be below the significance level (see Table \ref{tab:data_chars}), indicating that these COVID-19 datasets exhibit significant serial correlation at lag one. The skewness for the Russia COVID-19 dataset is negative, whereas the other four datasets are positively skewed. This means that for the Russia dataset the left tail is heavier than the right tail, while for the other four datasets the right tail is heavier than the left tail.
The kurtosis value for the India dataset is positive, while the other four datasets have negative kurtosis values. Therefore, the COVID-19 dataset of India tends to have a peaked distribution, while the other four datasets may have flat distributions. We observe that each of the five datasets is non-chaotic in nature, i.e., the maximum Lyapunov exponents are less than unity. A summary of the implementation tools is presented in Table \ref{r}. \begin{table} \centering \caption{Characteristics of COVID-19 datasets} \begin{tabular}{|p{2cm}|p{3cm}|p{2cm}|p{2cm}|p{4cm}|} \hline \textbf{Countries} & \textbf{Box test} & \textbf{Skewness} & \textbf{Kurtosis} & \textbf{Chaotic/Non-chaotic} \\ \hline USA & p-value $<$ 0.01 & 0.4971 & - 0.7465 & Non-chaotic \\ \hline India & p-value $<$ 0.01 & 1.4981 & 0.9422 & Non-chaotic \\ \hline Brazil & p-value $<$ 0.01 & 0.6897 & -0.7124 & Non-chaotic \\ \hline Russia & p-value $<$ 0.01 & - 0.0544 & -1.4439 & Non-chaotic \\ \hline Peru & p-value $<$ 0.01 & 0.4421 & -0.2142 & Non-chaotic \\ \hline \end{tabular} \label{tab:data_chars} \end{table} \begin{table} \centering \caption{R functions and packages for implementation.}\label{R_functions_packages} \begin{tabular}{|c|c|c|c|} \hline Model & R function & R package & Reference \\ \hline ARIMA & auto.arima & forecast & \cite{hyndman2007automatic} \\ \hline ETS & ets & forecast & \cite{hyndman2007automatic} \\ \hline SETAR & setar & tsDyn & \cite{di2020package} \\ \hline TBATS & tbats & forecast & \cite{hyndman2007automatic} \\ \hline Theta & thetaf & forecast & \cite{hyndman2007automatic} \\ \hline ANN & mlp & nnfor & \cite{kourentzes9nnfor} \\ \hline ARNN & nnetar & forecast & \cite{hyndman2007automatic} \\ \hline WARIMA & WaveletFittingarma & WaveletArima & \cite{paul2017package} \\ \hline BSTS & bsts & bsts & \cite{scott2020package} \\ \hline ARFIMA & arfima & forecast & \cite{hyndman2007automatic} \\ \hline Ensemble models & hybridModel & forecastHybrid &
\cite{shaub4forecasthybrid} \\ \hline Hybrid models & - & - & \cite{github2020csf} \\ \hline \end{tabular} \label{r} \end{table} \subsection{Accuracy metrics} We use four popular accuracy metrics to evaluate the performance of the different time series forecasting models. The expressions of these metrics are given below: $$ RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2} ; \; MAE = \frac{1}{n} \sum_{i=1}^n |y_i - \hat{y}_i| ; $$ $$ MAPE = \frac{1}{n} \sum_{i=1}^n \left|\frac{\hat{y}_i - y_i}{y_i}\right| \times 100 ; \; SMAPE =\frac{1}{n} \sum_{i=1}^n \frac{|\hat{y}_i - y_i|}{(|\hat{y}_i|+|y_i|)/2} \times 100 ; $$ where $y_i$ are the actual values, $\hat{y}_i$ are the predictions from the different models, and $n$ represents the number of data points in the time series. The model with the smallest values of these accuracy metrics is regarded as the best forecasting model. \subsection{Analysis of results} This subsection is devoted to the experimental analysis of confirmed COVID-19 cases using different time series forecasting models. The test period is chosen to be 15 days and 30 days, whereas the rest of the data is used as training data (see Table \ref{tab:data_descrip}). In the first columns of Tables \ref{table_15_days} and \ref{table_30_days}, we present the training and test data for the USA, India, Brazil, Russia, and Peru. The autocorrelation function (ACF) and partial autocorrelation function (PACF) plots are also depicted for the training period of each of the five countries in Tables \ref{table_15_days} and \ref{table_30_days}. ACF and PACF plots are generated after applying the required number of differences to each training set using the R function `diff'. The required order of differencing is obtained using the R function `ndiffs', which estimates the number of differences required to make a given time series stationary. The integer-valued order of differencing is then used as the value of `$d$' in the ARIMA$(p,d,q)$ model.
The other two parameters, `$p$' and `$q$', are obtained from the ACF and PACF plots, respectively (see Tables \ref{table_15_days} and \ref{table_30_days}). However, we choose the best-fitted ARIMA model using the AIC value for each training dataset. Table \ref{table_15_days} presents the training data (black colored) and test data (red-colored) together with the corresponding ACF and PACF plots for the five time-series datasets. \begin{table}[H] \centering \caption{Pandemic datasets and corresponding ACF, PACF plots with 15-days test data}\label{table_15_days} \vspace{1cm} \begin{tabular}{ | c | p{4cm} | p{4cm} | p{4cm} |} \hline Country & Data & ACF plot & PACF plot \\ \hline USA & \begin{minipage}{.30\textwidth} \includegraphics[width=40mm, height=30mm]{data_USA_15_days.eps} \end{minipage} & \begin{minipage}{.30\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_USA_15_days.eps} \end{minipage} & \begin{minipage}{.30\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_USA_15_days.eps} \end{minipage} \\ \hline India & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_India_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_India_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_India_15_days.eps} \end{minipage} \\ \hline Brazil & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_Brazil_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_Brazil_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_Brazil_15_days.eps} \end{minipage} \\ \hline Russia & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_Russia_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_Russia_15_days.eps} \end{minipage} &
\begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_Russia_15_days.eps} \end{minipage} \\ \hline Peru & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_Peru_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_Peru_15_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_Peru_15_days.eps} \end{minipage} \\ \hline \end{tabular} \end{table} Further, we evaluated twenty different forecasting models as competitors for the short-term forecasting of confirmed COVID-19 cases in five countries. Forecasts 15 days and 30 days ahead were generated for each model, and accuracy metrics were computed to determine the best predictive models. From the ten popular single models, we choose the best one based on the accuracy metrics. On the other hand, one hybrid/ensemble model is selected from the rest of the ten models. The best-fitted ARIMA, ETS, ARNN, and ARFIMA models for each country are reported in the respective tables. Table \ref{table_30_days} presents the training data (black colored) and test data (red-colored) and the corresponding plots for the five datasets. Twenty forecasting models are implemented on these pandemic time-series datasets. Table \ref{r} gives the essential details about the functions and packages required for implementation.
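The four accuracy metrics defined earlier translate directly into code; the sketch below is a plain transcription of those formulas (the function name is ours):

```python
import math

def accuracy_metrics(actual, predicted):
    """Return (RMSE, MAE, MAPE, SMAPE) exactly as defined in the
    accuracy-metrics subsection; MAPE and SMAPE are in percent."""
    n = len(actual)
    errors = [p - a for a, p in zip(actual, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 / n * sum(abs((p - a) / a) for a, p in zip(actual, predicted))
    smape = 100.0 / n * sum(abs(p - a) / ((abs(p) + abs(a)) / 2)
                            for a, p in zip(actual, predicted))
    return rmse, mae, mape, smape
```

Note that MAPE is undefined when an actual value is zero, which is why SMAPE is often reported alongside it for case-count data.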
\begin{table}[H] \centering \caption{Pandemic datasets and corresponding ACF, PACF plots with 30-days test data}\label{table_30_days} \vspace{1cm} \begin{tabular}{ | c | p{4cm} | p{4cm} | p{4cm} |} \hline Country & Data & ACF plot & PACF plot \\ \hline USA & \begin{minipage}{.30\textwidth} \includegraphics[width=40mm, height=30mm]{data_USA_30_days.eps} \end{minipage} & \begin{minipage}{.30\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_USA_30_days.eps} \end{minipage} & \begin{minipage}{.30\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_USA_30_days.eps} \end{minipage} \\ \hline India & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_India_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_India_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_India_30_days.eps} \end{minipage} \\ \hline Brazil & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_Brazil_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_Brazil_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_Brazil_30_days.eps} \end{minipage} \\ \hline Russia & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_Russia_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_Russia_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{PACF_Russia_30_days.eps} \end{minipage} \\ \hline Peru & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{data_Peru_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, height=30mm]{ACF_Peru_30_days.eps} \end{minipage} & \begin{minipage}{.3\textwidth} \includegraphics[width=40mm, 
height=30mm]{PACF_Peru_30_days.eps} \end{minipage} \\ \hline \end{tabular} \end{table} \paragraph{\textbf{Results for USA COVID-19 data:}} Among the single models, ARIMA(2,1,4) performs best in terms of accuracy metrics for 15-days ahead forecasts. TBATS and ARNN(16,8) also have competitive accuracy metrics. Hybrid ARIMA-ARNN model improves the earlier ARIMA forecasts and has the best accuracy among all hybrid/ensemble models (see Table \ref{usa_accuracy_table_15_days}). Hybrid ARIMA-WARIMA also does a good job and improves ARIMA model forecasts. In-sample and out-of-sample forecasts obtained from ARIMA and hybrid ARIMA-ARNN models are depicted in Fig. \ref{Fig:USA_forecasts}(a). Out-of-sample forecasts are generated using the whole dataset as training data. \begin{table}[H] \centering \caption{Performance metrics with 15 days-ahead test set for USA}\label{usa_accuracy_table_15_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{15-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(2,1,4) & \textbf{7187.02} & \textbf{6094.95} & \textbf{16.89} & \textbf{16.07} \\ \hline ETS(A,N,N) & 8318.73 & 6759.65 & 17.82 & 17.86 \\ \hline SETAR & 8203.21 & 6725.96 & 18.19 & 17.77 \\ \hline TBATS & 7351.04 & 6367.46 & 17.86 & 16.73 \\ \hline Theta & 8112.22 & 6791.52 & 18.51 & 17.95 \\ \hline ANN & 9677.105 & 8386.223 & 25.15 & 21.69 \\ \hline ARNN(16,8) & 7633.92 & 6647.18 & 19.75 & 17.42 \\ \hline WARIMA & 9631.98 & 8182.84 & 21.09 & 22.21 \\ \hline BSTS & 10666.15 & 8527.72 & 20.91 & 23.26 \\ \hline ARFIMA(1,0.14,1) & 8413.33 & 6696.09 & 17.48 & 17.68 \\ \hline Hybrid ARIMA-ANN & 7113.72 & 6058.29 & 16.90 & 15.99 \\ \hline Hybrid ARIMA-ARNN & \textbf{5978.04} & \textbf{4650.89} & \textbf{13.22} & \textbf{12.45} \\ \hline Hybrid ARIMA-WARIMA & 6582.93 & 5217.023 & 14.33 & 13.80 \\ \hline Hybrid WARIMA-ANN & 10633.97 & 8729.11 & 21.85 & 24.22 \\ \hline Hybrid WARIMA-ARNN & 9558.34 & 8138.71 & 21.05 & 22.05 \\ \hline 
Ensemble ARIMA-ETS-Theta & 7602.06 & 6388.96 & 17.32 & 16.89 \\ \hline Ensemble ARIMA-ETS-ARNN & 7012.95 & 6184.23 & 18.09 & 16.45 \\ \hline Ensemble ARIMA-Theta-ARNN & 6933.88 & 6054.97 & 17.42 & 16.07 \\ \hline Ensemble ETS-Theta-ARNN & 7044.20 & 5950.40 & 16.97 & 15.82 \\ \hline Ensemble ANN-ARNN-WARIMA & 7437.21 & 6465.18 & 18.66 & 17.11 \\ \hline \end{tabular} \end{table} \begin{figure}[H] \includegraphics[width=0.52\textwidth]{USA_15_days.eps}(a) \includegraphics[width=0.52\textwidth]{USA_30_days.eps}(b) \caption{Plots of (a) 15-days ahead forecast results for USA COVID-19 data obtained using ARIMA and hybrid ARIMA-ARNN models. (b) 30-days ahead forecast results from ARFIMA and hybrid ARIMA-WARIMA models.} \label{Fig:USA_forecasts} \end{figure} ARFIMA(2,0,0) is found to have the best accuracy metrics for 30-days ahead forecasts among single forecasting models. BSTS and SETAR also have good agreement with the test data in terms of accuracy metrics. The hybrid ARIMA-WARIMA model has the best accuracy among all hybrid/ensemble models (see Table \ref{usa_accuracy_table_30_days}). In-sample and out-of-sample forecasts obtained from ARFIMA and hybrid ARIMA-WARIMA models are depicted in Fig. \ref{Fig:USA_forecasts}(b).
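For concreteness, the four accuracy metrics reported throughout these tables (RMSE, MAE, MAPE, SMAPE) can be computed as in the minimal sketch below. The SMAPE variant shown uses $|y_t| + |\hat{y}_t|$ in the denominator; this is an assumption for illustration, since SMAPE conventions differ and the chapter does not spell out its exact formula.

```python
import math

def accuracy_metrics(actual, forecast):
    """Return (RMSE, MAE, MAPE %, SMAPE %) for two equal-length series."""
    n = len(actual)
    errors = [f - a for a, f in zip(actual, forecast)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    # MAPE assumes no actual value is zero
    mape = 100.0 * sum(abs(e) / abs(a) for a, e in zip(actual, errors)) / n
    # SMAPE with |actual| + |forecast| in the denominator (one of several conventions)
    smape = 200.0 * sum(abs(f - a) / (abs(a) + abs(f))
                        for a, f in zip(actual, forecast)) / n
    return rmse, mae, mape, smape
```

For example, `accuracy_metrics([100, 200], [110, 190])` gives RMSE 10, MAE 10, and MAPE 7.5\%.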
\begin{table}[H] \centering \caption{Performance metrics with 30 days-ahead test set for USA}\label{usa_accuracy_table_30_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{30-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(2,1,4) with drift & 12370.18 & 10499.44 & 29.87 & 24.26 \\ \hline ETS(A,Ad,N) & 11929.897 & 9951.090 & 28.95 & 23.49 \\ \hline SETAR & 8593.527 & 6904.605 & 20.18 & 17.25 \\ \hline TBATS & 10314.23 & 8587.83 & 24.95 & 20.73 \\ \hline Theta & 12234.16 & 9858.115 & 29.03 & 23.24 \\ \hline ANN & 15241.65 & 12973.2 & 37.11 & 28.86 \\ \hline ARNN(16,8) & 19000.09 & 17311.86 & 46.95 & 36.01 \\ \hline WARIMA & 12455.31 & 9501.018 & 22.55 & 27.45 \\ \hline BSTS & 8459.763 & 6444.994 & 15.94 & 16.87 \\ \hline ARFIMA(2,0,0) & \textbf{6847.32} & \textbf{5651.33} & \textbf{14.83} & \textbf{14.40} \\ \hline Hybrid ARIMA-ANN & 12269.99 & 10339.18 & 29.46 & 23.92 \\ \hline Hybrid ARIMA-ARNN & 12584.03 & 10566.16 & 30.14 & 24.32 \\ \hline Hybrid ARIMA-WARIMA & \textbf{8514.36} & \textbf{6702.07} & \textbf{19.52} & \textbf{16.59} \\ \hline Hybrid WARIMA-ANN & 14983.09 & 11918.16 & 28.55 & 36.52 \\ \hline Hybrid WARIMA-ARNN & 12294.48 & 9330.15 & 22.14 & 26.88 \\ \hline Ensemble ARIMA-ETS-Theta & 12014.39 & 9978.22 & 29.04 & 23.49 \\ \hline Ensemble ARIMA-ETS-ARNN & 11484.49 & 10035.78 & 28.35 & 23.49 \\ \hline Ensemble ARIMA-Theta-ARNN & 13596.9 & 12000.69 & 33.86 & 27.21 \\ \hline Ensemble ETS-Theta-ARNN & 13074.13 & 11429.5 & 32.52 & 26.26 \\ \hline Ensemble ANN-ARNN-WARIMA & 11652.2 & 9947.16 & 30.60 & 24.23 \\ \hline \end{tabular} \end{table} \paragraph{\textbf{Results for India COVID-19 data:}} Among the single models, ANN performs best in terms of accuracy metrics for 15-days ahead forecasts. ARIMA(1,2,5) also has competitive accuracy metrics in the test period. 
Hybrid ARIMA-ARNN model improves the ARIMA(1,2,5) forecasts and has the best accuracy among all hybrid/ensemble models (see Table \ref{india_accuracy_table_15_days}). Hybrid ARIMA-ANN and hybrid ARIMA-WARIMA also do a good job and improve the ARIMA model forecasts. In-sample and out-of-sample forecasts obtained from ANN and hybrid ARIMA-ARNN models are depicted in Fig. \ref{Fig:India_forecasts}(a). Out-of-sample forecasts are generated using the whole dataset as training data (see Fig. \ref{Fig:India_forecasts}). \begin{figure}[H] \includegraphics[width=0.52\textwidth]{India_15_days.eps}(a) \includegraphics[width=0.52\textwidth]{India_30_days.eps}(b) \caption{Plots of (a) 15-days ahead forecast results for India COVID-19 data obtained using ANN and hybrid ARIMA-ARNN models and (b) 30-days ahead forecast results from ANN and ANN-ARNN-WARIMA (AAW) models.} \label{Fig:India_forecasts} \end{figure} \begin{table}[H] \centering \caption{Performance metrics with 15 days-ahead test set for India}\label{india_accuracy_table_15_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{15-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(1,2,5) & 8141.76 & 7479.43 & 8.36 & 8.72 \\ \hline ETS(A,A,N) & 15431 & 14415.73 & 15.92 & 17.18 \\ \hline SETAR & 22835.95 & 21851.45 & 24.24 & 27.84 \\ \hline TBATS & 11764.61 & 10837.68 & 12.00 & 12.89 \\ \hline Theta & 18405.29 & 17403.03 & 19.24 & 21.50 \\ \hline ANN & \textbf{6663.10} & \textbf{5891.94} & \textbf{6.54} & \textbf{6.81} \\ \hline ARNN(2,2) & 25617.9 & 24539.67 & 27.23 & 31.86 \\ \hline WARIMA & 12201.48 & 11103.41 & 12.25 & 13.18 \\ \hline BSTS & 13535.1 & 12402.34 & 13.65 & 14.84 \\ \hline ARFIMA(0,0.49,4) & 34848.86 & 33323.88 & 37.03 & 46.25 \\ \hline Hybrid ARIMA-ANN & 8080.862 & 7399.7 & 8.28 & 8.64 \\ \hline Hybrid ARIMA-ARNN & \textbf{7762.32} & \textbf{6560.26} & \textbf{7.20} & \textbf{7.67} \\ \hline Hybrid ARIMA-WARIMA & 8144.77 & 7455.34 & 8.32 & 8.68 \\
\hline Hybrid WARIMA-ANN & 11883.45 & 10697.21 & 11.79 & 12.65 \\ \hline Hybrid WARIMA-ARNN & 11623.15 & 10339.16 & 11.33 & 12.17 \\ \hline Ensemble ARIMA-ETS-Theta & 13734.28 & 12641.35 & 13.93 & 15.14 \\ \hline Ensemble ARIMA-ETS-ARNN & 15940.9 & 14941.65 & 16.50 & 18.17 \\ \hline Ensemble ARIMA-Theta-ARNN & 16883.48 & 15897.06 & 17.57 & 19.45 \\ \hline Ensemble ETS-Theta-ARNN & 19750.31 & 18780.61 & 20.79 & 23.42 \\ \hline Ensemble ANN-ARNN-WARIMA & 14512.1 & 13496.63 & 14.88 & 16.25 \\ \hline \end{tabular} \end{table} ANN is found to have the best accuracy metrics for 30-days ahead forecasts among single forecasting models for India COVID-19 data. The ensemble ANN-ARNN-WARIMA model has the best accuracy among all hybrid/ensemble models (see Table \ref{india_accuracy_table_30_days}). In-sample and out-of-sample forecasts obtained from ANN and ensemble ANN-ARNN-WARIMA models are depicted in Fig. \ref{Fig:India_forecasts}(b). \begin{table}[H] \centering \caption{Performance metrics with 30 days-ahead test set for India}\label{india_accuracy_table_30_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{30-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(1,2,5) & 17755.52 & 15657.27 & 18.67 & 21.01 \\ \hline ETS(A,A,N) & 14873.78 & 13051.98 & 15.57 & 17.18 \\ \hline SETAR & 21527.58 & 18609.71 & 21.98 & 25.49 \\ \hline TBATS & 24849.07 & 21843.15 & 25.96 & 30.82 \\ \hline Theta & 21713.03 & 19191.21 & 22.84 & 26.47 \\ \hline ANN & \textbf{6379.91} & \textbf{4800.13} & \textbf{6.48} & \textbf{6.13} \\ \hline ARNN(8,4) & 13225.43 & 10287.29 & 11.90 & 13.06 \\ \hline WARIMA & 14720.81 & 12738.66 & 15.15 & 16.72 \\ \hline BSTS & 14332.3 & 12493.74 & 14.88 & 16.34 \\ \hline ARFIMA(0,0.5,4) & 40115.62 & 36452.33 & 43.87 & 58.73 \\ \hline Hybrid ARIMA-ANN & 17640.51 & 15535.58 & 18.53 & 20.83 \\ \hline Hybrid ARIMA-ARNN & 17580.41 & 15507.04 & 18.51 & 20.80 \\ \hline Hybrid ARIMA-WARIMA & 17869.14 & 15771.05
& 18.78 & 21.19 \\ \hline Hybrid WARIMA-ANN & 14616.89 & 12613.57 & 15 & 16.53 \\ \hline Hybrid WARIMA-ARNN & 16052.8 & 14067.29 & 16.83 & 18.74 \\ \hline Ensemble ARIMA-ETS-Theta & 18081.97 & 15928.84 & 18.96 & 21.40 \\ \hline Ensemble ARIMA-ETS-ARNN & 15615.2 & 13419.82 & 15.86 & 17.61 \\ \hline Ensemble ARIMA-Theta-ARNN & 17933.14 & 15330.84 & 18.07 & 20.41 \\ \hline Ensemble ETS-Theta-ARNN & 16442.65 & 14160.56 & 16.74 & 18.69 \\ \hline Ensemble ANN-ARNN-WARIMA & \textbf{9090.214} & \textbf{7427.787} & \textbf{8.83} & \textbf{9.32} \\ \hline \end{tabular} \end{table} \paragraph{\textbf{Results for Brazil COVID-19 data:}} Among the single models, SETAR performs best in terms of accuracy metrics for 15-days ahead forecasts. Ensemble ETS-Theta-ARNN (EFN) model has the best accuracy among all hybrid/ensemble models (see Table \ref{brazil_accuracy_table_15_days}). In-sample and out-of-sample forecasts obtained from SETAR and ensemble EFN models are depicted in Fig. \ref{Fig:Brazil_forecasts}(a). 
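The ensemble forecasts reported here (e.g. ETS-Theta-ARNN) combine the outputs of their component models. A minimal sketch of such a combination is given below; the equally weighted average is an assumption for illustration, since this section does not spell out the weighting used.

```python
def ensemble_forecast(component_forecasts, weights=None):
    """Combine per-model forecast lists into one ensemble forecast.

    component_forecasts: one equal-length forecast list per model.
    weights: optional per-model weights; defaults to a simple mean
             (an assumption about how the ensembles are formed).
    """
    k = len(component_forecasts)
    if weights is None:
        weights = [1.0 / k] * k
    horizon = len(component_forecasts[0])
    return [
        sum(w * fc[t] for w, fc in zip(weights, component_forecasts))
        for t in range(horizon)
    ]
```

In practice the three component lists would come from, say, fitted ETS, Theta, and ARNN models evaluated over the same forecast horizon.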
\begin{table}[H] \centering \caption{Performance metrics with 15 days-ahead test set for Brazil}\label{brazil_accuracy_table_15_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{15-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(3,1,2) & 16553.75 & 12530.04 & 76.62 & 41.66 \\ \hline ETS(A,A,N) & 13793.618 & 11038.765 & 63.41 & 38.99 \\ \hline SETAR & \textbf{11645.6} & \textbf{10148.91} & \textbf{49.77} & \textbf{37.35} \\ \hline TBATS & 15842.01 & 11803.72 & 72.67 & 40.05 \\ \hline Theta & 16263.93 & 12614.74 & 65.71 & 42.21 \\ \hline ANN & 19622.3 & 16536.91 & 83.45 & 53.78 \\ \hline ARNN(19,10) & 13733.19 & 11951.27 & 57.59 & 40.36 \\ \hline WARIMA & 17167.66 & 13487.76 & 80.45 & 43.85\\ \hline BSTS & 21154.89 & 16702.38 & 98.97 & 49.62 \\ \hline ARFIMA(2,0.5,1) & 14023.22 & 11109.03 & 63.94 & 39.03 \\ \hline Hybrid ARIMA-ANN & 17541.86 & 13436.8 & 81.47 & 42.93 \\ \hline Hybrid ARIMA-ARNN & 18151.56 & 15254.77 & 79.64 & 46.73 \\ \hline Hybrid ARIMA-WARIMA & 16596.75 & 12704.16 & 77.16 & 41.94 \\ \hline Hybrid WARIMA-ANN & 16797.05 & 13378.25 & 78.94 & 43.96 \\ \hline Hybrid WARIMA-ARNN & 19211.01 & 16043.31 & 83.34 & 48.11 \\ \hline Ensemble ARIMA-ETS-Theta & 15271.82 & 11497.86 & 70.54 & 39.68 \\ \hline Ensemble ARIMA-ETS-ARNN & 13517.19 & 11260.21 & 62.81 & 39.61 \\ \hline Ensemble ARIMA-Theta-ARNN & 14546.36 & 11975.91 & 66.79 & 41.13 \\ \hline Ensemble ETS-Theta-ARNN & \textbf{13431.11} & \textbf{11324.4} & \textbf{62.67} & \textbf{39.83} \\ \hline Ensemble ANN-ARNN-WARIMA & 15565.1 & 13201.37 & 71.83 & 44.10 \\ \hline \end{tabular} \end{table} WARIMA is found to have the best accuracy metrics for 30-days ahead forecasts among single forecasting models for Brazil COVID-19 data. Hybrid WARIMA-ANN model has the best accuracy among all hybrid/ensemble models (see Table \ref{brazil_accuracy_table_30_days}).
In-sample and out-of-sample forecasts obtained from WARIMA and hybrid WARIMA-ANN models are depicted in Fig. \ref{Fig:Brazil_forecasts}(b). \begin{table}[H] \centering \caption{Performance metrics with 30 days-ahead test set for Brazil}\label{brazil_accuracy_table_30_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{30-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(5,1,1) with drift & 17647.13 & 14924.74 & 69.57 & 41.85 \\ \hline ETS(A,A,N) & 20270.82 & 15186.14 & 81.30 & 42.45 \\ \hline SETAR & 16136.69 & 15085.91 & 52.75 & 49.03 \\ \hline TBATS & 14166.74 & 10629.13 & 56.19 & 33.78 \\ \hline Theta & 17662.39 & 12880.03 & 70.55 & 38.38 \\ \hline ANN & 22403 & 18241.79 & 90.86 & 47.29 \\ \hline ARNN(9,5) & 13458.51 & 10884.02 & 40.10 & 30.92 \\ \hline WARIMA & \textbf{10628.51} & \textbf{9075.32} & \textbf{38.24} & \textbf{30.41} \\ \hline BSTS & 16876.78 & 15314.18 & 45.58 & 50.17 \\ \hline ARFIMA(2,0.5,1) & 12647.79 & 11616.15 & 47.49 & 37.56 \\ \hline Hybrid ARIMA-ANN & 17559.43 & 14810.82 & 69.11 & 41.58 \\ \hline Hybrid ARIMA-ARNN & 17274.78 & 14511.77 & 67.87 & 41.00 \\ \hline Hybrid ARIMA-WARIMA & 17464.81 & 14724.52 & 68.89 & 41.49 \\ \hline Hybrid WARIMA-ANN & \textbf{10841.65} & \textbf{8886.71} & \textbf{35.56} & \textbf{29.76} \\ \hline Hybrid WARIMA-ARNN & 10649.35 & 9104.54 & 38.39 & 30.48 \\ \hline Ensemble ARIMA-ETS-Theta & 18096.57 & 13854.34 & 72.82 & 40.27 \\ \hline Ensemble ARIMA-ETS-ARNN & 16186 & 13705.63 & 64.26 & 39.84 \\ \hline Ensemble ARIMA-Theta-ARNN & 15406.54 & 12793.94 & 60.87 & 38.06 \\ \hline Ensemble ETS-Theta-ARNN & 15737.01 & 12512.65 & 63.26 & 37.89 \\ \hline Ensemble ANN-ARNN-WARIMA & 13543.31 & 11230.96 & 52.96847 & 34.57 \\ \hline \end{tabular} \end{table} \begin{figure}[H] \includegraphics[width=0.52\textwidth]{Brazil_15_days.eps}(a) \includegraphics[width=0.52\textwidth]{Brazil_30_days.eps}(b) \caption{Plots of (a) 15-days ahead forecast results for 
Brazil COVID-19 data obtained using SETAR and ETS-Theta-ARNN (EFN) models and (b) 30-days ahead forecast results from WARIMA and hybrid WARIMA-ANN models.} \label{Fig:Brazil_forecasts} \end{figure} \paragraph{\textbf{Results for Russia COVID-19 data:}} BSTS performs best in terms of accuracy metrics for a 15-days ahead forecast in the case of Russia COVID-19 data among single models. Theta and ARNN(3,2) also show competitive accuracy measures. Ensemble ETS-Theta-ARNN (EFN) model has the best accuracy among all hybrid/ensemble models (see Table \ref{Russia_accuracy_table_15_days}). Ensemble ARIMA-ETS-ARNN and ensemble ARIMA-Theta-ARNN also perform well in the test period. In-sample and out-of-sample forecasts obtained from BSTS and ensemble EFN models are depicted in Fig. \ref{Fig:Russia_forecasts}(a). \begin{table}[H] \centering \caption{Performance metrics with 15 days-ahead test set for Russia}\label{Russia_accuracy_table_15_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{15-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(0,2,3) & 307.34 & 260.18 & 4.87 & 5.02 \\ \hline ETS(A,Ad,N) & 215.43 & 178.64 & 3.36 & 3.42 \\ \hline SETAR & 436.81 & 383.72 & 7.19 & 7.52 \\ \hline TBATS & 215.61 & 178.79 & 3.36 & 3.42 \\ \hline Theta & 186.06 & 157.30 & 2.97 & 3.01 \\ \hline ANN & 367.19 & 313.66 & 5.87 & 6.09 \\ \hline ARNN(3,2) & 208.58 & 184.74 & 3.61 & 3.52 \\ \hline WARIMA & 568.44 & 499.58 & 9.35 & 9.92\\ \hline BSTS & \textbf{160.18} & \textbf{132.28} & \textbf{2.51} & \textbf{2.53} \\ \hline ARFIMA(1,0.1,0) & 351.12 & 297.92 & 5.57 & 5.77 \\ \hline Hybrid ARIMA-ANN & 308.49 & 261.17 & 4.89 & 5.03 \\ \hline Hybrid ARIMA-ARNN & 245.84 & 207.72 & 3.92 & 3.99 \\ \hline Hybrid ARIMA-WARIMA & 299.14 & 251.59 & 4.72 & 4.85 \\ \hline Hybrid WARIMA-ANN & 489.38 & 425.98 & 7.98 & 8.38 \\ \hline Hybrid WARIMA-ARNN & 542.01 & 473.94 & 8.87 & 9.38 \\ \hline Ensemble ARIMA-ETS-Theta & 234.64 & 195.71 & 3.68 &
3.75 \\ \hline Ensemble ARIMA-ETS-ARNN & 168.14 & 135.34 & 2.57 & 2.59 \\ \hline Ensemble ARIMA-Theta-ARNN & 192.28 & 158.52 & 2.99 & 3.03 \\ \hline Ensemble ETS-Theta-ARNN & \textbf{157.25} & \textbf{127.98} & \textbf{2.44} & \textbf{2.45} \\ \hline Ensemble ANN-ARNN-WARIMA & 288.26 & 243.69 & 4.57 & 4.69 \\ \hline \end{tabular} \end{table} SETAR is found to have the best accuracy metrics for 30-days ahead forecasts among single forecasting models for Russia COVID-19 data. Ensemble ARIMA-Theta-ARNN (AFN) model has the best accuracy among all hybrid/ensemble models (see Table \ref{Russia_accuracy_table_30_days}). All five ensemble models show promising results for this dataset. In-sample and out-of-sample forecasts obtained from SETAR and ensemble AFN models are depicted in Fig. \ref{Fig:Russia_forecasts}(b). \begin{table}[H] \centering \caption{Performance metrics with 30 days-ahead test set for Russia}\label{Russia_accuracy_table_30_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{30-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(1,2,1) & 732.12 & 546.87 & 10.40 & 11.44 \\ \hline ETS(A,Ad,N) & 337.44 & 264.40 & 5.08 & 5.25 \\ \hline SETAR & \textbf{285.41} & \textbf{217.23} & \textbf{4.25} & \textbf{4.24} \\ \hline TBATS & 337.78 & 264.62 & 5.08 & 5.25 \\ \hline Theta & 327.46 & 297.91 & 6.04 & 5.82 \\ \hline ANN & 460 & 340.96 & 6.48 & 6.86 \\ \hline ARNN(3,2) & 727.63 & 693.61 & 13.98 & 12.97 \\ \hline WARIMA & 961.24 & 727.34 & 13.86 & 15.73 \\ \hline BSTS & 686.06 & 509.87 & 9.79 & 10.59 \\ \hline ARFIMA(1,0.01,0) & 303.35 & 239.76 & 4.63 & 4.74 \\ \hline Hybrid ARIMA-ANN & 734.05 & 548.49 & 10.43 & 11.48 \\ \hline Hybrid ARIMA-ARNN & 715.58 & 536.69 & 10.22 & 11.19 \\ \hline Hybrid ARIMA-WARIMA & 729.96 & 549.97 & 10.47 & 11.5 \\ \hline Hybrid WARIMA-ANN & 1012.61 & 772.11 & 14.73 & 16.82 \\ \hline Hybrid WARIMA-ARNN & 939.26 & 715.72 & 13.65 & 15.41 \\ \hline Ensemble ARIMA-ETS-Theta & 
324.95 & 257.24 & 4.96 & 5.10 \\ \hline Ensemble ARIMA-ETS-ARNN & 330.79 & 280.85 & 5.51 & 5.56 \\ \hline Ensemble ARIMA-Theta-ARNN & \textbf{299.50} & \textbf{264.55} & \textbf{5.36} & \textbf{5.22} \\ \hline Ensemble ETS-Theta-ARNN & 337.63 & 293.23 & 6 & 5.77\\ \hline Ensemble ANN-ARNN-WARIMA & 399.84 & 324.34 & 6.29 & 6.46 \\ \hline \end{tabular} \end{table} \begin{figure}[H] \includegraphics[width=0.52\textwidth]{Russia_15_days.eps}(a) \includegraphics[width=0.52\textwidth]{Russia_30_days.eps}(b) \caption{Plots of (a) 15-days ahead forecast results for Russia COVID-19 data obtained using BSTS and ETS-Theta-ARNN (EFN) models and (b) 30-days ahead forecast results from SETAR and ARIMA-Theta-ARNN (AFN) models.} \label{Fig:Russia_forecasts} \end{figure} \paragraph{\textbf{Results for Peru COVID-19 data:}} WARIMA and ARFIMA(2,0.09,1) perform better than other single models for 15-days ahead forecasts in Peru. Hybrid WARIMA-ARNN model improves the WARIMA forecasts and has the best accuracy among all hybrid/ensemble models (see Table \ref{Peru_accuracy_table_15_days}). In-sample and out-of-sample forecasts obtained from WARIMA and hybrid WARIMA-ARNN models are depicted in Fig. \ref{Fig:Peru_forecasts}(a). 
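Hybrid models such as WARIMA-ARNN follow a two-stage residual scheme: a base model is fitted to the series, a second model is fitted to the base model's in-sample residuals, and the two out-of-sample forecasts are summed. The sketch below assumes a generic `(series, horizon) -> (fitted, forecast)` model interface and uses a toy mean model as a stand-in; both are illustrative assumptions, not the authors' implementation.

```python
def mean_model(series, horizon):
    """Toy stand-in for a real model: fit and forecast with the series mean."""
    m = sum(series) / len(series)
    return [m] * len(series), [m] * horizon

def hybrid_forecast(series, base_model, residual_model, horizon):
    """Two-stage hybrid forecast: base model plus a model of its residuals."""
    fitted, base_fc = base_model(series, horizon)
    residuals = [y - f for y, f in zip(series, fitted)]
    _, resid_fc = residual_model(residuals, horizon)
    return [b + r for b, r in zip(base_fc, resid_fc)]
```

In a real pipeline, `base_model` would wrap an ARIMA/WARIMA fit and `residual_model` an ARNN/ANN fit, each returning its in-sample fitted values and out-of-sample forecast.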
\begin{table}[H] \centering \caption{Performance metrics with 15 days-ahead test set for Peru}\label{Peru_accuracy_table_15_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{15-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(1,1,1) with drift & 2275.49 & 1686.84 & 49.97 & 28.99 \\ \hline ETS(M,A,N) & 1689.96 & 1189.05 & 31.89 & 23.15 \\ \hline SETAR & 1935.78 & 1286.56 & 41.57 & 23.71 \\ \hline TBATS & 1944.26 & 1301.07 & 41.72 & 24.06 \\ \hline Theta & 1831.88 & 1146.27 & 38.37 & 21.92 \\ \hline ANN & 1771.59 & 1211.24 & 38.89 & 22.75 \\ \hline ARNN(15,8) & 2564.65 & 2244.78 & 57.13 & 35.78 \\ \hline WARIMA & \textbf{1659.24} & 1060.67 & \textbf{35.22} & 20.85\\ \hline BSTS & 1740.18 & 1082.16 & 36.48 & 21.07 \\ \hline ARFIMA(2,0.09,1) & 1712.47 & \textbf{1022.55} & 35.65 & \textbf{20.13} \\ \hline Hybrid ARIMA-ANN & 2189.18 & 1596.80 & 47.93 & 27.96 \\ \hline Hybrid ARIMA-ARNN & 1646.88 & 1244.03 & 34.95 & 23.43 \\ \hline Hybrid ARIMA-WARIMA & 2082.15 & 1385.87 & 43.93 & 24.99 \\ \hline Hybrid WARIMA-ANN & 1560.68 & 1206.92 & 34.11 & 23.43 \\ \hline Hybrid WARIMA-ARNN & \textbf{1121.10} & \textbf{827.90} & \textbf{23.33} & \textbf{17.46} \\ \hline Ensemble ARIMA-ETS-Theta & 1677.24 & 1040.93 & 35.50 & 20.46 \\ \hline Ensemble ARIMA-ETS-ARNN & 1748.39 & 1185.23 & 38.18 & 22.48 \\ \hline Ensemble ARIMA-Theta-ARNN & 1801.56 & 1324.73 & 39.97 & 24.39 \\ \hline Ensemble ETS-Theta-ARNN & 1613.15 & 1048.04 & 34.76 & 20.62 \\ \hline Ensemble ANN-ARNN-WARIMA & 1864.99 & 1329.83 & 41.16 & 24.43 \\ \hline \end{tabular} \end{table} ARFIMA(2,0,0) and ANN depict competitive accuracy metrics for 30-days ahead forecasts among single forecasting models for Peru COVID-19 data. Ensemble ANN-ARNN-WARIMA (AAW) model has the best accuracy among all hybrid/ensemble models (see Table \ref{Peru_accuracy_table_30_days}). 
In-sample and out-of-sample forecasts obtained from ARFIMA(2,0,0) and ensemble AAW models are depicted in Fig. \ref{Fig:Peru_forecasts}(b). \begin{table}[H] \centering \caption{Performance metrics with 30 days-ahead test set for Peru}\label{Peru_accuracy_table_30_days} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{30-days ahead forecast} \\ \cline{2-5} & RMSE & MAE & MAPE & SMAPE \\ \hline ARIMA(1,1,1) with drift & 3889.85 & 3288.04 & 70.17 & 41.92 \\ \hline ETS(M,A,N) & 7881.14 & 6892.41 & 81.37 & 66.91 \\ \hline SETAR & 4598.98 & 4077.59 & 83.67 & 48.90 \\ \hline TBATS & 2924.92 & 2366.84 & 52.90 & 33.13 \\ \hline Theta & 3862.84 & 3374.84 & 70.68 & 42.93 \\ \hline ANN & 2183.98 & 1818.07 & \textbf{30.57} & 32.12 \\ \hline ARNN(15,8) & 2833.39 & 2339.49 & 49.10 & 32.92 \\ \hline WARIMA & 5579.69 & 4840.75 & 89.04 & 54.14 \\ \hline BSTS & 5422.13 & 4851.34 & 87.98 & 54.82 \\ \hline ARFIMA(2,0,0) & \textbf{2052.01} & \textbf{1513.62} & 35.37 & \textbf{23.27} \\ \hline Hybrid ARIMA-ANN & 3756.5 & 3131.88& 67.50 & 40.46 \\ \hline Hybrid ARIMA-ARNN & 4137.45 & 3619.54& 74.50 & 44.93 \\ \hline Hybrid ARIMA-WARIMA & 4164.69 & 3602.27& 75.52 & 44.78 \\ \hline Hybrid WARIMA-ANN & 6372.936 & 5722.291 & 95.95 & 60.80 \\ \hline Hybrid WARIMA-ARNN & 5563.043 & 4819.09 & 93.16 & 53.97 \\ \hline Ensemble ARIMA-ETS-Theta & 5176.14 & 4518.43 & 92.73 & 51.99 \\ \hline Ensemble ARIMA-ETS-ARNN & 4908.85 & 4153.58 & 87.26 & 48.69 \\ \hline Ensemble ARIMA-Theta-ARNN & 3410.39 & 2785.71 & 61.39 & 37.11 \\ \hline Ensemble ETS-Theta-ARNN & 4826.01 & 4048.24 & 85.09 & 47.82 \\ \hline Ensemble ANN-ARNN-WARIMA & \textbf{2626.8} & \textbf{2003.06} & \textbf{47.02} & \textbf{29.02} \\ \hline \end{tabular} \end{table} \begin{figure}[H] \includegraphics[width=0.52\textwidth]{Peru_15_days.eps}(a) \includegraphics[width=0.52\textwidth]{Peru_30_days.eps}(b) \caption{Plots of (a) 15-days ahead forecast results for Peru COVID-19 data obtained using WARIMA and 
hybrid WARIMA-ARNN models and (b) 30-days ahead forecast results from ARFIMA and ensemble ANN-ARNN-WARIMA (AAW) models.} \label{Fig:Peru_forecasts} \end{figure} Results from all five datasets reveal that none of the forecasting models performs uniformly well, and therefore one should carefully select the appropriate forecasting model when dealing with COVID-19 datasets. \section{Discussions} In this study, we assessed several statistical, machine learning, and composite models on the confirmed cases of COVID-19 data sets for the five countries with the highest number of cases: the USA, followed by India, Brazil, Russia, and Peru. The datasets mostly exhibit nonlinear and nonstationary behavior. Twenty forecasting models were applied to the five datasets, and an empirical study is presented here. The empirical findings suggest no universal method exists that can outperform every other model for all the datasets in COVID-19 nowcasting. Still, the future forecasts obtained from the models with the best accuracy will be useful in decision- and policy-making for government officials and policymakers to allocate adequate health care resources for the coming days in responding to the crisis. However, we recommend updating the datasets regularly and comparing the accuracy metrics to obtain the best model. As is evident from this empirical study, no model performs consistently as the best forecasting model, so one must update the datasets regularly to generate useful forecasts. Time series of epidemics can oscillate heavily due to various epidemiological factors, and these fluctuations are challenging to capture adequately for precise forecasting. Of the five countries considered, all except Brazil and Peru will face a diminishing trend in the number of new confirmed cases of the COVID-19 pandemic.
Based on the short-term out-of-sample forecasts reported in this study, the lockdown and shutdown periods can be adjusted accordingly to handle the uncertain and vulnerable situations of the COVID-19 pandemic. Authorities and health-care providers can modify their planning of stockpiles and hospital beds, depending on these COVID-19 pandemic forecasts. Models are constrained by what we know and what we assume, but, used appropriately and with an understanding of these limitations, they can and should help guide us through this pandemic. Since purely statistical approaches do not account for how transmission occurs, they are generally not well suited for long-term predictions about epidemiological dynamics (such as when the peak will occur and whether resurgence will happen) or inference about intervention efficacy. Several forecasting models, therefore, limit their projections to two weeks or a month ahead. \section{Conclusion and Future Challenges} In this research, we have focused on analyzing the nature of the COVID-19 time series data and understanding the data characteristics of the time series. This empirical work studied a wide range of statistical forecasting methods and machine learning algorithms. We have also presented more systematic representations of the single, ensemble, and hybrid approaches available for epidemic forecasting. This quantitative study could be used to assess and forecast COVID-19 confirmed cases, which will benefit epidemiologists and modelers in their real-world applications. Considering the scope of this study, we can present a list of challenges of short-term pandemic forecasting with the tools presented in this chapter: \begin{itemize} \item Collect more data on the factors that contribute to daily confirmed cases of COVID-19. \item Model the entire predictive distribution, with particular focus on accurately quantifying uncertainty \cite{holmdahl2020wrong}.
\item There is no universal model that can generate `best' short-term forecasts of COVID-19 confirmed cases. \item Continuously monitor the performance of any model against real data and either re-adjust or discard models based on accruing evidence. \item Developing models in real time for a novel virus, with poor-quality data, is a formidable task and a real challenge for epidemic forecasters. \item Epidemiological estimates and compartmental models can be useful for long-term pandemic trajectory prediction, but they often rely on unrealistic assumptions \cite{ioannidis2020forecasting}. \item Future research is needed to collect, clean, and curate data and to develop a coherent approach for evaluating the suitability of models with regard to COVID-19 predictions and forecast uncertainties. \end{itemize} \section*{Data and codes} For the sake of repeatability and reproducibility of this study, all codes and data sets are made available at \url{https://github.com/indrajitg-r/Forecasting-COVID-19-cases}. \bibliographystyle{plain}
\section{Introduction} What is the smoking gun of extra dimensions at the Large Hadron Collider (LHC)? The obvious answer is that extra dimensions at the TeV scale would indicate gravity is strong near that scale, and black hole formation or other effects of strong gravity would be possible. That would be spectacular and convincing evidence for new dimensions of space-time. Alas, the setting of strong gravity is plagued with uncertainties and the LHC may not be able to access it \cite{LisaMeade}. Instead, the smoking gun for extra dimensions would be the discovery of Kaluza-Klein (KK) excitations of fields propagating in the bulk of the extra dimension. Models vary in terms of bulk content, but the common feature of all models is that the graviton propagates in the extra dimension and, hence, KK gravitons\footnote{Interest in spin-two resonances goes beyond uncovering extra dimensions, as they can contribute to the $t \bar{t}$ forward-backward asymmetry \cite{Grinstein:2012pn}.} are the signature of new dimensions of space-time. What are the properties of KK gravitons that one can look for? KK gravitons, which we denote here by $G$, are massive spin-two particles, and one could use the angular distributions of the KK graviton decay products to determine the spin~\cite{Allanach:2000nr}. One can also look at selection rules related to the spin structure of the resonance~\cite{LisaWise}. But other new physics could be behind massive spin-two states. For example, a new strongly-coupled sector could produce the analogue of the $f_2$ meson in QCD~\cite{Nakamura:2010zzi}, a resonance with $J^{PC}=2^{++}$ just like a KK graviton. Therefore, spin determination is insufficient to claim the discovery of new dimensions. Also, it has been speculated \cite{Foadi:2008xj} that a spin-2 resonance arising from technicolor theories could be misidentified as a KK graviton. The graviton $G$ and the impostor $\hat{G}$ would both be massive spin-two resonances.
On the other hand, $G$ couples to the stress tensor of the Standard Model (SM) and, at first sight, one would think that the impostor $\hat{G}$ could exhibit a broader range of couplings. But, as we show in this paper, that is not the case. In this paper we prove: \begin{itemize} \item Once Lorentz and SM gauge symmetries are imposed, $\hat{G}$ cannot couple through a dimension-four term in the Lagrangian. \item If one further assumes that the composite sector respects the flavor and CP symmetries of the SM, then $G$ and $\hat{G}$ couple to the {\it same} operators of the SM at leading order, namely the same dimension-five operators. \item Although the operators coupling to $G$ and $\hat{G}$ are the same, the coefficients are not. In the case of the KK graviton $G$, the coefficients are given by the Planck mass and the overlap of wavefunctions in the extra dimension. For the $\hat{G}$ case, those coefficients are largely unknown, related to the UV structure of the strongly-coupled theory responsible for the appearance of $\hat{G}$. Nevertheless, we find a robust prediction for a ratio of coefficients in the $G$ case, and this ratio is a {\it real} smoking gun for extra dimensions. \end{itemize} Besides helping to determine whether a new dimension has been discovered, our result has interesting consequences for the holographic approach to technicolor, or lack thereof. The paper is structured as follows. In Sec.~\ref{prop}, we show that $G$ and $\hat{G}$ have the same propagation, and in Secs.~\ref{Gco} and~\ref{Ghatco} we describe their couplings to the SM particles. We then show ways of distinguishing between them in Sec.~\ref{dist}, and describe aspects of the holographic picture relating $G$ and $\hat{G}$.
\section{The propagation of $G$ and its impostor}\label{prop} In this section we show that the KK graviton $G$ and the spin-two meson from new strong interactions $\hat{G}$ have the same propagation, described by the Fierz-Pauli Lagrangian for massive spin-two resonances~\cite{Fierz-Pauli}. In the Fierz-Pauli Lagrangian, a spin-two field is described by a rank-two symmetric and traceless tensor, \begin{eqnarray} \hat{G}_{\mu\nu}=\hat{G}_{\nu\mu} \ , \quad \hat{G}_{\mu}^{\mu} =0 \ . \label{traceless} \end{eqnarray} Moreover, the following condition must be satisfied for $\hat{G}_{\mu\nu}$ to have positive energy \begin{eqnarray} \partial_{\mu} \hat{G}^{\mu\nu} = 0 \ . \label{divergence} \end{eqnarray} The propagation of $G$ is identical. The argument is as follows. Let us assume for simplicity that there is a new extra dimension, denoted by $z$, which is compactified in an interval $z \in (z_{UV},z_{IR})$. We will sometimes denote those limits as the UV(IR) {\it brane}, as one can localize fields on those four-dimensional (4D) manifolds. We then define the set of five-dimensional (5D) factorizable metrics, \begin{eqnarray} ds^2=w^2(z) \, (\eta_{\mu \nu} d x^{\mu} d x^{\nu}-dz^2) \ , \label{geom} \end{eqnarray} where $w(z)=1$ or $z_{UV}/z$ respectively for a flat or AdS extra dimension. In general, $w(z)$ is a constant or decreasing function of $z$. Since the graviton field in an extra-dimensional theory has a massless zero mode (the 4D graviton), the 5D graviton field has Neumann boundary conditions on both sides of the interval. Kaluza-Klein dynamics is obtained by studying fluctuations around the Minkowski metric in Eq.~(\ref{geom}), \begin{eqnarray} \eta_{\mu \nu} \to \eta_{\mu \nu} + h_{\mu \nu}(x,z) \ . \end{eqnarray} The equation of motion of the graviton field is given by the vacuum Einstein equation, \begin{eqnarray} G_{MN} = 0 \end{eqnarray} where $M,N=0,1,2,3$ and $5$.
Note that the metric is conformally flat, which allows the following separation \begin{eqnarray} G_{MN} = G^{flat}_{MN} + \delta G_{MN}[\nabla w(z), \nabla \nabla w(z)], \end{eqnarray} where $G^{flat}_{MN}$ is the Einstein tensor in Minkowski space-time, and it contains the Fierz-Pauli equation for the graviton in flat space-time. Now $\delta G_{MN}$ contains only covariant derivatives of the warp factor $w(z)$. Because the warp factor is only a function of the extra dimension coordinate $z$, only derivatives with respect to $z$ will appear in $\delta G_{MN}$. Then, upon KK decomposition of the graviton field, $G_{\mu \nu}(x,z) = \sum_n G^{n}_{\mu \nu}(x)\chi_n(z)$, terms in $\delta G_{MN}$ appear in the differential equation for the 5D wavefunction $\chi_n(z)$ of excited KK gravitons \cite{KKgravRS}, while the kinetic term in four dimensions remains the same as in the flat space-time case, i.e. the Fierz-Pauli equation. Therefore, all KK excitations behave as 4D Fierz-Pauli fields, and the same equations as Eqs.~(\ref{traceless}) and~(\ref{divergence}) apply to $G$\footnote{We would like to note that this result has been obtained in the flat space case \cite{Han:1998sg,Giudice:1998ck} as well as the Randall-Sundrum case~\cite{KKgravRS}.}. Finally, note that we could easily generalize this argument to $D>5$. \section{The coupling of $G$ to the Standard Model} \label{Gco} In this section we describe the couplings of the KK graviton to matter. Those couplings are in general model-dependent functions of the geometry of the extra dimension and localization of fields in the bulk of the extra dimension. Nevertheless, one can extract general aspects of those couplings, as we discuss below. The graviton couples to matter through the stress-energy tensor.
The Lagrangian describing the interactions is \begin{equation} \mathscr{L}_{int}=-\frac{c_{i}}{M_{eff}} G^{\mu \nu} T^{i}_{\mu \nu}, \label{LG} \end{equation} where $T^{i}_{\mu \nu}$ is the 4D stress tensor of SM species $i=f$, $A$, $H$ (fermions, gauge bosons, scalars). $M_{eff}$ is the effective Planck mass suppressing the interactions, and we are going to focus on the case \begin{eqnarray} M_{eff} \gtrsim m_{G} \simeq \textrm{ TeV} \ , \end{eqnarray} and assume $M_{eff}/m_G$ is at least 2 or 3, so that the effective theory has a range of validity beyond the first resonance $G$. Finally, the $c_i$ are functions of the overlap of the $G$ resonance with the SM fields in the bulk of the extra dimension. The relevant $G$-SM-SM interaction terms can be found in \begin{eqnarray} T^{f}_{\mu \nu} &\supset& \frac{i}{2} \, \bar{\psi} \gamma_{\mu}\partial_{\nu}\psi + (\mu \leftrightarrow \nu),\\ T^{A}_{\mu \nu} &\supset& -F_{\mu}^{ \;\rho} F_{\rho \nu}, \\ T^{H}_{\mu \nu} &\supset& \partial_{\mu} H \partial_{\nu} H + (\mu \leftrightarrow \nu) \ . \label{Tmunu} \end{eqnarray} Note that in $T_{\mu\nu}$ there are also terms with more than two fields, as well as terms proportional to electroweak symmetry breaking (EWSB), i.e. proportional to $m_{W,Z}$. What about the values of the coefficients $c_{i}$? Assuming the extra-dimensional geometry can be expressed in the general form of Eq.~(\ref{geom}), one can estimate the coefficients as follows~\cite{Gherghetta:2010cj}: \begin{enumerate} \item {\bf Brane fields: } If the SM field lives on a brane located at $z_*$, \begin{eqnarray} c \simeq w(z_*)/w(z_{IR}) \end{eqnarray} where $z_*=z_{IR}$ or $z_{UV}$. In flat extra dimensions, $w=1$ and there is no parametric suppression on either brane. In warped extra dimensions, $w(z_{IR}) \ll$ $w(z_{UV})$ and \begin{eqnarray} \frac{w(z_{UV})}{w(z_{IR})} \simeq \frac{M_{Pl}}{M_{eff}} \simeq \frac{M_{Pl}}{\textrm{ TeV}} \ . 
\end{eqnarray} \item {\bf Bulk fields in flat extra dimensions:} In flat extra dimensions, Kaluza-Klein number is conserved as long as there are no localized boundary terms. In that case, if the SM field lives in the bulk of a flat extra dimension, the coupling $G$-SM-SM vanishes, \begin{eqnarray} c = 0 \textrm{ with KK conservation.} \end{eqnarray} On the other hand, without KK conservation, the overlap of fields in the extra dimension would be of order one, leading to $c \simeq 1$. \item {\bf Bulk fields in warped extra dimensions: } If now some fields live in the bulk of extra dimensions, their coupling to $G$ depends on their localization or de-localization in the bulk. Note that $G$ is localized near the IR brane at $z_{IR}$. \begin{itemize} \item Coupling to IR-localized fields: $c \simeq 1$. \item Coupling to massless gauge bosons: suppressed by the effective volume of the extra dimension~\cite{gap-metrics}, \begin{eqnarray} c \simeq \frac{1}{\int_{z_{UV}}^{z_{IR}} w(z) dz} \ . \end{eqnarray} In AdS, the suppression is $\log(\frac{z_{IR}}{z_{UV}})\simeq\log(\frac{M_{Pl}}{M_{eff}})$. In flat space, the suppression is the entire volume of the extra dimension. \item Coupling to UV-localized fields: suppression of order \begin{eqnarray} c \simeq \left(\frac{z_{UV}}{z_{IR}}\right)^a=\left(\frac{\textrm{TeV}}{M_{Pl}}\right)^a \ , \end{eqnarray} where $a>1$. For example, in Randall-Sundrum, the coupling of $G$ to UV-localized fermions is given by \begin{eqnarray} c_{f} \propto \epsilon^{2 |\nu-1/2|} \end{eqnarray} where $\nu < -1/2$ for UV-localized fermion zero modes and $\epsilon\simeq \textrm{TeV}/M_{Pl}$. Similarly, for UV-localized scalars with bulk mass parameter $\nu<1$, \begin{eqnarray} c_{\phi} \propto \epsilon^{2(1-\nu)} \end{eqnarray} where $\nu=z_{UV} \, M_{\psi,\phi}$, with $M_{\psi,\phi}$ the bulk fermion (scalar) mass. 
\end{itemize} \end{enumerate} \section{The couplings of the impostor $\hat{G}$ to the Standard Model}\label{Ghatco} In the previous section, we discussed which operators couple to the resonance $G$, and how the coefficients of these operators strongly depend on how the SM particles are localized in the bulk of the extra dimension, or localized on one of the boundaries. For example, in warped extra dimensions, only fields with some support near the IR brane at $z_{IR}$ would have sizable overlap with the KK resonance. That includes fields on or near the IR brane, and delocalized fields (i.e. fields with a flat profile in the extra dimension). The impostor $\hat{G}$ is a resonance of a new sector which confines near the electroweak scale, at $M_{conf}$. As we want to discuss the role of $\hat{G}$ as an impostor of $G$, we identify $M_{conf}$ with $M_{eff}$. In principle, one could imagine $\hat{G}$ coupling to SM particles in a very different fashion than $G$, since it is not constrained by the form of interaction in Eq.~(\ref{LG}). But, as we discuss in this section, Lorentz and gauge invariance restrict the couplings of $\hat{G}$ to dimension-five operators, and if one further assumes flavor and CP invariance, $\hat{G}$ couples to the same operators contained in Eq.~(\ref{LG}). \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline $\hat{O}^{decay}_{\mu \nu}$ & CP & coefficients \\ \hline $i\bar\psi \gamma_{\mu}\partial_{\nu}\psi $ & $+$ & $ \hat{c}_f^+$\\ $i\bar\psi \gamma^5 \gamma_{\mu}\partial_{\nu}\psi $& $-$& $\hat{c}_f^-$\\ $F_{\mu}^{ \;\rho} F_{\rho \nu} $ & $+$ & $ \hat{c}_A^+$\\ $ \epsilon_{\alpha \beta \mu \delta} F_{\nu}^{\; \delta} F^{\alpha \beta} $ & $-$ &$ \hat{c}_A^-$\\ $\partial_{\mu} H \, \partial_{\nu} H$ & + & $\hat{c}_{\phi}$\\ \hline \end{tabular} \caption{Flavor-invariant operators up to dimension 5 that could lead to two-body $\hat{G}$ decays. 
If we further assume the composite sector preserves CP invariance, the remaining operators are the same structures contained in the stress tensor.} \label{table:f2fermions} \end{table} After imposing Lorentz and gauge invariance, $\hat{G}$ exhibits no interactions with fermions, gauge bosons and scalars at the level of dimension-four operators. For example, operators such as $\bar{\Psi} \gamma_{\mu} \gamma_{\nu} \Psi$ or $F_{\mu\nu}$ (for abelian gauge groups) vanish due to the properties in Eq.~(\ref{traceless}). Also, interactions where the derivative acts on $\hat{G}$ vanish because of the condition in Eq.~(\ref{divergence}). Table \ref{table:f2fermions} shows all operators that could lead to two-body decays of $\hat{G}$ up to dimension 5 with no flavor violation. Once CP conservation is imposed, the remaining operators are identical to those listed in Eqs.~(\ref{Tmunu}). It could be that the new physics responsible for $\hat{G}$ includes new sources of CP violation. In particular, a non-zero coefficient for the operator $\hat{c}_f^-$ in Table~\ref{table:f2fermions} would be constrained by precision measurements of, for example, the kaon system. But those operators contain derivatives of the fermion, and by integrating out the massive resonance we would obtain a CP-violating four-fermion operator involving light quarks, \begin{eqnarray} \sim \frac{\hat{c}_i^- \hat{c}_j^-}{M_{eff}^2} \, \frac{\hat{s}}{m_{\hat{G}}^2} \, \bar\psi^i \gamma_{\mu} \gamma_5 \psi^i \bar\psi^j \gamma^{\mu} \gamma_5 \psi^j \ , \end{eqnarray} which is suppressed by $\hat{s}/m_{\hat{G}}^2$. We would obtain a bound on the coefficient of the CP-violating operator~\cite{Gilad} \begin{eqnarray} c \lesssim 10^{-2} \, \frac{ M_{eff} \, m_{\hat{G}}}{\textrm{TeV}^2} \end{eqnarray} where we estimated $\sqrt{\hat{s}}\sim {\cal O}(\textrm{GeV})$. 
The focus of this paper is the distinction between a KK graviton and its impostor, so from now on we are going to assume that CP is an approximate, or exact, symmetry of the strong sector, and therefore the coupling of a $J^{PC}=2^{++}$ resonance to a CP-violating operator is suppressed. \section{Distinguishing between the graviton and the impostor}\label{dist} In the last two sections, we showed that $G$ and $\hat{G}$ couple to the same dimension-five operators. What about dimension-six operators or higher? Obviously, they are suppressed by an extra power of the TeV scale, and their effect is sub-leading. Still, we could classify all dimension-six operators compatible with Lorentz and gauge invariance. Unfortunately, we do not know the coefficients of these operators, either for gravity or for a strongly-coupled theory. On the gravity side, those operators would arise as a consequence of quantum gravity loops, and their coefficients are therefore hard to estimate. On the strong-coupling side, one faces similar ambiguities. So, dimension-six operators cannot be a way to distinguish between $G$ and $\hat{G}$, and we need to look closely at dimension-five operators to find ways of disentangling signatures for extra dimensions from composite strongly-coupled dynamics. \subsection{The ratio of decay to photons and gluons} Although the form of the gravitational interaction is fixed, the coefficients of the operators that couple to the KK graviton are model-dependent. To distinguish between $G$ and an impostor, we would need a model-independent prediction. In this section we show that such an observable exists. We define the following ratio: \begin{eqnarray} R_{g/\gamma}=\frac{Br(gg)}{Br(\gamma \gamma)} = \frac{8 c_{g}^2}{c_{\gamma}^2} \ . \end{eqnarray} In extra dimensions $R_{g/\gamma}$ is fixed to be 8, whereas for $\hat{G}$ there is no such constraint. 
The argument is as follows: For any geometry in the form of Eq.~(\ref{geom}), the KK decomposition for spin-one particles leads to an equation of motion for the wavefunction of the $n$th KK mode, $f_n(z)$~\cite{me-hol}, \begin{eqnarray} \partial_z \left( w(z) \partial_z f_n(z)\right) = -m_n^2 w(z) f_n(z) \ . \end{eqnarray} If the spin-one field has a massless zero-mode, i.e. the 4D gauge symmetry is preserved by the compactification, then \begin{eqnarray} m_0=0 ~~\to~~ w(z) \partial_z f_0(z) = \textrm{constant} \ . \end{eqnarray} Once we take into account the boundary conditions, which are Neumann on both branes, \begin{eqnarray} \partial_z f_0(z) |_{UV} = \partial_z f_0(z) |_{IR} = 0 \ , \end{eqnarray} there is only one solution, \begin{eqnarray} f_0(z)=C \ , \end{eqnarray} where the constant $C$ is determined by imposing the canonical normalization for the 4D gauge field. In Randall-Sundrum models, the value of $c_{\gamma,g}$ is \begin{eqnarray} c_{\gamma, g}=2 \frac{1-J_0(x_G)}{\log(z_{IR}/z_{UV}) x_G^2 |J_2(x_G)|} \end{eqnarray} where $x_G=3.83$ (the first zero of the Bessel function $J_1$) and $M_{eff}=M_{Pl} w(z_{IR})$. Here we see explicitly the suppression by $1/\int w \, d z$ mentioned in Sec.~\ref{Gco}. The discussion above assumes that QCD and electromagnetism propagate in the bulk. On the other hand, one could imagine localizing electromagnetism and the strong interactions on a brane. Electromagnetism is part of the electroweak group; therefore the photon, if stuck on a brane, should be stuck on the brane responsible for electroweak symmetry breaking, i.e. $z_{IR}$. That leads to numerous problems, with compositeness effects showing up at the TeV scale and altering precision measurements. Still, this is the scenario searched for experimentally in Randall-Sundrum models~\cite{Allanach:2000nr}. Also in this case, the ratio $R_{g/\gamma}$ is 8. 
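As a rough numerical illustration (our own estimate, not taken from the references), the Randall-Sundrum expression above can be evaluated with a short script; we assume the benchmark $\log(z_{IR}/z_{UV}) = \log(M_{Pl}/M_{eff}) \approx \log(10^{16})$, and implement $J_n$ through its power series:

```python
import math

def bessel_j(n, x, terms=60):
    # Power series for the Bessel function J_n(x); adequate for moderate x.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2.0) ** (2 * k + n) for k in range(terms))

def c_gauge(x_g=3.83, log_zratio=math.log(1e16)):
    # c_{gamma,g} = 2 (1 - J0(x_G)) / (log(z_IR/z_UV) x_G^2 |J2(x_G)|)
    return 2.0 * (1.0 - bessel_j(0, x_g)) / (
        log_zratio * x_g ** 2 * abs(bessel_j(2, x_g)))

print(c_gauge())
```

With these inputs the coupling comes out at $c_{\gamma,g}\approx 0.013$, illustrating the mild, logarithmic volume suppression discussed in Sec.~\ref{Gco}.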
Finally, a situation where the gluon is stuck on the UV brane while the photon is on the IR brane is phenomenologically ruled out: quarks are charged under both groups and would need a non-negligible overlap with both branes, which is only possible if they are de-localized. In summary, in any phenomenologically viable model we would have a prediction for this ratio, $R_{g/\gamma}=8$. Let us now discuss some aspects of measuring this ratio. In principle, $G$ could have a non-negligible branching ratio to light quarks. Gluons and quarks are seen as jets in colliders, and we would need to distinguish those to evaluate $R_{g/\gamma}$. In the most successful Randall-Sundrum scenarios~\cite{BulkRS}, light quarks are UV-localized fields having a very small overlap with $G$. Hence $s$-channel production of $G$ is through gluons, and the branching ratio to jets is dominated by gluon jets. The assumption that $G$ has small couplings to the light generations is related to flavor issues and fermion mass generation. Nevertheless, there are scenarios where the light fermions have sizable couplings to KK physics and the flavor problem is solved by a choice of symmetries (see for example Ref.~\cite{Michele}). With significant couplings to light quarks and gluons, a heavy resonance would preferentially be produced in quark-initiated processes, and one would have to disentangle the quark and gluon components of a dijet final state. One has two ways to attack this problem. First, the di-quark and di-gluon angular distributions are different, \begin{eqnarray} \frac{d \sigma}{d \cos \theta^* } (q \bar{q} \to G \to f \bar{f}) \propto 1+\cos^2\theta^* \left(1-4 \sin^2\theta^*\right) \end{eqnarray} \begin{eqnarray} \frac{d \sigma}{d \cos \theta^* } (q \bar{q} \to G \to gg) \propto 1-\cos^4\theta^* \end{eqnarray} where $\theta^*$ is the angle in the center of mass between the outgoing particle and the incident parton. In Fig.~\ref{distr} we show the two theoretical distributions. \begin{figure}[h!] 
\centering \includegraphics[scale=0.3]{ang_distr.pdf} \caption{The angular distributions for fermion (solid line) and gluon (dashed line) final states.} \label{distr} \end{figure} Second, one could try to tag the jet as a gluon or quark jet using the techniques in Ref.~\cite{Matt-jet}, which do not rely on angular distributions. An early measurement of the spin relies on a sizable branching ratio of $G$ or $\hat{G}$ to photons~\cite{LisaWise}. See also Ref.~\cite{Hitoshi}. In the context of warped extra dimensions, one usually expects the third-generation quarks to be localized near $z_{IR}$. In these scenarios, \begin{eqnarray} \frac{Br(G\to t \bar{t})}{Br(G\to \gamma\gamma)} \propto \left(\int \frac{w(z)}{z_{UV}} d z \right)^2 \ , \end{eqnarray} where the volume factor is ${\cal O}(10)$. Therefore, the dominant decay mode for $G$ would be to $t \bar{t}$, and one would need large luminosities to measure both the spin and $R_{g/\gamma}$. Finally, we would like to mention the typical production cross section for $G$. There is no model-independent prediction, but a rather popular choice of extra-dimensional models is the implementation of Randall-Sundrum models in MadGraph~\cite{MG-RS}. With that choice of parameters, a 1 TeV resonance would be produced with a cross section of 2 pb at the 8 TeV LHC. \subsection{Other spin-two states} Strong interactions would produce a rich spectrum of resonances, as we observe in QCD. In this section we discuss other spin-two resonances, both as a motivation to look for them, and as an illustration of the richness of dynamical electroweak symmetry breaking. In Randall-Sundrum models, spin-2 resonances are the excitations of a graviton with quantum numbers $J^{PC}=2^{++}$. On the other hand, a QCD-like theory would contain many spin-2 resonances, including some with negative parity and/or negative charge conjugation. In QCD the lightest spin-2 resonances are $2^{++}$ and the next-lightest are $2^{-+}$. 
All of these QCD states are readily understood from a simple quark model based on the Schr\"odinger equation~\cite{GodfreyIsgur}. For up and down quarks, the $2^{++}$ states are P-wave mesons (an isosinglet named $f_2(1270)$ and an isotriplet named $a_2(1320)$) while the $2^{-+}$ states are D-wave mesons (an isosinglet named $\eta_2(1645)$ and an isotriplet named $\pi_2(1670)$). Observation of a $2^{-+}$ resonance or charged $2^{++}$ resonances having a mass of order the electroweak scale would be a clear indication of physics beyond a KK graviton. In QCD, those resonances would decay predominantly into $f_2(1270)\pi$ or $3 \pi$. In the analogy of QCD with technicolor, decays to pions are decays to longitudinal $Z_L$ and $W_L$. Hence, those resonances produce a three-body decay and would not appear in the $s$-channel. Now, a precise prediction for the decay rate is not possible without knowledge of the underlying strong dynamics, but some general insight can be obtained from a rudimentary calculation of the $\pi_2-f_2$ mass difference\footnote{In QCD, the $a_2-f_2$ splitting is due to fine structure and it is of order 4\%.}. A naive rescaling of QCD to the electroweak scale (i.e.\ multiplying all masses by $246~{\rm GeV}/f_\pi \approx 2600$) leads to a technicolor theory~\cite{technicolor} that is disfavored by experimental data~\cite{Chivukula:2012ug}, but other strongly-interacting theories remain as viable options~\cite{Andersen:2011yj,Lewis:2011zb}, such as the possibility of a walking or near-conformal theory~\cite{walkingTC}. Consider the Cornell potential~\cite{Eichten:1979ms} \[ V(r) = -\frac{4\alpha}{3r} + \sigma r \] where $\alpha$ represents the gauge coupling for the new strong interaction and $\sigma$ is the string tension. For a conformal theory, the string tension $\sigma$ must vanish. (The current mass of the fermion must also vanish for a conformal theory, but it is the nonzero constituent mass that appears in the Schr\"odinger equation.) 
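For orientation, the bound-state problem just posed can be solved numerically in a few lines. The sketch below (our own illustrative implementation; the grid and parameter choices are assumptions) finds the lowest P-wave ($l=1$, the $f_2$ analogue) and D-wave ($l=2$, the $\pi_2$ analogue) levels of the Cornell potential by Numerov integration of the reduced radial equation plus node-counting bisection, with $\hbar=1$ and reduced mass $\mu=m/2$ for an equal-mass $q\bar q$ pair:

```python
def lowest_level(l, mu, k, sigma, rmax=200.0, n=8000):
    """Lowest eigenvalue E of -u''/(2 mu) + [-k/r + sigma*r + l(l+1)/(2 mu r^2)] u = E u
    on (0, rmax) with u(0)=u(rmax)=0, found by node-counting bisection."""
    h = rmax / n

    def n_nodes(energy):
        # Integrate u'' = g(r) u outward with the Numerov scheme, counting sign changes.
        def g(r):
            return 2.0 * mu * (-k / r + sigma * r
                               + l * (l + 1) / (2.0 * mu * r * r) - energy)
        u_prev, u_curr = h ** (l + 1), (2.0 * h) ** (l + 1)  # regular solution ~ r^{l+1}
        g_prev, g_curr = g(h), g(2.0 * h)
        count = 0
        for i in range(2, n):
            g_next = g((i + 1) * h)
            u_next = (2.0 * u_curr * (1.0 + 5.0 * h * h * g_curr / 12.0)
                      - u_prev * (1.0 - h * h * g_prev / 12.0)) / (1.0 - h * h * g_next / 12.0)
            if u_next == 0.0 or (u_next > 0.0) != (u_curr > 0.0):
                count += 1
            u_prev, u_curr = u_curr, u_next
            g_prev, g_curr = g_curr, g_next
            if abs(u_curr) > 1e200:  # rescale the linear recurrence to avoid overflow
                u_prev /= 1e200
                u_curr /= 1e200
        return count

    lo, hi = -10.0 * mu * k * k, 5.0  # bracket: n_nodes(lo) = 0, n_nodes(hi) >= 1
    for _ in range(70):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if n_nodes(mid) >= 1 else (mid, hi)
    return 0.5 * (lo + hi)
```

In the $\sigma \to 0$ limit the resulting $l=2$ vs.\ $l=1$ splitting reproduces the Coulomb value $\frac{5}{81} m \alpha^2$, while switching on a string tension raises the levels.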
In QCD, the $f_2(1270)$ and $\pi_2(1670)$ have a $q\bar q$ separation that is large enough to be dominated by the linear term (see Fig.~12 of Ref.~\cite{GodfreyIsgur}), but in a nearly-conformal theory where $\sigma$ is smaller those mesons would be in the Coulomb regime and the $\pi_2-f_2$ mass splitting would shrink. To see this explicitly, we have used the Mathematica code from Ref.~\cite{Lucha:1998xc} to solve the Schr\"odinger equation for a range of string tensions with two different values of the coupling. Numerical results for the mass splitting are given in Fig.~\ref{fig:pi2f2} in units of the constituent fermion mass. (In typical models, the constituent mass is 2 or 3 times the electroweak scale.) \begin{figure} \centering \includegraphics[scale=0.4]{pi2f2.pdf} \caption{The $\pi_2-f_2$ mass difference as a function of the square root of the string tension, for two choices of the gauge coupling. Both axes are in units of the constituent fermion mass. At $\sigma=0$ the mass splitting does not vanish, but the tiny Coulomb splitting is not visible in this graph.} \label{fig:pi2f2} \end{figure} For QCD, $\sqrt{\sigma}$ is between $m$ and $2m$ for any standard definition of the constituent mass, and as a consequence the $\pi_2-f_2$ mass splitting is of a comparable size. In the limit of vanishing string tension, the mass splitting in Fig.~\ref{fig:pi2f2} becomes the Coulomb splitting, $M(\pi_2)-M(f_2)=\frac{5}{81}m\alpha^2$, which is almost zero on the scale of the graph. Because the string tension is so dominant, measurement of the $\pi_2-f_2$ mass difference would provide valuable information about the degree of conformality of the new strongly-interacting theory. \subsection{The holographic interpretation} Models in warped extra dimensions are often used as an {\it analogue computer} for strong interactions. 
This duality between 4D strongly-coupled theories and 5D weakly-coupled theories with gravity was inspired by the AdS/CFT correspondence, but took hold on a more qualitative basis~\cite{Lisa-Porratti} and has been used to build models of QCD~\cite{AdSQCD,me-hol}, technicolor~\cite{HTC}, composite Higgs~\cite{composite-Higgs}, and even condensed matter systems~\cite{condensed-matter}, with some success. What is the dual of a theory with gravity in extra dimensions? If the metric $w(z)$ is AdS, the dual is a 4D conformal theory, and compactification is the dual of spontaneous breaking of conformality, leading to a theory with massive resonances. If the metric is not AdS, the 5D spacetime does not have the same isometries as the conformal group in 4D, and one expects no conformal behavior of the confining theory. Still, compactification of the extra dimension would be the dual description of confinement, and of the appearance of massive resonances. Within the same dual picture one can describe bulk gauge fields. If the compactification preserves gauge invariance at the level of zero modes, that situation corresponds to a global symmetry of the 4D sector which has been weakly gauged by adding external sources $J_{\mu}$ to the strong sector, switching on some new operators ${\cal O}^{\mu}$, \begin{eqnarray} {\cal L}_{comp} \supset g_{comp} \, J_{\mu} {\cal O}^{\mu} \ , \end{eqnarray} as we schematically represent in Fig.~\ref{AtoCFT}. \begin{figure}[h!] \centering \includegraphics[scale=0.2]{AtoCFT.pdf} \caption{Sourcing an operator to the composite sector through external sources.} \label{AtoCFT} \end{figure} The key question for holography is whether there is always a metric $w(z)$ and field configuration, no matter how complicated, which is dual to our 4D target theory. This paper has exhibited an instance where this is not the case. 
Indeed, $\hat{G}$ (or the preon quarks or bosons composing $\hat{G}$) could have no electric or color charges, leading to a ratio $R_{g/\gamma}\in(0,\infty)$. In Fig.~\ref{AtoCFT}, this corresponds to setting one, or both, of the sources to zero. Note, though, that this study is largely based on $s$-channel resonance production in gluon-initiated processes, so one would compare the effect of $G$ with that of a $\hat{G}$ with colored constituents. The Regge gluon~\cite{Perelstein:2009qi,Perelstein:2011ez}, a spin-2 excitation of the Standard Model gluon in a warped extra dimension, is an explicit example of this scenario: there, the Regge gluon couples to the Standard Model gluons but does not interact with photons at tree level. \section{Conclusions} If new strong interactions lurk near the electroweak scale, one expects a rich variety of new resonances, both mesonic and baryonic. New strong interactions may not deconfine before energies well beyond LHC reach. Instead of finding evidence of form-factor interactions or production of the preons, only suppressed compositeness effects would be accessible. The question then becomes one of identifying the new sector without really accessing its perturbative description. But composite fermions or vector bosons can mimic new matter generations and new spontaneously-broken gauge symmetries, respectively, hindering their unambiguous identification as composite or elementary. As for spin-two resonances, only one framework of new physics is able to mimic them: Kaluza-Klein gravitons. In this paper, we revisited the claim that spin-two resonances are a smoking gun for extra dimensions, and were able to distinguish between the two scenarios, i.e.\ composite spin-2 resonances vs.\ KK gravitons. Distinguishing between the KK graviton and the impostor turns out to be harder than first expected. 
Although gravity couples to fields in a very constrained manner, after compactification, there is quite a lot of model dependence in the coupling strength to the operators in the stress tensor. Still, one could have expected that the impostor would couple to different operators than the KK graviton, hence leading to a clear signature of new strong dynamics. But Lorentz invariance and the SM gauge, flavor and CP symmetries are so restrictive that the impostor ends up coupling to the same structures as the massive graviton. Nevertheless, we found a robust prediction for the decays of the KK graviton, and propose this measurement as a way of distinguishing between new extra dimensions and new strong interactions. \section*{Acknowledgements} V. Sanz thanks L. Randall for useful comments. This work is partially supported by funding from NSERC of Canada.
\section{Introduction} Building on the pioneering work of Kohn and Luttinger,\cite{Kohn1955} and later motivated by the promise of using donors in silicon for quantum information processing,\cite{Kane1998,Pla2012,Pla2013,Dehollain2014,Gonz2013b} researchers continue to develop and improve effective mass theories (EMTs).\cite{Ning1971,Pantelides1974,Shindo1976,Friesen2005,Wellard2005,Debernardi2006,Hui2013,Klymenko2014,Pica2014,Saraiva2014} These theories are appealing both because they provide excellent physical intuition and because they require minimal computational resources to implement. In most cases, parameters for the theory are chosen so that the donor binding energies match experimental values.\cite{Jagannath1981,Mayur1993} Differences in the particular approximations adopted have led to dramatic discrepancies (\emph{e.g.}, orders of magnitude differences in exchange oscillations between Refs.~\onlinecite{Pica2014} and~\onlinecite{Wellard2005}), and one is often left wondering which, if any, of multiple, seemingly well-justified theories to believe. It is this muddy picture, where small changes to the theories lead to large differences in outcomes, that has cast doubt upon whether EMT is well suited to making quantitative predictions. Past work has compared the wavefunction as predicted by both EMT and more sophisticated theories to experiment via the contact hyperfine interaction, which serves to probe the wavefunction directly at the donor site. Early work by Feher\cite{Feher:1959} and later by Hale and Mieher\cite{Hale1969,Hale1969_2} compared the contact hyperfine interaction predicted by Kohn-Luttinger EMT to electron nuclear double resonance experiments, obtaining rough qualitative agreement. Later, Ivey and Mieher used tight-binding (TB)\cite{Ivey1975,Ivey1975_2} to gain better agreement with the previous experimental data of Hale and Mieher\cite{Hale1969,Hale1969_2}. 
More recently, Overhof and Uwe,\cite{Overhof:2004} Huebl \emph{et al.},\cite{Huebl:2006} and Assali \emph{et al.}\cite{Assali2011} studied the contact hyperfine interaction using \emph{ab initio} density functional theory, resulting in much-improved experimental agreement. Friesen\cite{Friesen2005} developed a multi-valley effective mass theory that was capable of studying the Stark shift of the contact hyperfine interaction. Advancements in TB theory\cite{Martins2005,Manchero1999} enabled the detailed study by Rahman \emph{et al.}\cite{Rahman2007} of this Stark shift, which obtained excellent agreement with experiment. Finally, a more sophisticated EMT approach due to Pica \emph{et al.}\cite{Pica2014_2} also obtained experimental agreement for the contact hyperfine Stark shift. However, full spatial wavefunctions have seldom been compared between theories, perhaps with the excuse that the results lacked strong experimental support. The picture is different now: last year, Salfi \emph{et al.}~performed the first direct measurement of a donor wave function \cite{Salfi2014} and found excellent theoretical agreement with atomistic tight-binding simulation.\cite{klimeck2002development} Hence, it is important now to ask whether EMT can replicate the results of atomistic tight-binding; the primary contention of this work is that it can when applied properly. By avoiding unjustified approximations, we present an effective mass framework that, in addition to matching experimental energies,\cite{Jagannath1981,Mayur1993} agrees well with atomistic tight-binding theory over the full spatial wavefunction.\cite{Salfi2014} This agreement is of critical importance, since while operation of a single donor qubit requires a well-controlled hyperfine coupling, coupling two donor qubits depends upon reliable control over the electronic wave function far from the impurity site. 
The combined computational efficiency and accuracy of our EMT allows us to survey all possible donor-donor position combinations within a large search volume. This is a critically important problem, since the coupling strength varies on the atomic scale due to silicon's six-fold conduction band valley degeneracy. Hence, for a given range of desired coupling strengths, our calculations allow for quantitative estimates of yield in the face of uncertain donor placement. This paper is organized as follows. In Sec.~\ref{sec:SN}, we describe Shindo-Nara effective mass theory. First, Sec.~\ref{sec:SNbloch} details the calculation of silicon's Bloch functions, and how they are included in the theory. Here, we pay special attention to common approximations to the Bloch functions and where they lead to inconsistent results. Sec.~\ref{sec:SNcc} discusses the role of the central cell correction in our calculation, and in particular our tetrahedrally symmetric variant necessary to reproduce the energy spectrum of phosphorus donors in silicon when the full Bloch functions of silicon are used. Sec.~\ref{sec:SNvar} describes our variational solution to the theory, including a statistical uncertainty quantification (UQ) procedure that demonstrates the stability of our results and a comparison to NEMO-3D atomistic tight-binding calculations. Sec.~\ref{sec:tunnelCouplings} presents results of our calculations of donor-donor tunnel couplings. In Sec.~\ref{sec:tunnelCouplingsenum}, we first cross-validate our results using NEMO-3D calculations and check for stability using our UQ procedure. We then detail the exhaustive enumeration of the tunnel coupling for all possible relative positions between a phosphorus donor at the origin and a second donor at all lattice locations throughout a 30 nm surrounding cube of silicon. 
After that, Sec.~\ref{sec:tunnelCouplingsenumstraggle} studies the implications of these results on the feasibility of achieving large donor-donor coupling when faced with uncertain donor placement. Finally, in Sec.~\ref{sec:summary} we summarize our results and offer concluding remarks. \section{Shindo-Nara effective mass theory} \label{sec:SN} \begin{figure*}[tb] \includegraphics[width= 1.0 \linewidth]{oneDonor.pdf} \caption{\label{oneDonor}(Color online) Multi-valley effective mass calculations for a single phosphorus donor in silicon. (a), Sketch of the band structure of silicon and the resulting donor physics. The conduction band valleys are initially six-fold degenerate; valley-orbit coupling causes level splitting due to the sharp confinement of an impurity potential. The resulting energy levels for phosphorus are shown. (b), Our converged donor potential, including the central cell correction, which exhibits tetrahedral symmetry. The constant energy surfaces shown are $-0.5$ (outer contour), $-1.0$ (middle contour), and $-4.0$~eV (central contour), respectively. (c-d), Multi-valley effective mass ground state for a single phosphorus donor in silicon. (c) shows a side view, while (d) shows a top-down view of the $x-y$ plane. The silicon lattice is superposed toward the center of the plots for scale; the white curtain indicates where the envelope $|F|^2$ is one percent of its maximum value. (e-f), Atomistic tight-binding simulations corresponding to (c-d), performed in NEMO-3D and visualized using the atomic orbitals of Ref.~\onlinecite{Nielsen2012}. The envelope curtain is copied from (c-d) for comparison. (g-h), Cuts along the parallel and perpendicular directions of the envelope function in one conduction band valley. The dashed lines are the effective mass theory from the present work; the shaded bands are $\pm 4 \sigma$ statistical uncertainty limits, determined by the UQ techniques described in Appendix~\ref{sec:crossValuq}. 
The lower bold curves show the corresponding Kohn-Luttinger envelope functions, for comparison. (i-k), Cuts along the $x$-axis of the full electron density, for effective mass theory (solid curves) and NEMO-3D (dotted curve in i). (i) shows the $A_1$ ground state, (j) shows one of the three degenerate $T_2$ first excited states, and (k) shows one of the two degenerate $E$ first excited states. } \end{figure*} \begin{figure*}[tb] \includegraphics[width= 0.8 \linewidth]{blochFunctions.pdf} \caption{\label{blochFunctions} (Color online) The total Bloch function density, with the silicon lattice superimposed on the plots for scale. Panel (a) shows well-converged Bloch functions, including high-frequency terms due to their periodic parts. Panel (b) truncates to \emph{form factors}, where each pair $u^*_{\mathbf k_0^l}(\mathbf r)u_{\mathbf k_0^j}(\mathbf r)$ is set equal to its constant-frequency component. Panel (c) simplifies the situation further, using \emph{trivial} ($u_{\mathbf k_0^j}(\mathbf r)=1$) Bloch functions. As shown in panels (b) and (c), these represent drastic approximations, so it is not surprising that calculations using them yield results different from those using the Bloch functions in panel (a). } \end{figure*} The central tenet of effective mass theory for a low-energy conduction electron in silicon is that its wave function $\psi(\mathbf{r})$ has support only within the vicinity of the six equivalent valley minima,\cite{Kohn1955} sketched in Fig.~\ref{oneDonor}(a): \begin{equation} \psi(\mathbf{r}) = \sum_{j=1}^6 F_j(\mathbf r) \phi_j(\mathbf r). \end{equation} Here, the sum runs over the six valley minima $\mathbf k_0^j$, located $0.84 \times 2 \pi/a$ along the cartesian axes ($a=0.543$~nm is the cubic unit cell length of Si), and $\phi_j(\mathbf r) =u_{\mathbf k_0^j}(\mathbf r) e^{i \mathbf k_0^j \cdot \mathbf r}$ is the Bloch function belonging to the minimum of the $j$th valley. 
The prefactors $F_j(\mathbf r)$ are called \emph{envelope functions}, and are slowly varying on the length scale of the lattice. The multi-valley EMT formalism we use here was first derived by Shindo and Nara.\cite{Shindo1976} The central equation of their theory is: \begin{equation}\label{eq:SN} E F_l(\mathbf r) = \left( \hat{\mathbf{T}}_l + U(\mathbf r ) \right) F_l(\mathbf r) + \sum_{j \in \pm \lbrace x, y, z \rbrace} V^{VO}_{lj}(\mathbf r) F_j(\mathbf r), \end{equation} which is an effective Schr\"odinger-like equation for the envelope functions. Here, $\hat{\mathbf{T}}_l$ is the kinetic energy operator of the $l$th valley, where for example $\hat{\mathbf{T}}_{+z} = -\frac{\hbar^{2}}{2m_{\perp}} \big( \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} + \gamma \frac{\partial^{2}}{\partial z^{2}} \big)$ with $\gamma = m_{\perp}/m_{\parallel}$ the ratio of effective masses, $U(\mathbf r)$ is the external (non-crystal) potential energy, and $V^{VO}(\mathbf{r})$ is the valley-orbit coupling $V^{VO}_{lj}(\mathbf{r}) = \phi_l^*(\mathbf r) \phi_j(\mathbf r) U(\mathbf r)$. To solve Eq.~(\ref{eq:SN}), we first need to compute $V^{VO}$, which requires the computation of the Bloch functions $\phi_j(\mathbf r)$ of silicon as well as the potential energy $U(\mathbf r)$. In the next section, we detail the calculation of $\phi_j(\mathbf r)$ within density functional theory. After that, we describe the calculation of $U(\mathbf r)$, including an appropriate central cell correction. \subsection{Calculation and approximation of silicon Bloch functions}\label{sec:SNbloch} We calculated the Bloch function at the conduction band minimum of pure bulk silicon using Kohn-Sham density functional theory within a variety of different approximations. We employed a plane wave basis in all cases, using both the Vienna Ab-initio Simulation Package (VASP) \cite{Kresse1996,Kresse1999} and Quantum Espresso \cite{QE2009} packages to check for consistency among results. 
In VASP, we used the Projector Augmented Wave (PAW) formalism \cite{Blochl1994} to treat the electron-ion interaction, whereas in Quantum Espresso we used a variety of norm-conserving pseudopotentials (NCPP). While the plane wave coefficients are strictly $\ell^2$-normalized in NCPP calculations, they are not in PAW. Rather than explicitly including the effect of the PAW projectors on the normalization, we rescaled coefficients to achieve strict normalization, i.e. $1 \equiv \sum_\mathbf G \left| A^j_\mathbf G \right| ^2 $, with the Fourier components $ A^j_\mathbf G$ defined in Eq.~(\ref{blochFuncDef}). In a typical case, the uncorrected norm is within $3\%$ of unity due to the delocalized nature of the conduction band minimum orbital. In both methods, we performed calculations using different parameterizations of the local density approximation (LDA) and generalized gradient approximation (GGA) exchange-correlation functionals, as well as the hybrid Heyd-Scuseria-Ernzerhof (HSE) functional.\cite{Heyd2003} For each pseudization scheme and functional, we used the following procedure for calculating the Bloch function. First, we performed an initial highly-converged self-consistent calculation to generate the Kohn-Sham potential which reproduces the ground state electronic density and energy of pure bulk silicon within a specific exchange-correlation approximation. Our criterion for self-consistency was that the change in the total energy between cycles be less than 1 $\mu$eV. We increased the plane wave cutoff and number of k-points in the first Brillouin zone until the total energy was converged to less than 1 meV/atom. Next, we performed a second, non-self-consistent calculation with the fixed Kohn-Sham Hamiltonian at 1000 equally-spaced k-points between the $\Gamma$ and X points, i.e., along $\Delta$. We then assessed the resultant Kohn-Sham orbital energies of the lowest conduction band to determine the location of the conduction band minimum. 
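The uniform rescaling applied to the PAW coefficients above is a one-line operation. The sketch below illustrates it with made-up coefficients (not our tabulated ones), assuming the coefficients are stored as a flat complex array:

```python
import numpy as np

def normalize_plane_wave_coefficients(A_raw):
    """Uniformly rescale plane-wave coefficients A_G so that
    1 = sum_G |A_G|^2, as required when the plane-wave part of a
    PAW wave function is not strictly l2-normalized."""
    A_raw = np.asarray(A_raw, dtype=complex)
    return A_raw / np.sqrt(np.sum(np.abs(A_raw) ** 2))

# Made-up coefficients whose uncorrected norm differs from unity by a few percent
A = normalize_plane_wave_coefficients([0.70, 0.50 + 0.10j, 0.45 - 0.20j, 0.15])
```

After rescaling, $\sum_\mathbf{G} |A^j_\mathbf{G}|^2 = 1$ holds exactly, at the cost of discarding the (small) PAW augmentation contribution to the norm.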
Finally, we extracted the Bloch function as the plane wave coefficients of this particular orbital. The coefficients generated using different codes and functionals show a high degree of consistency. After applying a uniform phase shift to make the $(0,0,0)$ coefficient of each Bloch function real, we found the $\ell^2$ distance between any given pair of Bloch functions to be $\approx$ 0.025 or less, representing approximately $2.5 \%$ relative error. We observed a similar degree of consistency with results published elsewhere.\cite{Saraiva2011} While the different exchange-correlation approximations utilized give a reasonably accurate description of the equilibrium lattice constant of silicon as well as the ordering and character of its near-gap bands, they vary dramatically in the precise value of the band gap.\cite{Heyd2005} This deficiency seems to be irrelevant to the Bloch function of interest. Given this consistency, no single choice stands out as giving the best Bloch function. We choose to use the results of the PAW/HSE calculation, which are tabulated in a supplementary data file. It is worth commenting on the use of Kohn-Sham orbitals in this capacity. Strictly speaking, the direct physical significance of these orbitals is limited, as they are solely intended to serve an auxiliary role in representing the interacting electronic density, which is the real quantity of interest in DFT.\cite{Kohn1965} Here, we are simply using Kohn-Sham DFT as a convenient tool to generate an effective mean-field Hamiltonian, the eigenfunctions of which have qualitative atomically-resolved features that are needed by our effective mass theory. The high degree of consistency of relevant quantities between calculations gives us confidence that this approach is reasonable.
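The pairwise consistency check described above (uniform phase alignment followed by an $\ell^2$ distance) can be sketched as follows; the coefficient sets below are illustrative placeholders:

```python
import numpy as np

def phase_aligned_l2_distance(A, B):
    """l2 distance between two Bloch coefficient sets after applying to
    each a uniform phase that makes its G=(0,0,0) coefficient (stored at
    index 0 here) real and positive, so that the comparison is insensitive
    to the arbitrary global phase of each Kohn-Sham orbital."""
    A = np.asarray(A, dtype=complex)
    B = np.asarray(B, dtype=complex)
    A = A * np.exp(-1j * np.angle(A[0]))
    B = B * np.exp(-1j * np.angle(B[0]))
    return float(np.linalg.norm(A - B))
```

Two coefficient sets that differ only by a global phase have distance zero under this measure, which is the relevant notion of equality for Bloch functions.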
The contact hyperfine interaction between the spins of the electron and donor nucleus, nonzero for the $A_{1}$ ground state and zero for all other five states in the $1s$ manifold, is proportional to the charge density of the full electron wavefunction at the donor nucleus, $\vert \psi(0)\vert^{2}$. Past work in \emph{ab initio} DFT has found good agreement with experiment,\cite{Overhof:2004,Huebl:2006,Assali2011} demonstrating that $\vert \psi(0)\vert^{2}$ is well-understood. Effective mass theory typically does not attempt to predict contact hyperfine, since doing so requires detailed knowledge of the Bloch function near the atomic core. Rather, a \emph{bunching factor} is defined,\cite{Assali2011} which can either be tuned to experiment or calculated from DFT, and empirically accounts for a larger amplitude of the electronic wave function near the core than would be predicted from EMT alone. Using a bunching factor facilitates comparison of contact hyperfine with experiment, which appears promising.\cite{Pica2014_2} We find that our calculated value of $\vert \psi_{A_{1}}(0)\vert^{2} = 1.0 \ \mathrm{nm^{-3}}$ is consistent with a Bloch function bunching factor at the donor of about $440$ in order to give the measured contact hyperfine interaction strength of 117.5 $\mathrm{MHz}$.\cite{Pica2014,Saraiva2014} The precise value of this bunching factor is not physically meaningful, since the atomic core lies within the augmentation sphere (with radius 0.1054 nm) used in the computation of our Bloch functions. For this calculation, we include only the plane wave part of the full PAW wave function. In particular, we note that the dominant contribution of the central cell correction lies well outside the augmentation sphere around the donor, so we expect the central cell parameterization to not be significantly sensitive to the wave function form near the nucleus. 
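As a consistency check on the numbers quoted above, the bunching factor can be recomputed from the textbook Fermi-contact expression. The ${}^{31}$P nuclear $g$-factor and the other constants below are standard values not quoted in the text, so this is a sketch of the arithmetic rather than our actual fitting procedure:

```python
import numpy as np

# Standard physical constants (SI); g_n is the 31P nuclear g-factor
mu0  = 4e-7 * np.pi            # vacuum permeability, T m / A
g_e  = 2.00232                 # electron g-factor
mu_B = 9.2740100783e-24        # Bohr magneton, J / T
g_n  = 2.2632                  # 31P nuclear g-factor
mu_N = 5.0507837461e-27        # nuclear magneton, J / T
h    = 6.62607015e-34          # Planck constant, J s

def bunching_factor(A_MHz=117.5, psi0_sq_per_nm3=1.0):
    """Bunching factor eta required for the Fermi-contact expression
    A/h = (2 mu0 / 3) g_e mu_B g_n mu_N eta |psi(0)|^2 / h
    to reproduce the measured hyperfine frequency A_MHz, given the
    EMT charge density |psi(0)|^2 (in nm^-3) at the nucleus."""
    prefac = (2.0 * mu0 / 3.0) * g_e * mu_B * g_n * mu_N / h  # Hz m^3
    return A_MHz * 1e6 / (prefac * psi0_sq_per_nm3 * 1e27)

eta = bunching_factor()
```

With the quoted $|\psi_{A_1}(0)|^2 = 1.0~\mathrm{nm^{-3}}$ and $A = 117.5$~MHz, this gives $\eta \approx 440$, consistent with the value stated above.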
Since this work is concerned primarily with the electronic wave function away from the donor site, for simplicity we do not employ the more sophisticated techniques required to resolve the contact hyperfine coupling. A common approximation performed in EMT treatments involves substantially simplifying the Bloch functions. However, such approximations lead to uncontrolled error, significantly disrupting the reliability of EMT. To avoid this, we decompose the periodic part of the Bloch functions into plane wave components \cite{Saraiva2011}, which are computed as described above using density functional theory. We discuss the practical cost of this procedure in more detail later in this section. The simplifications most commonly employed in effective mass theory reduce the valley-orbit coupling by approximating the Bloch function product $ \phi_l^*(\mathbf r) \phi_j(\mathbf r) $. Here, each of the Bloch functions is specified by\cite{Saraiva2011} \begin{align}\label{blochFuncDef} \phi_j(\mathbf r) &= u_{\mathbf k_0^j}(\mathbf r) e^{i \mathbf k_0^j \cdot \mathbf r} \nonumber \\ & = e^{i \mathbf k_0^j \cdot \mathbf r} \sum_{\mathbf G} A_{\mathbf G}^j e^{i \mathbf G \cdot \mathbf r}, \end{align} where $\mathbf G$ is a reciprocal lattice vector. We compute the set of Fourier coefficients $\{ A_{\mathbf G}^j \}$ that determine $u_{\mathbf k_0^j}(\mathbf r)$ using density functional theory (as described above), and we list our coefficients in a supplementary data file. The most drastic way to approximate the Bloch function product is to set $\phi_j(\mathbf r) \approx e^{i \mathbf k_0^j \cdot \mathbf r}$, amounting to \emph{trivial} Bloch functions. Another approximation is to write the product as $\phi_l^*(\mathbf r) \phi_j(\mathbf r) \approx C_{lj} e^{i (\mathbf k_0^j-\mathbf k_0^l) \cdot \mathbf r}$, where the factors $C_{lj}$ are called \emph{form factors}.
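Given tabulated coefficients, evaluating Eq.~(\ref{blochFuncDef}) at a point reduces to a short complex sum. The sketch below uses placeholder inputs rather than our tabulated coefficients; the single-coefficient case reduces to a trivial Bloch function, which serves as a sanity limit:

```python
import numpy as np

def bloch_function(r, k0, G_vectors, A_coeffs):
    """Evaluate phi_j(r) = e^{i k0 . r} sum_G A_G e^{i G . r}.

    r, k0 : (3,) arrays (nm and nm^-1); G_vectors : (N, 3); A_coeffs : (N,)."""
    r = np.asarray(r, dtype=float)
    u = np.dot(np.asarray(A_coeffs, dtype=complex),
               np.exp(1j * (np.asarray(G_vectors, dtype=float) @ r)))
    return np.exp(1j * np.dot(k0, r)) * u

# +x valley minimum located 0.84 * 2 pi / a along the x axis
a = 0.543  # nm, cubic unit cell length of Si
k0x = np.array([0.84 * 2.0 * np.pi / a, 0.0, 0.0])
# Trivial Bloch function limit: a single G = 0 coefficient equal to 1
phi = bloch_function([0.1, 0.2, 0.3], k0x, [[0.0, 0.0, 0.0]], [1.0])
```

In the trivial limit $|\phi_j(\mathbf r)| = 1$ everywhere; with the full coefficient set, the periodic part $u_{\mathbf k_0^j}$ restores the atomic-scale density modulation shown in Fig.~\ref{blochFunctions}(a).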
In Fig.~\ref{blochFunctions}a, we plot the full Bloch function density $\sum_j \left| \phi_j(\mathbf r) \right|^2$. In Figs.~\ref{blochFunctions}b-c we plot the same quantity with the form factor and trivial approximations, respectively. As can be seen from these plots, the Bloch functions under either approximation bear little qualitative resemblance to the full Bloch function. In addition, using these approximate Bloch functions with a central cell correction tuned for more converged Bloch functions results in substantial energy discrepancies, with the ground state energy approximately 5 meV too positive when using trivial Bloch functions and approximately 10 meV too positive when using form factor Bloch functions. Although re-converging a central cell correction with these approximate Bloch functions can improve this discrepancy, it remains the case that these approximations are neither well-justified nor well-controlled. We next address the practical cost of including the full Bloch functions in our theory. This addition does not change the dimensionality of the Hamiltonian being diagonalized, and the only additional computational cost is associated with the evaluation of the valley-orbit matrix elements. In practice, when we compute matrix elements, we truncate the series \begin{equation} \phi_l^*(\mathbf r) \phi_j(\mathbf r) = \sum_{\mathbf G, \mathbf G'} \left(A_{\mathbf G'}^l \right)^* A_{\mathbf G}^j e^{i (\mathbf k_0^j-\mathbf k_0^l+\mathbf G - \mathbf G' ) \cdot \mathbf r} \end{equation} to include all terms up to $|\mathbf G - \mathbf G' | \leq 4.4 \times 2 \pi / a$, which we find to be well-converged. Grouping terms by their $\mathbf G - \mathbf G'$ vectors leaves about 100 distinct terms, and hence the evaluation of a matrix element is $\sim100$ times slower than it would be for trivial Bloch functions.
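The truncation and grouping just described can be sketched as follows, with toy coefficients standing in for the tabulated DFT values:

```python
import numpy as np

def grouped_product_terms(G_vectors, A_l, A_j, cutoff):
    """Amplitudes of phi_l^* phi_j = sum_{G,G'} (A_{G'}^l)^* A_G^j
    e^{i (k0j - k0l + G - G') . r}, grouped by dG = G - G' and truncated
    to |dG| <= cutoff.  Returns {tuple(dG): summed complex amplitude}."""
    G_vectors = np.asarray(G_vectors, dtype=float)
    terms = {}
    for ip, Ap in enumerate(A_l):        # G' runs over the conjugated set
        for iq, Aq in enumerate(A_j):    # G runs over the unconjugated set
            dG = G_vectors[iq] - G_vectors[ip]
            if np.linalg.norm(dG) <= cutoff:
                key = tuple(np.round(dG, 9))
                terms[key] = terms.get(key, 0.0) + np.conj(Ap) * Aq
    return terms

# Toy example: two plane-wave components per valley, no truncation in effect
t = grouped_product_terms([[0, 0, 0], [1, 0, 0]], [1.0, 0.5], [1.0, 0.5],
                          cutoff=10.0)
```

After this precomputation, each valley-orbit matrix element requires only one integral per surviving $\mathbf G - \mathbf G'$ group, which is the origin of the $\sim100\times$ factor quoted above.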
Even so, the total cost is still negligible relative to atomistic methods in which the dimensionality of the Hamiltonian scales with the total number of valence orbitals comprising a supercell of the sample (in this case, millions), whereas the dimensionality of our Hamiltonian scales with the total number of donor basis sets included in this same volume in EMT (in this case, two sets). It should also be emphasized that the Bloch functions need only be computed and tabulated once; in practice, the associated DFT calculations therefore do not contribute to the running cost of our method. \subsection{Calculation of the central cell correction}\label{sec:SNcc} The attractive binding potential $U(\mathbf r)$ due to a donor in silicon is well approximated at long distances as a bulk-screened Coulomb potential. However, close to the impurity the dielectric screening effect of silicon is lessened, and the potential is enhanced. The deviation of the potential $U(\mathbf r)$ from bulk-screened Coulomb form at short distances is called a \emph{central cell correction}.\cite{Ning1971,Pantelides1974} In order to reproduce experimentally observed donor energy levels,\cite{Jagannath1981,Mayur1993} we tune the central cell correction using a nested variational optimization. It is worth noting that central cell corrections tuned with the crude approximations of the Bloch functions outlined above do not maintain experimental agreement when the full, correct Bloch functions are used. Likewise, for a central cell correction tuned to the full Bloch functions, using the approximate forms results in markedly different energies.
To date, all EMT studies of electron donor energy levels that employ a central cell correction have assumed a spherically-symmetric or contact ($\delta$-function) correction.\cite{Ning1971,Pantelides1974,Friesen2005,Hui2013,Pica2014} However, to accurately reproduce experimentally observed donor binding energies to within experimental measurement uncertainties, we find it necessary to allow for a tetrahedrally-symmetric central cell correction (Fig.~\ref{oneDonor}(b)), as anticipated in Refs.~\onlinecite{Castner2009, Greenman2013}. \begin{table}[tb] \caption{\label{ccparams}Parameters for central cell correction $U_{\mathrm{cc}}(\mathbf{r})$} \begin{tabular}{|r|l|} \hline $A_{0}$ & -1.2837 \ meV \\ $A_{1}$ & -2642.0 \ meV \\ $a$ & 0.12857 \ nm \\ $b$ & 0.21163 \ nm \\ $c$ & 0.09467 \ nm \\ \hline \end{tabular} \end{table} We determined the central cell correction for the phosphorus donor by the following nested variational procedure: \begin{itemize} \item{Inner optimization: given a central cell correction, construct the total potential and solve the full coupled effective mass equation variationally using a Gaussian basis with six $1s$-type orbitals and one $2s$-type orbital.} \item{Outer optimization: vary the form of the central cell correction so as to reproduce the experimental energies for phosphorus shown in Fig.~\ref{oneDonor}(a).} \end{itemize} Far from the donor, the donor's binding potential takes the form of a bulk-screened Coulomb potential, $U_{\mathrm{c}}(r) = -e^{2} / (4 \pi \epsilon_{\mathrm{Si}} r)$, where $\epsilon_{\mathrm{Si}} = 11.7 \epsilon_0$ is silicon's dielectric constant, $\epsilon_0$ is the permittivity of free space, and $e$ is the electron's charge.
Near the donor, the local potential deviates from this simple $1/r$ behavior, as a result of reduced dielectric screening from the silicon lattice and complex reorganization of the local electronic structure.\cite{Pantelides1974,Greenman2013} To describe this effect, we include a central cell correction $U_{\mathrm{cc}}(\mathbf{r})$, such that the full donor impurity potential takes the form $U(\mathbf{r}) = U_{c}(r) + U_{\mathrm{cc}}(\mathbf{r})$. Due to the tetrahedral symmetry of the covalent bonding between the donor and the neighboring silicon atoms in the lattice, we allow for the central cell correction $U_{\mathrm{cc}}(\mathbf{r})$ to be tetrahedrally symmetric, to be contrasted with the more restrictive spherical symmetry assumed in previous studies.\cite{Ning1971,Pantelides1974,Hui2013,Pica2014} We find that this tetrahedral symmetry is necessary in order to obtain the correct donor binding energies when the full Bloch function is considered. Specifically, unlike with trivial Bloch functions, we find that the donor valley splitting cannot be made large enough to match experiment using a spherically symmetric central cell. We allow the central cell correction to be a function of five parameters, \begin{equation}\label{eq:Central_cell_correction} U_{\mathrm{cc}}(\mathbf{r}) = A_{0} e^{-r^{2}/(2a^{2})} + A_{1} \sum_{i=1}^{4} e^{-\vert \mathbf{r} - b \mathbf{t}_{i} \vert^{2} / (2c^{2})}, \end{equation} where $\mathbf{t}_{i} \in \lbrace (1,1,1), (-1,1,-1), (1,-1,-1),(-1,-1,1) \rbrace$. This potential takes the form of a Gaussian centered at the origin plus four identical Gaussians centered at points along the bond directions. We choose this Gaussian basis for the central cell correction as a convenient means of representing a smooth potential with compact support. In our convention for the lattice coordinates, we take the position of the sites of the primitive unit cell to be $(0,0,0)$ and $(a/4)(1,1,1)$, where $a=0.543 \ \mathrm{nm}$. 
The tetrahedral directions $\mathbf{t}_{i}$ are taken to be oriented along the bonds, for the donor assumed to be located at the coordinate $(0,0,0)$. If the donor is located at a site equivalent to the coordinate $(a/4)(1,1,1)$, the tetrahedral directions must be inverted, $\mathbf{t}_{i} \to -\mathbf{t}_{i}$, to preserve agreement with the bond directions. Following the nested optimization process described earlier, we list the parameters for the tetrahedrally-symmetric central cell correction of Eq. (\ref{eq:Central_cell_correction}) in Table~\ref{ccparams}. Note in particular that the strength of the tetrahedral lobes, $A_{1}$, is large compared to the central spherical term $A_{0}$. This underscores the importance of allowing our central cell to have tetrahedral symmetry. The nested variational approach we used to determine the central cell parameters is underdetermined, as we use five unknowns to satisfy three constraints. Hence, we began the optimization with physically reasonable initial parameters, and terminated the optimization when the donor energies were well within experimental uncertainties. To confirm that our solution is stable, we developed a statistical UQ technique (Appendix \ref{sec:crossValuq}), which we use throughout this study. We now remark on an inconsistency inherent to using $\delta$-function contact potentials to fit the energy levels, as in Refs.~\onlinecite{Fritzsche1962, Friesen2005}. In three dimensions, an attractive potential of the form $U(\mathbf r) = -\alpha \delta^{(3)}(\mathbf r)$ is pathological: it admits variational states of arbitrarily negative energy, so its ground state energy is unbounded from below. While this approach captures the essential physics necessary for first-order perturbation theory, it is inconsistent with any sufficiently rich variational optimization for the orbital basis.
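In contrast, the smooth Gaussian parameterization above is straightforward to evaluate. The sketch below assembles the full donor potential from Eq.~(\ref{eq:Central_cell_correction}) and the parameters of Table~\ref{ccparams}; the Coulomb constant $e^2/(4\pi\epsilon_0) \approx 1440$~meV$\cdot$nm is a standard value, and we follow the equation literally, using the $\mathbf{t}_{i}$ as listed (unnormalized):

```python
import numpy as np

# Central cell parameters from Table ccparams (energies in meV, lengths in nm)
A0, A1 = -1.2837, -2642.0
a_cc, b_cc, c_cc = 0.12857, 0.21163, 0.09467
T_DIRS = np.array([(1, 1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, 1)], dtype=float)
COULOMB_MEV_NM = 1439.96   # e^2 / (4 pi eps0) in meV nm (standard value)
EPS_SI = 11.7              # silicon relative permittivity

def U_cc(r):
    """Tetrahedral central cell correction of Eq. (Central_cell_correction), meV."""
    r = np.asarray(r, dtype=float)
    central = A0 * np.exp(-np.dot(r, r) / (2.0 * a_cc ** 2))
    lobes = sum(np.exp(-np.sum((r - b_cc * t) ** 2) / (2.0 * c_cc ** 2))
                for t in T_DIRS)
    return central + A1 * lobes

def U_total(r):
    """Bulk-screened Coulomb tail plus central cell correction, in meV."""
    return -COULOMB_MEV_NM / (EPS_SI * np.linalg.norm(r)) + U_cc(r)
```

By construction the four lobe centers are equivalent under the tetrahedral symmetry, and a few nanometers from the donor the correction is negligible, leaving the pure bulk-screened Coulomb tail.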
\subsection{Variational solution}\label{sec:SNvar} \begin{figure*}[tb] \includegraphics[width= 1.0 \linewidth]{twoDonors.pdf} \caption{\label{twoDonors}(Color online) Tunnel couplings computed for two phosphorus donors in silicon. (a-d), Comparison of tunnel couplings computed within multi-valley effective mass theory (points with error bars) and NEMO-3D atomistic tight-binding (connected points with no error bars). Here, the tunnel coupling is defined as the energy difference between the first excited state and ground state of the one-electron, two-donor problem. Panel (a) shows tunnel coupling along the [100] direction, panel (b) along [110], and panel (c) along [111]. Panel (d) depicts typical random instances, not along any particular direction. In all cases, the atomistic and effective mass theories exhibit very similar trends and magnitude of oscillations. Along [111], there appears to be a phase discrepancy, likely due to differing placements of the conduction band minima (see the main text for details). The error bars on the effective mass predictions are $\pm 4 \sigma$ statistical uncertainty limits, determined by the UQ techniques described in Appendix~\ref{sec:crossValuq}. (e), Exhaustive tunnel coupling enumeration for two phosphorus donors. Here, we placed one donor at the origin and the second at every possible point within a 30~nm cube surrounding it ($\sim 1.3$ million instances). The spherical shells show cuts (with nearest-neighbor interpolation) of the tunnel coupling at fixed donor separation distances. The tunnel coupling is highly oscillatory, and there is no large region of stability. The full results of the enumeration are tabulated in a supplementary data file. } \end{figure*} \begin{figure*}[tb] \includegraphics[width= 1.0 \linewidth]{tunnelTarget.pdf} \caption{\label{tunnelTarget}(Color online) Probability of achieving large tunnel coupling with uncertain donor placement.
In each panel, one phosphorus donor is placed at the origin and a second is placed at lattice sites within a surrounding 30~nm cube. The placement of the second donor is uncertain. The plotted probability is that of obtaining $t>0.1$~meV for a Gaussian distribution of donor placements as a function of the distribution's center. The lower bound of $0.1$~meV is chosen to be about an order of magnitude larger than typical dilution refrigerator electron temperatures. We performed 20000-shot Monte-Carlo, sampling from a 3D isotropic Gaussian distribution with varying widths: panel (a) corresponds to 1~nm, panel (b) to 5~nm, and panel (c) to 10~nm straggle. Panel (a) depicts an experimentally realistic straggle for STM-based donor placement, while panels (b) and (c) depict the results of increasing donor straggle. The white curtain shown in each plot indicates the contour of constant probability as labeled. These results show that STM placement can ensure large tunnel coupling with high yield, while ion implantation technology can only ever achieve low yield, rendering ion implantation ineffective for deterministic use. } \end{figure*} \begin{table}[tb] \caption{\label{functionParams}Parameters for the variational Cartesian Gaussian envelope basis} \begin{tabular}{|c|c|c|c|} \hline Index & $ (n_{x},n_{y},n_{z})$ & $\alpha_{\perp} \ \mathrm{(nm^{-2})}$ & $\alpha_{\parallel} \ \mathrm{(nm^{-2})}$ \\ \hline 1 & $(0,0,0)$ & 3.48877 & 6.93542 \\ 2 & $(0,0,0)$ & 0.84055 & 3.06020 \\ 3 & $(0,0,0)$ & 0.39326 & 1.23742 \\ 4 & $(0,0,0)$ & 0.03096 & 0.12142 \\ 5 & $(0,0,0)$ & 0.01209 & 0.06195 \\ 6 & $(0,0,0)$ & 0.00732 & 0.03747 \\ 7 & $(2,0,0)$ & 0.20364 & 0.70775 \\ 8 & $(0,2,0)$ & 0.20364 & 0.70775 \\ 9 & $(0,0,2)$ & 0.20364 & 0.70775 \\ \hline \end{tabular} \end{table} Now that we have computed both the Bloch functions and the central cell correction, we are equipped to solve Eq.~(\ref{eq:SN}). 
We do so variationally, by expanding each $F_j$ over a finite orbital basis set of size $N$, \begin{equation} F_j(\mathbf r) = \sum_{\nu=1}^{N} A_{(j,\nu)} F_{(j,\nu)}(\mathbf r), \end{equation} where the coefficients $ A_{(j,\nu)}$ are unknowns to be determined. For each phosphorus atom and valley, we construct a basis from nine atom-centered Cartesian Gaussian functions. For an atom at the origin and the $+z$ valley, for example, we have \begin{equation} F_{(+z,\nu)}(\mathbf{r}) = \mathcal{N} x^{n_{x}} y^{n_{y}} z^{n_{z}} e^{-\alpha_{\perp} (x^{2} + y^{2})}e^{-\alpha_{\parallel} z^{2}}, \end{equation} where the normalization factor $\mathcal{N}$ is chosen such that $\int_{\mathrm{all \ space}} d^{3}r \ \vert F_{(j,\nu)} \vert^{2} = 1$. By symmetry, the orbital basis for one valley is equivalent up to a coordinate permutation to that of other valleys. Within this basis, we express Eq.~\ref{eq:SN} as the generalized eigenvalue problem \begin{equation} \sum_{j,\nu} \mathbf{H}_{(l,\mu),(j,\nu)} A_{(j,\nu)} = E \sum_{j,\nu} \mathbf{S}_{(l,\mu),(j,\nu)} A_{(j,\nu)}, \end{equation} where the Hamiltonian matrix elements are \begin{align} \mathbf{H}_{(l,\mu),(j,\nu)} &= \int d^3 r F^*_{(l,\mu)}(\mathbf r) F_{(j,\nu)}(\mathbf r) \\ &\times \left[ \left( \hat{\mathbf{T}}_l + U(\mathbf r ) \right)\delta_{l,j} + V^{VO}_{lj}(\mathbf r) \right] \nonumber \end{align} and the overlap matrix, block-diagonal with respect to the valleys, is given by \begin{equation} \mathbf{S}_{(l,\mu),(j,\nu)} = \int d^3 r F^*_{(l,\mu)}(\mathbf r) F_{(j,\nu)}(\mathbf r) \delta_{l,j}. \end{equation} Using this matrix formalism, for a fixed $U(\mathbf r)$, we perform a nonlinear optimization to minimize the ground state energy with respect to the nonlinear basis parameters (the $\alpha_{\perp}$ and $\alpha_{\parallel}$ parameters above). For each step in the nonlinear optimization we solve the linear matrix problem. 
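Each such linear step is a small dense generalized eigenproblem. A minimal sketch via Cholesky reduction is given below, with random stand-in matrices in place of our actual matrix elements:

```python
import numpy as np

def generalized_eigh(H, S):
    """Solve H A = E S A via Cholesky reduction: with S = L L^T, the
    problem becomes (L^-1 H L^-T) y = E y, and A = L^-T y."""
    L = np.linalg.cholesky(S)
    Linv = np.linalg.inv(L)
    E, Y = np.linalg.eigh(Linv @ H @ Linv.T)
    return E, Linv.T @ Y

# Stand-in matrices: a random symmetric H and a well-conditioned SPD overlap S
rng = np.random.default_rng(0)
n = 12
M = rng.standard_normal((n, n))
H = 0.5 * (M + M.T)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)
E, A = generalized_eigh(H, S)
```

The eigenvalues come out in ascending order, so the ground state energy is simply `E[0]`; for complex (Hermitian) matrix elements the transposes become conjugate transposes.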
Hence, for any basis ansatz we determine the optimal linear combination of basis functions that minimizes the ground state energy. The linear combinations of basis functions detailed in Table~\ref{functionParams} that form the lowest six energy eigenstates are tabulated in a supplementary data file. Figs.~\ref{oneDonor}(c-d) illustrate the charge density of the ground ($A_1$) state of a phosphorus donor in silicon given by our calculations. For comparison, we solved the same problem using the atomistic tight-binding code NEMO-3D,\cite{klimeck2002development} as shown in Figs.~\ref{oneDonor}(e-f) and detailed in Appendix~\ref{sec:crossValNemo}; we find visual agreement between the two very different methods. In Figs.~\ref{oneDonor}(g-h), we show the variation along the principal axes of one of the six identical envelope functions of the ground state. For comparison, we also plot the envelope functions of Kohn and Luttinger \cite{Kohn1955}, with decay constants found in Ref.~\onlinecite{Baena2012}. The error bars shown are determined by a Monte Carlo UQ procedure detailed in Appendix~\ref{sec:crossValuq}. As has been anticipated,\cite{Saraiva2014} the states are more strongly peaked than those of Kohn and Luttinger, but are more weakly peaked than other recent calculations that assume approximate Bloch functions and a spherically-symmetric central cell correction.\cite{Pica2014} Figs.~\ref{oneDonor}(i-k) show variation along the $x$-axis of the charge density of the ground ($A_{1}$) state, one of the three degenerate first excited ($T_{2}$) states, and one of the two degenerate higher excited ($E$) states, respectively. \section{Study of donor-donor tunnel couplings}\label{sec:tunnelCouplings} \subsection{Exhaustive enumeration of tunnel couplings}\label{sec:tunnelCouplingsenum} Next, we compute the tunnel coupling $t$ between two phosphorus donors using the multi-valley EMT framework.
We define tunnel coupling as the energy difference between the one-electron first excited and ground states of two donors. Earlier work predicted significant sensitivity of the tunnel coupling \cite{Hu2005} and of the exchange coupling \cite{Koiller2001} to donor placement, and our results for the tunnel coupling confirm this. Tunnel coupling and exchange are correlated through their mutual dependence on the strength of overlap between states localized to each donor.\cite{Li2010} In Figs.~\ref{twoDonors}a-d we compare with the results of NEMO-3D, plotting the tunnel coupling along three high-symmetry directions as well as for a sampling of random instances at various separation distances. Agreement is quantitatively very strong, with the exception of Fig.~\ref{twoDonors}c. There, the results appear to be out of phase, although the magnitude of oscillation and trend are very similar. Of special note is that both theories agree perfectly on where the transition between the strong- and weak-coupling regimes occurs, at which the first excited state changes character.\cite{Klymenko2014,Saraiva2014} Shown as a kink in the curves of Fig.~\ref{twoDonors}a, this transition occurs at about 6 nm separation along [100]. Having cross-validated EMT predictions for tunnel coupling, we next leverage the computational efficiency of our EMT to perform an exhaustive enumeration of tunnel couplings within a specified volume. In Fig.~\ref{twoDonors}e, we position one donor at the origin and sweep the second through all valid locations in an enclosing 30 nm cube, resulting in $\sim 1.3$ million donor placements. To visualize these data, we plot the tunnel coupling on concentric shells of varying radii using nearest-neighbor interpolation. For quantum computing applications, since donor placement has experimental uncertainty (placement straggle), it is desirable for tunnel coupling to be stable under small perturbations of position.
Unfortunately, we see here that the tunnel couplings are highly oscillatory. Using this exhaustive analysis, we conclude that there does not exist a sizable region of adjacent donor placements that exhibits stability with respect to straggle, an issue that we will explore in more detail in the next section. \subsection{Statistical analysis of placement straggle}\label{sec:tunnelCouplingsenumstraggle} Since two-qubit gates rely on large couplings between donors \cite{Kane1998}, the preceding calculations cast severe doubt on their experimental viability. Having ruled out deterministically stable tunnel couplings, we turn now to statistical analysis. We accept a donor placement configuration if the tunnel coupling satisfies $t>0.1$~meV, which is roughly an order of magnitude larger than typical dilution refrigerator electron temperatures. We then quantify the probability of exceeding this threshold given a target donor location and straggle. Straggle is determined in practice by the technology used for donor placement. For scanning tunneling microscope (STM) placement, a conservative overestimate of the straggle is $\sim 1$~nm.\cite{Oberbeck2004} In contrast to this precision placement, ion implantation techniques typically have spreads of tens of nm.\cite{Bielejec2010} To study the effects of different placement technologies on achieving high tunnel couplings, in Fig.~\ref{tunnelTarget} we show the probability of achieving $t>0.1$~meV for three different donor straggles: 1~nm in panel (a), 5~nm in panel (b), and 10~nm in panel (c). In each case, the straggle distribution is taken to be an isotropic Gaussian distribution. We determine the probabilities shown by dividing our 30 nm placement cube into a $201 \! \times \! 201\! \times \! 201$ grid of target donor locations and performing 20000 Monte Carlo samples of the tunnel coupling at each point.
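The yield estimate at each grid point amounts to the following Monte Carlo sketch. Here a smooth toy coupling stands in for our tabulated (and highly oscillatory) enumeration, and the function names are illustrative:

```python
import numpy as np

def placement_yield(target, tunnel_coupling, straggle_nm,
                    t_min_mev=0.1, n_shots=20000, seed=1):
    """Estimate P(t > t_min) when the second donor's position is drawn
    from an isotropic 3D Gaussian of standard deviation straggle_nm
    centered on target; tunnel_coupling is any callable r -> t(r) in meV."""
    rng = np.random.default_rng(seed)
    positions = (np.asarray(target, dtype=float)
                 + straggle_nm * rng.standard_normal((n_shots, 3)))
    t = np.array([tunnel_coupling(r) for r in positions])
    return float(np.mean(t > t_min_mev))

# Toy, smoothly decaying coupling (meV) -- NOT the oscillatory real enumeration
toy_coupling = lambda r: 5.0 * np.exp(-np.linalg.norm(r) / 3.0)
p_stm = placement_yield([5.0, 0.0, 0.0], toy_coupling, straggle_nm=1.0)
p_implant = placement_yield([5.0, 0.0, 0.0], toy_coupling, straggle_nm=10.0)
```

Even for this smooth toy coupling, increasing the straggle from the STM-like 1~nm to the implantation-like 10~nm sharply degrades the yield, in line with the trend of Fig.~\ref{tunnelTarget}.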
For STM-compatible placement we find large regions where acceptably large tunnel coupling occurs with high probability, while for the typical placement uncertainty of ion implantation we do not. We therefore expect that achieving $t>0.1$ meV is practical using STM placement but impractical using ion implantation. \section{Summary}\label{sec:summary} We have demonstrated that properly parameterized effective mass theory yields results that agree quantitatively both with experimental energy spectroscopy and with atomistic tight-binding theory \cite{klimeck2002development} that has recently been validated against direct measurement.\cite{Salfi2014} After benchmarking against tight-binding, we leveraged the computational efficiency of EMT to exhaustively enumerate about 1.3 million donor placements, a task not presently feasible with atomistic methods. We showed that although there do not exist any regions of stable tunnel coupling, there do exist regions where experimentally realistic donor placement uncertainty results in large tunnel couplings with high yield. By means of a reliable, physically transparent, and high-throughput statistical survey, this work illustrates that effective mass theory is well suited to quantitative explorations of donor physics that are impractical to solve using more computationally intensive techniques. \section*{Acknowledgements} We thank A.~Saraiva, W.~Witzel, S.~Coppersmith, M.~Friesen, M.~Carroll, A.~Frees, T.~Boykin, J.~Aidun, and P.~Schultz for useful discussions and comments on the manuscript, and R.~Rahman and G.~Klimeck for assistance and support with the NEMO-3D simulations. The simulations presented in this work were performed, in part, on Sandia National Laboratories' Red Sky computing cluster. This work was supported, in part, by the Laboratory Directed Research and Development program at Sandia National Laboratories.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
\section{Theme} We consider the situation where data are used to distinguish between two possible theories: the null hypothesis $H_0$ that the Standard Model is all that is needed, and the alternative $H_1$ that there is also evidence for some exciting New Physics in our data. We assume that a data statistic $t$ is defined and that the probability density functions $f_0(t)$ and $f_1(t)$ under the two hypotheses are known (subject perhaps to the values of nuisance parameters being defined). We discuss very briefly several different statistical issues that arise. Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. \section{Why 5$\sigma$ for Discovery?} Statisticians ridicule the insistence on achieving a $p$-value at least as small as $3\times 10^{-7}$ (equivalent to a significance of 5$\sigma$) before claiming a discovery. They say that people do not know probability distributions out in such extreme tails; this is especially true for systematic effects. The `$5\sigma$ standard' is supposed to provide protection against false discovery claims from the following effects: \begin{itemize} \item{History: There are many cases of $3\sigma$ and $4\sigma$ effects that have disappeared with more data.} \item{LEE: This is discussed in Section \ref{LEE}.} \item{Systematics: These are usually more difficult to estimate than statistical uncertainties. 
An analysis that claims an apparent $6\sigma$ discovery, but whose dominant systematic uncertainties have been underestimated by a factor of 2, in reality has only a much less interesting $3\sigma$ effect.} \item{Subconscious Bayes factor: Even when an analysis does not use explicit Bayesian techniques, Physicists subconsciously tend to assess the Bayesian probabilities $p(H_i|t)$ of $H_0$ and $H_1$ in deciding which hypothesis to accept: \begin{equation} \frac{p(H_1|t)}{p(H_0|t)} = \frac{p(t|H_1)}{p(t|H_0)} \frac{\pi(H_1)}{\pi(H_0)} \end{equation} Here $t$ is the data statistic; the first ratio on the right-hand side is the likelihood ratio; and the second is the ratio of prior probabilities for the hypotheses. If $H_1$ involves something very unexpected (e.g. neutrinos travel faster than the speed of light; energy is not conserved in LHC collisions; etc), $\pi(H_1)/\pi(H_0)$ will be very small, and so the likelihood ratio would need to be extremely large in order to yield a convincing $p(H_1|t)$. This is the basis of the oft-quoted `Extraordinary claims require extraordinary evidence'. } \end{itemize} The last three items above clearly vary from one analysis to another. Thus it is unreasonable to have a single criterion (5$\sigma$) for all experiments. We might well require a higher level of significance for a claim to have discovered gravitational waves or sterile neutrinos than for the expected production of single top-quarks at a hadron collider. Ref. \cite{five_sigma} is an attempt to stimulate discussion of having different criteria for different analyses. \section{$P(A|B) \ne P(B|A)$} It is worth reminding your Laboratory or University media contact personnel that a very small probability $P(data|theory)$ for getting your data, according to some theory, does not imply that the probability $P(theory|data)$ of the theory, given the data, is also very small. 
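Both the subconscious Bayes factor and the $P(A|B) \ne P(B|A)$ point can be made concrete numerically. A minimal sketch, with a purely hypothetical likelihood ratio and prior odds:

```python
# Posterior odds for H1 vs H0, from Bayes' theorem:
#   p(H1|t)/p(H0|t) = [p(t|H1)/p(t|H0)] * [pi(H1)/pi(H0)]
# The numbers below are invented for illustration only.

def posterior_odds(likelihood_ratio, prior_odds):
    """Multiply the likelihood ratio by the prior odds."""
    return likelihood_ratio * prior_odds

# Data favour H1 by a factor of 1000, but H1 is a priori very
# implausible (prior odds of one in a million):
odds = posterior_odds(1000.0, 1e-6)
print(odds)  # ~1e-3: H0 is still favoured by roughly 1000 to 1
```

Even strongly discrepant data thus need not make an a priori implausible theory probable, which is why a small $P(data|theory)$ does not translate into a small $P(theory|data)$ for the null.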
Thus in an experiment to measure the speed of neutrinos, if the probability of obtaining the observed timing, assuming that neutrinos do not travel faster than light, is very small, it is incorrect to conclude that neutrinos are almost certainly superluminal. If anyone still believes that $P(A|B) = P(B|A)$, remind them that the probability of being pregnant, given that the person is female, is $\sim$3\%, while the probability of being female, given that they are pregnant, is considerably larger. \section{$p$-values} For a given hypothesis $H$, the pdf $f(t|H)$ is the probability or probability density of observing $t$, the chosen data statistic. For a simple counting experiment, $t$ might be just the number of observed events. In more complicated cases, it could be a likelihood ratio. The $p$-value is the probability of a value of $t$ at least as large as the observed value\footnote{We are assuming that interesting deviations from $H_0$ would involve an {\bf increase} in $t$.}. Small $p$-values imply that our data and $H$ are incompatible; this could be because the theory is incorrect, the modelling of the effects of our detector on $f(t|H)$ is inadequate, we have a very large statistical fluctuation, etc. $p$-values tend to be misunderstood. It is crucial to remember that they do {\bf not} give the probability of the theory being true. A typical demonstration of this misunderstanding is the jibe directed against Particle Physicists that `they do not know what they are doing because half of their exclusions based on $p < 5\%$ turn out to be wrong.' The fallacy of such reasoning is demonstrated by imagining a series of 1000 measurements designed to test energy conservation at the LHC. Assuming that energy really is conserved, with a cut at $5\%$ we expect about 50 of these measurements to reject the hypothesis of energy conservation, and all of them will be `wrong'. 
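This thought experiment takes only a few lines to simulate; a sketch (the seed is arbitrary, and under a true null hypothesis the $p$-values are uniform on $[0,1]$):

```python
import numpy as np

# 1000 tests of energy conservation, with energy truly conserved:
# each p-value is then uniform on [0, 1], and a 5% cut "rejects"
# energy conservation in about 50 cases -- every one of them wrongly.
rng = np.random.default_rng(42)
p_values = rng.uniform(0.0, 1.0, size=1000)
n_rejected = int((p_values < 0.05).sum())
print(n_rejected)  # typically close to 0.05 * 1000 = 50
```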
Provided you understand what $p$-values are, you will not find this paradoxical. Statisticians\cite{statisticians} also tend to attack $p$-values because numerically they can be very much smaller than likelihood ratios. However, we should not expect them to be similar, because $p$-values refer to the tail area of the data statistic $t$ for a single hypothesis, while the likelihood ratio compares the heights at the observed $t$ of the pdfs for the two hypotheses. The criticism is rather like complaining that, in comparing elephants and mice, their mass ratio is too extreme compared with the ratio of their heights. \section{Wilks Theorem} We assume that we are comparing some data (e.g. a mass histogram) with two theories $H_0$ and $H_1$. If $H_0$ is true, we expect $\Delta S = S_0 - S_1$ to be small or negative; here $S_i$ is the weighted sum of squares for the comparison of $H_i$ with the data. Wilks Theorem states that under certain circumstances, $\Delta S$ will be distributed as $\chi^2$ with $\nu_0 - \nu_1$ degrees of freedom ($\nu_i$ is the ndf for the fit of hypothesis $H_i$ to the data). This is useful in helping us decide which hypothesis we prefer, in that we know the expected distribution of $\Delta S$, assuming $H_0$ is true, and hence we do not have to perform elaborate simulations to determine its distribution. Table~\ref{table:Wilks} illustrates three different scenarios, with the last column stating whether or not Wilks Theorem applies, with asymptotic data. The conditions for the theorem to apply are: \begin{itemize} \item{$H_0$ is true.} \item{The hypotheses are nested, i.e. it is possible to reduce $H_1$ to $H_0$ by a suitable choice of the free parameters in $H_1$.} \item{The values of the free parameters required to achieve this are all defined, and not at the boundary of their range.} \item{The data are asymptotic.} \end{itemize} \begin{table} \begin{center} \begin{tabular} {|c|c|c|c|c|c|} \hline Data & $H_0$ & $H_1$ & Nested? & Params OK? & W. Th. applies? 
\\ \hline \hline Mass histogram & Polynomial of degree 3 & Polynomial of degree 5 & Yes & Yes & Yes \\ \hline Mass histogram & Background distribution & Bgd + signal & Yes & No & No \\ \hline $\nu$ oscillation data & Normal $\nu$ mass hierarchy & Inverted hierarchy & No & N/A & No \\ \hline \end{tabular} \caption{Applicability of Wilks Theorem. In comparing data with two hypotheses, this depends on whether the hypotheses are nested, whether the parameter values required to reduce $H_1$ to $H_0$ are all defined and not on their physical boundaries, and whether there is sufficient data to be in the asymptotic regime.} \label{table:Wilks} \end{center} \end{table} \section{Look Elsewhere Effect (LEE)} \label{LEE} Last month I was travelling on the London underground, and bumped into a colleague I hadn't seen for ages. What a big coincidence that was! Well, it would have been if I had wondered in advance whether I would by chance meet him that day. But there were plenty of ex-colleagues I could have bumped into, and it didn't have to be that particular day, so the overall probability of such an event happening by chance is much larger than I might have thought. That is the essence of the LEE. If I observe a peak at a particular mass in a specific spectrum, the probability by chance of observing such an effect or larger at that position in that spectrum is the local $p$-value. But the much larger chance of this happening anywhere is the global $p$-value; their ratio is the LEE factor. A problem is that the definition of `anywhere' is imprecise. For the graduate student performing this analysis, `elsewhere' is at any relevant mass value in that histogram (or perhaps in any histogram used in that analysis). But the Director General of CERN might be concerned to avoid claiming the discovery of new effects that were in fact simply due to statistical fluctuations in any CERN experiment, and so his `elsewhere' would be much wider than the graduate student's. 
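Under the simplifying assumption of $N$ independent places where a fluctuation could occur, the relation between local and global $p$-values can be sketched as follows (real searches, with correlated bins, determine the LEE factor by simulation of the background-only distribution instead):

```python
def global_p(local_p, n_trials):
    """Chance of a fluctuation at least this extreme occurring
    anywhere among n_trials independent search locations."""
    return 1.0 - (1.0 - local_p) ** n_trials

# A local 3-sigma effect (one-sided p ~ 1.35e-3), searched for in
# 100 independent mass bins, is far less impressive globally:
print(global_p(1.35e-3, 100))  # ~0.13, i.e. a ~13% chance of such
                               # a peak appearing somewhere by chance
```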
Because of this ambiguity, it is important when quoting a global $p$-value to specify your definition of `elsewhere'. \section{Background systematics} In a typical search, there are many possible sources of systematics that need to be considered. Here we discuss just one of them. In fitting a mass distribution by the null hypothesis (background only) or the alternative (background plus signal), it is necessary to find a way of describing the background, for example by a specific functional form with free parameters. But perhaps the chosen functional form is inadequate, and hence there is a systematic associated with the choice of function. Ways of coping with this have included: \begin{itemize} \item{Try different functional forms, and for assessing the systematic, ignore those that have a goodness of fit significantly worse than the best choice. But a problem is deciding what constitutes `significantly worse'.} \item{Use a background subtraction method.} \item{Use a Bayesian approach.} \item{Use a non-parametric method.} \item{etc.} \end{itemize} A new idea is to try various functional forms, and to plot the log-likelihood $LL$ for each of them as a function of the parameter of interest (e.g. the signal strength), with possible offsets for different numbers of free parameters. A modified $LL'$ is then defined as the envelope of all the individual $LL$s. It is this widened $LL'$ that is used to make statements about the signal strength, which incorporate the uncertainty resulting from the various functional forms. It is a method for discrete choices that corresponds to profile likelihoods used for continuous nuisance parameters. It has been used in the CMS $H^0\rightarrow \gamma \gamma$ analysis\cite{bgd_syst}. \section{Coverage} Consider analysing some data to obtain either a range or an upper limit for a parameter (e.g. the rate at which some hypothesised new particle is produced). 
If this procedure were repeated many times, statistical fluctuations would result in differences among the determined ranges. The fraction of these ranges that include the true value of the parameter is called the `coverage'. Ideally the coverage should be independent of the true value of the parameter, and it should equal the nominal value; for supposed 1$\sigma$ intervals it should be 68$\%$. A technique whose coverage is below the nominal value is serious for Frequentists; quoted ranges for the parameter are less likely to contain the true value than is naively expected. In an interesting note, Heinrich has plotted the coverage for a Poisson counting experiment where the intervals for the Poisson parameter are determined from the likelihood by the $\Delta\ln L = 0.5$ rule. The plot of coverage against the Poisson mean is dramatically different from naive expectation (see the figure on page 10 of ref. \cite{Heinrich}). It is important to realise that coverage is a property of the {\bf statistical procedure} used to extract the parameter's range, and does {\bf not} apply to your {\bf actual measurement}. \section{$p_0$ v $p_1$ plots} A recent preprint\cite{Demortier} advocates the use of plots of $p_0$ versus $p_1$ for understanding various issues in comparing data with two hypotheses. These include \begin{itemize} \item{the $CL_s$ method for excluding $H_1$;} \item{the Punzi definition of sensitivity;} \item{the relationship between $p$-values and likelihoods;} \item{the probability of misleading evidence;} \item{the Law of the Iterated Logarithm; and} \item{the Jeffreys-Lindley paradox.} \end{itemize} \section{Conclusions} In performing statistical analyses, it is important to be aware of resources that are available. Thus there are books written by Particle Physicists\cite{books}, and a useful summary of Statistics is provided by the Particle Data Group\cite{PDG}. 
Also the large collaborations have Statistics Committees, some of which have public web-pages\cite{web_pages}. On the software side, RooStats\cite{RooStats} is set up to deal with a wide range of statistical problems. So before reinventing the wheel for your data analysis, see if Statisticians (or Particle Physicists) have already provided a solution to your problem. In particular, do not use your own square wheel if a circular one already exists. Best of luck with your analyses. \bigskip \bigskip \begin{center} {\large I am grateful to members of the CDF and CMS for many lively and useful discussions.} \end{center}
\section{Introduction} ~~~~Diarrheal disease is the second leading cause of death around the world for children under 5 years of age \citep{black2010global}. Though there are many infectious causes of diarrheal disease in children, rotavirus is the leading cause of gastroenteritis \citep{bryce2005estimates,unicef2010diarrhoea,tate2012}. In many countries, better sanitation, hygiene and access to care have reduced the burden of diarrhea \citep{clasen2007interventions, kilgore1995trends}. Despite this trend, the proportion of diarrheal hospitalizations attributable to rotavirus increased between 2000 and 2004 \citep{parashar2006rotavirus}. The recent development of new prophylactic vaccines for rotavirus is a promising advance in the prevention of diarrheal disease and the reduction of overall childhood mortality \citep{patel2009association,madhi2010effect}. Observation of rotavirus dynamics and estimation of the burden of rotavirus disease is limited both by non-specific surveillance and under-reporting. The dynamics of rotavirus transmission must often be inferred from non-specific temporal surveillance of diarrheal disease that includes multiple causes. This is analogous to the dynamics of specific influenza strains, which are commonly inferred from non-specific time series surveillance of influenza-like illness (ILI) that includes infection by multiple influenza strains (influenza A and B), as well as additional viral infections, for example parainfluenza, coronavirus, rhinovirus \citep{riley2003transmission, chowell2011characterizing}. In sub-Saharan Africa, the cause of diarrheal disease is often unknown due to a lack of diagnostic capacity \citep{mwenda2010burden}. Even when the cause of disease is known, an unknown fraction of cases will occur in the community and never be recorded by the health system, leading to a potentially significant level of under-reporting. 
Dynamic models in general, and so-called state-space models in particular, have been an important tool in the assessment of disease burden from non-specific or imperfect surveillance \citep{ionides2006inference,ferrari2008dynamics,breto2009time}. We estimate the burden of rotavirus in the Maradi region of Niger by synthesizing two sources of data. We use hospital surveillance data collected by Epicentre for the incident cases over time, including lab confirmation to assess the likelihood that a case of severe diarrhea is caused by rotavirus. In addition, we use a cluster survey of households conducted to estimate the proportion of diarrheal disease cases in the region seeking care. The latter is used to help estimate the reporting rate. State-space models rely on the temporal correlation in a dynamic model to make the unobservable true state of the system, that is, the incidence of the pathogen of interest, estimable from noisy or imperfectly sampled data \citep{jones1993longitudinal}. Thus, the inference about disease burden is conditional on the structure of the underlying dynamic model. For pathogens with well characterized epidemiology, such as measles and influenza, the application of state-space models to infer disease burden and transmission dynamics has become common \citep{ionides2006inference, cauchemez2008likelihood, breto2009time, simons2012assessment}. The dynamics of rotavirus, which itself comprises multiple strains that result in varying levels of cross-protective immunity to other strains, has been variously described by a suite of different models \citep{pitzer2012direct}. Therefore, inference about rotavirus burden is limited both by imperfect surveillance of rotavirus infection and uncertainty about the underlying transmission dynamics. 
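The role of the reporting rate in such a state-space formulation can be sketched as binomial thinning of the latent incidence. This is a schematic, not the exact observation model fitted here; the weekly counts and the seed are invented, and the reporting probability of roughly 43\% stands in for the survey-informed rate:

```python
import numpy as np

# Schematic state-space observation model: reported weekly cases are
# a binomial draw from the true (latent) incidence, with reporting
# probability rho informed by a household survey. Numbers are invented.
rng = np.random.default_rng(1)

def observe(true_incidence, rho):
    """Binomial thinning of latent weekly incidence."""
    return rng.binomial(true_incidence, rho)

weekly_true = np.array([120, 250, 400, 310, 180])  # latent cases
reported = observe(weekly_true, rho=0.43)          # survey-informed rate
print(reported.sum())  # roughly 43% of the 1260 latent cases
```

Inference then runs in the opposite direction: given the reported counts and a prior on the reporting rate, the latent incidence is estimated conditional on the dynamic model.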
Rather than condition our analysis on any one model, we fit the observed time series to a suite of five different model structures and assumptions to account for uncertainty in model parameters as well as the dynamics represented in the models themselves. While the development of several novel rotavirus vaccines is a promising advance for the control of diarrheal disease in children, the potential impact of the introduction of these vaccines at the population scale is uncertain. The predicted impact of vaccine introduction may depend both on the efficacy of the vaccine and on model structure; for example, \cite{pitzer2012direct} proposed alternative models for boosting of immunity following sequential exposure to rotavirus. Bayesian model averaging (BMA) \citep{bates1969combination,hoeting1999bayesian} allows for the integration of predictions of multiple models, weighted by their posterior support, to generate a single ensemble estimate that accounts for uncertainty in model selection. Here, via BMA, we use the ensemble of fitted models to predict the short-term and long-term impact of vaccination on rotavirus incidence. We then estimate the predicted impact using the vaccine efficacy from two different studies. Our ensemble approach predicts that the current burden of severe rotavirus disease is 2.9 to 4.1\% of the population each year and that a 2-dose vaccine schedule achieving 70\% coverage could reduce burden by 37-43\%. \section{Material and Methods} ~~~~We use data from two sources: a time series of clinic admissions for diarrheal disease and a community-based survey of health-seeking behavior. Clinic surveillance covers a collection of health centers and district hospitals from four districts in the Maradi region of Niger including Aguie, Guidan Roumdji, Madarounfa, and the city of Maradi. A total of 9,590 cases of diarrhea in children under 5 were recorded from December 23, 2009 to March 31, 2012 (118 weeks). 
For each patient, age in months, village of origin, and date of consultation were recorded. Also noted were potential symptoms including temperature, duration of diarrhea before consultation, presence of blood in the stool, presence and duration of vomiting, and level of dehydration. In each case a rapid test for detecting rotavirus was administered; 2,921 cases tested positive for rotavirus. A subset of 378 cases testing positive for rotavirus was also genotyped. While 32 separate strains were identified, more than two thirds of positive cases were of strains G12P[8] or G2P[4]. The distributed nature of Niger's healthcare system is a challenge for surveillance. Roughly a third of all health centers in these districts were included. Notably absent were the many local health posts staffed by community health workers. To estimate both the fraction of cases seeking care at a health center and the fraction seeking any level of care, a second source of data is needed. We use a community survey of children under 5 in the region \citep{page2011health} to estimate these reporting rates. A total of 2,940 children under 5 were selected for inclusion in the cluster survey from households across the four districts. Clusters were allotted according to the population of each village from census data. Sampling weights accounted for household composition and the relative populations of the districts. Among those surveyed, 1,099 caregivers reported at least one episode of diarrhea during the recall period of 27 days. Respondents reported whether they sought care at a health structure. We use the reporting rate of severe diarrhea, which is defined as the presence of acute watery diarrhea and the presence of two or more of the signs of loss of consciousness, sunken eyes, and an incapacity to drink or drinking very little. 
From the cluster survey we determine that an estimated total of 42.9\% of caregivers who reported severe diarrhea consulted at a health center $\left(95\% \text{CI}: (33.1\%,52.7\%)\right)$. The rest either sought care at a district hospital or local health post, or did not seek care at a formal health structure. This estimate is used as a proxy for the reporting rate of rotavirus. More specifically, this information is used to construct an informative prior for our Bayesian approach (as described in the supplementary material). \subsection{Model Overview} ~~~~We consider a range of dynamic models for rotavirus transmission. Information linking individual-level data on the course of infection to the between-person transmission of rotavirus is lacking, leading to variation in the structure of mathematical models for rotavirus \citep{pitzer2012direct}. Using a range of different models allows us to account for the uncertainty in estimation due to model choice. The five models we consider are SIR-like compartmental models of transmission, building upon the models in \cite{pitzer2012direct}. We incorporate age into the model with separate compartments for ages 0-1 month, 2-3 months, 4-5 months, 6-11 months, 12-23 months, and 24-59 months. Fixed parameters are estimated from England and Wales data as described in \cite{pitzer2012direct}. Here we very briefly outline the main features of the five models, Models A through E, based on the SIR framework. Details of the models and the inferential procedure are described in the supplementary material. Model A tracks severe and mild rotavirus separately. Severe infections have a larger force of infection than mild infections. Unlike Model A, Models B-E track successive infections, with immunity acquired through repeated infection. Subsequent infections have a reduced susceptibility to infection and a reduced level of infectiousness. Model C allows for an incubation period of infection as well. 
In Model D there is no temporary immunity during successive infections, and immunity is granted only after all repeated infections. Model E assumes that full immunity can be obtained during successive infections. All model parameters are estimated via Markov chain Monte Carlo (MCMC), and estimates of burden over time were obtained from each model. \subsection{Vaccination} ~~~~We assume vaccination imparts immunity comparable to a natural infection, and consider a strategy wherein a first dose is administered at 2 months of age and a second dose at 4 months. The vaccine is assumed to confer protection comparable to that conferred by primary infection following the first dose. The second dose confers additional protection comparable to that conferred by secondary infection. For Model A, where the risk of infection does not decrease based on the previous number of infections, a separate input parameter is used for the vaccine efficacy. The vaccine efficacy is set equal to the predicted efficacy for Models B-E (see supplementary material for details). We study the effect of the vaccine under varying levels of coverage. The short-term effect of vaccination is assessed by looking at incidence over a five-year period following introduction of the vaccine. The long-term effect is measured by the yearly reduction in incident cases of rotavirus gastroenteritis (RVGE) measured 20 years after introduction of the vaccine. Field efficacy of a multi-dose rotavirus vaccination strategy is uncertain. To reflect this uncertainty, we investigate the impact of vaccination using the value of efficacy from each of two different studies. First, based on the results of \cite{lopman2012understanding} for low-income countries, we assume a seroconversion rate of 63\%. Second, a recent study of a 3-dose vaccination strategy in Niger \citep{isanaka2017efficacy} estimated an efficacy of 66.7\% with all doses. 
The details of representing these two estimates of efficacy in the 5 models are presented in the supplement. \section{Results} ~~~~We fit each model independently and estimate parameters. Then we calculate ensemble estimates using Bayesian model averaging (BMA) \citep{bates1969combination,hoeting1999bayesian} to formalize uncertainty in model selection (see supplementary material for details). Posterior model probabilities (PMPs) measure how strongly each model is supported by the data. Following the BMA approach, we use these probabilities to form a weighted average of the estimates from the five models. There is significant discordance across models in the measures of model fit (Table \ref{tab:BMAresults}). Model C, the model with an incubation period, performs best. Notably, Model A, the only model that does not allow for successive infections with decreased levels of infectiousness, performs significantly worse as measured by posterior model probability. \begin{table}[ht] \centering \begin{tabular}{cccc} \hline Model & Probability & $R_{0}$ & Burden\\ \hline A & 0 & 30.7 (25.8,34.3) & 9.2 (8.1,10.1) \\ B & 0.01 & 13.9 (12.7,15.4) & 3.5 (3.1,3.8) \\ C & 0.92 & 13.4 (11.7,15.3) & 3.5 (3.1,3.9) \\ D & 0.03 & 11.2 (9.4,12.7) & 3.6 (3.2,4.1) \\ E & 0.04 & 10.3 (9.5,12.6) & 3.2 (2.9,3.5) \\ BMA & & 13.4 (10.3,15.4) & 3.5 (2.9,4.2) \\ \hline \end{tabular} \caption{For each model we provide the posterior model probability (PMP), the basic reproductive number $R_{0}$, and the estimated burden. Burden corresponds to yearly cases of severe RVGE (\% of population). The last row gives the model-averaged (via Bayesian model averaging) versions of these estimates.} \label{tab:BMAresults} \end{table} \subsection{Pre-vaccination} ~~~~Our fitted models allow us to construct estimates of the burden in these four districts (Table \ref{tab:BMAresults}). 
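The model-averaged row of Table \ref{tab:BMAresults} is simply a posterior-probability-weighted combination of the per-model point estimates; a minimal sketch using the rounded values from the table:

```python
import numpy as np

# Posterior model probabilities and per-model R0 point estimates
# (rounded values from Table 1, Models A-E).
pmp = np.array([0.00, 0.01, 0.92, 0.03, 0.04])
r0 = np.array([30.7, 13.9, 13.4, 11.2, 10.3])

# BMA point estimate: average the per-model estimates, weighted by PMP.
r0_bma = float(np.dot(pmp, r0))
print(round(r0_bma, 1))  # ~13.2 from these rounded inputs
```

The averaged interval is wider than Model C's alone because it also carries between-model uncertainty, not just within-model parameter uncertainty.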
Of children under five, approximately 3.5\% per year develop severe RVGE as estimated by Models B-D, though this estimate is lower for Model E and significantly larger for Model A. The basic reproductive number $R_0$ is found as the largest eigenvalue of the next-generation matrix \citep{diekmann1990definition} and is significantly larger for Model A. The BMA estimates of burden and $R_{0}$ are close to those of Model C, which has the highest weight. In Figure \ref{fig:burden}, we plot our model projections with uncertainty for reported cases of rotavirus as well as for all cases of severe RVGE. We also note that Models B-E predict a steep decline in cases in children over 1 year of age following the epidemic peak; cases in infants under 1 year, by contrast, are predicted to decline more slowly. Figure \ref{fig:BMAburden} shows the BMA-based model projections, which are close to those of Model C. However, we note that the BMA-based projections have wider confidence intervals because averaged projections incorporate model uncertainty. \begin{figure} \includegraphics[width=\linewidth]{burdenplots.pdf} \caption{Burden estimates under the five fitted models. Dashed lines denote 95\% confidence intervals. Top: weekly reported cases of RVGE and model projections. Middle: model projections of all severe RVGE cases. Bottom: model projections of RVGE incidence by age. Lines are model projections while points represent observed cases.} \label{fig:burden} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{BMAburdenplots.pdf} \caption{Model-averaged (BMA) burden estimates from the five fitted models. Dashed lines denote 95\% confidence intervals. Left: weekly reported cases of RVGE and model projections. Middle: model projections of all severe RVGE cases. Right: model projections of RVGE incidence by age. 
Lines are model projections while points represent observed cases.} \label{fig:BMAburden} \end{figure} All of the fitted models are able to successfully capture the observed age distribution of cases (Figure \ref{fig:BMAageDisttry2}), though Models C and E predict noticeably more cases than observed for older children (2-5 years). The models vary in their ability to capture the temporal dynamics. During the second year of hospital surveillance we can see a secondary peak in the number of cases that is not captured by our fitted models, although we did find that the model dynamics can produce this double peak through an interaction of a high birth rate and seasonal variation when the seasonal forcing is stronger than that estimated here. The BMA projections show a trend similar to Model C, which has the highest weight. \subsection{Projected Impact of Vaccination} ~~~~Here we investigate the impact of vaccination based on the seroconversion rate for low socio-economic settings \citep{lopman2012understanding}. In the supplementary material we provide the impact of vaccination using a different value of efficacy, measured in a 3-dose strategy \citep{isanaka2017efficacy}. The results were qualitatively similar, but quantitatively smaller than those in the main paper. Vaccination causes a noticeable shift in the age distribution across Models B-E (Figure \ref{fig:BMAageDisttry2}), with a higher proportion of RVGE cases occurring in older children. This has significant benefits, considering that the age-specific mortality of rotavirus is higher for children under 2 years of age \citep{morris2012rotavirus}. The BMA-based burden shows a similar trend. 
\begin{figure} \centering \includegraphics[width=.7\linewidth]{BMAageDisttry2.pdf} \caption{Distribution of cases across age groups observed in the data (black dots), predicted by the models (red lines), and predicted after vaccination has been introduced at 70\% coverage (blue lines).} \label{fig:BMAageDisttry2} \end{figure} Over the short term, Models A-E predict an overall decline in total burden, but an increase in the magnitude of peak incidence (Figure \ref{fig:vaccShortTerm}). \begin{figure} \includegraphics[width=\linewidth]{vaccineShortTerm.pdf} \caption{Relative incidence of severe RVGE out to five years after vaccination has been introduced into the models, assuming 70\% coverage. Vaccination is introduced at year 0.} \label{fig:vaccShortTerm} \end{figure} Figure \ref{fig:BMAvaccReduce} provides the short-term and long-term impact of vaccination as model-averaged values from the five models. The short-term trend of vaccination impact based on BMA is similar to that of Model C. At equilibrium (long term), we observe a reduction in severe rotavirus cases with higher levels of coverage. For a fixed (70\%) level of coverage, we predict a 38.9\% reduction in severe RVGE $\left(99\% \text{CI}: (37.1\%,42.6\%)\right)$ over the long term. Based on the recent vaccine efficacy study in \cite{isanaka2017efficacy}, we predict a 29.6\% reduction $\left(99\% \text{CI}: (28.0\%,32.7\%)\right)$ in RVGE over the long term. Details are provided in the supplementary material. \begin{figure} \includegraphics[width=\linewidth]{BMAvaccineReduction.pdf} \caption{Relative incidence of severe RVGE (Left), and percent (Middle) and absolute (Right) long-term reduction in cases by coverage, for Bayesian model averaging from the five fitted models. Dashed lines denote 99\% confidence intervals. Vaccination is introduced at year 0. 
Variation in reduction for a fixed (70\%) level of coverage is demonstrated.} \label{fig:BMAvaccReduce} \end{figure} \section{Discussion} ~~~~Diarrheal disease is a major source of childhood morbidity and mortality. However, the multi-etiology nature of diarrheal disease means that it is difficult, in the absence of lab confirmation, to infer total burden or project the consequences of novel interventions. We have rich but short-term data with which to understand the dynamic process; however, in combination with survey data on health-seeking behavior, we can bring additional information to bear on the observation rate and so interpret the patterns in the non-specific clinic surveillance. For rotavirus, the uncertainty inherent in imperfectly observed incidence is compounded by the lack of a generally accepted model and debate about the underlying mechanisms that drive the epidemiological response \citep{pitzer2012direct}. This motivates an ensemble approach, using a combination of different models along with quantitative surveillance to obtain practical measures of burden and projections of the operational impact of controls. This multi-model ensemble approach is common in the geosciences \citep{mcavaney2001model, stainforth2005uncertainty, tebaldi2005quantifying}, where different assumptions on complex underlying processes can produce different climate projections, motivating a probabilistic forecast from a variety of models. A competing-models approach has been adapted to epidemiological problems as well, such as choosing an optimal strategy for measles vaccination \citep{shea2014adaptive} and assessing the impact of control actions for foot and mouth disease outbreaks \citep{lindstrom2015bayesian}.
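The model-averaged quantities reported above can be illustrated with a minimal sketch of Bayesian model averaging. With equal prior model probabilities, the posterior model weights are proportional to the exponentiated log-marginal-likelihoods, and the BMA point estimate is the weight-averaged projection. All numbers below (log-evidences and per-model projections) are hypothetical placeholders, not the fitted values from this study.

```python
import math

# Hypothetical log-marginal-likelihoods for the five candidate models A-E.
log_evidence = {"A": -1210.0, "B": -1102.5, "C": -1098.2, "D": -1101.0, "E": -1103.7}

# BMA weight for model m: p(m | data) proportional to exp(log p(data | m)),
# assuming equal prior model probabilities. Subtract the max for stability.
m = max(log_evidence.values())
unnorm = {k: math.exp(v - m) for k, v in log_evidence.items()}
z = sum(unnorm.values())
weights = {k: v / z for k, v in unnorm.items()}

# Illustrative per-model projections of the percent reduction in severe
# RVGE at 70% coverage (placeholder numbers only).
reduction = {"A": 55.0, "B": 38.0, "C": 39.5, "D": 37.5, "E": 41.0}

# BMA point estimate: weighted average of the per-model projections.
bma_reduction = sum(weights[k] * reduction[k] for k in weights)
```

Because the weights are dominated by the best-supported model, a poorly fitting model (such as Model A here) contributes little to the model-averaged projection, which is why the BMA-based burden tracks Model C above.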
Here we formally address these two sources of uncertainty, using a state-space model to handle the problem of inferring incidence from non-specific surveillance data, and comparing the inference from an ensemble of proposed models to address the uncertainty in the underlying dynamics. Our ensemble approach suggests robust support for some general patterns of rotavirus dynamics. Peak transmission is well estimated, with a maximum in early March and little variation between models. Rainfall, which is a primary driver of seasonality in the region, peaks in August. \cite{bharti2011explaining} found that early March, when urban population density is at its maximum due to seasonal rural-urban migration, was the peak season for transmission of measles. Though measles is transmitted through aerosolized droplets, the similarity in the peak seasonality suggests that higher population density may also facilitate transmission of rotavirus. We find that the SEIRS structure of Model C (the model with an incubation period) best explains the observed data. In this model, subsequent infections have decreased levels of infectiousness and lower risk of infection compared to the initial infection. All models except for Model A, which offers the worst fit to the data, include this dynamic. The estimated basic reproductive number is fairly robust across Models B-E. In particular, point estimates for Models B-E vary from 10.3 to 13.9 in Table \ref{tab:BMAresults}, though Model A has a much larger $R_0$. There is an observed double peak in incidence (Figure \ref{fig:burden}) during the second year of observation which our fitted models do not capture. However, this may be an anomaly, as the double peak is not seen strongly during the first and third years. We note that our models are capable of reproducing this behavior when the seasonal variation in transmission is stronger than the best-fit estimate, via an interaction between seasonal effects and the high birth rate in the region.
More complex explanations for such double peaks have been observed elsewhere. In cholera, similar to rotavirus in transmission, local ecological variations were responsible for bimodal incidence \citep{de2011role}. Our estimate of the overall burden of severe RVGE is robust across Models B-E. Although the full epidemiological processes are unknown, we can be fairly sure that the total yearly burden among children under 5 is in the vicinity of 3.5\% (Table \ref{tab:BMAresults}). Model A predicts a 3-fold greater incidence of severe RVGE; however, this model has the weakest support, and the model-averaged burden is similar to Models B-E. While uncertainty in retrospective dynamics and disease burden can be characterized using different models, additional uncertainty about the efficacy of proposed interventions limits the ability to predict future dynamics and disease burden. \cite{atherly2012projected} estimated that rotavirus vaccination could avert 2.46 million childhood deaths between 2011 and 2030. Of course, uncertainty in the seroconversion rate \citep{lopman2012understanding} and achievable vaccination coverage means that the true benefit of these vaccines is unknown. Here, we used the ensemble prediction to project the potential impact of rotavirus vaccination in the Niger setting under two scenarios for vaccine efficacy, thus integrating both dynamic uncertainty due to different models and sensitivity to the realized effectiveness of a vaccine program. Using a vaccine efficacy derived from \cite{lopman2012understanding}, we estimate that 70\% coverage could result in a 37-43\% reduction in severe RVGE in children under 5. \cite{isanaka2017efficacy} reported a lower efficacy from a 3-dose schedule in Niger; this would lower the projected reduction in severe RVGE to 28-33\%.
Notably, although BMA estimates a total reduction in yearly cases using both the efficacy reported in \cite{lopman2012understanding} and that in \cite{isanaka2017efficacy}, it also predicts higher epidemic peaks, in which more cases are observed than pre-vaccination. This short-term difference in cycle amplitude is a phenomenon anticipated by \cite{pitzer2009demographic}. Anticipating this shift in dynamic regime caused by vaccination may be critical to the interpretation of short-term surveillance, as the observation of higher peak incidence following the introduction of vaccination may be wrongly interpreted as a failure of the vaccination program. Dynamic models are a powerful tool to interpret disease surveillance data and anticipate the potential consequences of interventions. The method we describe here addresses two main sources of uncertainty: imperfectly observed data and scientific uncertainty about epidemiological dynamics. Our methods also allow us to identify key epidemiological interpretations -- transmission seasonality and the proportional impact of vaccination -- that are robust to model choice, and those that are model dependent, namely $R_0$ and the annual burden of severe RVGE. By assessing the fit of the observed surveillance to each model, we find that these latter measures are robust within the subset of well-supported models. \section{Acknowledgments} ~~~~The authors are grateful to Epicentre for providing the data sets for this research project. MF is funded by a grant from the Ecology and Evolution of Infectious Disease program of the NSF/NIH (award number 1 R01 GM105247-01). {\it Potential conflict of interest statement:} none of the authors has conflicts of interest. \clearpage
\section{Introduction} \label{sec:intro} Polymorphic malware can bypass signature-based detection methods and simple heuristic detection techniques by slightly changing the instructions of an existing malware sample. These new malware instances are called variants. Although these variants appear to be different programs from the viewpoint of signature-based anti-virus scanners, they exhibit similar functionality to their predecessor. Consequently, these new malware variants can bypass traditional detection methods until a signature for them can be identified and incorporated into detection software~\cite{2}. Authors of malware detection systems have attempted to address this problem by using other methods that are more powerful than signature matching; for example, byte frequency~\cite{3}, general similarity measures~\cite{4}, and behavioral analysis~\cite{229} are among the proposed techniques. A common weakness of these detection methods is that they are executed on the same machine they are monitoring. Hence, successful attackers could disable the monitoring software or modify it to prevent detection after gaining entry to the system~\cite{7}. This behavior is evidenced by rootkits, a particularly insidious subclass of malware. Rootkits are a type of computer malware created to hide themselves and elude intrusion detection systems once they gain unauthorized access to a computer system~\cite{46}. Previous work has also explored the idea of detecting the presence of malware by monitoring the power consumption of mobile devices, embedded systems, and software-defined radio. However, to the best of our knowledge, no one has explored whether malware can be detected by monitoring the power consumption of general-purpose computers. Our goal in this paper is to test the hypothesis that, in order to mask themselves, rootkits cause a detectable change in power consumption.
Specifically, we address the following research question: can we detect rootkits on general-purpose computers by analyzing only the power consumption? To this end we built a testbed and designed an experimental setup in which the power consumption was recorded for a sequence of events running on a Windows operating system. This work focuses only on rootkits because they are commonly associated with the establishment of advanced persistent threats and pose serious danger to our nation's computer systems. Preliminary results showed that malware indeed leaves a signal in the power consumption of a general-purpose computer. In particular, monitoring the +12V rails on the motherboard was the most useful for identifying the increase in power consumption after the general-purpose computer was infected by malware. The paper proceeds with related work in Section~\ref{sec:relWork}, followed by the experimental design in Section~\ref{sec:expdesign}, which includes the hardware and software setups used for collecting the power data, the experimental machine's execution of tasks, and descriptions of the rootkits. Section~\ref{sec:results} presents the results of the feasibility study. Finally, conclusions and promising directions for future research are discussed in Section~\ref{sec:conclusions}. \section{Related Work} \label{sec:relWork} Several works have used power consumption metrics for malware detection purposes. These methods have been tested on mobile devices~\cite{8,yang2016power}, embedded systems~\cite{9}, and software-defined radio~\cite{10, gonzalez2014detecting, 250}. The work by Hoffman et al.~\cite{8} explored whether malware can be detected on smartphones by analyzing their power consumption. This method failed due to the noise caused by unpredictable factors, such as user interaction and the device's signal strength.
In contrast, the approach presented by Yang et al.~\cite{yang2016power} demonstrated that malware can be detected by monitoring the power consumption of smartphones. The difference between these two works lies mainly in the type of smartphones used in the experiments. The first method~\cite{8} focused mainly on ``old'' devices (HTC-Nexus One and Samsung Galaxy Nexus), while the second method~\cite{yang2016power} focused on modern devices (Samsung Galaxy S5 and LG G2). Although PowerTutor~\cite{powerTutor} was used for the data collection in both works, this tool may have been updated between the times these two experiments were conducted, which could have influenced the precision of the collected data and skewed the results. Another method that monitors the power consumption of embedded systems with the objective of detecting malware was presented by Clark et al.~\cite{9}. Supervised machine learning techniques, such as 3-Nearest Neighbor, Multilayer Perceptron, and Random Forest, were used to analyze alternating current (AC) and to detect discrepancies among the power profiles. Even though the proposed approach shares several similarities with this work, the main difference is that the work in~\cite{9} focused on monitoring the AC outlet, while we monitor several direct current (DC) channels. The problem with AC is that the current changes direction periodically; as it does, the voltage reverses, making the analog circuits much more susceptible to noise. Similarly, power-based malware detection for software-defined radio was explored by Gonz\'{a}lez et al.~\cite{10, gonzalez2011}. This approach relied on extracting distinctive power consumption signatures and used pattern recognition techniques to determine if they matched the expected behaviors. This research was expanded and used by the PFP firm (\url{http://pfpcyber.com}), which developed a commercial product that detects anomalies on a device by analyzing its power consumption.
This approach is also applicable to embedded devices~\cite{gonzalez2014detecting, 250}. The main difference between this approach and our work is that we monitor all the rails attached to the motherboard plus the CPU, while PFP monitors the power consumption of the device by placing a sensor on the processor's board as close to the power pins as possible. It appears that there are no published research works on using power consumption monitoring for malware detection on general-purpose computers in general, or for the detection of rootkits in particular. \section{Experimental Design and Data Collection} \label{sec:expdesign} The objective of our experiments is to analyze the power consumption of a general-purpose computer in order to detect the presence of rootkits. Our work is based on the hypothesis that rootkits can be detected by the anomalies they cause in the DC power consumption of the general-purpose computer. Specifically, we are interested in determining if there is a difference in the power profiles between the normal and anomalous behavior (i.e., after infection). \subsection{Hardware Configuration} \label{hd_config} Our experimental system is a Dell OptiPlex 755 with a clean installation of 32-bit Windows 7. The instrumentation for our experiments was a Data Acquisition system (DAQ), Model Number: USB-1608G Series \cite{234}. The DAQ connects to the device's motherboard power connector, and the voltage and current are collected on each of the DC power channels. Communication between the DAQ and the experimental machine was established through a USB port. The DAQ provides relatively high-resolution power data, is able to sample at a rate of 250 kHz, and can monitor up to 16 channels. Besides the DAQ, we also used an eight-inch ATX power extender cable that had one male and one female 24-pin connector.
The 24-pin male connector was attached to the motherboard, and the 24-pin female connector was attached to the power supply (PSU). Each group of wires on the PSU was connected to a single overcurrent protection (OCP) circuit, called a \textit{rail}. A PSU has three voltage rails: \textit{+3.3V}, \textit{+5V}, and \textit{+12V}. Table~\ref{tab1} provides a list of the devices that are typically powered by these voltage rails. The +3.3V and +5V rails are typically used by the digital electronic components and circuits in the system, such as adapter cards and disk drive logic boards~\cite{99}. On the other hand, the +12V supply is used by disk drive motors, newer CPU voltage regulators, and any cooling fans in the system~\cite{99}. \begin{center} \begin{table}[H] \caption{Voltage rail usage for a general-purpose computer} \begin{tabular}{|c|c|} \hline \bfseries{Rail} & \textbf{Devices Powered} \\ [0.5ex] \hline +3.3V & chipsets, some DIMMs, PCI/AGP/PCIe cards, \\ &miscellaneous chips \\ \hline +5V & disk drive logic, low-voltage motors, SIMMs, \\ &PCI/AGP/ISA cards, voltage regulators \\ \hline +12V & motors, high-output voltage regulators, \\ &AGP/PCIe cards \\ \hline +12V CPU & CPU \\ [1ex] \hline \end{tabular} \begin{tablenotes} \small \item Acronyms: \begin{itemize} \item SIMM = Single Inline Memory Module \item DIMM = Dual Inline Memory Module \item PCI = Peripheral Component Interconnect \item PCIe = PCI Express \item AGP = Accelerated Graphics Port \item ISA = Industry Standard Architecture \item CPU = Central Processing Unit \end{itemize} \end{tablenotes} \label{tab1} \end{table} \end{center} To ensure the power data was collected adequately, we tested three hardware configurations.
The first hardware configuration monitored a total of eleven DC power channels (four pins had a signal of +3.3V, five pins had a signal of +5V, and two pins had a signal of +12V). With this configuration, the voltage levels were obtained for each of the monitored channels. However, since we were interested in power, both the voltage and current were required. To address this challenge, a DC voltage and current sense PCB was used. The DC voltage and current sense PCB determines the DC current by measuring the voltage drop across a shunt resistor, and then converts that current to an analog voltage output \cite{116}. The PCBs were soldered to those wires on the ATX power extender cable that we were interested in monitoring. This left a total of thirteen DC power channels to be monitored (ten used to measure the current and the other three used to measure the voltage). Since the voltage was the same within each group of rails, we measured it per group, that is, one voltage value for all the +3.3V rails, one for all the +5V rails, and one for all the +12V rails. While testing this hardware configuration, we noticed that there were two +12V rails powering the CPU of the experimental machine. Notably, these +12V rails were separate from the rails that we were already monitoring on the ATX power extender cable. These +12V rails were connected from the PSU to a 4-pin ATX12V power connector on the motherboard. Including these rails, we ended up monitoring a total of fifteen channels.\footnote{A survey of other machines was made to verify that general-purpose computers have this 4-pin ATX12V power connector. More than 20 computers were checked and all of them had the 4-pin ATX12V power connector.} Monitoring fifteen channels at the same time was challenging because, when post-processing, we had to sum several measured currents together.
To simplify the hardware configuration, we evaluated other options that could help us reduce the number of channels to be monitored. After some exploration, we found that all wires carrying the same voltage were soldered to the same contact point on the power supply. This means that all the +3.3V rails were connected to the same contact point, and the same was true for the +5V and +12V rails. Figure~\ref{fig:HW}(left) shows the voltage and current sense PCB that was used for the second hardware configuration, while Figure~\ref{fig:HW}(right) shows how the +12V rails were soldered together on the same contact point on the PSU. \begin{figure}[H] \begin{minipage}[b]{0.47\linewidth} \centering \includegraphics[width=\textwidth]{PCB6} \end{minipage} ~\hfill~ \begin{minipage}[b]{0.47\linewidth} \centering \includegraphics[width=\textwidth]{PowerSupply_2} \end{minipage} \caption{(Left) Voltage and current sensor PCB used in the experiments for the second hardware configuration. (Right) +12V rails soldered to the same contact point on the PSU.} \label{fig:HW} \end{figure} The third hardware configuration emerged from this observation. We grouped all the +3.3V rails on the same voltage and current sense PCB, which was attached to the ATX power extender cable; the same was done for the +5V rails and the +12V rails. Figure \ref{fig:HW2}(left) shows the third hardware configuration, while Figure~\ref{fig:HW2}(right) shows how the wires from the ATX power extender cable were hooked to the DAQ. As can be seen from Figure~\ref{fig:HW2}(right), for each channel on the DAQ we hooked up a wire for the current (black wire), the voltage (red wire), and the ground (silver wire).
\begin{figure}[H] \begin{minipage}[b]{0.47\linewidth} \centering \includegraphics[width=\textwidth, angle=90]{ATX_DAQ_ThirdConfiguration} \end{minipage} ~\hfill~ \begin{minipage}[b]{0.47\linewidth} \centering \includegraphics[width=\textwidth]{ATX_attachedToDAQ} \end{minipage} \caption{(Left) Third hardware configuration used during the experiments. (Right) Wires attached to the DAQ.} \label{fig:HW2} \end{figure} Grouping the rails reduced the number of channels to be monitored to six\textemdash three channels for measuring the current, and the other three for measuring the voltage. In addition, we also included the two +12V rails that power the CPU. Overall, instead of monitoring fifteen channels, we reduced the number to eight: 4 voltage channels and 4 corresponding current channels. This configuration was the one used for the experiments and data collection described here. \subsection{Software Configuration} \label{soft_config} Initially, we used a tool called \textit{TracerDAQ Pro} (version 2.3.1.0), which is an out-of-the-box virtual instrument that acquires and displays data~\cite{234}. This tool ran on a different machine (the data repository) in order to preserve data integrity during the experimentation process. The acquired data from the experimental machine was stored as a CSV file on the data repository. As our experimental design evolved, we found that TracerDAQ Pro was not suitable for obtaining precise power data. To address this issue we developed our own Visual Basic program. There were three advantages to using our own software versus the supplied software: (1) the data has 16 bits of precision; (2) we have control over the sample recording rates and sample timing; and (3) we were able to make additional real-time calculations that helped us verify the obtained power data. Another application used in our experiments is called \textit{Clonezilla}~\cite{203}. Clonezilla is a partition and disk imaging/cloning program.
This tool was used to ensure we had a consistent, clean installation of Windows. We used Clonezilla to create an exact copy of the master hard drive, which was never exposed to malware. \subsection{Data Collection} \label{datacol} The power consumption of the general-purpose computer was collected in two different scenarios: normal behavior (no rootkit running on the system) and anomalous behavior (a rootkit running on the system). For the data collection workflow, we assumed a clean installation of Windows; power data was then collected and labeled as normal. Subsequently, the experimental machine was infected, and power data was collected and labeled as anomalous. For this case study we infected the general-purpose computer with two rootkits: Alureon and Pihar. A segregated network was created to ensure the malware would not spread to the main network. The segregated network consisted of an experimental machine, a data collection repository, a hub, and a cellular data connection. The data collection repository connected to the personal hotspot, and the wireless connection was then shared with the experimental machine through the hub. Two advantages of using a segregated network are: (1) allowing the rootkits to behave normally, while avoiding the possibility of infecting other machines on the network; and (2) allowing us to monitor, record, and analyze the experimental machine's network traffic. Wireshark was used to collect the network traffic of the experimental machine and to validate that the experimental machine was successfully infected with the rootkits being tested. As part of the network traffic analysis, we organized the protocols in the PCAP file in alphabetical order and then focused only on the Domain Name System (DNS) traffic. Among the domains that were captured, one caught our attention (\textit{term0l5ter12.com}). Several websites \cite{243, 245} listed this domain as malicious.
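The domain check described above can be sketched as a simple blocklist lookup. This is an illustrative reconstruction, not the actual analysis pipeline: the hard-coded query list stands in for the DNS names exported from the Wireshark capture, and the one-entry blocklist contains only the domain reported in the text.

```python
# Domain observed in the capture and reported as malicious by blocklist sites.
BLOCKLIST = {"term0l5ter12.com"}

def flag_malicious(dns_queries, blocklist=BLOCKLIST):
    """Return the sorted list of queried domains that appear on the blocklist.

    Query names are normalized to lowercase and any trailing root dot
    (e.g. "example.com.") is stripped before comparison.
    """
    normalized = {q.lower().rstrip(".") for q in dns_queries}
    return sorted(normalized & blocklist)

# Hypothetical query log standing in for the PCAP's DNS column.
queries = ["windowsupdate.com", "term0l5ter12.com.", "example.org"]
hits = flag_malicious(queries)  # -> ["term0l5ter12.com"]
```

In practice the query list would be exported from the capture (for example with a Wireshark display filter on DNS) rather than typed in by hand.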
After all these analyses, we were certain that the experimental machine was successfully infected with both rootkits. To initialize the data collection process, we wrote two programs: a Python script that executes a sequence of events, and a C++ program that inserts what we call a \textit{marker}. The objective of the Python script was to ensure repeatability, while the objective of the marker was to insert a signal into the measured power data to mark the start and end points of each event. Internet Explorer (IE) was chosen as a workload because the Alureon and Pihar rootkits affect the performance of browsers~\cite{117, 118}. When the Python script is executed, it launches two markers before the experimental machine goes idle for a minute. Then the Python script opens ten IE windows, each with a 5-second delay. Figure~\ref{fig:SeqOfEvents} shows data collected after the Python script was executed for the +12V CPU rail prior to and after infection with the Alureon rootkit. These events (idle, opening IE, booting/rebooting) were recorded during three states: (1) prior to infection, (2) after infection, and (3) after infection plus reboot. In order to segment these sections of the power profile, we used the marker to stress the CPU of the experimental machine for five seconds. The Python script places markers in the power data before and after the events are recorded. The advantage of using these markers is that they allow us to determine when a particular event occurs and how long it takes to complete. This workflow was completed three times for the four rails that were monitored.
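The event sequence driven by the Python script can be sketched as follows. This is a simplified reconstruction under stated assumptions, not the authors' actual script: the IE executable path is an assumed Windows default, and the CPU busy-loop stands in for the separate C++ marker program.

```python
import subprocess
import time

# Assumed default location of Internet Explorer on 32-bit Windows 7.
IE_PATH = r"C:\Program Files\Internet Explorer\iexplore.exe"

def marker(duration=5.0):
    """Stress one CPU core for `duration` seconds, producing a visible
    spike (a 'marker') in the recorded power trace."""
    end = time.time() + duration
    while time.time() < end:
        pass  # tight busy-loop keeps the core fully loaded

def run_sequence():
    """Reproduce the recorded event sequence: markers, idle, opening IE."""
    marker()
    marker()                    # two markers delimit the start of the run
    time.sleep(60)              # idle event: machine idles for one minute
    for _ in range(10):         # opening-IE event: ten browser windows
        subprocess.Popen([IE_PATH])
        time.sleep(5)           # 5-second delay between windows
    marker()                    # marker closing the recorded events
```

Scripting the sequence this way is what makes the runs repeatable, so that pre- and post-infection power traces can be aligned marker-to-marker.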
\begin{figure}[H] \includegraphics[scale=0.30]{SequenceOfEvents_1} \centering \caption{Sequence of events after the Python script was executed} \label{fig:SeqOfEvents} \end{figure} The first rootkit, Alureon, also known as \textit{TDL4} or \textit{TDSS}, is a Trojan that allows an attacker to intercept incoming and outgoing Internet traffic in order to gather confidential information such as user names, passwords, and credit card data \cite{89}. There are several generations of this type of malware; for our experiments, we used the fourth generation \cite{204}. Typically, it infects a computer via drive-by download through a questionable website, often a distributor of pornography or pirated media \cite{90}. Once Alureon is installed on the machine, the software searches the system for any competitor's malware and removes it. It also uses an encryption algorithm to hide its communications from traffic analysis tools that are sometimes used to detect suspicious transmissions \cite{90}. Furthermore, this rootkit can manipulate the master boot record (MBR) of the computer to ensure that it is loaded early during the bootup process so that it can interfere with the loading of the operating system \cite{207}. The second rootkit, a variant of Alureon, is a Trojan called \textit{Purple Haze} (also known as \textit{Pihar}). Like Alureon, this rootkit can modify the MBR of the machine, as well as change system settings and reconfigure the Windows registry. Its rootkit capabilities include disabling the antivirus software to keep itself hidden~\cite{ref:Pihar1}. \subsection{Data Pre-processing} \label{datepreprocessing} As part of the data pre-processing, the voltage and current for the monitored rails were multiplied to obtain the power consumption of the general-purpose computer. To plot and interpret the power data, we used MATLAB. After obtaining the power data, the next step was to separate the events based on their start and end points.
To obtain these indices, we wrote a MATLAB script that returns the start and end points of all the markers appearing in the dataset. For this case study, there are a total of eighteen markers. Once we had the start and end points for each event, the next step was to compare those events that were related to each other. Specifically, we were interested in the following comparisons: (1) when the machine was booting prior to infection versus when the machine was rebooting after infection; (2) idle prior to infection versus idle after infection; (3) idle prior to infection versus idle after infection and reboot; (4) opening IE windows prior to infection versus opening IE windows after infection; and (5) opening IE windows prior to infection versus opening IE windows after infection and reboot. \section{Data Analysis and Case Study Results} \label{sec:results} The primary goal of this proof of concept is to determine if there is a difference in the power consumption of a general-purpose computer after malware infection. To test this hypothesis, several experiments were conducted and power profiles were collected for specific events (idle, opening IE, and booting/rebooting). This was done for the rootkits Alureon and Pihar. For each rootkit there were three datasets. Each dataset contains the power consumption obtained for each of the rails that were monitored; in other words, each dataset contains the power profiles of the full sequence of events recorded on each of the rails. The comparison between the normal and anomalous states was done for each of the events recorded on the four rails. Five graphs were generated for each monitored rail. The x-axis of each of these graphs shows ``Data Points'', which refers to the total number of power readings, sampled every 10 milliseconds. For example, a graph showing 3,000 data points covers 30 seconds.
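The pre-processing just described can be sketched as follows, with synthetic numbers standing in for the recorded DAQ traces: per-rail power is the product of the voltage and current channels, marker intervals are located by thresholding the power trace, and indices are converted to seconds using the 10-millisecond sampling interval. The trace values and threshold below are illustrative assumptions, not measurements from the study.

```python
SAMPLE_PERIOD_S = 0.010  # one power reading every 10 milliseconds

def power_trace(voltage, current):
    """Per-sample power: elementwise product of voltage and current."""
    return [v * i for v, i in zip(voltage, current)]

def marker_spans(power, threshold):
    """Return (start, end) index pairs of contiguous runs above threshold,
    i.e. the marker spikes used to segment the events."""
    spans, start = [], None
    for k, p in enumerate(power):
        if p > threshold and start is None:
            start = k
        elif p <= threshold and start is not None:
            spans.append((start, k - 1))
            start = None
    if start is not None:                      # trace ends inside a marker
        spans.append((start, len(power) - 1))
    return spans

def points_to_seconds(n):
    """Convert a number of data points to seconds (3,000 points -> 30 s)."""
    return n * SAMPLE_PERIOD_S

# Synthetic +12V CPU rail: ~20 W baseline with two 5-sample marker spikes.
volts = [12.0] * 20
amps = [1.7] * 5 + [4.0] * 5 + [1.7] * 5 + [4.0] * 5
power = power_trace(volts, amps)
spans = marker_spans(power, threshold=30.0)  # -> [(5, 9), (15, 19)]
```

Once the marker spans are known, the samples between consecutive markers can be sliced out and compared event-by-event across the pre- and post-infection recordings.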
\subsection{+3.3V Rails} \label{3VRails} These rails are typically used by digital electronic components and circuits in the system, such as memory. When comparing the power profiles of booting prior to infection versus rebooting after infection, we noticed that the post-infection power consumption was lower at the beginning, after which the two profiles remained similar. For the other events (idle and opening IE), results showed that a difference in the power consumption could not be established by the naked eye. After analyzing all six datasets (three datasets per rootkit), we concluded that the +3.3V rails are not very useful for detecting different behaviors between the normal and anomalous power profiles, because these rails power memory, which does not consume as much power as the hard drive or CPU. \subsection{+5V Rails} \label{5VRails} For all datasets, when comparing booting prior to infection with rebooting after infection on the +5V rails, we noticed the same behavior as on the +3.3V rails: the power consumption after infection was lower at the beginning of the initialization process, but later kept the same pace as the normal behavior. Hence, comparing booting prior to infection versus booting after infection on the +5V rails is not sufficient to distinguish between normal and anomalous behavior. When we compared idle prior to infection versus idle after infection with Alureon, we observed an increase in the power consumption after the general-purpose computer was infected for two out of the three datasets (66.67\% of the time), while for Pihar we noticed an increase in the power consumption for all datasets (100\% of the time). However, when comparing idle prior to infection versus idle after infection and reboot for both rootkits, we noticed that the power profiles for both scenarios (normal and anomalous) were at the same level.
In other words, a distinguishable difference could not be established by the naked eye. Furthermore, when comparing all the graphs in which the general-purpose computer was idle, we noticed a delay in the power data after the general-purpose computer was infected. We believe this delay occurs because more processes are running after infection, and this extra work consumes more power. Figure~\ref{fig:IdlePriorAfterInfection} shows the power consumption after infecting the general-purpose computer with the Alureon rootkit. As can be seen from Figure~\ref{fig:IdlePriorAfterInfection}, the power consumption in the idle state was higher after infection than prior to infection. Hence, this comparison is a good criterion for detecting malware through the power consumption. \begin{figure}[H] \includegraphics[scale=0.15]{IdlePriorInfectionVsIdleAfterInfection_5V_D3} \centering \caption{Power consumption for idle prior to infection vs. idle after infection with Alureon for the +5V rails} \label{fig:IdlePriorAfterInfection} \end{figure} When IE was opened prior to infection versus after infection with Alureon, we noticed an increase in the power consumption after infection for two out of three datasets (66.67\% of the time). In the case of the Pihar rootkit, this behavior was seen in only one out of three datasets (33.33\% of the time). Figure~\ref{fig:IEPriorAfterInfection} shows the power consumption when opening IE prior to infection versus after infection. From Figure~\ref{fig:IEPriorAfterInfection} we can see an increase in the power consumption when some IE windows were opened. Interestingly, this increase was seen when some of the IE windows hung. This was consistent with the behavior we saw during the data collection process and was later confirmed when analyzing the PCAP file. Based on the network traffic collected by Wireshark, we noticed that Alureon was trying to redirect the search engine to advertisement websites.
However, when comparing opening IE prior to infection versus after infection and reboot for both rootkits, a difference could not be established by the naked eye. \begin{figure}[H] \includegraphics[scale=0.15]{OpeningIEPriorInfectionVsOpeningIEAfterInfection_Alureon_5V_D2} \centering \caption{Power consumption for opening IE prior to infection vs. opening IE after infection with Alureon for the +5V rails} \label{fig:IEPriorAfterInfection} \end{figure} \subsection{+12V Rails on the Motherboard} \label{12VRails} The +12V rails on the motherboard are used to power up the disk drive motors and the fans. For one of the Alureon datasets, the results showed that the power consumption was higher after the infection compared to booting prior to infection (33.33\% of the time). However, for the other two datasets, we saw similar behavior as in the case of the +3.3V and +5V rails. Figure~\ref{fig:BootingPriorAfterInfection} shows an increment in the power consumption after the general-purpose computer was infected during the initialization process. In the case of Pihar, an increment in the power consumption was noticeable in two out of three datasets (66.67\% of the time). \begin{figure}[H] \includegraphics[scale=0.15]{BootingPriorInfectionVsBootingAfterInfection_12V_3} \centering \caption{Power consumption for booting prior to infection vs. booting after infection with Alureon for the +12V rails on the motherboard} \label{fig:BootingPriorAfterInfection} \end{figure} When comparing the idle state (idle prior to infection versus idle after infection and idle prior to infection versus idle after infection and reboot), the results for Alureon showed an increment in the power consumption after infection for two out of the three datasets (66.67\% of the time). A similar increment was seen in all three datasets of Pihar (100\% of the time).
Figure~\ref{fig:IdlePriorAfterInfectionAndReboot} shows an increment in the power consumption when comparing idle prior to infection versus idle after infection and reboot for the Alureon rootkit. \begin{figure}[H] \includegraphics[scale=0.15]{IdlePriorInfVsIdleAfterInfectionandReboot_12V_2} \centering \caption{Power consumption for idle prior to infection vs. idle after infection and reboot with Alureon for the +12V rails on the motherboard} \label{fig:IdlePriorAfterInfectionAndReboot} \end{figure} Nonetheless, when comparing IE (IE prior to infection versus after infection and IE prior to infection versus after infection and reboot), the results for Alureon showed that an increment in the power consumption after infection could be seen in only one of the datasets (33.33\% of the time). Figure~\ref{fig:IEPriorAfterInfectionAndReboot} shows an increment in the power consumption when comparing IE prior to infection versus IE after the Alureon infection and reboot. After analyzing the +12V rails on the motherboard, we concluded that these rails are very useful for distinguishing between the normal and anomalous power profiles. \begin{figure}[H] \includegraphics[scale=0.15]{OpeningIEpriorInfectionVsOpeningIEAfterInfectionAnReboot_12V_2} \centering \caption{Power consumption for opening IE prior to infection vs. opening IE after infection and reboot with Alureon for the +12V rails on the motherboard} \label{fig:IEPriorAfterInfectionAndReboot} \end{figure} When comparing IE prior to infection versus IE after infection for Pihar, we noticed an increment in the power consumption after infection for one out of the three datasets (33.33\% of the time). Interestingly, when comparing IE prior to infection versus IE after infection and reboot, we noticed that the power consumption of the general-purpose computer was higher after infection for all datasets (100\% of the time).
\subsection{+12V CPU Rails} \label{12VCPURail} The +12V CPU rails are separate from the +12V rails on the motherboard (monitored in the PSU). They are used to power the CPU or GPU of a general-purpose computer, whereas the +12V rails on the motherboard power the disk drive motors and fans. The comparison between the power consumption when the general-purpose computer was booting prior to infection versus booting after infection showed that, at the beginning of the initialization process, the power consumption was higher prior to infection for both rootkits. However, at some point during the initialization, an increment in the power consumption after infection was noticeable. This comparison by itself does not provide information that can help us distinguish between normal and anomalous behavior because of the presence of noise. Noise is expected during the booting and rebooting process because the system is executing several processes simultaneously, so even if the malware is present, it is challenging to differentiate between normal and anomalous states. In the case of idle (idle prior to infection versus after infection and idle prior to infection versus after infection and reboot), we noticed that the power consumption for both rootkits in the normal and anomalous scenarios was similar. However, there were some higher spikes after infection. We believe these spikes were generated when the system was executing normal ``non-malicious'' processes. Similar spikes were also seen in the +5V rails. To be sure about the cause of these spikes, as part of our future work, we plan to collect other parameters such as kernel events, registry files, or syslogs of the general-purpose computer and correlate this information with the power consumption. A similar behavior was noticeable during IE execution (IE prior to infection versus after infection and IE prior to infection versus after infection and reboot).
Results showed that the power consumption was similar for both the normal and anomalous power profiles. In addition, some delays were seen on the general-purpose computer after it was infected. Figure~\ref{fig:IEPriorAfterInfection12VCPU} shows the power consumption for opening IE prior to infection versus opening IE after infection with Alureon. \begin{figure}[H] \includegraphics[scale=0.15]{OpeningIEPriorInfectionVsOpeningIEAftterInfection_Alureon_12VCPU_D3} \centering \caption{Power consumption for opening IE prior to infection vs. opening IE after infection with Alureon for the +12V CPU rails} \label{fig:IEPriorAfterInfection12VCPU} \end{figure} After analyzing all six datasets (three datasets per rootkit), we concluded that a distinguishable difference cannot be made by the naked eye when analyzing the normal and anomalous power profiles for the +12V CPU rails. These results were not what we expected: since these rails power the CPU of the general-purpose computer, we thought they would be more informative. However, we are aware that many processes are running, and this extra work consumes more power, making it difficult to establish a difference by the naked eye. It is still possible that the normal and anomalous power profiles may be distinguished by using machine learning algorithms. \section{Conclusions} \label{sec:conclusions} In this paper we presented a proof of concept whose objective was to investigate whether malware leaves a signal in the power consumption of a general-purpose computer. Power data were collected for four rails (+3.3V, +5V, +12V, and +12V CPU) in two different states (normal and anomalous) for two different rootkits. A comparison between the power consumption of the normal and anomalous states was made for each of the recorded events. The results showed that malware undoubtedly leaves a detectable signal in the power consumption of a general-purpose computer.
The signal on the +12V rails on the motherboard was the most useful for identifying an increment in the power consumption after the machine was infected. Results for Alureon showed that when the general-purpose computer was idle (idle prior to infection versus idle after infection and idle prior to infection versus idle after infection and reboot), an increment in the power profiles was noticeable by the naked eye 66.67\% of the time, while for Pihar this increment was seen 100\% of the time. For Alureon, a notable power signal was seen after infection 33.33\% of the time when IE was opened (IE prior to infection versus IE after infection and IE prior to infection versus IE after infection and reboot). In the case of Pihar, an increment in the power after infection was noticeable 33.33\% of the time when comparing opening IE prior to infection versus after infection. When comparing IE prior to infection versus after infection and reboot with Pihar, we noticed an increment in the power consumption 100\% of the time. Besides the +12V rails, the +5V rails are also a valuable parameter for detecting an increment in the power consumption after infection. Results for Alureon showed that 66.67\% of the time there was an increment in the power consumption when comparing idle prior to infection versus idle after infection and when comparing IE prior to infection versus after infection. In the case of Pihar, a noticeable increment in the power consumption was seen 100\% of the time when comparing idle prior to infection versus idle after infection. However, when comparing opening IE prior to infection versus after infection with Pihar, we noticed an increment in the power consumption after infection only 33.33\% of the time. While we obtained promising results, more rootkit samples and more complex data analytics are needed to test and validate this approach.
In addition, while all the processes running on the machine consume power, distinguishing between normal and anomalous behavior on a general-purpose computer is a challenge because this device is not limited to a certain set of instructions, which increases the number of false positives. As part of our future work, we intend to include more rootkit samples and workloads in the experimental design and data collection process. Furthermore, we plan to propose an approach that can minimize the number of false positives. In addition, we plan to incorporate machine learning techniques to automatically distinguish between the normal and anomalous power profiles and detect malware. \section*{Acknowledgments} Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy. This material is based upon work supported by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Building Technologies Office. Katerina Goseva-Popstojanova's work was funded in part by the NSF under grant CNS-1618629. The authors thank Darren Loposser from the Research Instrumentation group at ORNL for his contributions to this project by providing electronics and sensor support. \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} To date, the number of detected exoplanets has grown to over 4000 thanks to various ground-based and space-based surveys, among which the {\it Kepler} mission \citep{Bor10} has played a major role, contributing over two thirds of these discoveries\footnote{http://exoplanet.eu}. The bulk of exoplanets detected by the {\it Kepler} mission are so-called super-Earths or sub-Neptunes, with radii between those of Earth and Neptune and orbital periods less than several hundred days \citep{Tho18}. Although super-Earths are found to be common \citep{DZ13,How13,Zhu18,Mul15}, they do not exist in our Solar System, and how they were formed remains an open question \citep{Lis14,MR16}. Among the Kepler discoveries, one of the most valuable parts is the large sample of multiple transiting planet systems \citep{RH10}, which has greatly advanced our knowledge of exoplanets in many aspects, including planetary masses and thus physical compositions \citep{Car12,HL14,WL13}, and orbital eccentricities and inclinations \citep{FM12,Fab14,Xie16,Van19}, shedding light on their formation and evolution history \citep{Mil16,OC20}. In this paper, we focus on the aspect of orbital spacing, which has attracted numerous studies. \citet{BL13} and \citet{Hua14} investigated the orbital spacings of Kepler's multiple systems in the context of an extended Titius-Bode law of our Solar System. \citet{PW15} found that the orbital spacings of Kepler planets are clustered around the theoretical stability threshold. Some studies investigated the spacings of Kepler planets in terms of orbital period ratios \citep{Lis11,Ste13,SH15}. From the period ratio distribution, the majority of Kepler planets were found not to be in mean motion resonance (MMR).
{\color{black} Nevertheless, the period ratio distribution shows overabundances just wide of first-order MMRs and deficits just short of them \citep{Fab14}, which may have implications for planet formation and evolution \citep{LW12,BM13,Xie14,DL14,CF15,ML19}. } Recently, \citet{Wei18} found that planets orbiting the same host tend to be similar in size (see also \citet{Mil17,Wang17}) and to have regular orbital spacings (i.e., a period ratio correlation), a pattern which they dubbed `peas in a pod'. However, whether such a pattern is astrophysical or a selection effect due to observational biases is still under debate \citep{Zhu20,WP20,MT20,GF20}. Here, we revisit {\color{black} one aspect of the pattern, i.e.,} the period ratio correlation, in detail by taking observational biases into account. This paper is organized as follows. In section \ref{sample}, we select different planet samples by applying different criteria. Then, for each planet sample, we evaluate the significance of the period ratio correlation and the effects of observational biases (section \ref{result.rev_prcor}). In section \ref{result.evi_prdich}, we find evidence that the orbital spacing pattern is more like a dichotomy than a global correlation. In section \ref{discuss}, we discuss the implications of such an orbital spacing dichotomy. Section \ref{summary} summarizes the paper. \section{Sample} \label{sample} Our study is based on the multiple transiting planet systems detected by the {\it Kepler} mission. We use the Q1-Q17 table of Kepler Objects of Interest (hereafter KOIs) from the NASA Exoplanet Archive\footnote{https://exoplanetarchive.ipac.caltech.edu}. Firstly, we exclude all the KOIs which are identified as false positives. Secondly, we apply the following three filters to the remaining planetary systems. \begin{itemize} \item[1] The multiplicity of the planetary system: $N_p \geqslant 4$.
\item[2] The maximum radius of planets in the system: $R_{max} \leqslant 6R_{\oplus}$, where $R_{\oplus}$ is the Earth radius. \item[3] The maximum of the period ratios of adjacent planets in the system: $PR_{max} \leqslant 4.0$. \end{itemize} {\color{black} We adopt the first filter because systems of lower multiplicities tend not to be dynamically packed and thus have a higher likelihood of missing non-transiting planets in between the transiting planets (see more discussions in section 4.3.1), causing a systematic overestimation of the period ratios of neighbouring planets.} Through the second filter, we exclude giant planets, allowing us to focus on smaller planets, i.e., super-Earths and sub-Neptunes. We adopt the third filter according to \citet{Wei18} for comparison with their results. After applying all three filters, we have 56 multiple planet systems in our nominal sample (sample 1 in table \ref{Tab.samp_des}). {\color{black} For comparison with sample 1, we adjust the above three filters to construct samples 2 and 3. In sample 2, we relax the radius cutoff to include those systems which host giant planets with $R_{max}>6R_{\oplus}$. In sample 3, we relax the spacing cutoff of $PR_{max} \leqslant 4.0$. Besides, we adopt the same sample as \citet{Wei18} for our sample 4. The descriptions of the samples are summarized in table \ref{Tab.samp_des}. } \begin{table}[] \centering \begin{tabular}{@{}cccccccc@{}} \toprule Sample id & 1 &2 &3 & 4 (Weiss+ 2018) \\ \colrule $R_{max} \leqslant 6 R_{\oplus}$ & Yes &No &Yes & No\\ $PR_{max} \leqslant 4$ &Yes &Yes &No & Yes \\ $N_{sys}$ &56 &60 & 65 & 95\\ $N_{3}$ &0 &0 & 0 & 53\\ $N_{4}$ &39 &41 & 46 & 31\\ $N_{5}$ &15 &17 & 17 & 10\\ $N_{6}$ &2 &2 & 2 & 1\\ \botrule \end{tabular} \caption{ {\color{black} Summary of samples 1, 2, 3 and 4 described in section \ref{sample}. Different cut conditions (i.e., $R_{max} \leqslant 6R_{\oplus}$ and $PR_{max} \leqslant 4$) are applied to some of the samples.
$N_{sys}$ is the total number of systems, and $N_3$, $N_4$, $N_5$ and $N_6$ are the numbers of systems with 3, 4, 5 and 6 transiting planets, respectively.}} \label{Tab.samp_des} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth,height=\linewidth]{f1} \caption{The period ratio correlation evaluation for the four observed samples (see table \ref{Tab.samp_des} and section \ref{sample}). The x-axis and the y-axis in each panel denote the period ratio of the inner pair of neighbouring planets ($P_{i+1}/P_{i}$) and that of the outer pair ($P_{i+2}/P_{i+1}$), respectively. In the {\color{black} upper-right corner} of each panel, we print the P values of {\color{black} the Kendall correlation test and the Pearson correlation test} (section \ref{result.rev_prcor.eval}). The {\color{black} grey dashed line} shows the perfect correlation, i.e., $y=x$. We can see that {\color{black} all the samples show a strong PR correlation in the Kendall correlation test, while in the Pearson correlation test all the samples except sample 3 show a strong PR correlation. We note that the relatively weaker PR correlation in sample 3 is probably attributable to the inclusion of planet pairs with larger period ratios, i.e., $PR>4$ (note that the axis scales in the bottom-left panel differ from those in the other panels). In fact, the trend that the period ratio correlation becomes weaker with increasing period ratio can indeed be seen in all the samples.}} \label{obs_pr} \end{figure} \section{Results} \label{result} \subsection{Revisit the Period Ratio Correlation} \label{result.rev_prcor} First, we revisit the period ratio correlation \citep{Wei18} in the different samples of section \ref{sample}. \subsubsection{Correlation Evaluation} \label{result.rev_prcor.eval} In the work of \citet{Wei18}, the authors measured the correlation between the orbital period ratio of each pair of neighbouring planets, $P_{i+1}/P_{i}$, and that of the outer pair of neighbouring planets, $P_{i+2}/P_{i+1}$.
They found a Pearson correlation coefficient of 0.46 with a significance of P value $<10^{-5}$, leading to the conclusion that there is a strong correlation among the orbital period ratios of planets in the same system. {\color{black} The Pearson correlation coefficient, however, is not very appropriate for searching for correlations in a relatively small sample, because it assumes a linear correlation and Gaussian scatter. {\color{black} For this reason, besides using the Pearson correlation coefficient (mainly for comparison with \citet{Wei18}), we further repeat all the analyses using Kendall's tau correlation coefficient, which is non-parametric, makes neither assumption, and is thus more robust.}} The detailed procedure is as follows. \begin{itemize} \item[Step 1] We calculate Kendall's tau nonparametric correlation coefficient $\tau_{obs}$ {\color{black} (or Pearson's correlation coefficient $R_{obs}$)} for each sample in section \ref{sample}. \item[Step 2] We randomly scramble the period ratios of neighbouring planets among planetary systems and then re-calculate the correlation coefficient, $\tau_{sim}$ (or $R_{sim}$), for each simulated realization. \item[Step 3] We repeat Step 2 10000 times and calculate the fraction of times with $\tau_{sim}\geqslant\tau_{obs}$ (or $R_{sim}\geqslant R_{obs}$). This fraction gives the P value $P_{Kendall}$ (or $P_{Pearson}$) of the Kendall (or Pearson) correlation test, and $1-P_{Kendall}$ (or $1-P_{Pearson}$) is the confidence level of the observed period ratio correlation. \end{itemize} \begin{figure} \centering \includegraphics[width=\linewidth,height=\linewidth]{f2} \caption{Similar to \reffig{obs_pr}, but for a set of typical Monte Carlo realizations of simulated samples under the assumption that planets are intrinsically randomly paired (see section \ref{result.rev_prcor.eff_obsbias}).
Compared to \reffig{obs_pr}, the period ratio correlations vanish, with much larger P values for the Kendall (Pearson) correlation tests, $P_{Kendall}$ ($P_{Pearson}$).} \label{typi_simu_pr} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth,height=1\linewidth]{f3} \caption{ {\color{black} The distributions of $P_{Kendall}$ (blue histograms) and $P_{Pearson}$ (red histograms) for 1000 Monte Carlo realizations (see section \ref{result.rev_prcor.eff_obsbias} and the appendix) of simulated samples. The arrows in each panel show the $P_{Kendall}$ (blue) and $P_{Pearson}$ (red) of the corresponding observed sample. In each panel, we print the fractions of simulations whose P values are not smaller than the observed ones, i.e., $P_{Pearson}^{obs} \leqslant P_{Pearson}^{sim}$ and $P_{Kendall}^{obs} \leqslant P_{Kendall}^{sim}$, which can be treated as the confidence level that the observed correlation cannot be reproduced by observational biases. } } \label{simu_pr_dis} \end{figure} \reffig{obs_pr} shows the period ratio correlation evaluation for the four samples defined in table \ref{Tab.samp_des}. For each of the samples described in table \ref{Tab.samp_des}, the period ratio correlation is significant with a confidence of larger than 99.99\% in the Kendall correlation test, which is consistent with the result of \citet{Wei18}, although we use different samples and correlation tests. {\color{black} However, for the bottom-left panel showing the result of sample 3, the Pearson test returns a much larger P value of 0.274. This is probably because planet pairs with larger period ratios, i.e., $PR>4$, are included in sample 3. In fact, each panel also shows an apparent trend that the points with larger period ratios become more dispersed with respect to the 1:1 ($y=x$) line. } In section \ref{result.evi_prdich}, we will investigate this trend in more detail.
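As a concrete illustration, the three-step scrambling test of the correlation evaluation can be sketched as follows. This is a minimal sketch rather than the authors' actual code: `inner_pr` and `outer_pr` are hypothetical arrays holding $P_{i+1}/P_{i}$ and $P_{i+2}/P_{i+1}$ for all adjacent planet triples pooled over systems, and SciPy supplies the correlation coefficients.

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr


def permutation_pvalue(inner_pr, outer_pr, n_perm=10000, stat="kendall", seed=0):
    """P value of the observed period-ratio correlation via scrambling.

    inner_pr, outer_pr: hypothetical arrays of P_{i+1}/P_i and
    P_{i+2}/P_{i+1} for adjacent triples pooled over all systems.
    """
    rng = np.random.default_rng(seed)
    coef = kendalltau if stat == "kendall" else pearsonr
    # Step 1: observed correlation coefficient (tau_obs or R_obs).
    obs = coef(inner_pr, outer_pr)[0]
    # Steps 2-3: scramble the pairing among systems and count how often
    # the scrambled coefficient reaches the observed one.
    hits = 0
    for _ in range(n_perm):
        sim = coef(inner_pr, rng.permutation(outer_pr))[0]
        if sim >= obs:
            hits += 1
    return hits / n_perm
```

A small P value means the observed pairing of inner and outer period ratios is much more correlated than random re-pairings, i.e., a high confidence level $1-P$ for the correlation.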
{\color{black} Note that, although P values are reported to high precision here, one should not over-interpret the exact numbers \citep{DL11,Laz14}. For example, $P_{Kendall}=0.279$ and $P_{Kendall}=0.378$ are essentially the same; both indicate no correlation at all. What really matters is the order of magnitude of the P value.} \subsubsection{Effect of Observational Biases} \label{result.rev_prcor.eff_obsbias} Before reaching any conclusion, one should address the issue of observational bias. How do the transit selection effect and detection efficiency affect the observed orbital spacing pattern? Could the observed pattern (\reffig{obs_pr}) be reproduced by observational bias \citep{Zhu20}? Here, we address this issue by forward modeling the transit detection and selection process with a Monte Carlo method (see the appendix for the detailed procedure). With this method, we create 1000 simulated samples of the same size as each observed sample. We then perform the same period ratio correlation evaluation (section \ref{result.rev_prcor.eval}) on the simulated samples. Figure \ref{typi_simu_pr} shows a typical result for each set of simulated samples. As can be seen, {\color{black} all the Pearson test P values for the Monte Carlo realizations are larger than 0.1, and all the Kendall test P values are larger than 0.05}, indicating almost no correlation at all. {\color{black} In Figure \ref{simu_pr_dis}, we plot the distributions of P values for the four simulated sample sets and calculate the fractions of simulations whose P values are not smaller than the observed ones. As can be seen, in most cases (except the Pearson test in sample 3) these fractions are close to $100\%$, implying a high confidence level that the period ratio correlations observed in these samples are physical rather than the results of observational biases.
As for the low fraction ($68.6\%$) for the Pearson test in sample 3, this is because the inclusion of larger period ratios largely reduces the period ratio correlation, as mentioned in Figure \ref{obs_pr}. In the following section, we will investigate how the period ratio correlation changes with the period ratio itself. } \subsection{Evidence of Period Ratio Dichotomy} \label{result.evi_prdich} {\color{black} In this subsection, we further perform a `moving sample' analysis, which reveals that the orbital spacing pattern as a whole is more like a dichotomy than a correlation.} For the sake of clarity, hereafter we only present the results of analyzing the nominal sample (sample 1 in table \ref{Tab.samp_des}), since the other samples generally give similar results. \begin{figure*} \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{f4} \caption{The P value of the correlation test, $P_{Kendall}$ (blue solid line) and $P_{Pearson}$ (red solid line), as a function of the median value of the average system period ratio of the moving subsamples, $Median(\overline{PR})$, for the nominal sample (see table \ref{Tab.samp_des}). {\color{black} We see that there is an abrupt increase in the observed $P_{Kendall}$ (and $P_{Pearson}$) from $<0.01$ to $\sim 1$ at $Median(\overline{PR})\sim1.5-1.7$ (the grey shaded transition area), forming a dichotomy, namely, period ratios are correlated to each other on the left but uncorrelated on the right. For comparison, we also plot the 1-$\sigma$ region of the results for the corresponding simulated samples (blue and red shaded regions at the top of the figure). In contrast to the observed one, most simulated $P_{Kendall}$ (and $P_{Pearson}$) stay above 0.9 (i.e., uncorrelated at all) regardless of $Median(\overline{PR})$, demonstrating that the period ratio correlation, especially the dichotomous feature, could not be produced by random pairing or by selection effects.
} } \label{mov_sample_test} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth,height=0.25\linewidth]{f5} \caption{Similar to \reffig{mov_sample_test}, but here we compare the results of using different sub-sample sizes (15, 16 and 17, from left to right). For clarity, only the results for the Kendall correlation test are shown. The dotted line across each data point shows the range of $\overline{PR}$ of the individual systems in the corresponding sub-sample. All three curves show a similar trend in that $P_{Kendall}$ abruptly increases from $< 0.01$ to $\sim 1$ at $Median(\overline{PR})\sim1.5-1.7$. {\color{black} Note that in the middle and right panels $P_{Kendall}$ is smaller than 0.001 for $Median(\overline{PR})<1.5$ and is thus not shown there.}} \label{diff_subsample_size} \end{figure*} The procedure of this analysis is as follows: \begin{itemize} \item[1] Firstly, we sort all the systems in the sample according to the average period ratio of neighbouring planets, $\overline{PR}$, of each system. \item[2] Secondly, we select the first 15 systems as a subsample and perform the Kendall (and Pearson) correlation evaluation (section \ref{result.rev_prcor.eval}) on this sub-sample, obtaining the P values $P_{Kendall}$ and $P_{Pearson}$. \item[3] Thirdly, we repeat the above correlation evaluation on a series of continuously moving subsamples until the entire sample has been covered. Specifically, each time we move the subsample one step towards larger $\overline{PR}$; for example, the next subsample consists of the 2nd to the 16th systems in the sorted sample. \end{itemize} In \reffig{mov_sample_test}, we plot the result of the above moving sample analysis, i.e., the P values of the correlation tests, $P_{Kendall}$ and $P_{Pearson}$, as a function of the median of $\overline{PR}$ in each moving subsample.
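The three steps of the moving sample analysis can be sketched in code; this is an illustrative sketch rather than the authors' implementation. The hypothetical input `systems` is a list in which each entry holds the adjacent-pair period ratios of one system, and for brevity SciPy's analytic Kendall P value stands in for the scrambling test described earlier.

```python
import numpy as np
from scipy.stats import kendalltau


def moving_sample_pvalues(systems, window=15):
    """Slide a window of `window` systems, sorted by mean period ratio,
    and return (median mean-PR, Kendall P value) for each sub-sample.

    systems: list of per-system lists of adjacent period ratios
    (hypothetical input; each system needs at least two ratios).
    """
    # Step 1: sort systems by their average adjacent period ratio.
    systems = sorted(systems, key=np.mean)
    medians, pvals = [], []
    # Steps 2-3: evaluate every contiguous window of `window` systems.
    for start in range(len(systems) - window + 1):
        sub = [np.asarray(pr) for pr in systems[start:start + window]]
        inner = np.concatenate([pr[:-1] for pr in sub])  # P_{i+1}/P_i
        outer = np.concatenate([pr[1:] for pr in sub])   # P_{i+2}/P_{i+1}
        medians.append(float(np.median([pr.mean() for pr in sub])))
        pvals.append(kendalltau(inner, outer).pvalue)
    return medians, pvals
```

Plotting `pvals` against `medians` reproduces the kind of P-value-versus-$Median(\overline{PR})$ curve described above; a jump in the P value as the window slides to larger spacings signals the dichotomy.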
As can be seen, the P values ($P_{Kendall}$, blue solid curve, and $P_{Pearson}$, red solid curve) increase from $\sim10^{-3}$ (strong correlation) to $\sim1$ (no correlation at all) as the subsample moves towards larger period ratios. {\color{black} However, the increase in P value is not smooth. The transition from correlated to uncorrelated is abrupt. The P value increases by more than two orders of magnitude (from $\sim0.005$ to $\sim0.8$) as the median $\overline{PR}$ changes only slightly, from $1.5$ to $1.7$. This transition zone (the grey shaded area in \reffig{mov_sample_test}) separates two populations: one with correlated period ratios and the other with uncorrelated ones.} We also apply the above moving sample analysis to the 100 simulated samples (created in section \ref{result.rev_prcor.eff_obsbias}) to investigate the {\color{black} effects of random pairing and} observational biases. The blue (red) shaded region in \reffig{mov_sample_test} (both panels) shows the $68.3\%$ confidence interval of the results for the Kendall (Pearson) test. As expected from \reffig{simu_pr_dis}, most simulated samples have large P values and are thus not likely to produce the observed correlation or the transition between correlated and uncorrelated. In \reffig{diff_subsample_size}, we compare the results of changing the moving sample size from 15 to 16 and 17. {\color{black} In this figure, we can see that the results are similar, which demonstrates that the result is not sensitive to a specific bin size.} As a summary of the moving sample analysis, we find evidence of an orbital spacing dichotomy, namely, orbital period ratios are significantly correlated for tightly packed systems but nearly uncorrelated for loosely packed systems. The boundary of this dichotomy is around $\overline{PR}\sim1.5-1.7$, {\color{black} i.e., the grey shaded area in \reffig{mov_sample_test}}.
In fact, this dichotomous feature can also be seen from the envelope of the data (see \reffig{diff_pop_prcor}). \begin{figure} \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{f6} \caption{{\color{black} The distribution of $\Delta$ for neighbouring pairs in Kepler multiple transiting systems. We can see the overabundance of planet pairs just outside exact mean motion resonances (MMRs), as in \citet{Lis11,Fab14}. We set the boundary of MMR proximity as $|\Delta| < 0.03$ (vertical dashed lines) to include the peak of the overabundance.}} \label{MMR_criterion} \end{figure} \begin{figure*} \centering \includegraphics[width=1\linewidth,height=1.2\linewidth]{f7} \caption{Overview of the orbital architectures of the planetary systems in the nominal sample. Each dot denotes a planet or planet candidate, and each line of dots represents a planetary system, with its name on the right edge of the figure. The orbital periods of the planets are normalized by the orbital period of the innermost planet in the same system. Between each pair of adjacent planets, there is a number indicating the orbital period ratio. The red color denotes proximity to first-order mean motion resonances (MMRs) and the blue to second-order MMRs.} \label{sys_overview} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth,height=\linewidth]{f8} \caption{The number distributions of MMR poor, middle and rich systems (defined in section \ref{discuss.interp.mmr_dich}) in the nominal sample (red) and the expected values from the corresponding random simulations (grey). The P value of the Chi-square test, $P_{\chi^{2}}=7\times10^{-4}$, is printed in the upper panel. In the bottom panel, the difference in $N_{sys}$ between the nominal observed sample and the simulated one is plotted.
As can be seen, there are excesses in both MMR poor and MMR rich systems and a deficit in MMR middle systems in the observed sample.} \label{MMR_dich} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth,height=0.9\linewidth]{f9} \caption{{\color{black} Similar to the upper-left panel in Figure \ref{obs_pr}, but here we divide the nominal sample into three subsamples: MMR poor (blue), middle (green) and rich (red) (see section \ref{discuss.interp.mmr_dich}). For each subsample, we repeat the Kendall correlation test and print the corresponding P value, $P_{Kendall}$. As can be seen, the period ratio correlation is significant ($P_{Kendall}=0.003$) in MMR rich systems, but weak in MMR middle ($P_{Kendall}=0.08$) and MMR poor ($P_{Kendall}=0.35$) systems. The two broken dashed lines generally match the envelopes of the data. The break points are at $PR=1.65$, consistent with the transition zone ($PR=1.5-1.7$) in \reffig{mov_sample_test} (see section \ref{discuss.interp.project} for more discussion). }} \label{diff_pop_prcor} \end{figure} \section{Discussions} \label{discuss} In this work, we have revisited the period ratio correlation of Kepler multiple transiting systems. {\color{black} Unlike the bootstrap method based on the observed systems in \citet{Wei18}, we take a different approach by generating intrinsic planet populations and forward modeling the transit detection process. Our forward modeling approach naturally takes into account various effects in this process, such as the effects of transit detection efficiency, orbital stability (as raised by \citet{Zhu20}) and missing planets (see section \ref{discuss.interp.ntp} below). } {\color{black} We confirm that the period ratios are, in general, indeed correlated, which cannot be explained by selection effects from observational bias.
Our result is consistent with that of \citet{He19}, which also took a forward modeling approach and found that the observed distributions of ratios of period ratios are more peaked around unity than their model predicts if no correlation between period ratios is assumed at all. } Furthermore, we have revealed that the period ratio correlation is highly dependent on the period ratio itself, and it shows a dichotomous feature, namely, {\color{black} the correlation is strong only in tightly packed systems and becomes weak in loosely packed ones (\reffig{mov_sample_test}).} In the following, we discuss the implications of such an orbital spacing dichotomy. Specifically, we present our interpretation in section \ref{discuss.interp}, and then discuss some future tests of this interpretation in section \ref{discuss.pred}. \subsection{Interpretation} \label{discuss.interp} As shown in \reffig{mov_sample_test}, the boundary of the period ratio correlation dichotomy is around period ratio $\sim1.5-1.7$. Is this a coincidence? In the following, we interpret this as a result {\color{black} that might be related to the } Mean Motion Resonance (MMR) distribution. \subsubsection{MMR Dichotomy} \label{discuss.interp.mmr_dich} Following \citet{Lit12}, we use the parameter $\Delta$ to describe the proximity of a period ratio to the $j+1:j$ MMR, \begin{equation} \Delta=\frac{j}{j+1}{\rm PR}-1, \end{equation} where $\rm PR$ is the period ratio of adjacent planets. {\color{black} Note that $\Delta$ is calculated with respect to the nearest first order MMR (i.e., the $j$ that minimizes the absolute value of $\Delta$).} {\color{black} Figure \ref{MMR_criterion} shows the $\Delta$ distribution of neighbouring planet pairs in the Kepler multiple transit systems. Similar to \citet{Lis11} and \citet{Fab14}, we also see an overabundance just outside the MMR center (i.e., $\Delta=0$). As the overabundance is mainly within $|\Delta|<0.03$, we set this as the boundary to select near-MMR period ratios. 
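The definition of $\Delta$ and the $|\Delta|<0.03$ near-MMR criterion can be sketched in a few lines of Python; the search cutoff `j_max` is an assumption for illustration, not a value specified in the text:

```python
def delta_nearest_first_order(pr, j_max=6):
    """Proximity of a period ratio PR to the nearest first-order
    (j+1):j mean motion resonance: Delta = j/(j+1) * PR - 1,
    evaluated at the j that minimizes |Delta|.
    j_max is an assumed cutoff on the resonance index searched."""
    best = None
    for j in range(1, j_max + 1):
        d = j / (j + 1) * pr - 1.0
        if best is None or abs(d) < abs(best):
            best = d
    return best

def near_mmr(pr, tol=0.03):
    # |Delta| < 0.03 is the adopted near-MMR boundary
    return abs(delta_nearest_first_order(pr)) < tol
```

For example, a pair with $PR=1.52$ sits just outside the 3:2 MMR ($\Delta\approx0.013$) and counts as near-MMR, while $PR=2.2$ ($\Delta=0.1$ relative to 2:1) does not.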
} We plot in \reffig{sys_overview} an overview of the orbital architecture of the planetary systems in our nominal sample. Each dot denotes a planet or planet candidate, and each line of dots represents a planetary system with its name on the right edge of the figure. The orbital periods of the planets are normalized by the orbital periods of the innermost planets in the same systems. Between each pair of adjacent planets, there is a number indicating the orbital period ratio. {\color{black} All the systems are sorted bottom-up according to the average period ratios, $\overline{PR}$. We have an intuitive impression that near-MMR period ratios are clustered in compact systems rather than randomly and evenly distributed among all systems.} In order to see how the distribution of near-MMR period ratios deviates from a random distribution, we perform the following statistical test. {\color{black} Note that, in the following analysis, we consider only the first order MMRs for simplicity, as we found that the result would be similar if the second order MMRs were included.} We classify all the planetary systems into three groups according to the number of near-MMR pairs in each system: MMR poor (zero near-MMR pairs), MMR middle (one or two near-MMR pairs) and MMR rich (three or more near-MMR pairs) systems. For our nominal sample (sample 1), the numbers of MMR poor, MMR middle and MMR rich systems are 22, 26 and 8, respectively. We then apply the same classification to the 10000 randomly simulated systems in which near-MMR period ratios are randomly distributed. The average numbers (expectations) are 15.2, 37.6 and 3.2 for MMR poor, MMR middle and MMR rich systems, respectively. These results are plotted in \reffig{MMR_dich}. In the top panel, we count the number of systems in these three groups. We compare the observed numbers (red histogram) with what we would expect (grey histogram) if all near-MMR pairs were randomly distributed. 
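This comparison boils down to a Pearson chi-square statistic over the three classes. A minimal sketch, using the observed counts and the rounded expectations quoted above; note that with these rounded expectations the statistic evaluates to $\approx13.8$, slightly different from the exact value obtained from unrounded expectations:

```python
def chi_square(observed, expected):
    # Pearson chi-square statistic: sum over classes of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed counts of MMR poor / middle / rich systems in the nominal sample
observed = [22, 26, 8]
# Mean counts over the 10000 random realizations (rounded values from the text)
expected = [15.2, 37.6, 3.2]
chi2 = chi_square(observed, expected)  # ~13.8 with these rounded expectations
```

The P value is then obtained empirically, as the fraction of the 10000 random realizations whose own $\chi^2$ exceeds the observed one.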
The chi-square test gives $\chi^2=13.286$ for the deviation of the observed sample from the expectation, and only 7 of the 10000 random realizations result in a larger $\chi^2$. This gives a P value of $7\times 10^{-4}$, indicating that the distribution of near-MMR period ratios significantly deviates from a random distribution. As can be seen from the bottom panel, with respect to a random distribution, the distribution of near-MMR period ratios is polarized toward the two ends: MMR rich and MMR poor. There is a deficit in MMR middle systems. Note that MMR is loosely defined here, namely, it generally refers to planet pairs whose period ratios are close to an MMR, regardless of whether they are dynamically in an MMR state with librating resonant angles. {\color{black} Previous studies \citep{Lis11,Fab14} have shown that the \emph{global} period ratio distribution deviates somewhat from a random distribution in the sense that there is an overabundance of near-MMR ones. Here, we further show that the \emph{local} period ratio distribution also deviates from a random distribution, namely, near-MMR period ratios are not evenly distributed among individual systems. Some systems are MMR rich, while some are MMR poor, forming an MMR dichotomy (\reffig{MMR_dich}).} {\color{black} \subsubsection{PR dichotomy or MMR dichotomy?} \label{discuss.interp.project} So far, we have revealed two dichotomous features of the orbital spacing, i.e., the period ratio (PR) dichotomy and the MMR dichotomy. In fact, the two dichotomies are largely equivalent to each other. On one hand, the MMR dichotomy could be nothing more than a restatement of the PR dichotomy (the small-period-ratio correlation), given the fact that MMRs are denser at smaller period ratios. On the other hand, the apparent small-period-ratio correlation (i.e., the PR dichotomy) could also be just a projection of the MMR dichotomy. 
As shown in \reffig{sys_overview}, most of the first order and second order MMRs (except for the 2:1 MMR and 3:1 MMR) have period ratios in a relatively small range ($PR\le5:3\approx1.7$). Thus, the period ratios of an MMR rich system are more likely to be correlated with each other, while such a correlation is not expected in an MMR poor system, causing the apparent PR correlation dichotomy. These features are clearly shown in \reffig{diff_pop_prcor}. As can be seen, the two broken dashed lines generally match the envelopes of the data in \reffig{diff_pop_prcor}. The envelopes of MMR rich systems generally follow the parts that are parallel to the 1:1 line, thus resulting in a strong PR correlation with a Kendall correlation test P value of $P_{Kendall}=0.003$. In contrast, the envelopes of the other systems generally follow the parts that are parallel to the x and y axes, respectively, resulting in weak PR correlations in MMR middle ($P_{Kendall}=0.08$) and MMR poor ($P_{Kendall}=0.35$) systems. The break points of the dashed lines are at $PR=1.65$, which are consistent with the transition zone ($PR=1.5-1.7$) shown in \reffig{mov_sample_test}. } {\color{black} That being so, which dichotomy is more essential to the orbital spacing pattern? The PR dichotomy or the MMR dichotomy? Here, we prefer the MMR dichotomy over the PR dichotomy for the following reasons.} {\color{black} First, the PR dichotomy, or small-period-ratio correlation, is just a mathematical correlation whose boundary ($\overline{PR}\sim 1.5-1.7$, \reffig{mov_sample_test}) itself needs an additional explanation, while the MMR dichotomy is more physically based and naturally explains the PR correlation boundary (as discussed above and shown in \reffig{diff_pop_prcor}).} {\color{black} Second, and perhaps more importantly,} the MMR dichotomy could be a natural result of planet migration and dynamical evolution. 
One of the leading models for the formation of close-in super-Earths is the inward migration model, namely, planets formed at larger distances (e.g., near the snowline) from the star and then migrated inward, driven by the gas disk \citep{TP07,IL08,Cos14,HN12}. At the beginning, when the gas disk was present, planets grew and migrated inward to form an MMR chain. Afterwards, when the gas disk dissipated, these MMR chains generally evolved along the following two branches \citep{Izi17}. On one hand, some of the MMR chains could become dynamically unstable and undergo a phase of giant impacts that erased the footprint of the MMRs. On the other hand, some MMR chains could remain relatively stable. Although most of these MMRs could still be broken afterwards by various mechanisms, e.g., tidal damping \citep{LW12,BM13,DL14}, planetesimal interaction \citep{CF15}, etc., many of these effects are gentle and planets are able to stay near MMR with approximately commensurable period ratios. These two branches of dynamical evolution naturally lead to the MMR dichotomy. {\color{black} In conclusion, we therefore consider the orbital spacing pattern dichotomy shown in \reffig{mov_sample_test} to be a consequence of the MMR dichotomy (\reffig{MMR_dich}).} \begin{figure*} \centering \includegraphics[width=1\linewidth,height=0.32\linewidth]{f10} \caption{{\color{black} Ratio distributions of period ratios for the one-population scenario (right panel, all planets are generated from a period-ratio-correlated population, i.e., $f_{correlated}=100\%$), the two-population scenario ($f_{correlated}=35\%$, left panel) and the real sample 1 (middle panel, the same data as Figure \ref{diff_pop_prcor}). In each panel, the black solid line shows perfect correlation, i.e., $y=x$. The two black dashed lines, $y=\frac{11}{9}x$ and $y=\frac{9}{11}x$, represent a 10\% deviation from perfect period ratio correlation. 
The two blue dashed lines, $y=x^2$ and $y=\sqrt{x}$, denote the expected locations of outliers caused by missing the intermediate planets. In each panel, we print the fraction ($f_{outliers}$) of outliers, namely the data points further away from the perfect correlation line, i.e., $y=x$, than the two black dashed lines. See more details in section \ref{discuss.interp.ntp}. }} \label{twopop_dis} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth,height=0.8\linewidth]{f11} \caption{{\color{black} The fraction of outliers $f_{outliers}$ as a function of the fraction of the correlated population $f_{correlated}$ in the two-population scenario, with different inclination dispersion parameters $\sigma_{i,5}$. To reproduce a fraction similar to the observed $f_{outliers}=32.8\%$ in sample 1 (black-dashed line), $f_{correlated}$ should be around $\sim$35\% (for $\sigma_{i,5}=1^{\circ}$) to $\sim$50\% (for $\sigma_{i,5}=10^{\circ}$). }} \label{twopop_trend} \end{figure} {\color{black} \subsubsection{Effect of Missing Planets \label{discuss.interp.ntp}} Planets that intrinsically exist between the detected transiting planets could be missed by the transit survey, due to either weak signals (low SNR) or non-transiting geometry. In our forward modeling simulations, we found that $\sim$2-3\% of planets in the simulated transiting multi-planet systems were missed due to low SNR, and $\sim$1\%-14\% (depending on the intrinsic inclination dispersion, $\sigma_{i,5}$) were missed because of non-transiting geometry. These missing planets cause the observed period ratios to be larger than the intrinsic ones, which randomizes the period ratio distribution to some degree. Adopting a typical minimum intrinsic period ratio of $1.2$, this effect can affect period ratios larger than $1.2^2\approx1.44$. Therefore, one might be concerned that the observed tendency of weaker period ratio correlation at larger period ratios could be caused by the effect of missing planets. 
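The contamination threshold follows from simple arithmetic: if the intermediate planet of three adjacent planets is missed, the two intrinsic adjacent period ratios multiply into a single observed ratio. A minimal sketch (the value 1.2 is the typical minimum intrinsic period ratio adopted in the text):

```python
def observed_ratio_if_middle_missed(pr_inner, pr_outer):
    """If the middle planet of a triple goes undetected, the observed
    period ratio of the remaining pair is the product of the two
    intrinsic adjacent period ratios."""
    return pr_inner * pr_outer

MIN_INTRINSIC_PR = 1.2
# Only observed ratios above this product can be contaminated by a
# single missing intermediate planet.
min_contaminated = MIN_INTRINSIC_PR ** 2  # = 1.44
```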
In the following, we quantify this effect. First, we investigate a one-population scenario with a toy model, in which period ratios are intrinsically correlated (along the diagonal line of Figure \ref{diff_pop_prcor}), and the observed uncorrelated period ratios (outliers away from the diagonal line) are caused by the missing planets. Specifically, to generate an intrinsic system, we randomly draw the first period ratio from the debiased period ratio distribution (Appendix 1), then draw the other period ratios with a random deviation within 10\% of the first one. A typical result of the one-population scenario is shown in the right panel of Figure \ref{twopop_dis} with $\sigma_{i,5}=1^{\circ}$ and $f_{correlated}=100\%$ (i.e., 100\% of systems are period-ratio-correlated). Compared to the result for the real sample shown in the middle panel, the one-population scenario fails to reproduce the observation in the following two aspects. \begin{enumerate} \item It produces too few outliers (points away from the diagonal line further than the two dashed lines, $11y=9x$ and $9y=11x$ in the x-y plane; see the caption of Figure \ref{twopop_dis}). The outlier fraction is 5.7\% vs. the observed 32.8\% in this case. Although increasing the intrinsic orbital inclination dispersion $\sigma_{i,5}$ generally increases the number of non-transiting planets and thus the fraction of outliers, it is still significantly lower than the observed one even if assuming an unrealistically large $\sigma_{i,5}=10^\circ$ (as shown in the bottom right part of Figure \ref{twopop_trend}). \item Its envelopes (set by the outliers), as expected, follow the blue dashed lines in Figure \ref{twopop_dis} ($y=x^2$ and $y=x^{0.5}$ in the x-y plane), which are significantly different from the observed ones (red dashed lines in Figures \ref{diff_pop_prcor} and \ref{twopop_dis}). 
\end{enumerate} Second, we further consider a two-population scenario with a toy model, in which only a fraction ($f_{correlated}<100\%$) of systems are assumed to be period-ratio correlated as in the above one-population scenario. For the other $1-f_{correlated}$ fraction of systems, the period ratios are randomly drawn from the debiased period ratio distribution but with a lower limit truncated at 1.35 (motivated by the apparent envelopes). As shown in Figure \ref{twopop_trend}, by adding more systems from the uncorrelated population (i.e., decreasing $f_{correlated}$), the outlier fraction generally increases, and it matches the observed value if $f_{correlated}\sim35\%$ for $\sigma_{i,5}=1^\circ$. In the left panel of Figure \ref{twopop_dis}, we plot the ratio distribution of period ratios for this specific case. As can be seen, the two-population toy model largely reproduces the result of the real observed sample, especially in terms of both the outlier fraction (31.7\% vs 32.8\%) and the distribution envelopes. To summarize this subsection, we conclude that the effect of missing planets (either low-SNR planets or non-transiting planets) alone is too small to reproduce the observed ratio distribution of period ratios (Figure \ref{twopop_dis}). In addition, we find that the observed results could be largely reproduced with a two-population toy model, which further demonstrates the dichotomous nature of the orbital spacing pattern. } \subsubsection{Effect of Ultra Short Period Planets} \label{discuss.interp.usp} Systems with ultra short period (USP, period $<1$ day) planets are found to have relatively larger period ratios \citep{WF15} and larger orbital inclinations \citep{Dai18}, and they could have undergone a different formation history \citep{Pet19,PL19}. Thus, one might wonder whether USP planets are related to the observed trend of weaker period ratio correlation in systems with larger period ratios. 
However, the occurrence rate of USP planets is in fact very low ($\sim0.5\%$) around sun-like stars \citep{San14}. In our nominal sample, only 2 out of 56 systems host USP planets. After removing these two systems, we repeat the moving sample analysis and find that the result is nearly unchanged as compared to Figure 4. We therefore conclude that our results are not affected by USP planets. \subsection{Predictions} \label{discuss.pred} Based on the above discussions on the dynamical origin of the MMR dichotomy, we may further make some predictions for future studies. First, we predict that the planets in MMR-poor systems (with relatively larger and thus uncorrelated period ratios) may have larger masses, densities and orbital eccentricities/inclinations than those in MMR-rich systems (with relatively smaller and thus correlated period ratios). This is simply because the giant impact process that erased the footprint of MMR also increased the masses and the orbital eccentricities/inclinations of planets. The prediction on mass and density is consistent with the recent finding that the masses and densities of TTV ({\color{black} Transit Timing Variation}) planets (most of which are near MMR) are systematically lower than those of the RV (radial velocity) planets (most of which are not near MMR) \citep{Ste16}. The confirmation of the prediction on orbital eccentricity/inclination is not trivial, because the increase in eccentricity/inclination is moderate, which requires future dedicated studies on orbital characterization. Second, we may predict that MMR-poor systems (with relatively larger and thus uncorrelated period ratios) are relatively older than MMR-rich systems (with relatively smaller and thus correlated period ratios). This is simply based on the consideration that the longer the time of dynamical evolution (e.g., giant impacts, tidal damping and planet-planetesimal interaction), the larger the probability of erasing the footprint of MMR. 
The prediction on age is qualitatively consistent with the result of a previous study \citep{KZ11} based on the radial velocity planet sample. Future studies with large and diverse samples are needed to fully establish this point. \section{Summary} \label{summary} In this paper, we studied the pattern of orbital spacings (in terms of period ratios) of Kepler multiple planet systems. We confirm that period ratios are indeed somewhat correlated (Figure \ref{obs_pr}), and such a correlation is unlikely to be caused by observational biases (Figures \ref{typi_simu_pr}-\ref{simu_pr_dis}). Furthermore, we reveal that the above orbital spacing pattern is dichotomous, namely, {\color{black} period ratios are strongly correlated with each other in tightly packed systems, but essentially uncorrelated in loosely packed systems. The transition from correlation to non-correlation is abrupt, with the boundary at $Median(\overline{PR}) \sim 1.5-1.7$ (section \ref{result.evi_prdich} and Figure \ref{mov_sample_test}).} Then, we relate this period ratio dichotomy to another dichotomy reflecting that near-MMR period ratios tend to be clustered rather than evenly distributed (dubbed the MMR dichotomy for short, see section \ref{discuss.interp} and Figures \ref{sys_overview}-\ref{MMR_dich}). The MMR dichotomy naturally leads to a transition from period ratio correlation to non-correlation around $\overline{PR}\sim1.5-1.7$ (\reffig{diff_pop_prcor}), and it could also be a natural result of planet migration and dynamical evolution (section \ref{discuss.interp.project}). {\color{black} The transition from period ratio correlation to non-correlation cannot be explained by missing intermediate planets (due to either low SNR or non-transiting geometry, section \ref{discuss.interp.ntp}) nor by ultra short period planets (section \ref{discuss.interp.usp}). 
Nevertheless, it can be largely reproduced with a two-population toy model, further demonstrating the dichotomous nature of the orbital spacing pattern.} Finally, based on the formation of the MMR dichotomy, we predict that planets in MMR-poor systems are more massive, denser and dynamically hotter (with larger orbital eccentricities and inclinations) than those in MMR-rich ones (section \ref{discuss.pred}). \label{sec:acknowledgments} \acknowledgments We thank W. Zhu for helpful comments and suggestions. This work is supported by the National Key R\&D Program of China (No. 2019YFA0405100) and the National Natural Science Foundation of China (NSFC) (grant No. 11933001). J.-W.X. also acknowledges the support from the National Youth Talent Support Program and the Distinguished Youth Foundation of Jiangsu Scientific Committee (BK20190005). \clearpage
\section{Introduction} Having doubles\footnote{\href{https://en.wikipedia.org/wiki/Double\_(filmmaking)}{https://en.wikipedia.org/wiki/Double\_(filmmaking)}} for the starring actors in movies is an indispensable component of movie-making. A double may take the actor's place during stunt scenes involving difficult and dangerous life-risking acts. They may even stand in for the actor during regular fill scenes or multiple retakes. For instance, `The Social Network' extensively used body doubles as a stand-in for actor Armie Hammer, who played the dual roles of twin brothers\footnote{\href{https://www.youtube.com/watch?v=spIdefyvjTs}{Captain America - Skinny Steve Rogers Behind the Scenes}}\footnote{\href{https://www.youtube.com/watch?v=fCrYfRjpuXU&t=26s}{How CGI made Cody and Caleb as PAUL WALKER | VFX}}\footnote{\href{https://www.cinemablend.com/new/Armie-Hammer-Didn-t-Play-Both-Winklevoss-Twins-Social-Network-20994.html}{Armie Hammer Didn't Play Both Winklevoss Twins Social Network}}. In such scenes, the double's face is later replaced by the actor's face and expressions using CGI technology, requiring hundreds of hours of manual multimedia edits on heavy graphical units, costing millions of dollars and taking months to complete. Thus, the production team is generally forced to avoid such scenes by changing the mechanics of the scene such that only the double's body is captured to provide an illusion of the actor. This may act as a constraint on the director's creativity. However, such adjustments are not always possible. A different scenario is post-production scene modification. If a dialogue is discovered in post-production that suits a scene better than the original, the entire scene is reset and re-shot. We propose that the actor could instead record in a studio and get their face superimposed on the previous recording. In fact, like other industries, the movie industry is also headed in this direction, where actors can work from home. 
In today's era, CGI technologies can produce incredible human structures, scenes, and realistic graphics. However, it is known that they struggle to create realistic-looking skin\footnote{\href{https://www.youtube.com/watch?v=FtifBqf2Z50}{Why It's SO HARD To Do CGI Skin!}}. As shown in Fig.~\ref{fig:teaser}, an actor could lend their identity and expressions from the comfort of their home or studio while leaving the heavy-duty work to graphics or a double. Today's CGI technologies needed for such tasks are, however, manually operated, expensive and time-consuming. To automate such tasks, fast and inexpensive computer-vision-based face-swapping~\cite{deepfacelabs, motion-coseg, fsgan, faceswapdisney, faceshifter, faceswapphotos} techniques that aim to swap an identity between a source (actor) video and target (double) video can be considered. However, such techniques cannot be directly used. Face-swapping swaps only the source identity whilst retaining the rest of the target video characteristics. In this case, the expressions of the actor (source) are not captured in the output. To tackle this, we introduce ``video-to-video (V2V) face-swapping'' as a novel task of face-swapping that aims to \textbf{(1)} swap the identity and expressions of a source face video and \textbf{(2)} retain the pose and background of the target face video. The target pose is essential as it depends on the scene's context. E.g., a stuntman performs at an outdoor location dealing with machines or talking to a fellow double; the actor acts in front of a green screen at a studio. Here, the double's pose is context-aware, and the actor only improvises. \textbf{How is the proposed task a video-to-video face-swapping task?} Unlike the face-swapping task that swaps a fixed identity component from one video to another, V2V face-swapping swaps expressions changing over time (a video) into a video with a changing pose and background (another video), making our task video-to-video. 
\textbf{Approach: }Swapping faces across videos is non-trivial as it involves merging two different motions -- the actor's finer face motion (such as eye, cheek, or lip movements) and the double's head motion (such as pose and jaw motion). This needs a network that can take two different motions as input and produce a third coherent motion. We propose \textbf{FaceOff}, a video-to-video face-swapping system that operates by reducing the face videos to a quantized latent space and blending them in the reduced space. A fundamental challenge in training such a network is the absence of ground truth. Face-swapping approaches~\cite{motion-coseg, fsgan, deepfacelabs} use a discriminator-generator setup for training the networks. The discriminator is responsible for monitoring the desired characteristics of the swapped output. However, using a discriminator leads to hallucinating components of the output that differ from the input, for instance, a modified identity or novel expressions. Thus, we devise a self-supervised training strategy for our network: we use a single video as both the source and the target. We then introduce pseudo motion errors on the source video. Finally, we train a network to `fix' these pseudo errors to regenerate the source video. FaceOff can face-swap unseen cross-identities directly at inference without any finetuning. Moreover, unlike most face-swapping methods that need inference-time optimization ranging from $5$ minutes to $24$ hours on high-end GPUs, FaceOff face-swaps videos in just one forward pass, taking less than a second. A key feature of FaceOff is that it preserves at least one of the input expressions (the source in our case), whereas, as we show later, existing methods fail to preserve either of the input expressions (source or target). Lastly, we curate and benchmark V2VFaceSwap, a V2V face-swapping test dataset made of instances from unconstrained YouTube videos with unseen identities, backgrounds, and lighting conditions. 
\textbf{Our contributions} in this work are as follows: (1) We introduce V2V face-swapping, a novel task of face-swapping that aims to swap the source face identity and expressions whilst retaining the target background and pose. (2) We propose FaceOff: a V2V face-swapping system trained in a self-supervised manner. FaceOff generates coherent videos by merging two different face videos. (3) Our approach works on unseen identities directly at inference time without any finetuning. (4) Our approach does not need any inference-time optimization, taking less than a second for inference. (5) We release the V2VFaceSwap test dataset and establish a benchmark for the V2V face-swapping task. \section{Related Work} Table~\ref{tab:comps} provides a comparison between the existing tasks and FaceOff. FaceOff aims to solve a unique challenge of V2V face-swapping that has not been tackled before. \textbf{Face Swapping}: Swapping faces across images and videos has been well studied \cite{deepfacelabs, fsgan, motion-coseg, simswap, fastfaceswap, faceshifter, faceswapdisney, faceswapphotos, 3dmodelfaceswapping} over the years. These works aim to swap an identity obtained from a source video (or an image) into a target video of a different identity such that all the other target characteristics are preserved in the swapped output. DeepFakes\footnote{\href{https://github.com/deepfakes/faceswap}{https://github.com/deepfakes/faceswap}}, DeepFaceLabs~\cite{deepfacelabs}, and FSGAN~\cite{fsgan} swap the entire identity of the source; Motion-coseg~\cite{motion-coseg} specifically swaps the identity of single/multiple segments of a given source image (hair, lips, nose, etc.) into a target video. Unlike these approaches that swap only the identity or a specific part of an image, we swap temporally changing expressions along with the identity of the source. 
Moreover, FSGAN requires $5$ minutes of inference-time optimization, while DeepFaceLabs and DeepFakes take up to $24$ hours of inference-time optimization on high-end GPUs. FaceOff takes less than a second to face-swap in-the-wild videos of unseen identities. \begingroup \renewcommand{\arraystretch}{1.1} \begin{table}[t] \resizebox{\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{c}{\textbf{Source}} & \multicolumn{2}{|c|}{\textbf{Target}}\\ \cline{2-5} \textbf{Method} & \textbf{Identity} & \textbf{Expression} & \textbf{Pose} & \textbf{Background} \\ \hline Face Swapping & \checkmark & $\times$ & \checkmark & \checkmark \\ \hline Face Reenactment & $\times$ & \checkmark & $\times$ & \checkmark \\ \hline Face Editing & $\times$ & $\times$ & \checkmark & \checkmark \\ \hline FaceOff (Ours) & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \end{tabular} } \caption{Comparison of FaceOff with existing tasks. \checkmark and $\times$ indicate that the characteristic is preserved and lost, respectively. FaceOff solves a unique task of preserving source identity and expressions that has not been tackled before.} \label{tab:comps} \end{table} \endgroup \textbf{Face Manipulation}: Face manipulation animates the pose and expressions of a target image/video according to a given prior~\cite{face-vid2vid, fomm2, fomm, reenactgan, deepfacelabs, flowguided, nvp, makeittalk}. In audio-driven talking face generation~\cite{wav2lip, lipgan, wav2lip-emotion, posecont, nvp, pirenderer, vdub}, the expressions, pose, and lip-sync in the target video are conditioned on a given input speech audio. Unlike such works, we do not assume an audio prior in our approach. A different direction, \textbf{face reenactment}, animates the source face according to the movements of a driving video \cite{deffererneuralrendering, pirenderer, face2face, deepvideopotraits, fomm, fomm2}. The identity is not exchanged in these works. 
This can tackle a special case of our task -- when the target and source have the same identity. Here, a target image can be re-enacted according to the source video expressions. As we show in Section~\ref{sec:targetfacemanipulation}, FaceOff captures the micro-expressions of the driving video, unlike the existing approaches. This is because we rely on a blending mechanism, allowing a faithful transfer of the driving expressions. Another direction that handles this special case is \textbf{face editing}, which involves editing the expressions of a face video. Using this approach, one can directly edit the target video according to the source expressions. Image-based face editing works such as \cite{pix2pix,stargan,starganv2,cgan} have gained considerable attention. However, realizing these edits on a sequence of frames without modeling the temporal dynamics often results in temporally incoherent videos. Recently, STIT~\cite{stit} was proposed, which can coherently edit a given video to different expressions by applying careful edits in the video's latent space. Despite this success, these techniques allow limited control over the types and variations of expressions. Moreover, obtaining a correct target expression that matches the source expressions is a manual process of trial and error. FaceOff can add micro-expressions undefined in the label space simply by blending the emotion from a different video of the same identity with the desired expressions. \section{FaceOff: Face Swapping in Videos} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{images/arc_acmm._V22.pdf} \caption{FaceOff is a temporal autoencoder operating in a hierarchical quantized latent space. We use a self-supervised training scheme to train FaceOff using a distance loss on the exact output-ground truth pairs. In the scheme, we first extract the face, $f$, and background, $b$, from a single video, $s$. 
We then apply ``pseudo errors'' made of random rotation, translation, scaling, colors, and non-linear distortions to modify $f$. Next, the modified $f$ (acting as the source) and $b$ (acting as the target) are concatenated frame-by-frame channel-wise to form a single video input. This video input is then reduced and blended, and a coherent and meaningful output is generated. This output is expected to match the original source video, $s$. } \label{fig:arch_main} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/inference_pipeline._V13.pdf} \caption{Inference pipeline: FaceOff can be directly inferred on any unseen identity without any finetuning. At inference, the source video is first aligned frame-by-frame using the target face landmarks. FaceOff then takes (1) the foreground of the aligned source video, and (2) the background and pose of the target video as input and generates the output.} \label{fig:inference_pipe} \end{figure} We aim to swap a source face video with a target face video such that (1) the identity and the expressions of the source video are preserved and (2) the pose and background of the target video are retained. To do this, we learn to blend the foreground of the source face video with the background and pose of the target face video (as shown in Fig.~\ref{fig:inference_pipe}) such that the blended output is coherent and meaningful. This is non-trivial as it involves merging two separate motions (the finer foreground expression motion of the source; the head and background motion of the target). Please note that we only aim to blend the two motions; thus, the desired input characteristics -- identity, expressions, pose, and background -- are naturally retained from the inputs without any additional supervision. The main challenge of our blending approach is to align the foreground and background videos such that the output forms a coherent identity and has a single coherent pose. 
All the other characteristics are simply reconstructed from the inputs. Our core idea is to use a special temporal autoencoding model that merges these motions using a quantized latent space. Overall, our approach relies on (1) encoding the two input motions to a quantized latent space and learning a robust blending operation in the reduced space, (2) a temporally and spatially coherent decoding, and (3) in the absence of ground truth, a self-supervised training scheme. \subsection{Merging Videos using Quantized Latents} \label{sec:merging_video_using} We pose face-swapping in videos as a blending problem: given two videos as input, blend the videos into a coherent and meaningful output. To do so, we rely on an encoder to encode the input videos to a meaningful latent space. Our overall network is a special autoencoder that can then learn to blend the reduced videos in the latent space robustly and generate a blended output. We select our encoder model carefully, focusing on ``blending'' rather than learning an overall data distribution. Encoder networks with a continuous latent space reduce the dimension of a given input, often down to a single vector that can be considered a part of an underlying distribution. This latent vector is highly stochastic; a very different latent is generated for each new input, introducing high variations that a decoder needs to handle. Recently, ``vector quantization'' was proposed in \cite{vqvae, vqgan, vqvae2}. Quantization reduces the variation in latents by fixing the number of possible latent codes. However, retaining the input properties using a single quantized latent vector is impossible. Thus, the inputs are reduced to a higher dimensional quantized space (such as $64 \times 64$) such that the properties of the input needed for a full reconstruction are preserved. We adopt such an encoder in our proposed autoencoder for encoding our videos.
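The nearest-neighbour lookup at the heart of vector quantization can be sketched as follows (a simplified NumPy illustration; the codebook size and latent dimensions are illustrative, and in VQVAE-style training the codebook is learned jointly with the encoder):

```python
import numpy as np

def quantize(latents, codebook):
    """Snap each continuous latent vector to its nearest codebook entry.

    latents:  (H, W, D) continuous encoder outputs
    codebook: (K, D) embedding table
    """
    flat = latents.reshape(-1, latents.shape[-1])            # (H*W, D)
    # Squared Euclidean distance to every code, via the expansion
    # ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2 (avoids a huge broadcast).
    dists = ((flat ** 2).sum(1, keepdims=True)
             - 2.0 * flat @ codebook.T
             + (codebook ** 2).sum(1))
    idx = dists.argmin(axis=1)                               # (H*W,)
    return codebook[idx].reshape(latents.shape), idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))    # K=512 discrete codes (illustrative)
latents = rng.normal(size=(64, 64, 64))  # a 64x64 "bottom" hierarchy of D=64 vectors
quantized, idx = quantize(latents, codebook)
# The decoder only ever sees one of K possible vectors per position,
# which is what bounds the variation the blending network must handle.
```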
As shown in Fig.~\ref{fig:arch_main}, our encoder is a modified VQVAE2~\cite{vqvae2} encoder that encodes videos instead of images. To do so, we introduce temporal modules made of non-linear 3D convolution operations. The input to our encoder is a single video made by concatenating the source foreground and target background frames channel-wise as shown in Fig.~\ref{fig:inference_pipe}. Like VQVAE2, our encoder first encodes the concatenated video input framewise into $32 \times 32$ and $64 \times 64$ dimensional top and bottom hierarchies respectively. At each of these hierarchies, our temporal modules are added before the quantization step to process the reduced video frames. This step allows the network to backpropagate with temporal connections between the frames. Further processing is then again done in a framewise manner using a standard VQVAE2 decoder. In practice, we observed that this temporal module plays an important role in generating temporally coherent outputs, as we show through ablations in Sec.~\ref{sec:ablation}. Our special autoencoder differs from standard autoencoders in the loss computation step. Instead of reconstructing its input -- a six-channel video whose first three channels belong to the source foreground and whose last three channels belong to the target pose and background -- FaceOff aims to generate a three-channel blended video output. Therefore, the loss is computed between a ground truth three-channel video and the three-channel video output. \subsection{Self-supervised Training Approach} \begin{figure}[t] \centering \includegraphics[width=1.\linewidth]{images/expressions_mismatch.V9.pdf} \caption{Existing face-swapping methods~\cite{deepfacelabs, motion-coseg, fsgan} use a generator-discriminator training strategy. This results in outputs with novel expressions as explained in Sec.~\ref{sec:self-supervised-explanation}. We show this phenomenon on DeepFaceLabs~\cite{deepfacelabs}.
The expressions in the output (red boxes) do not match either of the inputs, source or target, e.g., the direction of the eye gaze (second row) or the overall laugh expression (first row). FaceOff successfully preserves the source expressions (green boxes).} \label{fig:expressions_mismatch} \end{figure} \label{sec:self-supervised-explanation} Existing face-swapping approaches employ generators and discriminators to train their networks. These discriminators are classifiers that indicate a relationship between the generator's outputs and an underlying data distribution, such as an identity distribution or an expression distribution. In such a setup, the generators are encouraged to hallucinate some aspects of the outputs to match the discriminator's data distribution, causing them to output novel identities or expressions. We show this phenomenon in Fig.~\ref{fig:expressions_mismatch}. A hard distance loss (e.g., Euclidean distance) indicating the exact output-ground truth relationship, instead of a stochastic discriminator loss, can be used to overcome this issue. In V2V face-swapping, an important aspect is to retain the exact source expressions. Thus, we devise a self-supervised training scheme that forces the network to reconstruct a denoised version of a given input video, allowing us to train with a distance loss. To understand the training scheme, we first look at the challenges we encounter when trying to blend two motions naively. First, there is a global and local pose difference between the faces in the source and target videos. We fix the global pose difference by aligning (rotating, translating, and scaling) the source poses according to the target poses using face landmarks, as shown in Fig.~\ref{fig:inference_pipe}. However, the local pose difference is not overcome this way, and we observe temporal incoherence across the frames.
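This global landmark alignment can be sketched as a least-squares similarity fit (an Umeyama-style estimate in NumPy; the 68-point landmark arrays here are placeholders for detector output):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares scale/rotation/translation mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding landmark coordinates.
    Returns (scale, R, t) such that  scale * R @ x + t  aligns src to dst.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_s = (sc ** 2).sum() / len(src)
    scale = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Sanity check: recover a known similarity transform exactly.
rng = np.random.default_rng(0)
src = rng.normal(size=(68, 2))                      # e.g., 68 face landmarks
theta, s0, t0 = 0.3, 1.7, np.array([2.0, -1.0])
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
dst = s0 * src @ R0.T + t0
scale, R, t = fit_similarity(src, dst)
aligned = scale * src @ R.T + t                     # matches dst up to fp error
```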
Next, we observe a difference in the foreground and background color (illumination, hue, saturation, and contrast). Thus, we train our network to solve these known issues by reproducing these errors during training. As illustrated in Fig.~\ref{fig:arch_main}, we train our model in the following manner: (1) Take a video, say $s$. (2) From $s$, extract the face region, say $f$, and the background region, say $b$. (3) Introduce pseudo errors (rotation, color, scale, etc.) on $f$. (4) Construct the input $v$ by concatenating $f$ and $b$ channel-wise at every corresponding frame. (5) Train the network to reconstruct $s$ from $v$. Although we train the network using the same identity in the self-supervised scheme, it can face-swap unseen identities directly at inference without any finetuning. We encourage our readers to view the supplementary video for results. \subsection{Reproducing Inference Errors at Training} Given two talking-head videos, source and target denoted by $S$ and $T$ respectively, our aim is to generate an output that preserves (1) the identity and the emotions from $S$ and (2) the pose and background from $T$. We assume the number of frames, denoted by $N$, in $S$ and $T$ are equal. Given two frames, $s_i \in S$ and $t_i \in T$ such that $i = 1 \dots N$, we denote $f_{s_i} \in F_s$ and $b_{t_i} \in B_t$ as the foreground and background of $s_i$ and $t_i$ respectively. Given $F_s$ and $B_t$ as input, the network fixes the following issues: First, the network encounters a local pose difference between $f_{s_i}$ and $b_{t_i}$. This pose difference can be fixed using an affine transformation function: $\delta(f_{s_i}, b_{t_i}) = m(rf_{s_i} + d) + m(rb_{t_i} + d)$ where $m$, $r$, and $d$ denote scaling, rotation, and translation. The face being a non-rigid body, this affine transformation only results in two faces with a perfect match in pose but a mismatch in shape. One can imagine trying to fit a square in a circle.
One would need a non-linear function to first transform the square to a shape similar to the circle so that they fit. We denote this non-linear transformation as a learnable function $\omega(f_{s_i}, b_{t_i})$. Being non-linear in nature, a network can perform any one of many such transformations on the input frames as long as both faces fit. These transformations can be constrained using a distance loss to encourage spatially-consistent transformations that generate a coherent and meaningful frame. However, these spatially-consistent transformations may be temporally-inconsistent across the video. This would result in a video with a face that wobbles, as shown in the ablation study (Sec.~\ref{sec:ablation}). Thus, we constrain the transformations as $\omega(f_{s_i}, b_{t_i}, f_{s_k}, b_{t_k})$ where $k = 1 \dots N$ such that $k \ne i$. Here, the transformation on the current frame is constrained by the transformations on all the other frames in the video. This is enabled by the temporal module, as explained in Sec.~\ref{sec:merging_video_using}. Lastly, the network encounters a difference in color (contrast, hue, saturation, etc.) between $f_{s_i}$ and $b_{t_i}$ that is fixed as $c(f_{s_i}, b_{t_i})$. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{images/im2im.V27.pdf} \caption{``Inference Cost'' denotes the time taken for a single face-swap. FSGAN, with $400\times$ FaceOff's inference cost, fails to swap the identities fully. DeepFakes and DeepFaceLabs swap the identities successfully but are $9000\times$ less efficient than FaceOff. FaceOff perfectly swaps the source identity and expressions. None of the other methods can swap the source expressions.
} \label{fig:im2im} \end{figure*} \begingroup \renewcommand{\arraystretch}{1.1} \begin{table*}[t] \centering \adjustbox{max width=0.93\textwidth}{ \begin{tabular}{l|ccccc|ccc} \hline & \multicolumn{5}{ c| }{\textbf{Quantitative Evaluation}} & \multicolumn{3}{c}{\textbf{Human Evaluation}}\\ \cline{2-9} \textbf{Method} & \textbf{SPIDis} $\downarrow$ & \textbf{LMD} $\downarrow$ & \textbf{TL-ID} $\uparrow$ & \textbf{TG-ID} $\uparrow$ & \textbf{FVD} $\downarrow$ & \textbf{Identity} $\uparrow$ & \textbf{Exps.} $\uparrow$ & \textbf{Ntrl.} $\uparrow$ \\ \hline Motion-coseg~\cite{motion-coseg} & $0.48$ & $0.59$ & $0.872$ & $0.893$ & $293.652$ & $6.82$ & $5.81$ & $7.44$ \\ FSGAN~\cite{fsgan} & $0.49$ & $0.57$ & $0.914$ & $\mathbf{0.923}$ & $\mathbf{242.691}$ & $7.84$ & $6.83$ & $\mathbf{8.31}$ \\ FaceOff ( Ours ) & $\mathbf{0.38}$ & $\mathbf{0.41}$ & $\mathbf{0.925}$ & $0.915$ & $255.980$ & $\mathbf{9.64}$ & $\mathbf{9.86}$ & $8.18$ \\ \hline \end{tabular} } \caption{ Quantitative metrics on the V2VFaceSwap dataset. DeepFakes and DeepFaceLabs take up to $24$ hours for the best inference on a single face-swap~\cite{deepfacelabs}; thus we do not compare with them. The metrics used for comparisons are explained in Sec.~\ref{sec:experiments}. For fair comparisons, FSGAN scores are reported without any inference-time optimization. Although FSGAN has a slightly better FVD and a higher Naturalness (Ntrl.) score, it fails to swap the identity fully, as can be clearly seen from the SPIDis, LMD, and Identity metrics. Moreover, the difference between the FVD of FSGAN and FaceOff is not perceptually significant~\cite{fvd}.} \label{tab:metrics} \end{table*} \endgroup As shown in Fig.~\ref{fig:arch_main}, at the time of training $S=T$. For each frame $s_i \in S$, we first extract the foreground, $f_{s_i} \in F_s$ (acting as the source) and the background, $b_{t_i} \in B_t$ (acting as the target) from $s_i$.
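This per-frame data construction can be sketched as follows (assuming a precomputed binary face mask, which in practice would be derived from face landmarks; shapes are illustrative):

```python
import numpy as np

def make_training_input(frame, face_mask):
    """Split one RGB frame into face/background and stack them channel-wise.

    frame:     (H, W, 3) RGB frame s_i
    face_mask: (H, W) binary mask of the face region
    Returns the foreground f, the background b, and the six-channel input v.
    """
    m = face_mask[..., None]
    f = frame * m                          # foreground (face region only)
    b = frame * (1 - m)                    # background (everything else)
    v = np.concatenate([f, b], axis=-1)    # (H, W, 6) network input
    return f, b, v

rng = np.random.default_rng(0)
frame = rng.uniform(size=(256, 256, 3))
mask = np.zeros((256, 256))
mask[64:192, 64:192] = 1.0     # stand-in for a landmark-derived face mask
f, b, v = make_training_input(frame, mask)
# The two halves tile the original frame exactly.
assert np.allclose(f + b, frame)
```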
Next, we apply random rotation, translation, scaling, color, and distortion (barrel, mustache) errors on $f_{s_i}$. The training setting is then formulated as: \begin{equation} \Phi = \Omega(\delta, \omega, c) \end{equation} \begin{equation} J = \frac{1}{N}\sum_{i = 1}^{N} \left\| s_i - \Phi(f_{s_i}, b_{t_i}, f_{s_k}, b_{t_k}) \right\| + P(F_s, B_t) \end{equation} where $\Omega$ is a learnable function, $J$ is the overall cost of the network to be minimized, $P$ is a perceptual metric (LPIPS~\cite{lpips} in our case), and $k = 1\dots N$ such that $k \neq i$. \section{Experiments and Results} \label{sec:experiments} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{images/results_full.V18.pdf} \caption{ Qualitative results of FaceOff. Note that there is a significant difference between the source and target expressions in all the cases. FaceOff swaps the source expressions (mouth, eyes, etc.) and identity; and retains the target pose and background.} \label{fig:main_results} \end{figure*} In this section, we try to answer the following questions: (1) How well can we preserve the source identity compared to the alternate approaches? (2) How well do we preserve the expressions of the input videos? (3) How efficient is FaceOff when compared to other techniques? We compare FaceOff against different tasks: ``face-swapping'', ``face reenactment'', and ``face editing''. Please note that none of these methods can fully solve the task of V2V face-swapping that we aim to solve. Specifically, V2V face-swapping aims to (1) swap the source identity and expressions and (2) retain the target pose and background. \textbf{Quantitative Metrics: } \textbf{(1)} \textbf{S}ource-\textbf{P}rediction \textbf{I}dentity \textbf{Dis}tance \textbf{(SPIDis)}: computes the difference in identity between face images. It is computed as the Euclidean distance between the face embeddings generated using dlib's face detection module.
\textbf{(2)} \textbf{F}réchet \textbf{V}ideo \textbf{D}istance \textbf{(FVD)}, as proposed in \cite{fvd}, computes the temporal coherence of the generated video output. \textbf{(3)} \textbf{L}and\textbf{m}ark \textbf{D}istance \textbf{(LMD)}: evaluates the overall face structure and expressions of the source and swapped output. To compute LMD, the source and the swapped face landmarks are normalized: faces are first centered and then rotated such that the centroid and the eye-line angle, respectively, align with those of a mean image. Next, the faces are scaled with respect to the mean image. The Euclidean distance between the normalized swapped and source video landmarks gives the LMD. We compute LMD between the source and the output face expressions (excluding the landmarks of the face perimeter). \textbf{(4)} \textbf{T}emporally \textbf{L}ocally \textbf{(TL-ID)} and \textbf{T}emporally \textbf{G}lobally \textbf{(TG-ID)} \textbf{Id}entity Preservation: proposed in \cite{stit}. They evaluate a video's identity consistency at a local and global level. For both metrics, a score of 1 would indicate that the method successfully maintains the identity consistency of the original video. \textbf{Qualitative Metrics: }A mean opinion score on a scale of $1-10$ is reported for \textbf{(1)} \textbf{Identity}: How similar is the swapped-output identity to the source identity? \textbf{(2)} Expressions \textbf{(Exps.)}: How similar is the swapped-output expression to the source expression? and \textbf{(3)} Naturalness \textbf{(Ntrl.)}: Is the generated output natural? \textbf{Experimental Dataset}: We benchmark on the V2VFaceSwap dataset, made of unconstrained YouTube videos with many unseen identities, backgrounds, and lighting conditions. We strongly encourage our readers to view the supplementary video for the best experience. Subjective human opinion, further details about the dataset, and the evaluation setup are reported in the supplementary paper.
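The LMD normalization described above can be sketched as follows (NumPy; we assume dlib's 68-point convention, where indices 36/45 are the outer eye corners and points 0--16 trace the face perimeter; the reference eye distance is illustrative):

```python
import numpy as np

LEFT_EYE, RIGHT_EYE = 36, 45       # outer eye corners in dlib's 68-point layout
INNER = np.arange(17, 68)          # landmarks excluding the face perimeter (0-16)

def normalize_landmarks(pts, ref_eye_dist=1.0):
    """Center, rotate the eye line to horizontal, and rescale landmarks."""
    pts = pts - pts.mean(0)                      # align the centroid
    v = pts[RIGHT_EYE] - pts[LEFT_EYE]
    theta = np.arctan2(v[1], v[0])               # current eye-line angle
    c, s = np.cos(-theta), np.sin(-theta)
    pts = pts @ np.array([[c, -s], [s, c]]).T    # rotate eye line to horizontal
    eye_dist = np.linalg.norm(pts[RIGHT_EYE] - pts[LEFT_EYE])
    return pts * (ref_eye_dist / eye_dist)       # align the scale

def lmd(src_pts, out_pts):
    """Mean Euclidean distance between normalized inner landmarks."""
    a = normalize_landmarks(src_pts)[INNER]
    b = normalize_landmarks(out_pts)[INNER]
    return np.linalg.norm(a - b, axis=1).mean()

# Sanity check: a rotated/scaled/shifted copy of the same face has LMD ~ 0.
rng = np.random.default_rng(0)
src = rng.normal(size=(68, 2))
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
out = 1.5 * src @ R.T + np.array([3.0, -2.0])
```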
\subsection{Face-Swapping Results} Fig.~\ref{fig:im2im} and Table~\ref{tab:metrics} present a qualitative and quantitative comparison, respectively, between the existing methods and FaceOff. Fig.~\ref{fig:main_results} demonstrates FaceOff's face-swapping results on videos. As shown in Fig.~\ref{fig:im2im}, FaceOff successfully swaps the identity and expressions of the source face video. Existing methods cannot swap the source expressions, which shows that FaceOff solves a unique challenge of V2V face-swapping. An interesting finding of our experiments is that the existing methods do not preserve any of the input expressions -- source or target -- at the output and generate novel expressions, e.g., a novel gaze direction or mouth movements. This phenomenon is also demonstrated in Fig.~\ref{fig:expressions_mismatch}. FSGAN and Motion-Coseg fail to swap the identity entirely. This is further corroborated through the quantitative metrics in Table~\ref{tab:metrics}. As shown, FaceOff has an improvement of $\sim 22\%$ and $\sim 28\%$ on SPIDis and LMD over FSGAN, clearly indicating FaceOff's superiority. \begin{figure}[t] \centering \includegraphics[width=0.93\linewidth]{images/reenactment_comparison._V11.pdf} \caption{ Qualitative demonstration of Face Manipulation. As can be seen, none of the methods, except FaceOff, preserve the source expressions or pose information perfectly.} \label{fig:facemani} \end{figure} FSGAN achieves a slightly lower (better) FVD and is voted more natural in the human evaluation. This is expected, as FSGAN does not change the target identity much and thus retains the original target video, making it more natural to observe. FaceOff swaps identity near-perfectly. Moreover, existing methods only have a single target motion to follow. FaceOff tackles an additional challenge of motion-to-motion swapping that needs source-target pose alignment at every frame in a temporally coherent manner.
This requires FaceOff to generate a novel motion such that the identity, expressions, and pose in the motion look natural and match the inputs. Despite this challenge, the difference between FSGAN's and FaceOff's FVD is not perceptually significant, as stated in \cite{fvd}. DeepFaceLabs and DeepFakes swap identity well but are $9000\times$ more computationally expensive than FaceOff, making FaceOff much more scalable and applicable in the real world. \subsection{Target Face Manipulation Results} \label{sec:targetfacemanipulation} Given that the source and target have the same identity, the problem reduces to the following: transfer expressions from a source video to a target video. This is fundamentally the setting of ``face reenactment''. One could also modify the expression of the target by identifying and quantifying the source expressions and using a ``face-editing'' network to edit the target expressions. Fig.~\ref{fig:facemani} presents a qualitative comparison between FaceOff, ``face reenactment'' (Face-Vid2Vid) and ``face editing'' (STIT). \textbf{Face Reenactment}: We compare against Face-Vid2Vid~\cite{face-vid2vid}, a SOTA face reenactment network that reenacts the pose and expression of a target image using a source (driving) video. As shown in Fig.~\ref{fig:facemani}, FaceOff preserves the source's micro-expressions, such as the exact mouth opening and eye frown. As FaceOff relies on a deterministic distance loss, it can retain the exact input expressions in the output. Moreover, FaceOff retains the temporal target pose and background, whereas Face-Vid2Vid modifies a static frame. \textbf{Face Editing: } Using a powerful neural network, one can simply introduce the desired expressions in a video by performing edits. We compare our method against STIT~\cite{stit}. STIT modifies the expressions of a face video based on an input label. We observe the source expression and manually try out various intensities of the ``smile'' emotion, ranging from the negative to the positive direction.
As seen in Fig.~\ref{fig:facemani}, although STIT can change the overall expression, it needs significant manual trial and error to pinpoint the exact expression. It also lacks personalized expression (amount of mouth opening, subtle brow changes). Moreover, not every expression can be defined using a single label, and introducing variations in emotion along the temporal dimension is hard. With our proposed method, one can incorporate any emotion in the video (as long as we have access to a source video). \section{Ablation Study} \label{sec:ablation} We investigate the contribution of different modules and errors in achieving FaceOff. Fig.~\ref{fig:ablation} demonstrates the performance of FaceOff without the proposed temporal module. As shown, although the output is spatially coherent at a frame level, as we look across the frames, we can notice the temporal incoherence. The face seems to `wobble' across the frames, squishing up and down. In fact, without the temporal module, the network does not understand an overall face structure and generates unnatural frames (marked in red). Jumping from one red box to another, we can see that the face structure has completely changed. This suggests that constraining the network by the neighboring frames using the temporal module enables the network to learn a global shape-fitting problem, consequently generating a temporally coherent output. Table~\ref{tab:ablation} presents the quantitative contribution of the temporal module and each of the errors used for self-supervised training. The metrics indicate that each of them contributes significantly to achieving FaceOff. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/ablation._V3.pdf} \caption{\small FaceOff without the Temporal Module. As we jump from one frame to another (red boxes), we can observe a ``wobble effect'': a significant change in the facial structure (elongated and then squeezed).
This occurs as the model does not have an understanding of the neighboring frames while generating the current frame.} \label{fig:ablation} \end{figure} \begin{table}[t] \centering \adjustbox{max width=0.8\linewidth}{ \begin{tabular}{l|ccc} \toprule Component & SPIDis $\downarrow$ & LMD $\downarrow$ & FVD $\downarrow$ \\ \midrule FaceOff & $\mathbf{0.38}$ & $\mathbf{0.41}$ & $\mathbf{255.980}$ \\ \midrule w/o Temporal. & 0.71 & 0.49 & 350.60 \\ w/o Rotation & 0.65 & 0.44 & 292.76 \\ w/o Color & 0.74 & 0.42 & 303.35 \\ w/o Translation & 0.58 & 0.47 & 271.82 \\ w/o Distortion & 0.55 & 0.45 & 285.54 \\ \bottomrule \end{tabular} } \caption{\small We remove different components and errors and evaluate their contributions in achieving FaceOff. } \label{tab:ablation} \end{table} \section{Conclusion} We introduce ``video-to-video (V2V) face-swapping'', a novel task of face-swapping. Unlike face-swapping, which aims to swap an identity from a source face video (or an image) to a target face video, V2V face-swapping aims to swap the source expressions along with the identity. To tackle this, we propose FaceOff, a self-supervised temporal autoencoding network that takes two face videos as input and produces a single coherent blended output. As shown in the experimental section, FaceOff swaps the source identity much better than the existing approaches while also being $400\times$ more computationally efficient. It also swaps the exact source expressions, which none of the existing methods can do. V2V face-swapping has many applications; a significant one is automating the task of replacing a double's face with the actor's identity and expressions in movies. We believe our work adds a whole new dimension to movie editing that can potentially save months of tedious manual effort and millions of dollars. {\small \bibliographystyle{ieee_fullname} \section{Network Design} We adopt the architecture of VQVAE2 \cite{vqvae2}. VQVAE2 encodes the input into multiple hierarchies: top and bottom.
We adopt the same architecture but modify it in two fundamental ways. (1) VQVAE2 is an autoencoding network and thus computes the distance between the input and the output of dimension $H \times W \times C$ -- the height, width of the image, and the number of input channels, respectively. In our case, the input is a channel-wise concatenation of the source foreground, $f_{s_i}$, and target background, $b_{t_i}$, giving a dimension of $H \times W \times 6$, and thus, the output generated by our network is of the same dimension $H \times W \times 6$. During training, instead of the input, we compute the loss against the ground truth video, $s_i$, of dimension $H \times W \times 3$. Thus, at the network's output, we only consider the first three channels of the $H \times W \times 6$ output. Similarly, at inference, we only consider the first three channels as our output. (2) VQVAE2 operates at a frame level and thus cannot model temporal properties. Thus, we add temporal modules in the network just before the quantization block. At each hierarchy, the encoder produces a latent of dimension $(B * T) \times C \times H \times W$. Here, we expand the batch dimension to convert the flattened input into videos. These video latents of dimension $B \times T \times C \times H \times W$ are then passed through the temporal block made of 3D convolution and ReLU layers (see Fig.~2, main paper). After this step, we again convert the batch dimension to $(B * T)$. The losses are then applied frame-by-frame. The temporal layers learn to identify the properties across the video and produce a blended encoding even with a frame-by-frame loss. At this point, the encoder outputs are quantized, and we adopt the decoder architecture of VQVAE2 for decoding the latent. \subsection{Our results and Potential Applications} Our approach can have several potential applications, especially in multimedia, entertainment, and education. We demonstrate two such applications in this paper.
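The batch--time shape bookkeeping around the temporal block described in the Network Design above can be sketched as follows (pure reshaping with no learned weights; the transpose to a channels-first video layout is an assumption based on how 3D convolutions are conventionally applied, e.g. PyTorch's Conv3d expects $(B, C, T, H, W)$):

```python
import numpy as np

B, T, C, H, W = 2, 8, 64, 32, 32
rng = np.random.default_rng(0)

# Frame-wise encoder output: a flat batch of (B*T) per-frame latents.
flat = rng.normal(size=(B * T, C, H, W))

# Expand the batch dimension to regroup the frames into videos ...
videos = flat.reshape(B, T, C, H, W)
# ... and move channels first so a 3D convolution can mix time, height, width.
conv3d_input = videos.transpose(0, 2, 1, 3, 4)        # (B, C, T, H, W)

# After the temporal block, flatten back for the frame-wise decoder/losses.
back = conv3d_input.transpose(0, 2, 1, 3, 4).reshape(B * T, C, H, W)

# The network's six-channel output: only the first three channels form the
# blended video that is compared against the three-channel ground truth s.
out = rng.normal(size=(B * T, 6, H, W))
blended = out[:, :3]
```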
The first is depicted in Fig.~6 of the main paper, which shows a real use case of Paul Walker. In post-production, the VFX team replaced the faces of Cody and Caleb Walker, who acted as Paul's doubles\footnote{\href{https://en.wikipedia.org/wiki/Furious\_7\#Redevelopment_of_Walker's_character}{Redevelopment of Walker's character}}. The team underwent extensive graphical post-processing to superimpose Paul's face from previous recordings onto Cody and Caleb. In Fig.~1, we demonstrate another result of FaceOff. Here, we simulate a scenario of body doubles. Nolan, the actor in the source video, is `working from home', recording his dialogues and expressions at the convenience of his home. Joey Tribbiani, the double in the target video, is acting in the famous sitcom FRIENDS. FaceOff swaps Nolan into the scene in one forward pass! We show such an application in the supplementary video, and we encourage our readers to view the result of double-actor V2V face-swapping. FaceOff can potentially save millions of dollars and reduce months of post-production edits to merely a few minutes of touch-ups on top of the FaceOff output! Another application of our work is post-production movie editing. Today, multiple scenes are anticipated in advance to avoid retakes during post-production. Our work will encourage movie-production teams to become more flexible with doubles and post-production movie edits. FaceOff also has huge potential in the advertisement sector as a futuristic technique for making advertising videos. Today, VFX and CGI take abundant resources for V2V face-swapping, whereas with our work, one could replace themselves in a sitcom in less than a second. This could also become a potential teaching technique, for example, creating light-hearted advisory videos about vital life lessons for students.
Our work can also be applied in animation\footnote{\href{https://disney.fandom.com/wiki/List\_of\_recycled\_animation\_in\_Disney\_movies}{List of recycled animation in Disney movies}} to swap an existing face/background in multiple scenes. \section{Limitations} Our approach fundamentally struggles in two areas: (1) A pose difference in the `Z' direction (normal to the image plane) between the source and target: the network struggles to generate coherent outputs. As can be seen in Fig.~\ref{fig:limitations}, the lips and the overall output seem unnatural. Going beyond 2D images and exploring the space of 3D modeling could be an exciting way to approach this issue. (2) A difference in face ornaments: as can be seen in Fig.~\ref{fig:limitations}, artifacts such as parts of the hair and spectacles are visible in the output. As we avoid adding a discriminator, the model does not learn to `remove' any input part to make the output more realistic. As future work, one could experiment with soft discriminators such that there are minimal hallucinations. Lastly, in this work, we extract the source face using the eye and mouth region landmarks. However, a part of one's identity also includes the head region. We do this to preserve the pose of the target. In the specific use case we tackle, a double is selected such that the head of the double is similar, if not the same, to the actor's (see Fig.~1, main paper). Thus, extracting only the face region is sufficient for preserving the identity in our case. However, to preserve the entire identity, one would have to move from face-swapping to head replacement~\cite{deepfacelabs}, which would also be an interesting direction of exploration. Here, one would need to be able to transfer the head pose of the target to the source head while preserving the other necessary characteristics. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/limitation._V12.pdf} \caption{Limitations of our approach.
Artifacts such as hair strands and spectacles are visible. In case of an extreme pose change, the network struggles to produce a coherent output.} \label{fig:limitations} \end{figure} \section{Ethical Issues} Unlike other generative works in similar settings, we do not re-enact a given identity according to a driving video. Our work focuses on swapping relevant parts of the source video onto the target video so that the expression and lip movements of the source video are preserved. At the same time, the head motion and background remain the same as the target video. This ensures that the generated identity, as well as the spoken content in the generated video, matches the source speaker (extensively evaluated in Table 2, main paper). Thus, body doubles and doppelgangers of celebrities cannot be directly used to re-enact a target celebrity video since the final generated identity will be copied from the source. However, since our work deals with modifying critical facial features of the target identity, we decided to take further steps to ensure fair use. We will only be releasing the code after signing legal agreements with the users to maintain records. We will also use a visible watermark on the generated videos to ensure they remain identifiable as fake. \section{Experimental Setup} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{images/distortion._V15.pdf} \caption{Ablation Experiments. In each experiment, we remove the indicated type of error at the time of self-supervised training. Here, we present the results of the trained models at inference on cross-identity swaps.} \label{fig:pseudo-error} \end{figure*} \subsection{Hardware Setup} All of our models are trained and inferred on NVIDIA RTX 3080 Ti GPUs, using 4 GPUs for training and 1 GPU for inference.
\subsection{Dataset} \begingroup \renewcommand{\arraystretch}{1.2} \begin{table}[h] \centering \caption{Speakers in the training dataset collected from publicly available YouTube VLOG videos.} \adjustbox{max width=\linewidth} { \begin{tabular}{|r|c|c|c|} \hline & Name & Nationality & YouTube Channel \\ \hline 1. & Anfisa Nava & Russia & ANFISAofficial\\ \hline 2. & Sejal Kumar & India & sejalkumar7theclothingedit\\ \hline 3. & Johnny Harris & USA & johnnyharris \\ \hline 4. & BestDressed & USA & bestdressed \\ \hline 5. & Jack Edwards & UK & thejackexperience \\ \hline \end{tabular} } \label{tab:training-dataset} \end{table} \endgroup To create the \textbf{\textcolor{magenta}{training}} dataset, we curate publicly available unconstrained YouTube VLOG videos. It includes 5 different YouTubers; details are provided in Table~\ref{tab:training-dataset}. The data amounts to a total of \underline{15 hours} of video divided equally among all of the speakers. All the speakers speak English, although they have different accents based on their nationalities. The details of the videos, along with timestamps, will be released publicly to promote future research. The \textbf{\textcolor{magenta}{test}} set is also curated from unconstrained YouTube videos. The videos have different identities, backgrounds, and lighting settings from the training set. Furthermore, they are selected from a widely varying timeline, ranging from the 1990s to 2021! This ensures we cover different video capture technologies, compression techniques, etc. Specifically, the videos are collected from sitcom snippets, interviews, and movies. Some examples are The Office (sitcom), Alex Honnold's interviews, Think Media's tutorials, and FRIENDS (sitcom). \subsection{Human Evaluation} We conduct human evaluations as part of our qualitative evaluations, primarily to assess the quality of video-to-video face-swapping achieved by our network.
We randomly select 10 videos from our curated dataset, and the results from all the compared methods, along with our network, are displayed in a random order to the user. A pool of 50 participants, aged between 25 and 45 years, was asked to assign each generated video a perceptual-quality score between 1 and 10, with $1$ being the worst and $10$ the best. Each user was shown a source video, a target video, and the final swapped video. The swapped video was randomly drawn from FSGAN, Motion-coseg, or FaceOff, and each user saw $10$ instances of each category during rating. They then answered the following three questions: (1) How natural does this (swapped) video look? (2) How similar is the expression in the swapped video to the source expression? and (3) How similar is the identity in the swapped video to the source identity? No additional directions were given to the users for rating. Along with the rating, they were also asked to submit their subjective opinion on the naturalness of the swapped video. The mean opinion scores over all the users are reported in the main paper; we also summarize their opinions in this section. As observed in Table~2, we outperformed the existing approaches in preserving the source identity in both quantitative and qualitative evaluation. However, FSGAN was voted slightly better qualitatively on the naturalness factor. Hereon, we discuss the naturalness of the observed videos. Of the three methods, the highest variation in user ratings was observed for Motion-coseg. FaceOff had the least variation in ratings, and almost all of its videos appear natural. Although FSGAN was rated highest in terms of naturalness, the users commented that its output had unnatural color.
Despite this drawback, the users agreed that the overall expression and the swapped person looked natural. Note that although FSGAN was voted more natural than FaceOff, FaceOff was unanimously voted superior at the task of identity swapping. Although FSGAN preserved the source identity and looked more natural, the users agreed that its output matched neither the source nor the target expression; the model took leeway in creating expressions as long as the output looked natural. \section{Ablation Study} As mentioned in Section~3.3, we introduce five types of pseudo errors: rotation, translation, scaling, distortion, and color, at the time of training to emulate the different errors we face during inference. In this section, we perform an ablation to show the effects (at inference time) of removing each of these errors during training. In each case, we remove the errors one at a time, i.e., when we remove rotation, the remaining four errors are still present during training. To showcase a clear distinction between the foreground and the background, we turn off the color error for all the ablations. As clearly depicted in Fig.~\ref{fig:pseudo-error}, removing any one of these errors causes a degradation in the output. The left-most column of the figure shows the effect of not introducing the color normalization error: this leads to sub-optimal blending between the source and target faces, with significant artifacts. Similarly, the scale and rotation pseudo errors are shown to be extremely important in the same figure. Removing the scaling error causes the blended face to be on a different scale. On the other hand, the rotation error forces the faces to be aligned, making it easier for the model to blend. Finally, without the translation error, the source face does not fit the target face, giving rise to an unstructured output.
A conjunction of these different errors leads to a setting where the model can blend the given videos spatially and temporally. An affine transformation is a combination of scaling, rotation, and translation; therefore, removing one of these errors does not confuse the model about the underlying task of alignment. The model still performs the task well and can fit the irregular face shape into the background. However, the distortion error (shown in the last column of the figure) turns out to be very important. Without the distortion error (which is, in fact, the non-linear transformation), the model struggles to warp the face in a way that best fits the background. This causes the foreground to go outside the background and generate unnatural outputs. \section{Additional Results} \subsection{Poisson Blending vs Neural Blending} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/poisson.pdf} \caption{Sample output of blending using the classical technique of Poisson blending.} \label{fig:poisson} \end{figure} In this section, we observe that simply applying a heuristic blending technique like Poisson blending to the heuristically aligned frames fails to produce convincing, photo-realistic results. The neural blending approach learns a non-linear transformation and blending strategy on the given input that cannot be emulated with a heuristic blending approach like Poisson blending. Poisson blending performs well when the source and target faces are well aligned, but fails to generalize to cases where there is a difference between the source and target faces and learning an affine transformation no longer suffices. Faces are non-rigid, so a rigid-body transformation does not suffice in cases with a considerable head-pose difference between the source and target frames. Moreover, Poisson blending requires precise alignments and masks to be able to paste the source face onto the target face.
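To make the comparison concrete, Poisson blending amounts to solving a discrete Poisson equation: inside the pasted region, keep the gradients (Laplacian) of the source; on the boundary, match the target. The minimal NumPy sketch below (our own illustration via Jacobi iteration, not the implementation used in our experiments) shows this:

```python
import numpy as np

def poisson_blend(src, dst, mask, n_iters=2000):
    """Solve the discrete Poisson equation by Jacobi iteration:
    inside `mask`, keep the Laplacian of `src`; outside, keep `dst`
    fixed, which supplies the boundary values.
    `mask` must not touch the image border."""
    src = src.astype(np.float64)
    out = dst.astype(np.float64).copy()
    # Guidance field: 4-neighbour discrete Laplacian of the source.
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
           np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4.0 * src)
    for _ in range(n_iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = (nb[mask] - lap[mask]) / 4.0  # Jacobi step
    return out
```

With well-aligned inputs this produces seamless composites; as discussed above, it cannot compensate for pose or shape differences between the two faces, which is precisely where neural blending is needed.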
A sample output of Poisson blending is shown in Fig.~\ref{fig:poisson}. The blending was performed after the heuristic alignment step shown in Fig.~$3$ of the main paper. As can be seen, even though the images were blended, the output looks unnatural and distorted. \subsection{Accuracy vs Inference Trade-Off} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{images/faceoff_inference_time.png} \caption{Comparison of the time needed for performing video-to-video face swapping. FaceOff is taken as the $1\times$ reference; every other model is plotted relative to FaceOff's inference time. Motion Co-seg: $1.5\times$, FSGAN: $400\times$, DeepFakes: $9000\times$, and DeepFaceLabs: $9000\times$.} \label{fig:graph} \end{figure} In Fig.~\ref{fig:graph}, we demonstrate the huge disparity between the inference times of our approach and the SOTA approaches DeepFakes, DeepFaceLabs (denoted by DFL), Motion-coseg, and FSGAN. Our approach and Motion-coseg are one-shot approaches and do not require further finetuning. FSGAN provides two modes of inference: a faster inference, and an inference that requires finetuning the output. We used the second mode to further improve FSGAN's results, with the finetuning taking 5 minutes for the qualitative results. Quantitative scores were computed without any optimization. DeepFakes and DeepFaceLabs require a considerable amount of time to achieve reasonable face swapping, and they work on a pair of videos with heavy compute. Even though our approach is one-shot, we outperform existing approaches in the SPI metric, as mentioned in the main paper: we achieve the best SPI of $0.38$ over all the baseline approaches. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Quadrotor unmanned aerial vehicles (UAVs) have been increasingly envisaged in defense, industrial, and civil applications due to their simplified mechanical design and ease of maneuvering. The quadrotor consists of two pairs of symmetrically located counter-rotating propeller blades which independently generate aerodynamic thrust along a common axis, in order to regulate the overall applied force and torques on the UAV. In typical flight missions, the independent rotor thrusts are regulated to track the center-of-mass position and heading angle of the UAV. A basic understanding of quadrotor dynamics and control design can be found in (\cite{beard}), and some recent research can be found in (\cite{andrea1, andrea2, andrea3, vijay_snap, vijay_avian, vijay2}). In conventional quadrotors, the thrust generated by each rotor is regulated by varying its speed. Such an actuation mechanism has a low control bandwidth due to saturation limits in the electro-mechanical circuit driving the rotor. Further, the rotor thrust needs to be strictly positive, thereby impairing the flight envelope. These factors have motivated the development of \textit{variable pitch} quadrotors (\cite{cutler1}, \cite{cutler2}) in which rotor thrust is regulated by varying the pitch angle of the propeller blades, while maintaining a constant rotor speed. This mechanism has a significantly higher actuation bandwidth than conventional rotors. Further, the blade pitch angles can be reversed to enable negative thrust generation. It has been shown in \cite{cutler3}, \cite{energies}, and \cite{kothari} that the variable pitch mechanism appreciably enhances the flight envelope, thereby enabling aggressive maneuvers. The theoretical focus of this paper is to design a control law for a quadrotor subsequent to complete failure of a single rotor. With three functioning rotors, only a three-dimensional submanifold of the output space can be completely regulated.
A possible tracking solution is to relinquish control of the angular rate about the thrust axis, and choose the orientation of the thrust axis (i.e. reduced attitude) and the net thrust as tracking outputs. The main new results in this paper over the existing studies, reviewed in {\it Related work}, are summarized as follows: \begin{itemize} \item The proposed control law can track globally defined reduced attitude and position trajectories, with only two control torques and a scalar thrust input. \item The control law is free of singularities due to attitude parameterizations or input-output decoupling. \item In the presence of bounded uncertainties, the tracking errors almost-globally converge at an exponential rate to an arbitrarily small neighborhood of the origin. \end{itemize} \subsection{Related work} While there is a significant amount of research on fault tolerant control of quadrotors with partial rotor loss, there are only a few results for complete rotor failure. In \cite{andrea} and \cite{andrearelaxed}, the authors present relaxed hover solutions with multiple rotor failures. The attitude dynamics are linearized about a hovering point where the yaw rate is a non-zero constant. In order to stabilize the position of the UAV, the orientation of the vertical axis (i.e. reduced attitude) and the net rotor thrust are regulated. In \cite{landing1} and \cite{landing2}, a PID and back-stepping approach is used for emergency landing in case of rotor failure. In \cite{lanzonifac} and \cite{lanzonjgcd}, the authors present a hierarchical control design in which the inner loop controls the reduced attitude and the outer loop controls the position. The inner loop consists of a robust feedback linearization based controller, and the outer loop is an $H_{\infty}$ based controller for the translational dynamics, linearized about a hover point.
In \cite{peng} and \cite{akhtar}, the authors present static and dynamic feedback linearization based controllers to regulate the reduced attitude and position of the quadrotor. Control designs based on small angle or linear approximations restrict the motion of the quadrotor to near-hover maneuvers. Further, the feedback linearization based control laws mentioned above encounter singularities when the roll and pitch angles are $\pi/2$ or when the net thrust is zero. In order to avoid this, the initial state errors have to be restricted to a sufficiently small neighborhood of the origin. These factors render the existing fault tolerant control designs ineffective in tracking global trajectories or performing aggressive maneuvers (such as attitude recovery from an inverted pose). It is imperative to understand that post rotor failure, the orientation of the quadrotor may undergo large deviations from the operating point, thereby necessitating global maneuvering capability. In recent times, globally stabilizing geometric controllers which exploit the intrinsic structure of the underlying manifold have been developed. Here, singularities due to attitude parameterizations or input-output decoupling are avoided. Approaches such as those in \cite{maithri} and \cite{bullo} stabilize mechanical systems on Lie groups using nonlinear proportional-derivative (PD) control. One such pioneering control design for quadrotors on the Lie group $SE(3)$ has been presented in \cite{tlee} and \cite{tlee_tac}. Reduced attitude stabilization to a fixed point on $S^2$ with two control torques is presented in \cite{bulloreduced}. In \cite{sphere}, the primary axis of a rigid body on $S^2$, as well as the angular velocity about this axis, are tracked using three independent torques. In \cite{teel}, global reduced attitude tracking with three torques is achieved by constructing a synergistic family of potential functions on $S^2$.
To the best of our knowledge, none of the above mentioned control laws are suitable for trajectory tracking with three functioning rotors (i.e. two torque inputs and a net thrust). \subsection{Proposed control design} First, a control law is developed on $SO(3)$ in order to track a commanded reduced attitude trajectory. A back-stepping feedback law for the two torques about the horizontal body axes of the quadrotor is designed based on the geometric structure of $TS^2$ (on which the reduced attitude dynamics evolve). The back-stepping law is preferred over standard geometric PD controllers, as the latter are not applicable when the input space is reduced-dimensional. Subsequently, a saturation based feedback law is designed for the translational dynamics, in order to track a prescribed position trajectory with bounded thrust. This also ensures that the commanded thrust vector does not vanish, so that the reference reduced attitude trajectory is well defined. The control law is further robustified in order to account for the propeller-induced gyroscopic moment (which is typically neglected when all four rotors are functional) and rotational drag. The tracking errors are shown to exponentially converge to an arbitrarily small open neighborhood of the origin. The performance of the controller is first demonstrated through simulations on a variable pitch quadrotor which is capable of negative thrust generation. Then, the same control law is simulated with a strictly positive rotor thrust constraint, in order to demonstrate its effectiveness on a conventional quadrotor. In this case however, aggressive trajectory tracking is successful provided that the angular velocity about the vertical axis is high enough. The paper is organized as follows. In Section 2, the nonlinear dynamics of the quadrotor are presented. Section 3 contains the formulation of the geometric control law.
In Section 4, simulation results with the proposed control law are presented, followed by concluding remarks. \section{Problem Formulation} \begin{figure}[h] \includegraphics[width=0.4\textwidth]{rotor_failure_diag} \caption{Quadrotor model} \label{fig:quad} \end{figure} \subsection{Quadrotor Dynamics} Consider the quadrotor shown in Fig.~\ref{fig:quad}. Let $\{\vec{e}_1,\vec{e}_2,\vec{e}_3\}$ denote the inertial frame and $\{\vec{b}_1,\vec{b}_2,\vec{b}_3\}$ denote the body frame. The four identical rotors are designed to generate thrusts $T_1,~T_2,~T_3,~T_4$ along $\vec{b}_3$. In this paper, it is assumed that the fourth rotor has been completely disabled post fault detection (i.e. $T_4\equiv 0$). The origin of the body frame is located at the center of mass. $R\in SO(3)$ is the rotation matrix from the $\vec{b}$ frame to the $\vec{e}$ frame, denoting the attitude of the quadrotor. $\Omega$ is the angular velocity in the body frame. $x$ and $v$ denote the position and velocity of the center of mass. $m$ denotes the mass of the quadrotor, $J=diag(J_1,J_2,J_3)$ is the moment of inertia matrix in the body frame, $J_r$ is the inertia of the rotors, and $\tau_d$ is the aerodynamic rotational drag. The total thrust and torque due to the rotors are represented by $f$ and $M$ respectively. The first and second rotors spin clockwise and the third spins anti-clockwise, at angular speeds $\omega_1,\omega_2,\omega_3$ respectively. The rigid body equations of motion are derived using the \textit{Euler Poincar\`e} formalism on $SE(3)$ (the configuration manifold of the quadrotor) as follows: \begin{equation} \begin{matrix} \dot{x}=v \\ m\dot{v}=-mge_3+fRe_3 \\ \dot{R}=R\hat{\Omega}\\ J\dot{\Omega}=J\Omega\times\Omega+g_r-\tau_d+M \end{matrix} \label{quad} \end{equation}where $\hat{.}:\mathbb{R}^3 \to \mathfrak{so}(3)$ is defined as $\hat{x}y=x\times y,~x,y\in\mathbb{R}^3$.
We will denote $(.)^\vee$ as its inverse map throughout the paper, and $e_i$ as the representation of $\vec{e}_i$. When all four rotors are functioning, the gyroscopic moment $g_r$ generated by their different rotational speeds is typically neglected. However, $g_r$ may be significant in case of rotor failure and is therefore included. This effect has been modeled as an additional moment by the following equation (\cite{gr}): \begin{equation} g_r=J_r(\Omega\times e_3)(\omega_1-\omega_2+\omega_3) \label{g_r} \end{equation}where $\omega_i$ is the speed of the $i^{th}$ rotor. The aerodynamic drag torque $\tau_d$ is hard to model as it depends on the quadrotor profile. The model used here has been adopted from \cite{andrearelaxed}, and is based on the form drag of a translating object (\cite{cormick}), which is quadratic in the vehicle's angular velocity: \begin{equation} \tau_d=||\Omega||K_d\Omega, \label{tau_d} \end{equation}where $K_d$ is a positive-definite matrix. Let $T_i$ and $D_i$ denote the thrust and drag induced torque generated by the $i^{th}$ rotor. $f$ and $M$ are then obtained for the 'X' configuration of the quadrotor as: \begin{eqnarray} f&=&T_1+T_2+T_3 \nonumber \\ M_1&=&d(T_1 -T_2-T_3) \nonumber \\ M_2&=&d(T_1+T_2-T_3) \nonumber \\ M_3&=&D_1-D_2+D_3 \label{f,M} \end{eqnarray} In case of a variable pitch quadrotor, the thrust in each rotor is varied by changing the collective blade pitch angle of the propellers, while maintaining a constant rotor speed. The relation of the rotor thrust and drag with the blade pitch angle has been adopted from \cite{cutler1} as: \begin{eqnarray} T_i&=&b_L\omega_i^2\gamma_i \nonumber \\ D_i&=& b_{D_1}\omega_i^2+b_{D_2}\omega_i^2\gamma_i^2+b_{D_3}\omega_i\gamma_i \label{TiandDi} \end{eqnarray} where $\gamma_i$ is the pitch angle of the $i^{th}$ rotor, and $b_L,~b_{D_1},~b_{D_2},~b_{D_3}$ are constants which depend on the aerodynamic profile of the propeller.
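For intuition, with rotor 4 disabled the map from the three remaining thrusts to $(f, M_1, M_2)$ in (\ref{f,M}) is linear with determinant $4d^2\neq 0$, hence invertible, and (\ref{TiandDi}) then yields the commanded blade pitch angles. A minimal numerical sketch follows (the values of $d$, $\omega$, and $b_L$ are placeholders for illustration, not taken from this paper):

```python
import numpy as np

# Placeholder parameters (illustrative only)
d = 0.2          # rotor arm offset [m]
omega = 400.0    # constant rotor speed [rad/s]
b_L = 1e-5       # lift coefficient of the propeller

# [f, M1, M2]^T = A @ [T1, T2, T3]^T for the 'X' configuration
# with rotor 4 disabled; det(A) = 4*d^2 != 0, so A is invertible.
A = np.array([[1.0,  1.0,  1.0],
              [  d,   -d,   -d],
              [  d,    d,   -d]])

def allocate(f, M1, M2):
    """Rotor thrusts for commanded (f, M1, M2), and the blade
    pitch angles gamma_i = T_i / (b_L * omega_i^2)."""
    T = np.linalg.solve(A, np.array([f, M1, M2]))
    gamma = T / (b_L * omega ** 2)
    return T, gamma
```

On a variable pitch quadrotor the resulting thrusts, and hence the pitch angles, are allowed to be negative.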
Though in principle one can use the rotor speed as an additional control input, this is not advisable due to significant aerodynamic uncertainties when $\omega_i$ is low or rapidly fluctuating. \section{Geometric Control Design} \begin{figure}[h] \includegraphics[width=0.5\textwidth]{rotor_failure_block} \caption{Controller Structure} \label{fig:block} \end{figure} With three functioning rotors, the control inputs are chosen as $f$ and $U=[U_1,U_2]^T=[M_1/J_1,M_2/J_2]^T$. Control of the angular velocity about the vertical axis is relinquished. First, a reduced attitude controller with input $U$ is designed on $S^2$, which is then extended to a trajectory tracking controller on $S^2\times \mathbb{R}^3$ by designing a control law for $f$. \subsection{Reduced Attitude Tracking Controller on $SO(3)$} The reduced attitude of the quadrotor is defined as the pointing direction of the thrust axis (i.e. body $z$ axis), which is obtained via the projection $\pi:SO(3)\to \mathbb{S}^2$ (as introduced in \cite{bulloreduced}), defined as \begin{equation} \pi(R)=Re_3 \end{equation}Let $q=Re_3$ denote the reduced attitude, and $\vec{r}_1 =Re_1$, $\vec{r}_2=Re_2$ denote the two horizontal body axes. The dynamics of $q$ can be obtained from (\ref{quad}) as: \begin{eqnarray} \dot{q}&=&\Omega_1\vec{r}_2-\Omega_2\vec{r}_1 \nonumber \\ \dot{\Omega}_1&=&\frac{(J_2-J_3)}{J_1}\Omega_2\Omega_3+U_1+d_1(\Omega) \nonumber \\ \dot{\Omega}_2&=&\frac{(J_3-J_1)}{J_2}\Omega_1\Omega_3+U_2+d_2(\Omega) \label{q_dynamics} \end{eqnarray}Here, $d=[d_1,d_2]^T$ denotes the uncertainties due to the propeller induced gyroscopic torque and rotational drag. From their respective forms as described in (\ref{g_r}) and (\ref{tau_d}), we can bound $d$ as: \begin{equation} ||d||_2\leq(||\tilde{\Omega}||_2+||\tilde{\Omega}||^2_2)\Delta \label{d_bound} \end{equation}where $\Delta=\max~\{J_r\omega_1,\lambda_{max}(K_d) \}$ and $\tilde{\Omega}=[\Omega_1,\Omega_2]^T$.
Let $q_d(t)\in \mathbb{S}^2$ denote the reference reduced attitude trajectory. Motivated by \cite{leeso3} and \cite{sphere}, we define a reduced attitude error function as: \begin{equation} \Psi(q,q_d)=2-\dfrac{2}{\sqrt{2}}\sqrt{1+q_d^Tq} \label{psi} \end{equation}This choice is motivated by the fact that the left trivialized differential of the error function does not vanish as $q\to-q_d$. However, this does happen for the conventional error function on $\mathbb{S}^2$ as defined in \cite{bullo}, thereby rendering the tracking performance poor. It can be observed that the error function satisfies: \begin{equation} \begin{matrix} \Psi(q,q_d)\in [0,2],~\\ \\ \Psi(q,q_d)=0\iff q=q_d \end{matrix} \end{equation} With $q_d$ constant, the differential of $\Psi(q,q_d)$ along $T_q^*\mathbb{S}^2$ is computed via the tangent map corresponding to the projection $\pi_{\mathbb{S}^2}:\mathbb{R}^3-\{\vec{0}\} \to \mathbb{S}^2$, where $\pi_{\mathbb{S}^2}(x)=x/||x||_2$, as \begin{equation} d_1\Psi_{\mathbb{S}^2}(q,q_d)=\dfrac{1}{\sqrt{2}\sqrt{1+q_d^Tq}}q\times (q\times q_d) \label{d_psi} \end{equation}We denote the reduced attitude error vector as: \begin{equation} e_q:= d_1\Psi_{\mathbb{S}^2}(q,q_d) \end{equation} Note that this quantity is well defined as long as $q_d^Tq>-1$, i.e. when the angle between them is less than $180^{\circ}$. We now establish the following relation between $e_q$ and $\Psi$. \begin{lemma} \begin{equation} ||e_q||_2^2\leq \Psi \leq 2 ||e_q||_2^2,~\forall (q,q_d)\in \Psi^{-1}[0,2) \end{equation} \label{eq_leq_psi} \end{lemma} \begin{proof} Let $\theta$ denote the angle between $q$ and $q_d$, so that $q_d^Tq=\cos\theta$ and $||q\times(q\times q_d)||_2=|\sin\theta|$. Then $||e_q||_2^2=\sin^2\theta/(2(1+\cos\theta))=(1-\cos\theta)/2$ and $\Psi=2-\sqrt{2(1+\cos\theta)}$; writing $s=\sqrt{(1+\cos\theta)/2}\in(0,1]$ gives $||e_q||_2^2=(1-s)(1+s)$ and $\Psi=2(1-s)$, from which both inequalities follow. \end{proof} Next, in order to define the velocity error on $T_{\mathbb{S}^2}$, we define the transport map $\mathcal{T}_{\mathbb{S}^2}(q,q_d):T_{q_d}\mathbb{S}^2\to T_{q}\mathbb{S}^2$ (\cite{bullo}) as follows.
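The bounds in Lemma \ref{eq_leq_psi} are easy to sanity-check numerically; the following sketch (an illustration only, not part of the paper's implementation) samples random unit vectors and verifies $||e_q||_2^2\leq\Psi\leq 2||e_q||_2^2$ on $\Psi^{-1}[0,2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(q, qd):
    # Psi(q, q_d) = 2 - sqrt(2) * sqrt(1 + q_d^T q)
    return 2.0 - np.sqrt(2.0) * np.sqrt(1.0 + qd @ q)

def e_q(q, qd):
    # e_q = q x (q x q_d) / (sqrt(2) * sqrt(1 + q_d^T q))
    return np.cross(q, np.cross(q, qd)) / (np.sqrt(2.0) * np.sqrt(1.0 + qd @ q))

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

for _ in range(1000):
    q, qd = random_unit(), random_unit()
    if qd @ q <= -0.99:      # stay away from the antipodal point
        continue
    e2 = e_q(q, qd) @ e_q(q, qd)
    p = psi(q, qd)
    assert e2 <= p + 1e-9 and p <= 2.0 * e2 + 1e-9
```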
\begin{equation} \mathcal{T}_{\mathbb{S}^2}(q,q_d).v=(q_d\times v)\times q,~\forall v\in T_{q_d}\mathbb{S}^2 \label{Tau} \end{equation}We make the following observation: \begin{lemma} The pull-back of the transport map $\mathcal{T}_{\mathbb{S}^2}$ satisfies: \begin{equation} \mathcal{T}_{\mathbb{S}^2}(q,q_d)^*(d_1\Psi_{\mathbb{S}^2}(q,q_d))=-d_2\Psi_{\mathbb{S}^2}(q,q_d) \end{equation} \label{Tau_lemma} \end{lemma} \begin{proof} By using the identity $(x\times y)\times z=(z^Tx)y-(z^Ty)x,~\forall x,y,z\in\mathbb{R}^3$. \end{proof} We now define the velocity error vector as: \begin{equation} e_{\dot{q}}:=\dot{q}-\mathcal{T}_{\mathbb{S}^2}(q,q_d).\dot{q_d} \label{eqdot} \end{equation} The derivative of the error function can be obtained using Lemma \ref{Tau_lemma} as \begin{eqnarray} \frac{d}{dt}\Psi(q,q_d)&=&d_1\Psi_{\mathbb{S}^2}(q,q_d)\dot{q}+d_2\Psi_{\mathbb{S}^2}(q,q_d)\dot{q}_d \nonumber \\ &=&d_1\Psi_{\mathbb{S}^2}(q,q_d)e_{\dot{q}} \label{Psi_dot} \end{eqnarray}In order to stabilize the dynamics of $\Psi$, we require that $e_{\dot{q}}$ satisfies: \begin{equation} e_{\dot{q}}=-k_q d_1\Psi_{\mathbb{S}^2},~k_q>0 \label{eqdot1} \end{equation} Then, $\dot{\Psi}$ can be obtained as \begin{equation} \dot{\Psi}=-k_q||e_q||_2^2 \end{equation}Further, Lemma \ref{eq_leq_psi} asserts that $\Psi$ can be sandwiched between two positive definite quadratic forms in $e_q$, thereby ensuring that the dynamics of $\Psi$ can be bounded as: \begin{equation} \Psi(q(t),q_d(t))\leq \Psi(q(0),q_d(0))e^{-k_qt} \end{equation} Equation (\ref{eqdot1}) can be written as: \begin{equation} \Omega_1\vec{r}_2-\Omega_2\vec{r}_1=\mathcal{T}_{\mathbb{S}^2}(q,q_d).\dot{q_d}-k_qd_1\Psi_{\mathbb{S}^2} \end{equation}Since $span~\{\vec{r}_1,\vec{r}_2\}=T_q\mathbb{S}^2$, the above equation admits a unique solution for $\Omega_1$ and $\Omega_2$, which is given by: \begin{equation} \Omega_d=\begin{bmatrix} \langle\vec{r}_2,(\mathcal{T}_{\mathbb{S}^2}(q,q_d).\dot{q_d}-k_qd_1\Psi_{\mathbb{S}^2})\rangle\\
\langle-\vec{r}_1,(\mathcal{T}_{\mathbb{S}^2}(q,q_d).\dot{q_d}-k_qd_1\Psi_{\mathbb{S}^2})\rangle \end{bmatrix} \label{wd} \end{equation} We now construct a control law in order to track a commanded reduced attitude trajectory. Define: \begin{eqnarray*} e_{\Omega}&:=&([\Omega_1,\Omega_2]^T-\Omega_d), \\ f_J(\Omega)&:=&\bigg[\frac{(J_2-J_3)}{J_1}\Omega_2\Omega_3,\frac{(J_3-J_1)}{J_2}\Omega_1\Omega_3\bigg]^T \label{definitions_theorem1} \end{eqnarray*} and \begin{equation} U_{\Delta}=\left\lbrace \begin{matrix} \dfrac{e_{\Omega}}{||e_{\Omega}||_2},~ ||e_{\Omega}||_2 >tol \\ \\ \dfrac{e_{\Omega}}{tol},~ ~||e_{\Omega}||_2 \leq tol \end{matrix} \right\rbrace \label{U_delta} \end{equation} where $tol>0$ is a positive constant depending on the slew rate of the torque actuation. \begin{theorem} Given a reference trajectory $q_d(t)$ which is smooth with bounded derivatives, the control law: \begin{eqnarray} U(R,\Omega,t):=&&-\alpha\begin{bmatrix} \langle d_1\Psi_{\mathbb{S}^2},\vec{r}_2\rangle\\ \langle d_1\Psi_{\mathbb{S}^2},-\vec{r}_1\rangle \end{bmatrix}- k_{\Omega}e_{\Omega}-f_J(\Omega)\nonumber \\ \nonumber \\ &&+\dot{\Omega}_d-U_{\Delta}(||\tilde{\Omega}||_2+||\tilde{\Omega}||^2_2)\Delta, \label{U} \end{eqnarray} ensures that $e_q$ and $e_{\Omega}$ exponentially converge to an arbitrarily small open neighborhood of the origin, for all initial conditions in the open-dense sublevel set $\Psi^{-1}[0,2)$ satisfying: \begin{equation} ||e_{\Omega}(0)||_2<2\alpha(2-\Psi(0)). \label{condition theorem1} \end{equation} Further, the sublevel set $\Psi^{-1}[0,2)$ remains invariant for the closed loop flow of (\ref{q_dynamics}) with the feedback law (\ref{U}). \label{theorem1} \end{theorem} \begin{proof} Consider the Lyapunov function: \begin{equation} V_1:=\alpha\Psi+\frac{1}{2}||e_{\Omega}||_2^2. 
\label{v1} \end{equation} Its derivative along the trajectories of (\ref{q_dynamics}) with the control law (\ref{U}) is obtained using (\ref{Psi_dot}), (\ref{wd}), (\ref{definitions_theorem1}), and (\ref{U_delta}), as: \begin{eqnarray} \dot{V}_1=&&-\alpha k_q||e_q||_2^2- k_{\Omega}||e_{\Omega}||_2^2 \\ \nonumber &&+\langle e_{\Omega},d-U_{\Delta}(||\tilde{\Omega}||_2+||\tilde{\Omega}||^2_2)\Delta\rangle \end{eqnarray}When $||e_{\Omega}||_2>tol$, the inner product term can be shown to be negative using (\ref{d_bound}) along with the Cauchy-Schwarz inequality. When $||e_{\Omega}||_2\leq tol$, a straightforward calculation shows that we can bound the inner product term using (\ref{d_bound}) as: \begin{equation} \langle e_{\Omega},d-U_{\Delta}(||\tilde{\Omega}||_2+||\tilde{\Omega}||^2_2)\Delta\rangle~~\leq tol\Delta(||\tilde{\Omega}||_2+||\tilde{\Omega}||^2_2) \end{equation} Further, since $\dot{q}_d$ is bounded, $\Omega_d$ can be uniformly bounded. When $||e_{\Omega}||_2<tol$, one can show that $||\tilde{\Omega}||_2$ is uniformly bounded within a neighborhood of $\Omega_d$, using the triangle inequality. Therefore, by appropriately selecting the value of $tol$, the inner product term can be bounded above by an arbitrarily chosen constant $\epsilon>0$ as: \begin{equation} \langle e_{\Omega},d-U_{\Delta}(||\tilde{\Omega}||_2+||\tilde{\Omega}||^2_2)\Delta\rangle~~\leq \epsilon \end{equation} With this, the derivative of $V_1$ can be bounded as: \begin{equation} \dot{V}_1\leq -\alpha k_q||e_q||_2^2- k_{\Omega}||e_{\Omega}||_2^2+\epsilon \label{v1dotfinal} \end{equation}From (\ref{eq_leq_psi}) we have \begin{equation} -\alpha k_q||e_q||_2^2\leq-\dfrac{\alpha k_q}{2}\Psi \end{equation}therefore, \begin{equation} \dot{V}_1\leq -\beta V_1+\epsilon \end{equation}where $\beta=\min\big(\frac{k_q}{2},k_{\Omega}\big)$.
Further, since $\epsilon$ can be arbitrarily defined, $V_1$ can be guaranteed to be strictly monotonically decreasing in $\Psi^{-1}(\bar{\epsilon},2)$, where $\bar{\epsilon}$ can be made arbitrarily small. In this region, \begin{equation} \alpha \Psi(t)\leq V_1(0)\leq \alpha \Psi(0)+\dfrac{1}{2}||e_{\Omega}(0) ||_2^2 \end{equation}Therefore, applying the condition (\ref{condition theorem1}), we obtain: \begin{equation} \alpha\Psi(t)\leq 2\alpha,~\forall \Psi(0)\in(\bar{\epsilon},2). \end{equation}Since $\bar{\epsilon}$ was arbitrary, the sublevel set $\Psi^{-1}[0,2)$ remains invariant. \end{proof} \textbf{Remark 1:} In this control design, the four tuning parameters are $k_q,~k_{\Omega},~\alpha,~\Delta$. A higher value of $\alpha$ ensures that the control law does not encounter any discontinuities when $\Psi=2$. Such a high gain may be necessary when the initial angular velocity error is large. $k_q$ and $k_{\Omega}$ may be chosen to arbitrarily dictate the rate of exponential tracking. Finally, choosing a higher value of $\Delta$ ensures a tighter bound on the asymptotic tracking error. \textbf{Remark 2:} In case a high gain $\alpha$ is not admissible, a possible solution is to mollify the error function $\Psi$ such that $d_1\Psi$ is continuous at $\Psi=2$. For example, one such mollification is the standard error function $\Psi=1-q_d^Tq$. With the same form of control as in (\ref{U}), the derivative of the Lyapunov function $V_1$ is obtained as in (\ref{v1dotfinal}). However, in this case, $\dot{V}_1$ may vanish when $\Psi=2$ and $e_{\Omega}=0$. The LaSalle-Yoshizawa theorem (\cite{krstic}) can now be applied to conclude that the limit set of the trajectories is $e_q=0,~e_{\Omega}=0$. It can be seen that $e_q$ may vanish when $q=q_d$ or $q=-q_d$. It is then necessary to show that the undesired equilibrium point $q=-q_d$ is locally unstable (at least when $\epsilon\approx 0$). Consider a function $W=2\alpha-V_1$ which vanishes when $q=-q_d$.
From the continuity of $\Psi$, it can be shown that in any arbitrarily small neighborhood of $(q,e_{\Omega})=(-q_d,0)$, there exist points $q$ where $2-\Psi>0$. At such points, when $e_{\Omega}$ is small enough, it can be shown that $W>0$. Further, in an open neighborhood of the undesired equilibrium point (excluding it), $\dot{W}=-\dot{V}_1>0$. Since the complement set of the equilibria is positively invariant, Chetaev's theorem (\cite{khalil}) can be applied to conclude that the undesired equilibria are unstable. Hence, the trajectories of the system converge asymptotically to the stable equilibrium $(\Psi,e_{\Omega})=(0,0)$ for almost all initial conditions. Note however, that such analysis may not be valid when $\epsilon$ is significant, thereby further justifying our choice of error function. \subsection{Position Tracking Controller on $SE(3)$} Let $x_d$ denote a smooth reference trajectory for the position of the center of mass. We assume that $x_d$ and its derivatives are bounded. Let $e_x:=x-x_d$ and $e_v:=\dot{x}-\dot{x}_d$ denote the position and velocity errors. We now design a saturation based feedback law in order to track the position trajectory with bounded thrust. \begin{definition} Given constants $a$ and $b$ such that $0<a\leq b$, a function $\sigma:\mathbb{R}\to\mathbb{R}$ is said to be a smooth linear saturation function with limits $(a,b)$ if it is smooth and satisfies: \begin{enumerate} \item $s\sigma(s)>0,~\forall s\neq 0$ \item $\sigma(s)=s,~\forall |s|\leq a$ \item $|\sigma(s)|\leq b,~\forall s\in \mathbb{R}$ \end{enumerate} \end{definition}It is well known that such smooth saturation functions exist. For example, consider the integral of a smooth function with compact support which is constant within a sub-interval of its support (\cite{shakarchi}). Such a function, when shifted by a constant, satisfies the conditions in the definition. In practice, one can approximate these functions using polynomials.
Let $\sigma_1$ and $\sigma_2$ be two saturation functions with limits $(a_1,b_1)$ and $(a_2,b_2)$ such that \begin{equation} b_1<\frac{a_2}{2}. \label{a1b1} \end{equation} We now define a control law for $\hat{f}$ which is the total vector thrust acting on the rigid body, as follows: \begin{equation} \hat{f}=\bar{\sigma}(e_x,e_v)+f_d \label{fhat} \end{equation} where \begin{equation} f_d=m\ddot{x}_d+mge_3, \end{equation} \begin{equation} \bar{\sigma}(e_x,e_v)=-\begin{bmatrix} &&\sigma_2\bigg(\frac{k_1}{k_2}e_{v_1}+\sigma_1\bigg(k_2me_{v_1}+k_1e_{x_1}\bigg)\bigg ) \\ \\ &&\sigma_2\bigg(\frac{k_1}{k_2}e_{v_2}+\sigma_1\bigg(k_2me_{v_2}+k_1e_{x_2}\bigg)\bigg ) \\ \\ &&\sigma_2\bigg(\frac{k_1}{k_2}e_{v_3}+\sigma_1\bigg(k_2me_{v_3}+k_1e_{x_3}\bigg)\bigg ) \end{bmatrix}, \label{sigma_bar} \end{equation} and $k_1,~k_2$ are positive constants. When $fRe_3=\hat{f}$, it can be established from Theorem 2.1 in \cite{teelglobal} that the tracking errors enter the linear region of the saturation functions in finite time, and remain there thereafter. This would ensure that the origin of the tracking errors is exponentially attractive. The following Lemma will be subsequently used to demonstrate that the tracking errors enter the linear region in finite time, when the reduced attitude error is sufficiently bounded. \begin{lemma} Let $\sigma_1$ and $\sigma_2$ be saturation functions with limits as prescribed in (\ref{a1b1}). Then, the trajectories of the system \begin{eqnarray*} \dot{y}_1&=&y_2 \nonumber \\ m\dot{y}_2&=&-\sigma_2((k_1/k_2)y_2+\sigma_1(k_1y_1+k_2my_2))+\xi(t), \end{eqnarray*}enter the linear region of $\sigma_1$ and $\sigma_2$ in a finite time $t_2$ and remain there thereafter if $|\xi(t)|<min~((a_2/2)-b_1,a_1),~\forall t>0$. \label{lemma_sigma} \end{lemma} \begin{proof} Let $w_1=my_2^2$.
We obtain \begin{equation} \dot{w}_1=2y_2(-\sigma_2((k_1/k_2)y_2+\sigma_1(k_1y_1+k_2my_2))+\xi(t)) \end{equation} When $|y_2|\geq(k_2/k_1)(a_2/2)$, using the bound on $\xi$ and (\ref{a1b1}), we can see that $\dot{w}_1$ is uniformly negative definite. Hence, $\exists t_1>0,~ \ni |y_2(t)|<(k_2/k_1)(a_2/2),~ \forall t>t_1$. Using the bound on $b_1$, we conclude that $\sigma_2$ operates in its linear region after $t_1$. Let $w_2=(k_1y_1+k_2my_2)^2$. When $t>t_1$, its derivative is obtained as \begin{equation} \dot{w}_2=-2k_2(k_1y_1+k_2my_2)(\sigma_1(k_1y_1+k_2my_2)-\xi(t)) \end{equation} From the definition of $\sigma_1$ and the bound on $\xi$, $\dot{w}_2$ is uniformly negative definite when $|k_1y_1+k_2my_2|\geq a_1$. Hence, $\exists t_2>t_1>0,~ \ni |k_1y_1+k_2my_2|<a_1,~ \forall t>t_2$. It can then be concluded that $\sigma_1$ and $\sigma_2$ operate in their respective linear regions after $t_2$. \end{proof} We now define the commanded reduced attitude trajectory as: \begin{equation} q_d=\dfrac{\hat{f}}{||\hat{f} ||_2} \label{qd_command} \end{equation} This is well defined when $||\hat{f} ||_2$ is bounded away from zero. One way to ensure this is to choose a bound on $b_2$ as: \begin{equation*} b_2<\inf\limits_{t>0}\{||f_d(t) ||_{\infty}\}. \end{equation*} The control law for the net thrust $f$ is then chosen as: \begin{equation} f=\langle\hat{f},Re_3\rangle. \label{f} \end{equation} \begin{theorem} Consider the control law for $U$ and $f$ as given in (\ref{U}) and (\ref{f}) such that the condition (\ref{condition theorem1}) is satisfied. Further, define the matrices: \begin{eqnarray} W_1&=&\begin{bmatrix} \frac{ck_x}{m}(1-\sin(\theta_0)) & -\frac{ck_v}{2m}(1+\sin(\theta_0)) \\ -\frac{ck_v}{2m}(1+\sin(\theta_0)) & k_v(1-\sin(\theta_0))-c \end{bmatrix},\nonumber \\\nonumber \\ W_2&=&2\begin{bmatrix} (c/m)||f_d ||_2 & 0 \\ \\ a_1+||f_d ||_2 & 0 \end{bmatrix}.
\label{wmatrices} \end{eqnarray} Given $0<k_x:=k_1$, $0<k_v:=(k_1/k_2)+k_2$, and $\theta_0<\pi/2$, we choose positive constants $c$, $k_q$, $k_{\Omega}$, such that \begin{small} \begin{eqnarray} &&c<min\bigg\{ k_xk_v(1-\sin(\theta_0))^2\bigg(k_x(1-\sin(\theta_0)) \nonumber \\ &&+\frac{k_v^2(1+\sin(\theta_0))^2}{4m} \bigg)^{-1}, k_v(1-\sin(\theta_0)), \sqrt{k_x/m}~\bigg\},~ \nonumber \\ &&min~(\alpha k_q,k_\Omega)>\frac{4||W_2 ||^2}{\lambda_{min}(W_1)}. \label{condition on theorem2} \end{eqnarray} \end{small} Then, the tracking errors $e_x$, $e_v$, $e_q$, $e_{\Omega}$ exponentially converge to an arbitrarily small open neighborhood of the origin, for all initial conditions lying in an open-dense subset. \end{theorem} \begin{proof} Assuming that $\theta<\frac{\pi}{2}$, the dynamics of $e_v$ can be written using (\ref{quad}) as: \begin{equation} m\dot{e}_v=-mge_3-m\ddot{x}_d+\dfrac{f}{q_d^Tq}q_d+\mathcal{E} \end{equation}where $\mathcal{E}\in \mathbb{R}^3$ is defined as \begin{equation} \mathcal{E}=f\bigg(q- \dfrac{q_d}{q_d^Tq} \bigg) \label{E} \end{equation}Further, we can write \begin{equation} \dfrac{f}{q_d^Tq}q_d=\dfrac{||\hat{f}||_2q_d^Tq}{q_d^Tq}\cdot\dfrac{\hat{f}}{||\hat{f}||_2}=\hat{f} \end{equation}Hence, using (\ref{fhat}) we can write: \begin{equation} m\dot{e}_v=\bar{\sigma}(e_x,e_v)+\mathcal{E} \label{evdot} \end{equation} From (\ref{E}), we can bound $\mathcal{E}$ as \begin{equation} ||\mathcal{E}||_2\leq ||\hat{f}||_2||((q_d^Tq)q-q_d)||_2 \end{equation} where the term $||((q_d^Tq)q-q_d)||_2 =|\sin(\theta)|$.
Further, the commanded thrust $\hat{f}$ can be bounded using the saturation limit $b_2$ as \begin{equation} ||\hat{f}||_2\leq\sqrt{3}b_2+\sup\limits_{t>0}\{||f_d(t)||_2\}:=B \end{equation} From Theorem \ref{theorem1} we know that \begin{equation} \exists t_0>0,~ \ni |\sin(\theta)|< min\bigg(\dfrac{\delta}{B},|\sin(\theta_0)|\bigg),~\forall t>t_0, \label{t0 definition} \end{equation} This implies that $||\xi(t)||_{\infty}<\delta,~\forall t>t_0$, where $\delta:=min~((a_2/2)-b_1,a_1)$. Using Lemma \ref{lemma_sigma}, we can then conclude that the error dynamics $e_x$ and $e_v$ operate in the linear region of $\sigma_1$ and $\sigma_2$ after a finite time $t_0+t_2$. Further, if $||\hat{f}||_2$ is bounded away from zero, the reference trajectory $q_d$ is well defined and its derivatives are bounded. Therefore, the trajectories of (\ref{quad}) remain bounded in $(0,t_0+t_2)$. In the linear region, the dynamics of $e_v$ can be written as: \begin{equation} m\dot{e}_v=-k_xe_x-k_ve_v+\mathcal{E} \label{evdotlinear} \end{equation} We choose a Lyapunov function candidate for the translational dynamics as \begin{equation} V_2:=\dfrac{1}{2}k_x||e_x||_2^2+\dfrac{1}{2}m||e_v ||_2^2 +ce_x^Te_v \end{equation}Its derivative along the flow of (\ref{evdotlinear}) is obtained as \begin{equation} \dot{V}_2=-(k_v-c)|| e_v||_2^2-\frac{ck_x}{m}||e_x||_2^2-\frac{ck_v}{m}e_x^Te_v+\mathcal{E}^T\bigg(\frac{c}{m}e_x+e_v\bigg) \label{v2dot1} \end{equation} From (\ref{t0 definition}) we observe that \begin{equation} ||((q_d^Tq)q-q_d)||_2<1,~ \forall t>t_0 \end{equation} Further, from the identity given in the proof of Lemma \ref{eq_leq_psi}, we observe that \begin{eqnarray} &&||((q_d^Tq)q-q_d)||_2\leq 2||e_q||_2 \end{eqnarray} This can be substituted in (\ref{v2dot1}) to obtain: \begin{small} \begin{eqnarray} \dot{V}_2\leq -(k_v(1-\sin(\theta))-c )||e_v||_2^2-\frac{ck_x}{m}(1-\sin(\theta))||e_x||^2_2 \nonumber \\ +\frac{ck_v}{m}(1+\sin(\theta))||e_x||_2||e_v||_2 \label{v2dotfinal} \\
+2||e_q||_2\bigg(k_x||e_x||_2||e_v||_2+\frac{c}{m}||f_d||_2||e_x||_2+||f_d||_2||e_v||_2\bigg)\nonumber \end{eqnarray} \end{small}In the linear region of the saturation functions, we can bound the cubic term in the above equation as: \begin{equation} ||e_q||_2k_x||e_x||_2||e_v||_2\leq a_1||e_q||_2||e_v||_2. \label{cubicbound} \end{equation} Consider a Lyapunov function candidate for the complete dynamics as \begin{equation} V=V_1+V_2 \label{V} \end{equation}where $V_1$ is defined as in (\ref{v1}). Define $z_1=[||e_x||_2,||e_v||_2]^T$ and $z_2=[||e_q||_2,||e_{\Omega}||_2]^T$. We can bound $V$ between two quadratic forms using Lemma \ref{eq_leq_psi} as: \begin{equation} z_1^TM_1z_1+z_2^TM_2z_2\leq V\leq z_1^TM_3z_1+z_2^TM_4z_2 \end{equation}where \begin{equation} \begin{matrix} M_1=\frac{1}{2}\begin{bmatrix} k_x & -c \\ -c & m \end{bmatrix}, ~M_3=\frac{1}{2}\begin{bmatrix} k_x & c \\ c & m \end{bmatrix},\\ \\ M_2=\frac{1}{2}\begin{bmatrix} 2\alpha & 0 \\ 0 & 1 \end{bmatrix},~M_4=\frac{1}{2}\begin{bmatrix} 4\alpha & 0 \\ 0 & 1 \end{bmatrix} \end{matrix} \end{equation}From (\ref{v1dotfinal}), (\ref{v2dotfinal}), (\ref{cubicbound}) and (\ref{V}), the derivative of $V$ can be obtained as: \begin{equation} \dot{V}\leq -Q(e_x,e_v,e_q,e_{\Omega})+\epsilon \end{equation} where \begin{equation} Q(e_x,e_v,e_q,e_{\Omega})=z_1^TW_1z_1-z_1^TW_2z_2+z_2^TW_3z_2 \end{equation} and $W_3=\begin{bmatrix} \alpha k_q & 0 \\ 0 & k_{\Omega} \end{bmatrix}$, $W_1$ and $W_2$ are as given in (\ref{wmatrices}). Using the conditions in (\ref{condition on theorem2}), we observe that $ Q$ is a positive definite quadratic form and $V$ is sandwiched between two positive definite quadratic forms. Hence, after a finite time $t_0+t_2$, the errors $[e_x,~e_v,~e_q,~e_{\Omega}]$ exponentially converge to an arbitrarily small open neighborhood of the origin. 
\end{proof} \textbf{Remark 1:} By using saturated thrust feedback, it was possible to bound the error $\mathcal{E}$ in the translational dynamics, by sufficiently decreasing the reduced attitude error in the initial phase $t<t_0$. This was essential in order to ensure that the position errors decrease into the linear region, in finite time. This also allowed us to bound the cubic term as in (\ref{cubicbound}), which resulted in exponential stability. In \cite{tleeasme}, the authors restrict the stability analysis of the translational dynamics to a domain where $e_x$ is bounded. However, in order to remain within this domain the total system errors need to be further bounded, rendering the overall stability only local. In \cite{tlee}, the authors attempt to bound the velocity error $e_v$. However, such an analysis is valid only when the gain $k_x$ is uniformly zero, failing which there can be no tractable bound on $e_v$. The stability analysis with the proposed control law in this paper is not restricted by any such conditions. \textbf{Remark 2:} The conditions of the theorem dictate that the attitude tracking gains need to be high enough so that the translational errors enter the linear region of the saturation function. The limits chosen for the saturation functions are quite conservative, to ensure that the commanded thrust vector $\hat{f}$ is bounded away from the origin. This ensures that $\dot{q}_d$ and $\ddot{q}_d$ are bounded, thereby bounding the required torque. In practice, however, one may further relax this limit within rotor thrust saturation. \section{Numerical Simulations} Simulations were carried out on a variable pitch quadrotor which is capable of negative thrust, and a conventional quadrotor with positive rotor thrust constraint. Subsequent to rotor failure, the quadrotor was required to track a figure-of-8 trajectory while initially recovering from a downward facing pose.
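Before turning to the specific platforms, the finite-time entry into the linear region established in Lemma \ref{lemma_sigma} can be illustrated with a crude Euler integration of the saturated double integrator. All gains, limits, and the disturbance below are illustrative choices of ours (with hard clips standing in for the smooth saturations, which coincide with them on the relevant ranges), not values from the analysis.

```python
import numpy as np

# Illustrative gains and limits satisfying b1 < a2/2 (the lemma's hypothesis)
m, k1, k2 = 1.0, 1.0, 1.0
a1 = b1 = 0.4                                  # sigma_1 limits
a2 = b2 = 1.0                                  # sigma_2 limits: b1 = 0.4 < a2/2
sat = lambda s, lim: np.clip(s, -lim, lim)     # hard clip standing in for sigma_i

y1, y2, dt = 5.0, 0.0, 1e-3
for step in range(int(100.0 / dt)):            # integrate the error dynamics to t = 100
    xi = 0.05 * np.sin(step * dt)              # |xi| < min(a2/2 - b1, a1) = 0.1
    u = sat(k1 * y1 + k2 * m * y2, b1)         # inner saturation sigma_1
    dy2 = (-sat((k1 / k2) * y2 + u, b2) + xi) / m
    y1, y2 = y1 + dt * y2, y2 + dt * dy2

# after the transient, both saturations operate in their linear regions
in_linear = abs(k1 * y1 + k2 * m * y2) < a1 and abs(y2) < (k2 / k1) * a2 / 2
```

After the saturated transient (roughly the first ten seconds here), the state indeed settles inside the linear region and stays there despite the persistent disturbance.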
\subsection{Variable Pitch Quadrotor} The parameters of the quadrotor chosen for simulation are $m=1kg,~\omega_i=600 ~rad/s$ and $b_L=3.2\times 10^{-6}$, and the inertia matrix as $J=\begin{bmatrix} 0.0972 & 0.0194 & 0.0195 \\ 0.0194 & 0.0974 & 0.0317 \\ 0.0195 & 0.0317 & 0.1584 \end{bmatrix}$. The nominal inertia matrix for control design was chosen as $J_0=diag(0.081,0.0812,0.1320)$. The propeller inertia was chosen as $J_r=5\times 10^{-5}$, and the rotational drag coefficient matrix was chosen as $K_d=diag(0.7,0.7,1.4)\times 10^{-4}$. The control gains were chosen as $k_q=8,~k_{\Omega}=10,~k_x=2,~k_v=3,~\Delta=3\times 10^{-3},~tol=10^{-3}$. The reference position trajectory was chosen as a figure-of-8 curve at constant altitude, i.e., $x_d(t)=2[\sin(2t),\cos(2t),5]^T$. The initial conditions were chosen as $x(0)=[5,5,5]$, $\dot{x}(0)=0$, $\Omega(0)=0$, and an initial orientation as a $140^{\circ}$ rotation about the $x$ axis as: $R(0)=\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(140^{\circ}) & -\sin(140^{\circ})\\ 0 & \sin(140^{\circ}) & \cos(140^{\circ}) \end{bmatrix}$. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{3d_1} \caption{Trajectory tracking after recovering from downward facing pose} \label{fig:3d_1} \end{figure} \begin{figure}[h] \includegraphics[width=0.5\textwidth]{ex_1} \caption{Position error $e_x$ during the maneuver} \label{fig:ex_1} \end{figure} Fig.\ref{fig:3d_1} shows the quadrotor tracking a figure-of-8 reference trajectory after recovering from an inverted pose. Initially, when the reduced attitude error is large, there is a transient deviation from the reference trajectory. This can be seen in the position error plot in Fig.\ref{fig:ex_1}. From this plot, it can also be seen that the tracking errors exponentially decrease and are bounded within an arbitrarily small open ball.
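For $q_d$ in (\ref{qd_command}) to remain well defined, the saturation limit must satisfy $b_2<\inf_{t>0}\|f_d(t)\|_{\infty}$. For the figure-of-8 reference above, this bound can be evaluated directly; the short check below is ours, using the simulation parameters ($m=1$~kg, $g=9.81$~m/s$^2$).

```python
import numpy as np

m, g = 1.0, 9.81
t = np.linspace(0.0, 20.0, 20001)
# x_d(t) = 2[sin(2t), cos(2t), 5]^T  =>  xdd_d(t) = [-8 sin(2t), -8 cos(2t), 0]^T
xdd_d = np.stack([-8.0 * np.sin(2 * t), -8.0 * np.cos(2 * t), np.zeros_like(t)])
f_d = m * xdd_d + np.array([[0.0], [0.0], [m * g]])  # f_d = m*xdd_d + m*g*e3
inf_norms = np.abs(f_d).max(axis=0)                  # ||f_d(t)||_inf on the grid
b2_bound = inf_norms.min()
# the constant vertical component m*g dominates the 8 m/s^2 horizontal
# accelerations, so any b2 below m*g keeps ||f_hat||_2 bounded away from zero
```

Here the infimum equals $mg$, so any $b_2<9.81$ keeps the commanded thrust direction well defined along the whole reference.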
\begin{figure}[h] \includegraphics[width=0.5\textwidth]{psi_1} \caption{Reduced attitude error $\Psi(q,q_d)$ during the maneuver} \label{fig:psi_1} \end{figure} Fig.\ref{fig:psi_1} shows the evolution of the reduced attitude error function $\Psi(q,q_d)$ during the maneuver. It can be seen that $\Psi$ decreases exponentially to an arbitrarily small open ball. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{omega_1} \caption{Angular velocity $\Omega$ in $deg/sec$ during the maneuver} \label{fig:omega_1} \end{figure} Fig.\ref{fig:omega_1} shows the angular velocity about the three body axes during the maneuver. It can be seen that the angular velocity about the body $z$ axis increases rapidly and saturates due to rotational drag. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{thrust_1} \caption{Rotor thrusts $T_1,~T_2,~T_3$ during the maneuver} \label{fig:thrust_1} \end{figure} Fig.\ref{fig:thrust_1} shows the variation of the thrust generated by the three functioning rotors during the maneuver. \subsection{Quadrotor with Positive Thrust Constraint} The control law was simulated on a similar quadrotor where the rotor thrusts were constrained to be strictly positive. It was observed that when $\Omega_3$ was sufficiently high, the control law was successfully able to execute the attitude recovery and tracking maneuver. It was also observed that in case of large initial attitude errors, the tracking failed when the initial angular velocity $\Omega_3(0)$ was low. Similar parameters were chosen, except for a mass $m=3kg$ and initial angular velocity $\Omega_3(0)=2\pi ~rad/s$. 
\begin{figure}[h] \includegraphics[width=0.5\textwidth]{3d_2} \caption{Attitude recovery and position tracking with positive rotor thrust constraint} \label{fig:3d_2} \end{figure} \begin{figure}[h] \includegraphics[width=0.5\textwidth]{ex_2} \caption{Position error with positive rotor thrust constraint} \label{fig:ex_2} \end{figure} Fig.\ref{fig:3d_2} and Fig.\ref{fig:ex_2} show larger transients as compared with Fig.\ref{fig:3d_1} and Fig.\ref{fig:ex_1}, which is due to thrust saturation. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{psi_2} \caption{Reduced attitude error $\Psi(q,q_d)$ with positive rotor thrust constraint} \label{fig:psi_2} \end{figure} Fig.\ref{fig:psi_2} shows fluctuations while stabilizing the reduced attitude, and a persistent error. This is due to the fact that when one rotor fails, the torque generated about one of the horizontal axes is strictly positive, which may lead to actuation error. The fluctuations and asymptotic errors can be further decreased by maintaining a higher $\Omega_3$ as shown in Fig.\ref{fig:omega_2}. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{omega_2} \caption{Angular velocity $\Omega$ in $deg/sec$ with positive rotor thrust constraint} \label{fig:omega_2} \end{figure} \begin{figure}[h] \includegraphics[width=0.5\textwidth]{thrust_2} \caption{Constrained rotor thrusts $T_1,~T_2,~T_3$ } \label{fig:thrust_2} \end{figure} Fig.\ref{fig:thrust_2} shows that the rotor thrusts operate within their constraints, and initially saturate when the attitude error is large. \textbf{Practical Considerations in Conventional Quadrotors:} In conventional quadrotors, the rotor thrusts are constrained to be strictly positive and consequently the torque about one of the horizontal axes (say $U_1$) as well. Due to this, the controller performance can suffer due to large actuation error. 
A possible solution is to design a nominal trajectory $x_d(t)$ accounting for the initial conditions, such that $U_1$ is positive and uniformly bounded away from zero along this trajectory. This is possible considering that the position $x$ is a flat output of the dynamics on $SE(3)/SO(2)$. In \cite{andrea}, the authors discuss various periodic nominal trajectories satisfying the positive torque condition, about which they linearize the dynamics. From the exponential attractiveness of the geometric control law, it can be shown that if the tracking gains are appropriately chosen, the system trajectories will remain close to the nominal trajectory. As discussed in \cite{andrea}, such nominal trajectories require the angular velocity $\Omega_3$ to be significantly high. Post rotor failure, this angular velocity needs to be sufficiently increased before executing the maneuver. It is therefore essential to use high-bandwidth attitude sensors (such as the MPU6050 DMP) and actuators. Further, in conventional quadrotors, the tracking performance is improved if the ratio of the mass to inertia about the body $z$ axis is sufficiently high. This ensures that in the initial phase where the angular velocity is increased, the translation errors do not grow significantly due to parasitic thrust. \section{Concluding Remarks} We proposed a fault-tolerant geometric control law for a quadrotor, subsequent to complete failure of a single rotor. It was demonstrated that unlike existing fault-tolerant control laws, the quadrotor was able to perform aggressive maneuvers such as attitude recovery from an inverted pose and nontrivial trajectory tracking. This was primarily achieved by exploiting the geometric structure of the reduced configuration manifold, and designing a control law free of the singularities which restrict the performance envelope of the UAV.
The back-stepping geometric control law for reduced attitude control also enabled reduced attitude tracking at arbitrarily high rates, which was essential for inhibiting transients and enhancing tracking performance. While implementing the control law on a conventional quadrotor where the rotor thrusts are strictly positive, the angular rate about the body $z$ axis needs to be significantly high when the attitude error is large. Hence, when a fault in one of the rotors is detected, this angular rate needs to be first sufficiently increased before initiating the reduced attitude and position tracking maneuver. \bibliographystyle{IEEEtran}
\section{Introduction} The study of the low-lying eigenmodes of the Dirac operator (LDE), those corresponding to the smallest eigenvalues, has a rich history since these modes are thought to be representative of, if not responsible for, much of the infrared behavior in QCD. Such effects include \begin{itemize} \item Chiral Symmetry Breaking \`a la Banks and Casher where $\langle\bar\psi\psi\rangle \sim \rho_{_\lambda}(0)$ \item The low eigenmodes of $\mathop{\not\!\! D}$ dominate quark propagators \item Confinement in many scenarios is thought to be related to topological excitations: instantons, monopoles, or vortices. These objects all localize Dirac zero-modes in some way. \end{itemize} Thus we are interested in learning what we can about this localization, if indeed it exists, and then characterizing it in some quantitative way. Hopefully this characterization can tell us something about the mechanisms responsible for localization. \section{Inverse Participation Ratio} The Inverse Participation Ratio (IPR) provides a quantitative number which characterizes the localization of a scalar field. For the LDEs, it is defined as $$ I_i = V \sum_x \rho_i^2(x) $$ where $V$ is the number of lattice sites $x$, and $i$ labels which eigenmode is under consideration, $$ \rho_i(x) = \psi_i^\dagger\psi_i(x). $$ $\psi_i(x)$ is the $i$-th lowest eigenvector of the (asqtad) Dirac operator and $$ \sum_x \rho_i(x) = 1 $$ The IPR takes the following values in cases of different localization: \begin{itemize} \item $I = 1$ if $\rho$ is constant, \item $I = 1/f$ if $\rho$ is localized (and constant) on a fraction $f$ of sites, \item $I = V$ if $\rho = \delta_{x,x_0}$. \end{itemize} As was first pointed out in \cite{lat04}, the fraction of points involved in localization should scale with the dimension of the localizing manifold.
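These three limiting values are easy to verify numerically; the sketch below (ours, independent of the lattice code) evaluates the IPR of a normalized density on $V$ sites.

```python
import numpy as np

def ipr(rho):
    """Inverse Participation Ratio I = V * sum_x rho(x)^2 for a density on V sites."""
    rho = np.asarray(rho, dtype=float)
    rho = rho / rho.sum()                 # enforce sum_x rho(x) = 1
    return rho.size * np.sum(rho ** 2)

V = 12 ** 4                               # e.g. a 12^4 lattice
I_const = ipr(np.ones(V))                 # constant density     -> I = 1
delta = np.zeros(V); delta[0] = 1.0
I_delta = ipr(delta)                      # single-site density  -> I = V
f = 1.0 / 8.0
loc = np.zeros(V); loc[: int(f * V)] = 1.0
I_loc = ipr(loc)                          # support on fraction f -> I = 1/f
```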
For example, $$ f_{\rm 1-dim} = \frac{{\cal L}/a}{V/a^4}\qquad{\rm or}\qquad f_{\rm 2-dim} = \frac{{\cal A}/a^2}{V/a^4}, \qquad {\rm etc.} $$ where in the first, one-dimensional, case ${\cal L}$ is the total length of ``localizing material'' and $a$ is the lattice spacing, or in the two-dimensional case, ${\cal A}$ is the area of two-dimensional material, and so on for higher dimensions. The (possibly fractal) dimension, $d$, of the localizing manifold is given by the scaling of the IPR as the lattice spacing is varied, \begin{equation} I = 1/f \sim a^{d-4} \end{equation} In recent years, a growing number of authors have found evidence for localization of either the LDE or other quantities, such as the topological charge density \cite{Horvath}. Since the appearance of \cite{lat04}, other groups have analyzed the scaling of the IPR using improved \cite{Gubarev} and alternative \cite{Greensite} operators, finding similar conclusions. While the scaling of the IPR is a rather clean indication of the underlying localization dimension, its extraction requires a wide range of lattice spacings to get a fit with good statistics. In \cite{lat04} for example, the dimension was given as somewhere between 2 and 3. For best results, scaling measurements should be done at {\em fixed physical volume}, which can then be compared to similar measurements at {\em fixed lattice spacing} to understand finite-size issues. \section{Scaling of the IPR} In \cite{lat04} we presented preliminary results for the scaling of the IPR using quenched lattices (Symanzik 1-loop improved gauge action) ranging from $12^4$ and $a = 0.2$ fm up to $24^4$ and $a \sim 0.1$ fm. However, with better statistics on the finest lattices, we found that the lattice spacing was closer to $a = 0.095$ fm, 5\% from our target of $a = 0.1$ fm. In order to maintain a fixed volume we regenerated this ensemble at the target lattice spacing.
Furthermore, as our lattice spacing is set using $r_1$ from the static quark potential, we use here an updated value of $r_1$ (0.317 fm) taken from \cite{light} ($r_1 = 0.344$ fm was used in \cite{lat04}). This increases all lattice spacings by $\sim$10\%, but has no effect on our results. Additionally, we have tightened the convergence criteria for computing eigenvectors and added a finer $28^4$ ensemble. The lattices and parameters used for the present work are collected in the table below. \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $a$ & $L$ & vol & $\beta$ & no. configs. \\ \hline 0.218 fm~~ & 12 & (2.61)$^4$ fm$^4$ & 7.56 & 100\\ 0.163~~~~ & 16 & (2.61)$^4$ & 7.847 & 100\\ {\em 0.128~~~} & {\em 20} &{\em (2.56)$^4$} & {\em 8.109} & {\em 100}\\ {\tt 0.110~ } & {\tt 24} & {\tt (2.64)$^4$} & {\tt 8.295} & {\tt 100}\\ {\tt 0.0915~ } & {\tt 28} & {\tt (2.56)$^4$} & {\tt 8.527} & {\tt 100}\\ \hline \end{tabular} \end{center} with 64 eigenvectors per lattice. The {\em italicized} ensemble was rerun for better convergence of eigenvectors (with very little change), while ensembles in {\tt teletype} are new. Our main result is summarized in Figure 1, where we plot the scaling of the average IPR using the lowest 8 eigenvectors. Compared with \cite{lat04} we have better statistics on the IPR values and, particularly, the new value at a smaller lattice spacing (0.0915 fm) gives a much more precise scaling dimension for the IPR. \begin{figure}[h] \begin{center} \includegraphics[width=400pt]{newnewdata.eps} \end{center} \caption{The scaling of the IPR for the lowest 8 eigenvectors, in red. In green is the best fit, and associated values. In blue is a fit to constant + constant/$a$ for comparison. } \end{figure} The best fit to the data has $4-d = 0.934\pm 0.149$ (see eq. 2.1), thus the scaling dimension, $d$, of the localization manifold is essentially 3.
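Extracting $d$ from eq.~(2.1) amounts to a least-squares fit of $\log\langle I\rangle$ against $\log a$. The sketch below illustrates the procedure on synthetic IPR values (generated by us to follow $I\propto a^{-1}$, i.e.\ $d=3$, with small noise; they are not the measured averages), using the lattice spacings of the table.

```python
import numpy as np

a = np.array([0.218, 0.163, 0.128, 0.110, 0.0915])     # lattice spacings in fm
rng = np.random.default_rng(1)
# synthetic averages obeying I = C * a^(d-4) with d = 3, plus 1% log-normal noise
I = 0.4 / a * np.exp(0.01 * rng.standard_normal(a.size))
# linear fit in log-log coordinates: the slope estimates d - 4
slope, intercept = np.polyfit(np.log(a), np.log(I), 1)
d_fit = slope + 4.0
```

The fitted slope recovers $d\approx 3$; with real data the uncertainty on the slope is what sets the quoted error on the localization dimension.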
\section{Mobility Edge} In analogy with the mechanism of Anderson localization in condensed matter physics, we can investigate whether our data show a {\em mobility edge}, an energy above which quarks are delocalized and below which they show localization. This feature has been investigated by Golterman and Shamir \cite{GS} and observed in SU(2) \cite{Gubarev} using an overlap Dirac operator with exact zero modes. The signal is a reduction in the IPR (less localization) at some critical value for the eigenvalue--the mobility edge. In our SU(3) data the IPR values are rather low as compared to IPR values in SU(2) studies \cite{Gubarev}, \cite{Greensite} (3-4 vs 5-20). One possible reason for this might be that the localization is due to topology of one of the SU(2) subgroups, while the other two subgroups randomize the eigenvectors. Whatever the reason, this is an interesting clue to understanding the localization. Only our 28$^4$ lattice at $a = 0.0915$ fm shows a weak indication of the mobility edge, shown in Figure 2. Its value in physical units, around 50$-$100 MeV, is consistent with that seen in \cite{Gubarev}. A study of the IPR on various volumes would be required to confirm this. \begin{figure}[h] \begin{center} \includegraphics[width=400pt, bb=0 50 410 302]{mobility28.eps} \caption{$\LL{\rm IPR}\RR$ vs eigenvalue in physical units. The mobility edge is weakly indicated by a decrease in IPR at about 60$-$100 MeV.} \end{center} \end{figure} \section{Two-point Correlations} While the IPR is a good quantitative indicator of localization, it only tells us the fraction of lattice sites where the eigenvector is large. If we rearranged the lattice sites we would obtain the same IPR. The two-point correlator, on the other hand, gives information on the connectedness of the eigenvector, and should drop off as $\sim r^{d-4}$ if the eigenvector is localized uniformly on a $d$-dimensional manifold.
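Numerically, an all-to-all correlator of this kind is cheap to evaluate with FFTs, since $\sum_x\rho(x)\rho(x+r)$ is a periodic autocorrelation (Wiener-Khinchin); the sketch below (ours, on a toy $8^4$ lattice rather than our ensembles) illustrates the two extreme cases.

```python
import numpy as np

def two_point(rho):
    """Periodic autocorrelation C(r) = sum_x rho(x) rho(x+r) via FFT."""
    F = np.fft.fftn(rho)
    return np.fft.ifftn(np.abs(F) ** 2).real

L = 8
V = L ** 4
# density concentrated on a single site: C(0) = 1, C(r) = 0 otherwise
point = np.zeros((L, L, L, L)); point[0, 0, 0, 0] = 1.0
C_point = two_point(point)
# perfectly delocalized density: C(r) = 1/V at every separation
flat = np.full((L, L, L, L), 1.0 / V)
C_flat = two_point(flat)
```

For a density spread uniformly over a $d$-dimensional submanifold, the same routine exhibits the intermediate $\sim r^{d-4}$ falloff quoted above.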
We have computed the ``all-to-all'' correlator $\LL\rho(x)\rho(y)\RR$ and present the average at different spatial separation $|x-y|$ for eigenvalue 0 in Figure 3. These correlators have parity sawtooth behavior familiar in staggered fermion propagators, which lessens as the lattice spacing decreases. Taking the slope of the correlator from the even points, we show a line with $d=3.5$ (a very similar result is achieved for the odd points, which lie slightly below the line). \begin{figure}[h] \begin{center} \includegraphics[width=400pt]{corr_phys.eps} \caption{The two-point correlator of the lowest eigenvector, $\sum_x \LL\rho(x)\rho(x+r)\RR$ as a function of distance $r$. Plots (except a=0.0915) are displaced upward for clarity. The axes are log-log.} \end{center} \end{figure} We see that in physical units, the correlation disappears by about 1 fermi, for all of our lattices. \section{Conclusions} We have extended our study of the IPR of the lowest eigenvectors of the asqtad Dirac operator computed on quenched background gauge fields. A number of conclusions are apparent. \begin{itemize} \item The IPR scaling $\sim 1/a$ implies a dimension $d = 3$ for the localizing manifold. The signal is quite a bit clearer than in our previous study. \item We see a weak mobility edge in our finest lattices (only) at 50$\sim$100 MeV. This is consistent with other work. The fact that our signal is weak is likely attributable to the lack of exact zero modes of our Dirac operator and possibly to the larger gauge group (SU(3) vs SU(2)). \item The two-point correlator $\LL\rho(x)\rho(x+r)\RR$ suggests a fractal dimension of $\sim 3.5$. It should be noted that we have not finished computing this quantity on all of our largest lattices (Figure 3 represents only about 8 of our 28$^4$ dataset; however, the correlator shows little fluctuation from lattice to lattice). In physical units the correlation disappears at $\sim$1 fm.
\end{itemize} Our findings do not support the naive picture where the low-lying Dirac eigenmodes are localized on monopoles ($d$=1) or vortices ($d$=2). The relationship between topological excitations and eigenvector localization is more subtle. For instance, a set of center vortices gives eigenvectors localized weakly on the vortices themselves, and more strongly on their intersections \cite{Reinhardt}.
\section{Introduction} \label{Sec-intorduction} Intense laser pulses have been long used for controlling molecular rotation (for reviews on this broad topic, see Refs.~\citenum{Stapelfeldt2003,Ohshima2010,Fleischer2012}). The control mechanism is based on the interaction of the electric field of an optical wave with the induced electric dipole of the molecule. The ensuing torque, exerted by the field on the molecule, affects the molecular rotational dynamics \cite{Zon1975, Seideman1995, Friedrich1995, Vrakking1997, Dooley2003, Underwood2005}. The ability to steer these dynamics towards the desired direction and frequency of rotation is key to numerous applications in molecular science \cite{KremsBook}. When executed with an ultrashort laser pulse (the so-called laser ``kick''), the strength of the instantaneous light-induced torque is a function of the initial orientation of the molecular frame with respect to the polarization of the field. Random orientation angles of molecular axes in a thermal ensemble result in a broad distribution of torque magnitudes and correspondingly broad distribution of final rotational states after a single kick \cite{Averbukh2001, Milner2016a}. Hence, despite the difference in molecular properties, a single non-resonant pulse typically generates overlapping rotational distributions in different molecular species with limited selectivity between them. To achieve molecular selectivity, sequences of two or three properly timed laser pulses have been used \cite{Renard2004, Lee2006, Fleischer2006, Fleischer2007}. The desired selectivity stems from the quantum interference between the rotational wave packets created by multiple kicks, which allows one to increase the amplitude of one quantum state, while decreasing the other. 
Though successful in selective rotational excitation of molecular isotopes \cite{Fleischer2006} or spin isomers \cite{Fleischer2007} in a mixture, the method does not offer a means of controlling the target rotational states in the excited molecules. Increasing the number of pulses promised to add the state selectivity \cite{Zhdanovich2012}, but has not proved practical due to the technical challenges of creating long pulse trains on a picosecond time scale \cite{Bitter2016b}. \begin{figure*}[t] \includegraphics[width=2\columnwidth]{Setup.pdf} \caption{Illustration of the main principle behind shaping the field of an optical centrifuge with the ``truncation'' and ``piercing'' techniques. (\textbf{a}) Optical setup of the input segment of the centrifuge shaper. G: diffraction grating, L: lens, M: mirror, P: truncating prism, FP: Fourier plane. The inset shows the use of optical fibers (two grey circles) for blocking a narrow segment in the centrifuge spectrum. The location ($d_x, d_y, d_z$ - see the coordinate axes at the upper right corner) and width ($w$) of the fibers determine the properties of the created frequency notch. (\textbf{b}) \textit{Optical} spectrum of the shaped centrifuge, showing both the truncation of the red arm (red shaded area) at $\lambda_t=805$~nm, and the piercing of the blue arm (blue shaded area) at $\lambda_p=783$~nm. (\textbf{c}) Frequency notch in the \textit{rotational} spectrum of the pierced centrifuge (see text for the definition of the term). $f_p$ and $\Delta f_p$ describe the rotational frequency window, within which the centrifuge intensity is reduced by a factor $(1-a)$, where $a$ is the piercing depth. (\textbf{d}) Calculated polarization profile of the pierced centrifuged. The vector of linear polarization $\vec{E}(t)$ undergoes accelerated rotation around the wave vector $\vec{k}$. The lower-amplitude ``neck'' in the middle of the pulse is a result of the spectral piercing. 
} \label{Fig-shaper} \end{figure*} A powerful method of controlling the target rotational state of a molecule emerged in 1999, when an ``optical centrifuge'' (OC) was theoretically proposed \cite{Karczmarek1999} and later experimentally demonstrated \cite{Villeneuve2000}. An optical centrifuge is a laser pulse whose polarization vector undergoes an accelerated rotation around the propagation direction. An interaction of the induced dipole moment with such a field forces molecules to follow the polarization vector, much like an object placed inside a mechanical centrifuge follows its rotary motion. Quantum mechanically, the process can be described as a series of adiabatic transitions, through which a molecule climbs the ladder of rotational energy levels \cite{Spanner2001, Spanner2001a, Armon2017}. In contrast to the non-adiabatic kicks, the OC is capable of creating a narrow rotational wave packet centered at a well-defined target rotational state \cite{Korobenko2014a}. To date, optical centrifuges have entered numerous studies (for a recent review, see Ref.~\citenum{MacPhail2020}). Yet despite the analogy implied by its name, the optical centrifuge has never been used to select molecules according to their physical properties and control their rotation separately (similarly to the selectivity of its mechanical counterpart, based on the material density). Rather, the centrifuge has always defined a single rotational frequency in any given experiment. Investigations of the effects of rotational excitation on the collisional properties \cite{Yuan2011, Toro2013, Korobenko2014a, Milner2014a, Murray2018}, gyroscopic dynamics \cite{Milner2015c, Murray2017} or chemical reactions \cite{Ogden2019} have all been limited to having either both collision partners rotating with the same frequency, or one of them being a randomly rotating molecule at equilibrium with the thermal ensemble.
This rather limiting constraint stems from the binary ``all or nothing'' selective power of the centrifuge: it either overcomes the thermal motion of the molecules and spins them up, or leaves them behind if the interaction potential is too weak or the angular acceleration is too fast. In this work, we demonstrate a method of adding molecular selectivity to the rotational control with a \textit{shaped optical centrifuge}. By modifying the spectrum of the centrifuge, we make it spin different molecular species in a gas mixture to different angular frequencies. Both the separation and the controllability of target frequencies are accomplished by removing certain segments from the OC spectrum (hereafter referred to as spectral ``piercing'') according to the rotational spectra of the molecules. By blocking a resonant Raman line in the rotational ladder of one molecule, we force it to stop climbing the ladder midway. At the same time, another molecule keeps climbing to higher frequencies, as long as the missing spectral component does not belong to the rotational ladder of that molecule. \begin{figure*}[t] \includegraphics[width=1.95\columnwidth]{Piercing.pdf} \caption{Numerically calculated characteristics of the frequency notch, applied to the rotational spectrum of the centrifuge by means of the spectral piercing method and depicted in Fig.~\ref{Fig-shaper}(\textbf{c}). (\textbf{a}) Frequency of the OC rotation, blocked by piercing and defined by the position of the piercing element in the Fourier plane of the shaper, $d_y$. (\textbf{b}) Shifting the piercing element along the two other directions, $d_z$ (at $d_x=0$, solid line) or $d_x$ (at $d_z>\SI{20}{\micro\meter}$, dashed line), controls the depth of the frequency notch $a$.
(\textbf{c}) The thickness of the piercing element $w$ defines the width of the frequency notch, $\Delta f_p$ (dashed line), but also affects its depth $a$ (solid line).} \label{Fig-piercing} \end{figure*} \section{Spectral shaping of an optical centrifuge} \label{Sec-shaper} To create the field of an optical centrifuge, the spectrum of a broadband laser pulse is split into two equal parts (hereafter referred to as ``centrifuge arms'') using a Fourier pulse shaper \cite{Villeneuve2000}. The two arms are frequency chirped with opposite signs, circularly polarized with opposite handedness, and overlapped in space and time. Interference of the two circularly polarized laser fields results in a rotating linear polarization, whereas the increasing frequency difference between the two centrifuge arms due to the opposite frequency chirps makes the latter rotation accelerate with time (for specific sets of parameters, see Ref.~\citenum{MacPhail2020}). We use a regenerative Ti:Sapphire amplifier, which produces laser pulses with 10~mJ energy per pulse and 35~fs pulse length at a central wavelength of 790~nm and a repetition rate of 1~kHz. Fig.~\ref{Fig-shaper}(\textbf{a}) schematically depicts the input segment of our centrifuge shaper (for the full optical layout, which replicates the original design of Villeneuve \textit{et al.} \cite{Villeneuve2000}, see for example Fig.~3 in \cite{MacPhail2020}). Mirror M splits the spectrum of the centrifuge in the Fourier plane (FP) of Lens L. Using the standard combination of lenses and diffraction gratings (not shown), a negative frequency chirp is applied to the reflected red arm, whereas the transmitted blue arm is chirped positively. In addition to the frequency chirping, introduced by the conventional OC shaper, we modify the centrifuge spectrum as follows.
An array of glass fibers is introduced in the Fourier plane in order to block (by means of light scattering) a small part of the blue arm's spectrum, as shown in the inset to Fig.~\ref{Fig-shaper}(\textbf{a}). Glass fibers of optical quality are chosen for the purpose of withstanding high light intensities, with up to 10 fibers (125~$\mu$m diameter each) grouped together in a tight side-by-side geometry. The spectrum of the red arm is truncated by a glass prism, positioned close to the Fourier plane and blocking the red spectral tail of the centrifuge (again, by scattering that light out of the shaper). The effects of both the piercing of the blue arm and the truncation of the red arm are demonstrated in the OC spectrum plotted in Fig.~\ref{Fig-shaper}(\textbf{b}). To understand the effect of these modifications in the \textit{optical} spectrum of the centrifuge field, we recall the expression for the two centrifuge arms in the time domain \cite{MacPhail2020}: \begin{equation}\label{Eq-onePhoton} \vec{E}_\pm(t)=\frac{E_0(t)}{2} \hat{\epsilon}_\pm e^{-i(\omega_0 \pm \beta t)t}, \end{equation} \noindent where $E_0(t)$ and $\omega _0$ are the field envelope and the central frequency of the OC pulse, $\hat{\epsilon}_\pm$ are the unit vectors of two circular polarizations [with `+' (`-') being the polarization of the blue (red) arm], and $\beta $ is the angular acceleration of the centrifuge. The instantaneous optical frequencies of these fields are $\omega (t) = \omega _0\pm 2\beta t$. Hence, piercing the spectrum of the blue arm $\vec{E}_+ (t)$ at frequency $\omega _p$ corresponds to attenuating the field at time $t_{p} = (\omega _{p}-\omega _0)/2\beta $. Similarly, truncating the red arm at $\omega _t$ results in its termination at time $t_{t} = (\omega _0 -\omega _t)/2\beta $.
This is illustrated in Fig.~\ref{Fig-shaper}(\textbf{d}), which shows the numerically calculated centrifuge pulse, propagating in the direction $\vec{k}$, with its vector of linear polarization $\vec{E}(t)$ rotating at an accelerated rate around the propagation axis. Piercing of the blue arm results in a temporary drop of the field amplitude in the middle of the OC pulse, whereas the truncation of the red arm stops the accelerated rotation of the polarization vector. Note that the circularly polarized fields of both the non-pierced red arm in the middle section, and the non-truncated blue arm at the end of the pulse, are not shown in Fig.~\ref{Fig-shaper}(\textbf{d}) for clarity. Being far from any electronic, vibrational and rotational resonances, this light has no significant effect on the molecular dynamics. The rotational ladder climbing is a sequence of two-photon stimulated Raman transitions, during which a molecule absorbs one photon from the blue arm and emits one photon into the red arm. The process is therefore governed by the two-photon difference-frequency-generation (DFG) spectrum of the OC (hereafter referred to as the ``rotational'' spectrum of the centrifuge). The two-photon field in the time domain is a product of the two centrifuge arms: \begin{equation}\label{Eq-twoPhoton} E^{(2)}_\text{DFG}(t):= \vec{E}_+(t) \cdot \vec{E}^*_-(t) = \frac{E_0^2(t)}{2} e^{-i 2\beta t^2}. \end{equation} \noindent The instantaneous frequency of this field is $\Omega (t) = 4\beta t$. By substituting $t_{p,t}$ from the previous paragraph, one finds that piercing (truncating) the centrifuge spectrum at $\omega _p$ ($\omega _t$) results in the attenuation (termination) of the rotational ladder climbing at $\Omega _{p,t}=\left| 2(\omega _{p,t} - \omega _0) \right|$. An example of the spectral notch in the two-photon spectrum of the pierced centrifuge is shown in panel (\textbf{c}) of Fig.~\ref{Fig-shaper}.
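As a sanity check on these relations, the mapping from a pierced optical wavelength to the attenuation time $t_p$ and to the position of the notch in the two-photon spectrum can be sketched in a few lines of Python. The central wavelength of 790~nm is taken from the text; the angular acceleration $\beta=0.3$~rad/ps$^2$ is the value quoted later for the oxygen experiments and is used here only as an illustrative number, not a fit to any particular data set.

```python
import math

C = 2.998e8  # speed of light, m/s

def notch_from_wavelength(lambda_p_nm, lambda_0_nm=790.0, beta_rad_per_ps2=0.3):
    """Map a pierced optical wavelength of the blue arm to the attenuation time
    and the corresponding notch in the two-photon (Raman) spectrum.

    Uses t_p = (w_p - w_0) / (2*beta) and Omega_p = |2*(w_p - w_0)| from the
    relations in the text; beta is an assumed example value.
    Returns (t_p in ps, Raman notch frequency in THz, rotation frequency in THz).
    """
    w0 = 2 * math.pi * C / (lambda_0_nm * 1e-9)   # rad/s
    wp = 2 * math.pi * C / (lambda_p_nm * 1e-9)   # rad/s
    dw = wp - w0                                   # rad/s
    beta = beta_rad_per_ps2 * 1e24                 # rad/s^2
    t_p = dw / (2 * beta)                          # s
    f_raman = abs(2 * dw) / (2 * math.pi)          # Hz, notch in the DFG spectrum
    f_rot = f_raman / 2                            # Hz, molecular rotation frequency
    return t_p * 1e12, f_raman * 1e-12, f_rot * 1e-12
```

For $\lambda_p=783$~nm, for instance, this gives a notch near 6.8~THz in the two-photon spectrum, i.e. about 3.4~THz of molecular rotation.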
The notch is characterized by the central frequency $f_p$, width $\Delta f_p$ (full width at half maximum), and depth $a$. The relationship between these quantities and the position/dimensions of the piercing element $d_x, d_y, d_z$ and $w$, indicated in the inset of Fig.~\ref{Fig-shaper}(\textbf{a}), can be found by using the known rules of Fourier optics \cite{GoodmanBook}. Numerically calculated $f_p, \Delta f_p$ and $a$ for the parameters of our centrifuge shaper (groove density of 1500 mm$^{-1}$ and focal length of 20~cm for the diffraction grating G and the focusing lens L, respectively) are shown in Fig.~\ref{Fig-piercing}. Panel (\textbf{a}) of Fig.~\ref{Fig-piercing} shows the linear transformation of the rotational centrifuge spectrum to the spatial distribution in the Fourier plane, similar to any standard `$4f$' pulse shaper \cite{Weiner2000}. By moving the piercing element on the scale of a few millimeters, the notch can be introduced at any Raman frequency between 0 and 10~THz. Note that the same frequency-to-space conversion of $\approx 1.5$~THz/mm applies to the truncation of the OC spectrum with prism P, as discussed above. Fig.~\ref{Fig-piercing}(\textbf{b}) illustrates the two possible mechanisms for controlling the piercing depth $a$. It can be executed by either moving the piercing element (here, a glass fiber of $\SI{125}{\micro\meter}$ diameter) in a perpendicular direction with respect to the dispersion plane of the centrifuge shaper ($d_z$), or by moving it away from the Fourier plane ($d_x$). In the former case, the length scale is defined by the diffraction-limited beam radius of a monochromatic beam, $w_0=\SI{15}{\micro\meter}$ at $d_x=0$. Increasing $d_x$ on the scale of the Rayleigh length, $x_R=\SI{0.4}{\milli\meter}$, with the fiber fully inserted along the z-axis ($d_z > w_0$), governs the second mechanism of attenuating a particular range of Raman frequencies. 
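The two depth-control mechanisms above can be illustrated with a toy model that is our own simplification, not the full Fourier-optics calculation behind Fig.~\ref{Fig-piercing}: an opaque edge inserted to height $d_z$ blocks the corresponding fraction of a 1-D Gaussian intensity profile, and moving the element away from the Fourier plane by $d_x$ enlarges the beam according to the standard Gaussian-beam law. The waist $w_0=\SI{15}{\micro\meter}$ and Rayleigh length $x_R=\SI{0.4}{\milli\meter}$ are taken from the text.

```python
import math

def beam_radius(w0_um, dx_mm, rayleigh_mm):
    """1/e^2 intensity radius of a Gaussian beam a distance dx from its waist."""
    return w0_um * math.sqrt(1.0 + (dx_mm / rayleigh_mm) ** 2)

def piercing_depth(dz_um, dx_mm=0.0, w0_um=15.0, rayleigh_mm=0.4):
    """Fraction of power blocked (notch depth a) when an opaque edge is inserted
    up to height dz above the beam center.

    Toy 1-D Gaussian-overlap model: the blocked fraction is the cumulative
    integral of exp(-2 z^2 / w^2) up to dz, i.e. 0.5 * (1 + erf(sqrt(2) dz / w)).
    """
    w = beam_radius(w0_um, dx_mm, rayleigh_mm)
    return 0.5 * (1.0 + math.erf(math.sqrt(2.0) * dz_um / w))
```

In this sketch, moving the edge through the beam along $d_z$ sweeps the depth from 0 to 1, while increasing $d_x$ at a fixed partial insertion lowers the depth because the beam is larger there, qualitatively reproducing the two trends of Fig.~\ref{Fig-piercing}(\textbf{b}).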
The latter approach may be useful for controlling the steepness of the frequency notch, e.g. for increasing the adiabaticity of the molecule-centrifuge interaction at the piercing frequency. The ability to control the width of the frequency notch in the rotational spectrum of the OC by varying the thickness of the piercing element ($w$) is demonstrated in panel (\textbf{c}) of Fig.~\ref{Fig-piercing}. As expected, the notch width grows linearly with $w$, as long as the latter is larger than $\SI{20}{\micro\meter}$, which is defined by the diffraction limit along the $\hat{y}$-axis. The diffraction limit also determines the narrowest piercing, which in the case of our shaper amounts to $\Delta f_p= \SI{18}{\giga\hertz}$. Note, however, that approaching that minimum comes at the cost of incomplete attenuation, $a<1$. An incomplete attenuation may also stem from a small leakage of light through the piercing element. Glass fibers, for example, withstand high intensities in the Fourier plane, but limit the piercing depth to $a\lesssim 90\%$. This constraint is not included in Fig.~\ref{Fig-piercing}, but was encountered in our experiments discussed below. Our experimental setup for the state-selective excitation of molecules to high rotational states (also known as ``super-rotor'' states) has been described in previous publications and summarized in Ref.~\citenum{MacPhail2020}. Briefly, the centrifuge pulses are focused in the cell filled with a gas of interest at room temperature and variable pressure. A positive lens with a focal length of 10~cm provides a centrifuged region about 1~mm long and peak intensities of up to $5\times 10^{12}$~W/cm$^{2}$. Higher intensities are avoided due to the detrimental strong-field effects, such as multi-photon ionization and filamentation. For the state-resolved detection of optically centrifuged molecules, we use polarization-sensitive rotational Raman spectroscopy.
Each centrifuge pulse is followed by a weak circularly polarized probe pulse, derived from the same laser system and spectrally narrowed down to the bandwidth of $0.1$~nm (pulse length of $\sim3$~ps). Frequency doubling of probe pulses shifts their central wavelength to 395~nm, which allows an easy separation from the centrifuge beam. Coherent forward scattering of the probe light by an ensemble of centrifuged molecules results in a rotational Raman shift, whose magnitude is equal to twice the rotational frequency, whereas the sign indicates the direction of molecular rotation with respect to the circular probe polarization \cite{Korech2013, Korobenko2014a}. \begin{figure}[t] \includegraphics[width=0.83\columnwidth]{O2.pdf} \caption{Illustration of the main concept of spectral ``piercing'' of an optical centrifuge. Raman signals in a gas of \otwo{} molecules at room temperature and pressure of 36~kPa. From top to bottom, the spectra were obtained with (\textbf{a}) a centrifuge truncated at 6~THz; (\textbf{b}) a hard pierced centrifuge for stopping the rotational acceleration around 4~THz; (\textbf{c}) a soft pierced truncated centrifuge for letting the molecules accelerate to 6~THz; (\textbf{d}) a medium pierced truncated centrifuge for splitting the ensemble between the two rotational frequencies of 3.5 and 6~THz. The strongest line in each rotational wave packet is labeled with the corresponding value of the rotational quantum number $J$. Colored rectangles indicate the position, width and depth of the spectral notch in the pierced centrifuge spectrum. The insets illustrate a few consecutive Raman transitions between the rotational energy levels (horizontal lines) near the truncation frequency (\textbf{a}) and the frequency of the notch (\textbf{b-d}). 
Grey horizontal lines represent (almost) empty levels, while grey vertical arrows denote weaker transitions due to the lower field amplitudes.} \label{Fig-o2} \end{figure} \section{Control of molecular rotation with a pierced centrifuge} \label{Sec-o2} Figure~\ref{Fig-o2}(\textbf{a}) shows an example of the Raman spectrum of oxygen gas, obtained with an unshaped optical centrifuge. The set of discrete Raman lines around 404.5~nm indicates a coherent rotational wave packet, which corresponds to the oxygen super-rotors spinning at a frequency of $\approx 6$~THz in the same direction as the circular polarization of the probe (hence, the frequency down-shifted Stokes lines). The wave packet consists mostly of three eigenstates with the rotational quantum numbers $J=71, 73$ and 75 \footnote{Note that in the case of \otwo{} molecules, $J$ should be understood as an average value between the three unresolved spin-rotational components $J=\{N,N\pm 1\}$, where $N$ is the nuclear rotation quantum number.}. As we have demonstrated in a number of our previous works (e.g., see Ref.~\citenum{Milner2016a}), truncating the centrifuge spectrum, as illustrated in Fig.~\ref{Fig-shaper}, enables us to control the target rotational state of the super-rotors. For a given angular acceleration of the OC (here, 0.3~rad/ps$^{2}$), the rotational state of the centrifuged molecules is determined by the time of their interaction with the centrifuge field. The implemented spectral truncation shortens the duration of the pulse and, therefore, the corresponding interaction time, thus lowering the molecular final rotational frequency. In the language of consecutive Raman transitions, the rotational ladder climbing starts from the initial state and continues uninterrupted until the highest possible step, dictated by the truncated edge of the OC spectrum, as depicted in the inset to panel (\textbf{a}) in Fig.~\ref{Fig-o2}. 
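The correspondence between rotational quantum numbers and frequencies in the centrifuge ladder can be estimated with a simple rigid-rotor sketch. The rotational constant of \otwo{} used below ($B\approx1.44$~cm$^{-1}$) is an approximate literature value supplied by us, and centrifugal distortion as well as spin-rotation structure are neglected.

```python
C_CM_GHZ = 29.9792458  # conversion: 1 cm^-1 corresponds to ~29.98 GHz

B_O2 = 1.4377  # cm^-1, approximate rigid-rotor constant of 16O2 (assumed value)

def raman_freq_thz(J, B=B_O2):
    """Two-photon (Raman) frequency of the J -> J+2 ladder step of a rigid
    rotor, nu = B * (4J + 6) in cm^-1, converted to THz."""
    return B * (4 * J + 6) * C_CM_GHZ * 1e-3

def steps_in_notch(f_lo_thz, f_hi_thz, B=B_O2, J_values=range(1, 101, 2)):
    """Ladder steps J -> J+2 whose Raman line falls inside a notch in the
    two-photon spectrum. 16O2 populates only odd rotational quantum numbers,
    hence the default range."""
    return [J for J in J_values if f_lo_thz <= raman_freq_thz(J, B) <= f_hi_thz]
```

For example, the $J=47\rightarrow49$ step lies near 8.4~THz in the two-photon spectrum (about 4.2~THz of molecular rotation), and a notch spanning roughly 8.1--8.8~THz covers exactly the $J=47\rightarrow49$ and $J=49\rightarrow51$ steps.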
Fig.~\ref{Fig-o2}(\textbf{b}) illustrates an alternative way of rotational control, based on stopping the rotational acceleration of molecules prematurely, i.e. before the end of the centrifuge pulse. Piercing the centrifuge spectrum with a relatively deep notch filter, which blocks two consecutive Raman transitions, interrupts the rotational excitation for just enough time to make it stall. In the example of Fig.~\ref{Fig-o2}(\textbf{b}), the notch (schematically shown with a blue rectangle) takes out two steps from the rotational ladder, $J=47\rightarrow J=49$ and $J=49\rightarrow J=51$ (note missing Raman transitions in the inset). With those two steps absent, the molecules pile up at the rotational states with $J=47, 49$. This approach may be advantageous in those cases when it is important to maintain constant pulse energy while controlling the target rotational frequency. In comparison to the spectral truncation described above and typically leading to large energy changes (often exceeding 50\%), spectral piercing reduces the pulse energy by as little as 2\%. An example of utilizing this property of the pierced centrifuge can be found in our recent work on the detection of the mechanical Faraday effect in gaseous media \cite{Milner2021a}. In contrast to the blocking action of the spectral notch covering two Raman transitions, a narrower hole in the spectrum may have little effect on the centrifuge excitation if it is placed between two consecutive Raman lines. An example is shown in Fig.~\ref{Fig-o2}(\textbf{c}), where the spectrum is pierced between the transitions $J=29\rightarrow J=31$ and $J=31\rightarrow J=33$. Owing to the limited resolution of our pulse shaper, the notch is not sufficiently narrow to let all oxygen molecules pass through without causing some of them to fall out of the centrifuge. This is reflected by a small Raman signal originating from the two states with $J=29, 31$. 
Yet the majority of the molecules, caught by the centrifuge, continue their rotational acceleration and reach the same target states around $J=75$ as in the case of a non-pierced centrifuge. Both the amplitude and the shape of the final rotational wave packet are hardly changed by the piercing procedure. By partially overlapping the spectral notch in the centrifuge spectrum with one or two Raman transitions, one can split the rotational wave packet between two central frequencies in a controlled way. This is illustrated in Fig.~\ref{Fig-o2}(\textbf{d}), where piercing the spectrum so as to partially cover the $J=39\rightarrow J=41$ and $J=41\rightarrow J=43$ Raman lines, results in two equally-populated wave packets. We note that the relative amplitudes of the two wave packets are fully controllable by varying the depth of the spectral notch (here, adjusted for a $\approx$~50/50 split). Together with the freedom in choosing both the piercing and the truncation wavelengths, this gives us complete rotational control over the two groups of molecules. \section{Selective spinning of molecules in mixtures} \label{Sec-2molecules} As one can see from the described examples in Fig.~\ref{Fig-o2}, the effect of the centrifuge piercing on a particular molecule depends on the relative position of the hole in the OC spectrum with respect to the Raman resonances of that molecule. This suggests that a centrifuge may be pierced in such a way as to stop the acceleration of one molecular species at a lower frequency, defined by the location of the spectral notch, while spinning the other one higher up. Fig.~\ref{Fig-2molecules}(\textbf{a}) shows an example of applying the pierced centrifuge separately to the gas of OCS and \otwo{} molecules. Due to the much higher moment of inertia, the rotational spectrum of carbonyl sulfide is seven times denser than oxygen's spectrum. 
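This roughly seven-fold difference in spectral density can be quantified with a rigid-rotor estimate; the rotational constants below are approximate literature values (our assumption, not data from the text), and centrifugal distortion is ignored. For \otwo{}, only odd rotational levels exist, so consecutive ladder steps are $8B$ apart, whereas OCS climbs through every $J$ and its steps are $4B$ apart.

```python
C_CM_GHZ = 29.9792458  # conversion: 1 cm^-1 corresponds to ~29.98 GHz

# Approximate ground-state rotational constants, cm^-1 (assumed values)
B = {"O2": 1.4377, "OCS": 0.2029}

def raman_line_spacing_ghz(molecule):
    """Spacing between adjacent J -> J+2 Raman lines of a linear rotor, in GHz.
    16O2 populates only odd N, so its ladder steps are 8B apart; OCS uses
    every J, giving a 4B spacing."""
    step = 8 if molecule == "O2" else 4
    return step * B[molecule] * C_CM_GHZ

def lines_in_notch(molecule, notch_width_ghz):
    """Rough number of Raman ladder steps covered by a notch of the given width."""
    return notch_width_ghz / raman_line_spacing_ghz(molecule)
```

In this estimate, a 125-GHz-wide notch covers about five OCS ladder steps but well under one \otwo{} step, which is why the same notch is ``impassable'' for carbonyl sulfide yet nearly transparent for oxygen.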
This means that even the narrowest piercing available with our pulse shaper will cover quite a few Raman transitions in the OCS spectrum, making the notch largely ``impassable''. The top red curve in Fig.~\ref{Fig-2molecules}(\textbf{a}) confirms that the majority of carbonyl sulfide molecules ended their angular acceleration at the location of the spectral notch around 2.5~THz, which corresponds to the most populated rotational state with $J_\text{OCS}\approx216$. The bottom blue curve in Fig.~\ref{Fig-2molecules}(\textbf{a}) shows that the very same centrifuge, which ``dropped'' OCS around 2.5~THz, spins the majority of oxygen molecules to the high super-rotor states centered at 6.2~THz (or the rotational quantum number $J_{\text{O}_2}=75$). As illustrated in Fig.~\ref{Fig-o2}(\textbf{c}), the high rotational excitation of \otwo{} is accomplished by piercing the centrifuge not too deep and between two Raman transitions, thus making it less disruptive for the rotational ladder climbing. A small leakage of OCS towards higher rotational frequencies, and a small amount of \otwo{} lost at lower frequencies, indicate the incomplete rotational selectivity afforded by the presented piercing technique. The limitation stems from the simplicity of the employed spectral shape, and will be discussed in Sec.~\ref{Sec-summary}. Our numerical simulations of the molecular spinning in the pierced OC qualitatively reproduce the experimental observations. The simulations are based on solving the system of classical coupled Euler equations and quaternion equations of motion \cite{Tutunnikov2018}. The results are plotted with dashed lines in Fig.~\ref{Fig-2molecules}(\textbf{a}). Interestingly, the numerically obtained contrast in the rotational selectivity between \otwo{} and OCS is considerably smaller than what we achieve in the experiment. 
The reason for the disagreement is the inability of the classical model to account for the discreteness of the rotational spectrum of a quantum rotor, which lies at the heart of our method. \begin{figure}[t] \includegraphics[width=0.99\columnwidth]{Mixtures.pdf} \caption{Selective rotational excitation by a pierced optical centrifuge in (\textbf{a}) a gas sample of either OCS or \otwo{} molecules at room temperature and pressure of 10~kPa and 36~kPa, respectively, and (\textbf{b}) a mixture of OCS with \ntwo{} gases at room temperature and partial pressure of 7~kPa and 30~kPa, respectively. All lines represent rotational Raman spectra, collected with the probe pulses arriving between 40~ps and 50~ps after the end of the centrifuge pulse, and integrated over that time window. The labels indicate the rotational quantum number of the most populated rotational state in the corresponding wave packet. Vertical dashed lines depict the central frequency of the spectral notch, introduced by the piercing procedure. The dashed lines in panel (\textbf{a}) show the results of numerical simulations (see text for details).} \label{Fig-2molecules} \end{figure} Selective rotational excitation of two molecular species, mixed together in the same gas cell, is demonstrated in Fig.~\ref{Fig-2molecules}(\textbf{b}). Here, OCS and \ntwo{} were mixed at partial pressures of 7~kPa and 30~kPa, respectively. Similarly to the previous example, the large difference in the density of rotational states between the two molecules enables one to stop the angular acceleration of OCS earlier than the acceleration of \ntwo{}, which proceeds all the way to the end of the centrifuge pulse. This can be recognized by the presence of two peaks in the Raman spectrum of the mixture: the broad one at a lower frequency and the narrow one at a higher frequency.
The peaks can be respectively assigned to carbonyl sulfide, whose lower rotational constant does not allow us to resolve individual lines (hence, larger peak width), and nitrogen, whose Raman resonances are well resolved with our spectrometer. Sharp lines on top of the broad peak at 2.3~THz are due to the small amount of \ntwo{} molecules, which fell out of the pierced centrifuge together with the majority of OCS. The two Raman spectra in Fig.~\ref{Fig-2molecules}(\textbf{b}) illustrate our ability to control both the slow carbonyl sulfide and the fast nitrogen rotors, independently from one another. By moving the position of the hole in the spectrum of the centrifuge from 787.2~nm to 785.1~nm [$\lambda_p$ in Fig.~\ref{Fig-shaper}(\textbf{b})], we prolonged the lifetime of OCS in the centrifuge from 24~ps to 35~ps, thus increasing its final rotational frequency from 2.3~THz to 3.3~THz. Similarly, shortening the centrifuge pulse from 57~ps to 47~ps by means of varying the truncation wavelength [$\lambda_t$ in Fig.~\ref{Fig-shaper}(\textbf{b})] slows down the rotation of \ntwo{} from 5.4~THz to 4.5~THz. \begin{figure}[t] \includegraphics[width=0.99\columnwidth]{SpinIsomers.pdf} \caption{Selective rotational excitation in the mixture of two nuclear-spin isomers of $^{14}\text{N}_2$. Raman spectra are plotted along the rotational quantum number of nitrogen, thus showing the constituent quantum states of each rotational wave packet. The three spectra correspond to (\textbf{a}) a non-pierced optical centrifuge, (\textbf{b}) a centrifuge pierced for decreasing (increasing) the amount of para-nitrogen in the fast (slow) wave packet, and (\textbf{c}) a centrifuge pierced for decreasing (increasing) the amount of ortho-nitrogen in the fast (slow) wave packet.
Colored rectangles indicate the position and width of the spectral notch in the pierced centrifuge spectrum.} \label{Fig-2isomers} \end{figure} \section{Selective rotational excitation of two spin isomers} \label{Sec-2isomers} In addition to the rotational selectivity between different molecular species, discussed above, centrifuge piercing can also be utilized in the selective spinning of molecular isotopes and spin isomers. The former would rely on the difference between the rotational constants of the two isotopes \cite{Fleischer2006}, whereas the latter could be based on the different parity constraints for the rotational wave functions of the two spin isomers \cite{Fleischer2007}. Specifically, for a diatomic homonuclear molecule whose nuclei are bosons, such as $^{14}\text{N}_2$, the symmetric nuclear wave function (known as ortho-nitrogen) must be combined with the symmetric rotational wave function. Hence, ortho-nitrogen has only even $J$ numbers in its rotational spectrum. Similarly, the antisymmetric nuclear wave function of para-nitrogen limits its rotational quantum numbers to odd values only. Because of the $\Delta J=\pm2$ selection rule, rotational Raman transitions do not couple quantum states of different parity. Hence, by piercing an optical centrifuge in such a way as to suppress the rotational ladder climbing via either even or odd $J$-states, the acceleration of one spin isomer can be terminated early, while the other isomer is accelerated to a higher frequency by the end of the OC pulse. Both spin isomers coexist in an ambient gas of $^{14}\text{N}_2$ with the ortho:para ratio of 2:1. This is reflected by the coherent Raman spectrum of \ntwo{} super-rotors in Fig.~\ref{Fig-2isomers}(\textbf{a}), where an approximately quadratic dependence of the Raman signal on the molecular population \cite{Bitter2016c} results in a $\sim$~4:1 ratio of the peak intensities originating from the two spin isomers.
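The parity bookkeeping behind this scheme can be sketched with a short rigid-rotor calculation; $B\approx1.99$~cm$^{-1}$ for $^{14}\text{N}_2$ is an approximate literature value supplied by us, and the notch window in the example is illustrative.

```python
C_CM_GHZ = 29.9792458  # conversion: 1 cm^-1 corresponds to ~29.98 GHz
B_N2 = 1.9896          # cm^-1, approximate rotational constant of 14N2 (assumed)

def raman_freq_thz(J, B=B_N2):
    """Raman frequency of the J -> J+2 ladder step of a rigid rotor, in THz."""
    return B * (4 * J + 6) * C_CM_GHZ * 1e-3

def blocked_steps(f_lo_thz, f_hi_thz, max_J=100):
    """Split the ladder steps covered by a notch according to nuclear-spin
    isomer: ortho-N2 climbs through even J, para-N2 through odd J."""
    ortho = [J for J in range(0, max_J, 2)
             if f_lo_thz <= raman_freq_thz(J) <= f_hi_thz]
    para = [J for J in range(1, max_J, 2)
            if f_lo_thz <= raman_freq_thz(J) <= f_hi_thz]
    return ortho, para
```

A notch spanning roughly 7.1--7.9~THz of the two-photon spectrum, for instance, removes two consecutive steps from the odd-$J$ (para) ladder but only one from the even-$J$ (ortho) ladder, which is the asymmetry exploited in the measurements below.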
Ortho-nitrogen is excited to a broad wave packet centered at $\bar{J}_\text{ortho}=52$ (the comb of tall lines), whereas the most occupied state of para-nitrogen is $\bar{J}_\text{para}=51$ (the comb of short lines). Figs.~\ref{Fig-2isomers}(\textbf{b,c}) demonstrate the ability of the pierced OC to differentiate between ortho- and para-nitrogen in terms of their respective rotational excitations. In panel~(\textbf{b}), the location of the notch in the centrifuge spectrum results in blocking two steps in the rotational ladder of para-nitrogen, $J=29\rightarrow J=31$ and $J=31\rightarrow J=33$, and only one step in the climbing path of the ortho-isomer, $J=30\rightarrow J=32$ (depicted by the blue rectangle). As can be seen from the relative line strengths, this piercing largely eliminates para-nitrogen from the fast wave packet (which remains centered at $J=52$), while making it the dominant component of the slow one (centered at $J=29$). Reversal of the slow--fast separation between the isomers is not as visible in Fig.~\ref{Fig-2isomers}(\textbf{c}), due to the initial prevalence of ortho-nitrogen in the sample. Here, para-nitrogen is slightly dominant at higher $J$'s (wave packet centered at $J=51$), whereas most of ortho-nitrogen has been ``dropped'' by the centrifuge around $J=28$. The separation has again been accomplished by blocking two steps in the ladder of even rotational states, while taking out only a single step from the ladder of odd $J$'s. Note that similarly to the earlier example with the OCS--\ntwo{} mixture, controlling the center frequency of both the slow and fast wave packets can be easily achieved by moving the spectral notch and the outer truncation edge in the centrifuge spectrum, respectively. \section{Summary} \label{Sec-summary} In summary, we developed an experimental method of selective rotational control in gas mixtures. 
Using a shaped optical centrifuge, we simultaneously excite two different molecular species to two different rotational frequencies of choice. The control is accomplished by piercing the centrifuge spectrum with a notch filter, and aligning its spectral position with respect to the Raman resonances of both molecules. The position of the frequency notch determines whether the molecule falls out of the centrifuge (and if so, at what rotational frequency) or continues climbing the rotational ladder to a higher level of rotational excitation. As a proof of principle, we demonstrated the selectivity of the pierced centrifuge in a mixture of two different gases, and a mixture of two spin isomers of the same molecule. In the first case, the difference in the density of rotational states allows one to terminate the rotational acceleration of a molecule with the higher moment of inertia earlier than that of a molecule whose moment of inertia is lower. As a result, the former is excited to a lower frequency than the latter, with both frequencies being controlled by the shape of the centrifuge spectrum. In the case of spin isomers, the control is accomplished by aligning the spectral notch so as to eliminate more Raman resonances of either the ortho- or para-isomer, thus interrupting the acceleration of that isomer earlier than its nuclear-spin counterpart. The second example showcases our ability to differentiate between two molecules with similar (or even equal) moments of inertia, as long as their Raman spectra are sufficiently distinct. The reported method of rotational control exhibits a limited degree of selectivity, in that the rotational wave packets created by the pierced OC are slightly cross-``contaminated'' by the molecules from the other group, as can be seen in Figs.~\ref{Fig-2molecules} and \ref{Fig-2isomers}. We attribute this shortcoming to the simplicity of the applied spectral shaping, which in its current implementation consists of a single frequency notch.
Our experiments with OCS and \otwo{} show that making the notch narrower increases the amount of (undesired) fast-rotating carbonyl sulfide, whereas widening it causes a bigger accumulation of slower oxygen rotors. No adjustment of the single-notch piercing parameters (including its depth and steepness) results in a better simultaneous suppression of those two groups of molecules, and hence in improved rotational selectivity. The results of our numerical simulations reflect the same limitation. We therefore conclude that piercing the centrifuge at a single frequency may not provide complete selectivity, although the latter improves as the difference between the rotational properties of the two molecules increases. Studies are underway to eliminate this constraint by applying more elaborate pulse shapes, e.g., by means of increasing the number of holes in the OC spectrum. The demonstrated effect adds yet another ``control knob'' to the existing toolbox for harnessing molecular dynamics with laser fields. It can be instrumental in any studies involving molecular collisions or controlled chemical reactions, because of the added ability to control both the absolute and the relative rotational frequencies of the collision/reaction partners. \section*{ACKNOWLEDGMENTS} This work was carried out under the auspices of the Canadian Center for Chirality Research on Origins and Separation (CHIROS).
\section{Preliminaries and Related Work} \label{sec:preliminaries} Significant effort has recently been devoted to safety in DRL, which can be addressed with a variety of methods. A wide branch of the literature proposes the introduction of constraints in the exploration phase to limit the learned behavior of the agent, or the use of well-designed reward functions to encourage or discourage certain actions \cite{DC-1, DC-2}. These methods, however, aim at minimizing undesirable behaviors without providing any formal guarantees. Trivially, the cardinality of the input space is infinite, so it is not possible to test or simulate all the configurations to ensure their safety. DNNs, in fact, are vulnerable to adversarial attacks and can suffer from poor generalization to unseen states \cite{J-29}. In contrast, the formal verification of DNNs \cite{J-0} that do not encode decision-making problems has been addressed by exploiting Boolean Satisfiability or Satisfiability Modulo Theories (SMT), searching for configurations that falsify an assertion \cite{DC-9, J-7}. A different approach exploits optimization techniques \cite{J-4} for such a search \cite{J-23, J-38, J-31}. These methods, however, are strongly limited by their scalability on large networks and require a significant amount of computation, in particular for networks with non-linear constraints. Recently, several approaches aim at searching for input configurations that deny a given safety property, using a layer-by-layer analysis to partially overcome the limitations of previous approaches. An example is FastLin \cite{J-41}, which combines search and reachability to compute and analyze the absolute bounds of an output. Another group of methods, such as Neurify \cite{DC-17} (an improvement over ReluVal \cite{DC-16}), DeepPoly \cite{DC-18} and others \cite{J-20, J-13}, relies on an accurate bound computation to propagate the inputs and calculate the corresponding output bounds, using interval analysis. 
We also exploit this analysis, which is briefly described in the next section. \subsection{Interval Analysis} In this section, we introduce the main concepts and notation adopted throughout the paper. We define an \textit{area} as a set of inputs, limited by an upper and a lower bound. A \textit{subarea} is a further subdivision of the same input set. Figure \ref{fig:bound_overall} shows an example of these concepts, where area = $([a, b], [a', b'])$ (i.e., the input area to evaluate). Two possible subareas of it are $([a, (a + b)/2], [a', b'])$ and $([(a + b)/2, b], [a', b'])$; notice that the union of the subareas always returns the original area. Figure \ref{fig:bound_overall} also shows an example of two different bound representations: (i) a simple network with two inputs and one output (Figure \ref{fig:bound_overall}A); (ii) a graphical overview (Figure \ref{fig:bound_overall}B), where the area is bounded by values [a, b] on the first input and by [a', b'] on the second one. \begin{wrapfigure}{b}{0.5\textwidth} \centering \includegraphics[width=0.5\textwidth]{Images/propagation_basic.png} \caption{Example of a bound analysis of a generic neural network with 2 inputs and 1 output.} \label{fig:bound_overall} \end{wrapfigure} The $Y$ curve is the representation of the network output $f(X)$ and [c, d] is the bound for the values that the function can assume. Summarizing, we visualize the bound analysis of a DNN as a 2-dimensional graph with the inputs of the network (i.e., $x \in X$) on the x-axis and the output values (i.e., $f(X)$) on the y-axis. To visualize the bounds when $n > 1$ (where $n$ is the number of input nodes), we assume that each point on the x-axis represents a tuple of $n$ values. Notice that the ordering among the tuples on the x-axis is arbitrarily defined and is required only for visualization purposes (i.e., it does not affect the analysis). 
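To make this analysis concrete, the following is a minimal sketch of naive interval arithmetic through a fully connected ReLU network. This is our own illustration: tools such as Neurify and DeepPoly use tighter symbolic relaxations, and any concrete weights used with this sketch are hypothetical.

```python
import numpy as np

def interval_forward(weights, biases, lower, upper):
    """Propagate an input box [lower, upper] through a ReLU network
    using naive interval arithmetic.  Returns sound (over-approximated)
    per-output lower and upper bounds."""
    low = np.asarray(lower, dtype=float)
    up = np.asarray(upper, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split each weight matrix by sign so that lower bounds use the
        # input lower bound on positive weights and vice versa.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_low = W_pos @ low + W_neg @ up + b
        new_up = W_pos @ up + W_neg @ low + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_low, new_up = np.maximum(new_low, 0.0), np.maximum(new_up, 0.0)
        low, up = new_low, new_up
    return low, up
```

Because the min/max split over weight signs is performed independently per layer, the resulting box always contains the true output range, at the cost of the overestimation discussed below.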
\subsection{Safety Properties in Decision-Making Problems} \label{sec:preliminaries:formal} Following the standard representation provided by \citet{J-0}, a safety property can be formalized in the following form (where $x_k \in X$, with $k \in [0, n]$, and $y_j$ is a generic output): \begin{equation} \label{equation_1} \Theta: \mbox{If } x_0\in[a_0, b_0] \land ... \land x_n\in[a_n, b_n] \Rightarrow y_j\in[c, d] \end{equation} In this work, however, we focus on neural networks that encode decision-making problems, where each output node represents an action, and the action corresponding to the node with the highest value is usually chosen. In this scenario, we are not interested in the absolute value that each output node assumes. Instead, we aim at verifying whether the value of an output is lower (or greater) than the value of another one. For this reason, we reformulate Proposition \ref{equation_1} in the following form: \begin{equation} \label{equation_2} \Theta: \mbox{If } x_0\in[a_0, b_0] \land ... \land x_n\in[a_n, b_n] \Rightarrow y_i < y_j \end{equation} where the relation between $y_i$ and $y_j$ can be verified using the interval algebra of \citet{DC-x}. In particular, supposing $y_i = [a, b]$ and $y_j = [c, d]$, we obtain the proposition: \begin{equation} \label{equation_3} b < c \Rightarrow y_i < y_j \end{equation} Here we describe how to exploit the iterative refinement proposed by \citet{DC-16}, which reduces the overestimation of the output bound computation, to also directly verify safety properties for decision-making scenarios. Afterward, Section \ref{sec:methods:semiformal} introduces the lightweight semi-formal approach used to estimate our novel metrics. Previous approaches exploit a formal layer-by-layer propagation of the input area (we refer the interested reader to the original implementation for more details \cite{J-46}) to obtain a strict estimation of the output bounds. 
However, this technique returns bounds that are not informative for a decision-making problem. Even with a perfect estimation of the output bounds\footnote{The real shape of the two output functions is impossible to obtain due to the non-linearity of DNNs; here we show two explanatory curves.}, Figure \ref{fig:interval_analysis}A shows that if $y_1$ $<$ $y_0$ (i.e., two output functions) in the given input area, $d \nless a$, so, with Proposition \ref{equation_3}, we can not formally conclude whether the decision-making property is proved or denied. \begin{figure*}[t] \centering \includegraphics[width=0.85\linewidth]{Images/bounds_estimation.png} \caption{Explanatory output analysis of: (A) decision-making problem with two outputs and one subdivision; (B) estimation of an output function shape, using multiple subdivisions; (C) output analysis with three outputs and multiple subdivisions.} \label{fig:interval_analysis} \end{figure*} To address this, we exploit the \textit{iterative refinement} to also obtain an estimation of the output curves (the overestimation reduction provided by this method is also important to make the separation between the output functions clearer). Figure \ref{fig:interval_analysis}B shows this key process, where, by increasing the number of subdivisions, we compute an arbitrarily precise estimation of the real function. Figure \ref{fig:interval_analysis}C shows how we perform the evaluation, where for each generated subarea we check whether the property is respected. In order to formally verify a property, Proposition \ref{equation_2} has to hold for all the subareas; to deny a property, instead, we just need one counterexample. For these reasons, with a sufficient number of subdivisions, we can use an approximate shape of the output functions to assert whether a property is respected or not. 
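A minimal sketch of this subdivide-and-check loop is given below. It is our own illustration, not the implementation of \cite{J-46}: \texttt{bounds\_fn} stands for any sound per-output bound routine (e.g., an interval propagation as in Section \ref{sec:preliminaries}), and the returned area fractions anticipate the safe and violation rates defined in Section \ref{sec:methods:saferate}.

```python
import numpy as np

def verify_property(bounds_fn, lower, upper, i, j, depth=0, max_depth=10):
    """Iteratively bisect the input box [lower, upper] to decide the
    decision-making property y_i < y_j (Proposition 2).

    `bounds_fn(lower, upper)` must return sound per-output (low, up)
    arrays.  Returns (safe, violated, unknown) fractions of the area.
    """
    low, up = bounds_fn(lower, upper)
    if up[i] < low[j]:       # Proposition 3: proved on the whole box
        return 1.0, 0.0, 0.0
    if low[i] >= up[j]:      # y_i >= y_j everywhere: violated on the box
        return 0.0, 1.0, 0.0
    if depth == max_depth:   # inconclusive leaf
        return 0.0, 0.0, 1.0
    # Split the widest input dimension into two subareas.
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    d = int(np.argmax(upper - lower))
    mid = 0.5 * (lower[d] + upper[d])
    halves = []
    for lo_d, up_d in ((lower[d], mid), (mid, upper[d])):
        lo, hi = lower.copy(), upper.copy()
        lo[d], hi[d] = lo_d, up_d
        halves.append(verify_property(bounds_fn, lo, hi, i, j,
                                      depth + 1, max_depth))
    # Each half covers 50% of the parent area.
    return tuple(0.5 * (a + b) for a, b in zip(*halves))
```

With a sound \texttt{bounds\_fn}, a subarea counted as safe provably satisfies Proposition \ref{equation_2}; the \texttt{max\_depth} cutoff bounds the recursion when the two output curves intersect inside the box.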
\subsection{Subareas Analysis} In the previous section, we showed that with an appropriate number of subdivisions we can check a safety property in the form of Proposition \ref{equation_2}. However, it is not possible to estimate a priori the exact number of necessary subdivisions. For this reason, we encode our approach as a tree search problem that iteratively subdivides the input area (or subareas) until it reaches a solution (i.e., the scenario in Figure \ref{fig:interval_analysis}C). In fact, we further subdivide a subarea when we can not prove (or deny) a property and hence need a smaller input area (Figure \ref{fig:interval_analysis}A). Figure \ref{fig:areatree} shows an example of a possible execution flow on a network with two inputs (i.e., $x_0$ and $x_1$). Initially, the root of our tree, to which we refer as \textit{subarea tree}, contains the input areas ($X_{0}^{A_0}$ and $X_{1}^{A_0}$ in Figure \ref{fig:areatree}, where $X_{i}^{A_j}$ represents a network input range with lower bound $\underline{x_{i}}$ and upper bound $\overline{x_{i}}$). This example considers a split of the input area into $2$ sections, following a random strategy\footnote{It is possible to use different heuristics to optimize the search. However, in our experiments, the usage of a different strategy, i.e., splitting the biggest area first, does not provide a significant improvement over the random strategy.}. We assume to split first $X_{0}^{A_0}$, obtaining $X_{0}^{A_1}, X_{0}^{A_2}$, and then $X_{1}^{A_0}$, obtaining $X_{1}^{A_1}, X_{1}^{A_2}$, where $x_{i}'$ represents a new input range limit (e.g., one subdivision of the initial $[\underline{x_{i}}, \overline{x_{i}}]$ generates the two intervals $[\underline{x_{i}}, x_{i}']$ and $[x_{i}', \overline{x_{i}}]$). The combinations of all the new bounds represent the next layer of the tree, which is the \textit{unverified-subarea-tree} (Figure \ref{fig:areatree} at depth 1). 
Hence, in this example, the first subarea $[X_{0}^{A_1}, X_{1}^{A_1}]$ shows the situation in Figure \ref{fig:interval_analysis}C (where the property $y_1 < y_0$ holds for each subarea) and the algorithm continues with the verification of the next subarea $[X_{0}^{A_1}, X_{1}^{A_2}]$. Here the example falls into the case of Figure \ref{fig:interval_analysis}A and further subdivisions are required to verify the property in this branch (Figure \ref{fig:areatree} at depth 2). Finally, $[X_{0}^{A_3}, X_{1}^{A_2}]$ represents the case in Figure \ref{fig:interval_analysis}C, where the property $y_1 < y_0$ is violated in the subareas after the intersection of the two curves. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Images/exploration_tree.png} \caption{Explanatory example of the iterative bisection tree generated to verify the property $y_0 < y_1$ in the given input area ($X_{0}^{A_0}$, $X_{1}^{A_0}$). As an example, $X_{1}^{A_1}$ and $X_{1}^{A_2}$ derive from the split of the original input area $X_{1}^{A_0}$, where $x_{1}'$ indicates the new value that limits that range.} \label{fig:areatree} \end{figure*} \section{Conclusion} \label{sec:conclusion} We present SFV, a novel, efficient framework designed as a semi-formal verification tool for the analysis of safety properties in real-world decision-making problems. We compare its performance with previous formal verification tools on the ACAS benchmark. Moreover, we evaluate SFV on the CartPole problem and on two robotic scenarios of real interest in the DRL literature, mapless navigation and trajectory generation. Our semi-formal interval propagation method for the output bound computation allows SFV to drastically outperform existing verification approaches in terms of time while obtaining comparable safety results. 
Moreover, we introduce two complementary metrics to measure the reliability of a trained model, showing that standard metrics such as the total reward are not informative enough to evaluate the behaviour of the model. This paper paves the way for several important research directions, which include exploiting the computational efficiency of SFV to evaluate models during the training phase, to guarantee that the network maximizes the expected cumulative reward while resulting in safer trained models. \section{Empirical Results} \label{sec:evaluation} Our goal is to measure the reliability of the trained models in decision-making applications, comparing how typical evaluation metrics such as total reward and success rate differ from the proposed safe rate. We aim to spur discussion about how to interpret the behaviour of different models, in order to choose the safest and best performing one. In detail, we show that trained networks that present high scores on standard metrics actually exhibit very different behaviours in practice (i.e., models with comparable rewards have very different safety scores) that can lead to undesirable situations. We empirically evaluate our framework on a well-known DRL benchmark, CartPole \cite{DC-30}, and on two real robotic applications that are widely considered in the recent DRL literature: mapless navigation \cite{1, mapless1, 59} and trajectory generation for a commercial manipulator \cite{3, 33}. The considered agents are modeled using two ReLU hidden layers with 64 neurons each and trained using the Rainbow algorithm \cite{68} (we refer the interested reader to the original papers for further details about the training algorithm and hyperparameters). \paragraph{Cart Pole} The OpenAI Gym \cite{38} problem is considered solved when the agent collects an average reward of 195.0 over 100 consecutive trials, according to the original version of CartPole \cite{BartoCartPole}. 
An episode ends when the pole is offset by more than $15$ $deg$ from the vertical position, or the cart is more than $2.4$ units from the center. Following the official documentation, the network has four inputs: (i) cart position $x_0 \in [-4.8, 4.8]$; (ii) cart velocity $x_1 \in (-\infty, \infty)$; (iii) pole angle $x_2 \in [-0.24, 0.24]$; (iv) pole velocity $x_3 \in (-\infty, \infty)$, and two discrete outputs: (i) push cart to the left $y_0$ and (ii) push cart to the right $y_1$. According to these limits, we formalize two safety properties, with values normalized in the range [0, 1], as follows: $ \textbf{$\Theta_{C, 0}$}: \mbox{If } x\textsubscript{0} \in[0.2, 0.8] \land x\textsubscript{1}\in[0.4, 0.6] \land x\textsubscript{2}\in(0.7, 1] \land x\textsubscript{3}\in[0.5, 1] \Rightarrow y\textsubscript{0} < y\textsubscript{1}$ $ \textbf{$\Theta_{C, 1}$}: \mbox{If } x\textsubscript{0} \in[0.2, 0.8] \land x\textsubscript{1}\in[0.4, 0.6] \land x\textsubscript{2}\in[0, 0.3) \land x\textsubscript{3}\in[0, 0.5] \Rightarrow y\textsubscript{1} < y\textsubscript{0}$ These properties aim at verifying that, when the pole reaches its angle limit, the cart pushes in the opposite direction to move the pole towards the vertical alignment. \paragraph{Mapless Navigation} In this navigation task, a robot must reach a target using local observations to avoid obstacles and without a map of the environment. Our problem formalization is similar to the one presented in \cite{mapless1}, using a TurtleBot3 with constant linear velocity. The network has 21 inputs: (i) 19 sparse scan values $x_0, ..., x_{18}$ normalized in $[0,1]$, sampled in a fixed angle distribution between -90 and 90 $deg$; (ii) the target polar coordinates with respect to the robot (i.e., heading $x_{19}$ and distance $x_{20}$, normalized in $[0, 1]$); and three outputs for the angular velocities (i.e., [-90, 0, 90] deg/s). 
In this task, we evaluate the safety of the network and compare it with the success rate (i.e., how many correct trajectories are performed in the last 100 epochs) using two safety properties, which can be described in natural language as: \noindent\textbf{$\Theta_{T, 0}$:} If there is an obstacle too close to the left and one in front, whatever the target is, turn right. \\ \noindent\textbf{$\Theta_{T, 1}$:} If there is an obstacle too close to the right and one in front, whatever the target is, turn left. \paragraph{Trajectory Generation} In this scenario, the agent has to rotate the joints to generate a real-time trajectory to reach a target. The formalization of this problem is similar to the one presented in \cite{33}, where the input layer contains 9 nodes normalized in the range $[0, 1]$: (i) one for each considered joint; (ii) three for the target coordinates; and 12 nodes for the output: each joint is represented by 2 nodes to move it $\omega$ degrees clockwise or anti-clockwise. This encoding of the output allows a straightforward verification process for our tool (i.e., one node represents only one specific action). Hence, we designed safety properties to check whether the manipulator operates inside its work-space, considering properties in the following form: if the current angle of a joint $j_i$ equals one of its domain limits (left or right), the robot must not rotate $j_i$ in the wrong direction (i.e., perform an action that rotates $j_i$ so as to cause the robot to exit from the work-space). This translates into the following formalization: $ \textbf{$\Theta_{P, 0L}:$} \mbox{If } x_0\in[1, 1] \land x_1, ..., x_8 \in D \Rightarrow y_0 < [y_1, ..., y_{11}], \textit{where D = (0, 1)} $ $ \textbf{$\Theta_{P, 0R}$:} \mbox{If } x_0\in[0, 0] \land x_1, ..., x_8 \in D \Rightarrow y_1 < [y_0, y_2, ..., y_{11}], \textit{where D = (0, 1)} $ where $\Theta_{P, 0L}$ is a configuration where the angle of $j_0$ equals its left limit (i.e., a normalized value of 1). 
The output value corresponding to the action \textit{rotate left} must be lower than at least one of the others. For each joint $j_i$ we consider two properties, one for the left limit ($\Theta_{P, iL}$) and one for the right limit ($\Theta_{P, iR}$). \subsection{Results} \begin{figure}[t] \centering \begin{subfigure}[t]{0.30\linewidth} \centering \includegraphics[width=1\linewidth]{Images/final_cartpole.png} \caption{CartPole} \label{fig:final-results:a} \end{subfigure} \quad \begin{subfigure}[t]{0.30\linewidth} \centering \includegraphics[width=1\linewidth]{Images/final_turtlebot.png} \caption{Mapless Navigation} \label{fig:final-results:b} \end{subfigure} \quad \begin{subfigure}[t]{0.30\linewidth} \centering \includegraphics[width=1\linewidth]{Images/final_franka.png} \caption{Robotic Manipulator} \label{fig:final-results:c} \end{subfigure} \caption{Comparison between the \textit{normalized reward} (or \textit{success rate}) and the \textit{safe rate}.} \label{fig:final-results} \end{figure} In order to collect statistically significant data, we performed five training phases for each task, using different random seeds \cite{colas2019hitchhikers}. We performed our experiments on an i7-9700k and a GTX2070, using the implementation described in Section \ref{sec:methods}. For each graph, the curves report the average and the standard deviation of the runs, considering: (i) standard metrics (i.e., normalized reward or success rate), where the success rate is the number of successful trajectories performed in the case of the TurtleBot3 and the manipulator; (ii) the safe rate: the average over the safe rates (Section \ref{sec:methods:saferate}) of each considered property. These metrics are smoothed over one hundred episodes (except for CartPole, where results are smoothed over ten episodes). 
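To make the setup concrete, a property such as $\Theta_{C, 0}$ could be encoded and empirically probed as in the following sketch. Both the dictionary encoding and the \texttt{policy} callable are our own hypothetical illustrations, not the SFV implementation; recall that sampling can only deny a property or suggest that it holds.

```python
import numpy as np

# Hypothetical encoding of Theta_{C,0}: inside the stated (normalized)
# input box, "push right" (y1) must dominate "push left" (y0): y0 < y1.
theta_c0 = {
    "input_box": np.array([[0.2, 0.8],    # x0: cart position
                           [0.4, 0.6],    # x1: cart velocity
                           [0.7, 1.0],    # x2: pole angle
                           [0.5, 1.0]]),  # x3: pole velocity
    "smaller": 0,                          # index i in y_i < y_j
    "larger": 1,                           # index j
}

def holds_on_samples(policy, prop, n=1000, seed=0):
    """Empirically test y_i < y_j on n points sampled from the box.
    A single counterexample denies the property; passing all samples
    only suggests (does not prove) that it holds."""
    rng = np.random.default_rng(seed)
    box = prop["input_box"]
    xs = rng.uniform(box[:, 0], box[:, 1], size=(n, len(box)))
    for x in xs:
        y = policy(x)
        if not y[prop["smaller"]] < y[prop["larger"]]:
            return False, x  # counterexample found
    return True, None
```

Such an empirical check is exactly the kind of imprecise evaluation that the safe rate of Section \ref{sec:methods:saferate} is meant to replace with a subarea-based estimate.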
Figure \ref{fig:final-results} shows that, in our experiments, the safe rate is always characterized by high values (i.e., properties hold in the input areas) in the early stages of the training phases. We attribute this to the fact that the models initially choose random actions to explore the state space, while collecting reward to infer the task. A more detailed analysis shows that all the agents tend to stay still or move close to their initial position. Afterward, the safe rate starts to follow a trend similar to the considered standard metrics, as the agent is learning the behaviour required to solve the task. In contrast, in the advanced stages of the training, when agents successfully address the task, the safe rate curve behaves differently. Figure \ref{fig:final-results:a} refers to the CartPole environment, where the safe rate has a trend similar to the normalized reward, as little generalization is required to solve the problem. Mapless navigation in Figure \ref{fig:final-results:b} has a drop in the safe rate after many successful epochs where, according to the success rate, the agent seems to solve the task. This evidences the problem described in the previous sections: trained models with similar standard performance can actually have very different behaviors. In particular, we noticed that around epoch 25000, the network tries to learn shorter paths by navigating closer to obstacles, leading to more unsafe behaviours. Figure \ref{fig:final-results:c} shows that the trajectory generation task behaves similarly to the navigation problem. Summarizing, our experiments show that our novel safe (or violation) rate can evaluate the behavior of the trained models with respect to the designed safety properties. This makes it possible to find the best performing model among all the trained ones that seem comparable in terms of performance using standard metrics. 
\section{Introduction} \label{sec:introduction} Inspired by animal behaviour \cite{Sutton1998}, Reinforcement Learning (RL) aims at solving decision-making problems by maximizing the expected cumulative reward while an agent interacts with the surrounding environment to learn a policy. Challenging problems, however, are characterized by a high-dimensional search space that is not manageable by traditional RL algorithms. For this reason, Deep Reinforcement Learning (DRL) has gained attention in solving complex safety-critical tasks, due to its performance in a variety of areas, ranging from healthcare \cite{healtcare} to robotics \cite{openai2019solving}. Despite recent impressive results, several factors limit the deployment of DRL models to control physical systems outside of research demos. In tasks where high-cost equipment or humans are involved, the safety of a trained Deep Neural Network (DNN) must be evaluated to avoid potentially dangerous or undesirable situations. Furthermore, it is common practice to compare and evaluate DRL approaches on benchmarks that are considered standard (e.g., video games \cite{4} or simulated continuous locomotion \cite{38}), measuring their performance using target metrics such as the total reward or the number of successes in independent trials \cite{drlEval, drlEval2}. The usage of such metrics implies an underlying problem in the evaluation of the models that are collected during the training. Different sources of error (e.g., human-imperceptible perturbations in the input \cite{DC-3} or poor generalization \cite{drlEval}) are in fact challenging to detect with empirical testing phases. 
To address the safety problem of DNNs, a recent trend in DRL research proposes the design of formal verification frameworks that, given a set of desired properties (e.g., a mobile robot does not turn in the direction of a close obstacle), either guarantee that the property is satisfied in a certain input space or return the input configurations that cause a violation \cite{J-0}. These methodologies aim at verifying whether the bound of a single output of a trained model lies in the desired interval (e.g., a motor velocity never exceeds a certain value) \cite{J-14, J-20}, and the estimation accuracy of such output bounds has been successfully addressed by several recent studies \cite{J-44, DC-18}. The verification of DNNs that encode sequential decision-making policies, however, can not be directly addressed by previous formal approaches. Trivially, to verify these models, we should consider the relationships among multiple outputs. In a typical decision-making scenario, in fact, a network chooses the action that maximizes a return. Furthermore, as detailed in Section \ref{sec:preliminaries}, previous formal frameworks either are limited by poor scalability on high-dimensional DNNs or require a long wall-clock time to verify the designed safety properties. Against this background, in this paper we propose a semi-formal verification approach (which we refer to as SFV), based on previous interval-analysis \cite{DC-x} formal verification approaches \cite{DC-16, DC-17}. We introduce two complementary metrics, the \textit{safe rate} and the \textit{violation rate}, to estimate the safety of trained models with respect to the desired properties. These values represent the percentage of the given input space that guarantees or violates a property (respectively), computed with a limited computational demand. The idea is to compute the output bounds using a semi-formal interval propagation method, detailed in Section \ref{sec:methods:semiformal}. 
Moreover, this allows us to obtain comparable results in terms of safe and violation rate while drastically reducing the computation time with respect to formal verifiers. We designed SFV to provide an evaluation of the behavior of a trained agent, finding the best performing model among all the trained ones that, at a first glance (i.e., comparing standard metrics such as the total reward), seem comparable in terms of performance. Crucially, SFV allows evaluating the high-dimensional (continuous) input space for all the trained models, which is either not feasible by simulation or very time consuming with formal verifiers (which are also not directly applicable to decision-making problems). We empirically evaluate SFV on a standard benchmark, CartPole \cite{DC-30}, and on two decision-making tasks of real interest in the recent DRL literature: (i) mapless navigation \cite{50, 59} for a TurtleBot3 (a widely considered platform in several works \cite{1, 48}), and (ii) trajectory generation for a commercial manipulator \cite{3, 33} (among the variety of platforms, we considered a manipulator for its wide utilization in industry). Summarizing, this work makes the following contributions: (i) we introduce SFV, a semi-formal verification approach that enables estimating the safety of trained DNNs that encode real decision-making tasks; (ii) our evaluation in Section \ref{sec:methods:semiformal} shows that the output bound computation, based on a novel semi-formal interval propagation, allows us to drastically reduce the verification time while obtaining comparable results with respect to a state-of-the-art approach \cite{DC-17} on the ACAS models \cite{DC-32}, a standard benchmark to compare formal verification tools; and (iii) Section \ref{sec:evaluation} shows that standard metrics (e.g., the total reward) are not informative enough to evaluate a model, and we introduce the safe and violation rates to measure the reliability of a network with respect to designed safety properties. 
\section{Evaluation Methods} \label{sec:methods} In this section, we present how SFV evaluates the safety metrics of a neural network over a set of desired properties in decision-making tasks. We start by introducing two complementary metrics to quantify the safety of a trained model and then describe the semi-formal interval propagation method. \subsection{Safe and Violation Rates} \label{sec:methods:saferate} Standard performance metrics used in DRL to evaluate a model typically include the average total reward or derived quantities such as the number of successes in independent trials. However, given a set of models that achieve a similar reward, we can not directly infer how they behave with respect to some desired safety properties. For this reason, we aim at measuring the safety over such properties, introducing novel evaluation metrics. Section \ref{sec:evaluation} shows that, in practical applications (e.g., robotics), they provide a more reliable estimation of the safety of the model. We introduce the \textit{violation rate} as the percentage of the input area that causes a property violation. Hence, it is the sum, normalized with respect to the input area dimension, of the sizes of the subareas that present a violation (e.g., the subarea $[X_{0}^{A_3}, X_{1}^{A_2}]$ in Figure \ref{fig:areatree}). When considering multiple properties in a single task (as in Section \ref{sec:evaluation}), we compute the violation rate as the average of the violation rates of the different properties. Conversely, we define the \textit{safe rate} as the percentage of the input area that respects the properties (e.g., the subarea $[X_{0}^{A_1}, X_{1}^{A_1}]$ in Figure \ref{fig:areatree}). Crucially, these novel rates do not measure the true probability of a property violation in a typical task execution, as they usually represent an upper or lower bound on it. 
In fact, within an input area, the different state configurations do not have the same probability to appear (and the metrics do not consider this). The heterogeneous distribution in a high-dimensional state space makes it impossible to compute an accurate value for these novel metrics using standard empirical evaluations (as shown in Section \ref{sec:evaluation}), which typically return a very imprecise safe rate. Hence, the idea is to efficiently provide a strict estimation of the safe rate using our semi-formal method on the multitude of trained models that achieve comparable performance (e.g., total reward). \subsection{Semi-Formal Interval Propagation} \label{sec:methods:semiformal} Current state-of-the-art approaches have high computational demands that limit their application as model evaluators. Furthermore, they can not be directly applied to decision-making problems. In this section, we present our semi-formal interval propagation method to estimate the safe rate while drastically reducing the computational demand (i.e., the verification time) of the verification phase. Section \ref{sec:methods:comparison} shows that the semi-formal implementation obtains safe rates comparable to a formal verification approach on a benchmark problem. In more detail, formal methods based on bound analysis suffer from the following issues that severely limit their efficiency: (i) the \textit{overestimation} and (ii) the \textit{propagation time} (especially on high-dimensional networks). The former is partially solved by the iterative refinement (detailed in Section \ref{sec:preliminaries}); however, it requires a very small input area to compute a strict estimation of the bounds. The latter, in contrast, can not be solved, as it is an intrinsic problem of formal propagation methods. To address these issues, we introduce a semi-formal propagation approach, based on an empirical evaluation of the output bounds. 
In detail, we sample $n$ points from the subarea of interest and compute the corresponding $n$ values for each output node, using a simple forward step of the network. Afterward, we estimate the output bounds using the maximum and the minimum value obtained for each node. Ideally, if $n \rightarrow \infty$, we obtain an exact estimation of the output bounds. Crucially, the next section shows that we do not require large values of $n$ to obtain a good estimation. \subsubsection{Comparison with Formal Verification} \label{sec:methods:comparison} Figure \ref{fig:semi-formal} compares the output bound computation results of our method with Neurify \cite{DC-17}, a state-of-the-art formal verification tool. In detail, Figure \ref{fig:semi-formal:a} shows the overestimation problem, where formal methods present a high overestimation when computing the output bounds on big input areas. To address this, they require multiple executions of the iterative refinement to obtain smaller areas and compute a strict estimation of the output bounds, causing an exponential growth in time. In contrast, Figure \ref{fig:semi-formal:b} shows that our semi-formal approach does not suffer from overestimation on big input areas, as it uses real input configurations for the estimation. Hence, there is a clear limitation of SFV, as it presents an implicit underestimation of the output bounds. Considering the entire input area (i.e., the normalized value $1$ in Figure \ref{fig:semi-formal}) with a ``real''\footnote{To compute a significant estimation of the real output bound, we run the semi-formal propagation with $n$ equal to ten million configurations.} output bound of size $0.0091$, Neurify overestimates it with a size of $11$, while SFV underestimates it with a size of $0.0063$. Summarizing, SFV obtains a tight underestimation of the output bounds regardless of the size of the input area, while Neurify requires smaller areas to compute a precise estimation. 
However, to obtain a formal verification, it is not acceptable to use an underestimation of the bounds, as it can lose information; SFV, in contrast, aims at computing an estimation of the safety metrics as an indicator of the behaviour of the trained models. Moreover, Figure \ref{fig:semi-formal:c} shows how the computational demand of the formal propagation, compared with the semi-formal one, varies with the size of the network. In particular, it highlights that the semi-formal approach scales drastically better with network size. To further validate the safe rate estimation of our method, we compare Neurify and SFV on a standard verification benchmark, the ACAS models \cite{DC-32}. Figure \ref{fig:formal-comparison} shows the comparison between Neurify, SFV, and an informal method, which computes the safe rate using only simulation (i.e., we run the models and count how many property failures they encounter). Crucially, our semi-formal method returns results comparable to those of the formal verifier with limited computation time. In detail, to obtain a comparable safe rate (i.e., with error $< 1\%$), SFV is $726\%$, $400\%$, and $456\%$ faster than Neurify (i.e., $527\%$ faster on average) on the considered ACAS properties. In the next section, we evaluate SFV in three different decision-making tasks, showing that the violation (or safe) rate is a valuable metric to measure safety in DRL applications. 
\begin{figure}[t] \centering \begin{subfigure}[t]{0.30\linewidth} \centering \includegraphics[width=1\linewidth]{Images/overestimation.png} \caption{Overestimation} \label{fig:semi-formal:a} \end{subfigure} \quad \begin{subfigure}[t]{0.30\linewidth} \centering \includegraphics[width=1\linewidth]{Images/underestimation.png} \caption{Underestimation} \label{fig:semi-formal:b} \end{subfigure} \quad \begin{subfigure}[t]{0.30\linewidth} \centering \includegraphics[width=1\linewidth]{Images/method_scalability.png} \caption{Scalability} \label{fig:semi-formal:c} \end{subfigure} \caption{Comparison between Neurify and our semi-formal method with $n=20$. Red lines represent the \textit{``real'' bounds} of the input area.} \label{fig:semi-formal} \end{figure} \begin{figure}[b] \centering \includegraphics[width=0.30\linewidth]{Images/comparison_p1.png} \includegraphics[width=0.30\linewidth]{Images/comparison_p2.png} \includegraphics[width=0.30\linewidth]{Images/comparison_p3.png} \caption{Comparison between the SFV, informal, and formal \textit{safe rates} (the latter computed with Neurify) on three properties of the standard ACAS benchmark. Neurify requires 363.449s for property 1, 240.07s for property 2, and 274.341s for property 3.} \label{fig:formal-comparison} \end{figure}
\section{Introduction} Upon photoexcitation of an electron and hole in the barrier of an (In,Ga)As/GaAs self-assembled quantum dot the carriers relax to their ground states through a complicated dynamics. Much debate has taken place on the mechanisms responsible for the final stages of the non-radiative decay dynamics, which have been observed to involve relaxations of about $40$-$60\;{\rm meV}$ and take place surprisingly fast---within $2$-$60\;{\rm ps}$. These decay times are much smaller than the radiative recombination times $\tau_R\sim 1\;{\rm ns}$ observed in (In,Ga)As/GaAs dots.\cite{karachinsky_APL_2004,buckle_JAP_1999,bardot_PRB_2005} To explain this fast relaxation, three alternative mechanisms have been proposed and supported by model calculations: multi-phonon emission, Auger carrier-carrier scattering, and polaron decay. To provide a general perspective we first outline in this paper the general decay channels of photoexcited carriers in the GaAs barrier of (In,Ga)As/GaAs self-assembled quantum dots (Sec. \ref{literature_review}), and then we focus on the $P\rightarrow S$ Auger cooling due to electron-hole scattering, providing accurately calculated results. We use a realistic atomistic, pseudopotential-based approach (Sec. \ref{Auger_calculation}) that has recently been applied to successfully reproduce the magnitude of the radiative recombination lifetime of ground-state electrons and holes in (In,Ga)As/GaAs dots (Ref. \onlinecite{narvaez_PRB_2005b}) and CdSe colloidal dots (Ref. \onlinecite{califano_NanoL_2005}). Our results for the inter-shell decay time $\tau(P\rightarrow S)$ compare well with data from experiments in which photoexcited holes are present. Thus, as long as both an electron and a hole are present, the Auger mechanism can explain fast inter-shell relaxation without resorting to other (e.g. polaronic decay or multi-phonon emission) mechanisms. 
\section{Characteristic dynamical processes of excited electrons and holes in self-assembled (In,Ga)As/GaAs quantum dots} \label{literature_review} One distinguishes first between systems having a lone carrier, either electron or hole, and systems having both an electron and hole. A lone carrier can be produced by doping the dot\cite{bras_APL_2002,kammenerer_APL_2005,zibik_PhysicaE_2005,zibik_PRB_2004,zibik_PhysicaE_2005a,sauvage_PRL_2002} or by electrochemical injection.\cite{guyot-sionnest_JCP_2005} Exciting a lone carrier and following its decay \cite{zibik_PRB_2004,zibik_PhysicaE_2005a,sauvage_PRL_2002} is a specialized field and will be reviewed briefly in Sec. \ref{inter-shell_decay}. More commonly we encounter relaxation of systems having both photoexcited electrons and holes. This is reviewed next. Figure \ref{Fig_1} sketches four non-radiative relaxation processes that take place following photocreation of an electron-hole pair in an (In,Ga)As/GaAs quantum dot system. The electron is shown as a solid dot and the hole as a circle. The processes are illustrated with a dot with sparse confined electron (CB) states $\{e_0,\,e_1,\,e_2\}$, and with a much denser set of confined hole (VB) states $\{h_0,\,h_1,\,\ldots,\,h_k,\,\ldots,\,h_N\}$ as is characteristic of self-assembled dots. The continuum of states of the wetting layer (dashed region) and GaAs barrier (shaded) are also shown schematically. The main observed carrier relaxation processes are the following. \subsection{Barrier-to-wetting layer carrier capture} Non-resonant photoexcitation of an electron-hole pair in the barrier [Fig. \ref{Fig_1}(a)] often leads to capture by wetting-layer (WL) quasi-continua. This process consists of carrier thermalization within the GaAs barrier and subsequent capture by the WL. 
Barrier thermalization occurs within $1\;{\rm ps}$.\cite{siegert_PRB_2005,yuan_PhysicaB_1999} Siegert {\em et al.} measured the time-resolved photoluminescence (PL) signal from the wetting layer of InAs/GaAs dots at high excitation and found a capture time of $\sim 2\;{\rm ps}$ regardless of doping (Ref. \onlinecite{siegert_PRB_2005}). Similarly, in undoped dots, Sun {\em et al.} have found a capture time smaller than $2\;{\rm ps}$ (Ref. \onlinecite{sun_Nanotechnology_2005}), while Yuan {\em et al.} observed a capture time of about $10\;{\rm ps}$ (Ref. \onlinecite{yuan_PhysicaB_1999}). \subsection{Carrier capture from the wetting layer into the dot} Following barrier-to-wetting layer carrier capture, the hole relaxes to the lowest-energy confined hole state $h_N$ while the electron is captured from the bottom of the wetting layer to the highest-energy confined state [illustrated by $P$; Fig. \ref{Fig_1}(b)]. Sosnowski {\em et al.}\cite{sosnowski_PRB_1998} found in time-resolved differential transmission experiments at low excitation in an (In,Ga)As/GaAs dot with two confined electron states that the electron capture time is $2.8\;{\rm ps}$. On the other hand, a {\em combined} capture time has been derived from time-resolved photoluminescence (PL) experiments at high excitation by several groups. (These times are affected by the subsequent intra-dot carrier relaxation.) Siegert {\em et al.}\cite{siegert_PRB_2005} have found a capture time of $4.9\;{\rm ps}$ in undoped dots, and $5.4\;{\rm ps}$ and $6.1\;{\rm ps}$ in {\em n}-doped and {\em p}-doped dots, respectively.\cite{note_00} Similarly, Yuan {\em et al.} \cite{yuan_PhysicaB_1999} found a capture time within $5\;{\rm ps}$, while Sun {\em et al.} found a capture time of less than $2\;{\rm ps}$ (Ref. \onlinecite{sun_Nanotechnology_2005}). 
\begin{figure} \includegraphics[width=8.5cm]{./Fig_1.eps} \caption{{\label{Fig_1}}Sketch of different dynamical processes experienced by photocreated carriers in a self-assembled (In,Ga)As/GaAs quantum dot: (a) Barrier-to-wetting layer (WL) carrier capture, (b) carrier capture from the wetting layer into the dot, (c) carrier relaxation within the dot, (d) thermal escape of carriers.} \end{figure} \subsection{Relaxation of excited carriers within the dot} \label{inter-shell_decay} Following carrier capture from the wetting layer into the dot, carriers can experience different dynamical processes. These processes largely reflect the type of spacings that exist between various confined states. The (In,Ga)As/GaAs system has interesting properties in this respect. First, not only are these direct-gap materials, but the competing band-structure valleys ($X,L$) are rather far energetically from $\Gamma$ [unlike InP or PbSe (Ref. \onlinecite{landolt_bornstein_table})], so these materials, especially InAs, are in fact {\em strongly} direct-gap systems. Second, the hole mass in InAs is much heavier than the electron mass, so confined hole states tend to be more densely spaced than electron states. Third, the electron states are arranged in $S,\;P,\;D\, \dots$ ``shells'' and each shell shows intra-shell level splittings, e.g. ${\cal E}(P_1)$ and ${\cal E}(P_2)$ differ by $1$-$6\;{\rm meV}$, while inter-shell splittings are larger, e.g. the $S$-$P$ spacing is $40$-$60\;{\rm meV}$ (Refs. \onlinecite{bras_APL_2002,kammenerer_APL_2005,zibik_PRB_2004,sauvage_PRL_2002,zibik_PhysicaE_2005,muller_APL_2003}) (compared to $\sim 300\;{\rm meV}$ in CdSe dots). Thus, the intra-shell splitting is of the order of (small wave vector) acoustic phonon energies, whereas the inter-shell spacing is larger than (small wave vector) longitudinal optical phonon energies. 
Therefore, inter-shell relaxation via single-phonon emission due to electron-phonon coupling (within the Born-Oppenheimer adiabatic approximation) is expected to be ineffective\cite{bockelmann_PRB_1990,benisty_PRB_1991}---the phonon-bottleneck effect---because energy cannot be conserved in the inter-shell relaxation process. Finally, hole states do not form shells, with the exception of flat dots\cite{narvaez_JAP_2005} (height of about $20\;$\AA), and the splitting between hole states is about $1$-$20\;{\rm meV}$, thus comparable to acoustic-phonon energies. Given these general characteristics, the main electron- and hole-relaxation channels within the dot are: (a) {\em Hole thermalization.} The hole relaxes to $h_0$, most likely via acoustic-phonon emission. Such a hole relaxation has been found to occur within sub-${\rm ps}$ times.\cite{sosnowski_PRB_1998,urayama_PRL_2001} Moreover, Quochi and co-workers showed that the hole relaxation time depends strongly on temperature: $20\;{\rm ps}$ at $60\;{\rm K}$ and $0.8\;{\rm ps}$ at $300\;{\rm K}$ (Ref. \onlinecite{quochi_PhysicaB_2002}). Note that in CdSe colloidal dots the existence of energy gaps of $\sim 60\;{\rm meV}$ {\em within} the valence-band quasi-continuum was shown experimentally\cite{xu_PRB_2002} and theoretically\cite{califano_NanoL_2003} to slow down the hole thermalization. (b) {\em Intra-shell electron relaxation} (e.g. $P_2\rightarrow P_1$; Fig. \ref{Fig_1}). The electron relaxes from $P_2\rightarrow P_1$ ($1$-$6\;{\rm meV}$ splitting), or between magnetic-field split states, via acoustic phonon emission. From optical pump-probe measurements, Zibik {\em et al.} have recently deduced relaxation times of $15\;{\rm ps}$ and $35\;{\rm ps}$ for $P_1$-$P_2$ splittings of $3.7\;{\rm meV}$ and $5.5\;{\rm meV}$ (Ref. \onlinecite{zibik_PhysicaE_2005a}), respectively. 
A model calculation that adopts longitudinal acoustic phonon emission predicts, correspondingly, values of $8\;{\rm ps}$ and $34\;{\rm ps}$.\cite{zibik_PhysicaE_2005a} (c) {\em Inter-shell electron relaxation for a sole carrier and for an electron-hole pair} (e.g. $P \rightarrow S$; Fig. \ref{Fig_1}) within the $40$-$60\;{\rm meV}$ separating the electronic shells. This relaxation is different if an electron-hole pair is present or just a sole electron (doped dot). As expected from the phonon bottleneck effect, inter-shell relaxation in (In,Ga)As/GaAs dots has been observed to be slow by Urayama and co-workers\cite{urayama_PRL_2001} (relaxation time of $\sim 750\;{\rm ps}$) as well as Heitz and co-workers\cite{heitz_PRB_2001} ($7.7\;{\rm ns}$). In contrast, time-resolved optical measurements have clearly demonstrated that this inter-shell decay is a {\em fast} process whether a hole is present or not. For instance, in experiments in which both an electron and a hole are present, M\"uller {\em et al.} have found decay times of $4.7\;{\rm ps}$ at $5\;{\rm K}$ and $2.8$-$1.5\;{\rm ps}$ (depending upon excitation power) at room-temperature in interband-pump--intraband-probe experiments (Ref. \onlinecite{muller_APL_2003}); Bogaart {\em et al.} found $19\;{\rm ps}$ (low intensity) and $9\;{\rm ps}$ (high intensity) between $5\;{\rm K}$ and $77\;{\rm K}$, but $7\;{\rm ps}$ (high intensity) at room-temperature, in time-resolved pump-probe differential reflectance spectroscopy (Ref. \onlinecite{bogaart_icps27_2005}); Sosnowski {\em et al.} found $5\;{\rm ps}$ at $10\;{\rm K}$ in pump-probe differential transmission experiments (Ref. \onlinecite{sosnowski_PRB_1998}); De Giorgi {\em et al.} found $6.5\;{\rm ps}$ at $4\;{\rm K}$ ($3.0\;{\rm ps}$ at high intensity) and $3.5\;{\rm ps}$ at room-temperature in time-resolved PL upconversion experiments (Ref. 
\onlinecite{de_giorgi_APL_2001}); with the same experimental technique, applied to large ($b=350\;${\AA}, $h=110\;${\AA}) and small ($b=250\;${\AA}, $h=30\;${\AA}) dots, Boggess {\em et al.} found, respectively, $1\;{\rm ps}$ and $7\;{\rm ps}$ below $100\;{\rm K}$, and $\sim 2.5\;{\rm ps}$ at $200\;{\rm K}$ and $6\;{\rm ps}$ at $150\;{\rm K}$ (Ref. \onlinecite{boggess_APL_2001}); while Siegert {\em et al.} found that at $80\;{\rm K}$ the $D\rightarrow S$ decay time corresponds to $7\;{\rm ps}$, $3\;{\rm ps}$, and $2\;{\rm ps}$ for undoped, n-doped, and p-doped dots, respectively (Ref. \onlinecite{siegert_PRB_2005}). On the other hand, when a sole electron is present and no hole, the inter-shell relaxation slows down by a factor of about 2-10. For instance, in $n$-doped (In,Ga)As/GaAs quantum dots the low-temperature $P\rightarrow S$ relaxation time has been extracted from pump-probe infra-red spectroscopy and is in the range of $20$-$65\;{\rm ps}$ in the experiments of Zibik {\em et al.} (Ref. \onlinecite{zibik_PRB_2004}) and $40$-$70\;{\rm ps}$ in the experiments of Sauvage {\em et al.} (Ref. \onlinecite{sauvage_PRL_2002}). In the latter, the room-temperature $P\rightarrow S$ relaxation is $37\;{\rm ps}$ for $\Delta(S-P)\simeq 54.5\;{\rm meV}$. Note that in earlier pump-probe interband absorption experiments at high excitation Sauvage {\em et al.} found a relaxation time of $3\;{\rm ps}$ at room temperature (Ref. \onlinecite{sauvage_APL_1998}). The situation is similar in colloidal dots such as CdSe, where the $P\rightarrow S$ inter-shell relaxation in the absence of a hole slows down to $\sim 10\;{\rm ps}$ (Ref. \onlinecite{guyot-sionnest_JCP_2005}), relative to $\sim 1\;{\rm ps}$ when an electron-hole pair is present. 
Several relaxation mechanisms have been proposed as responsible for the fast inter-shell relaxation: multi-phonon emission,\cite{inoshita_PRB_1992} Auger (carrier-carrier) scattering,\cite{bockelmann_PRB_1992,jiang_PhysicaE_1998,ferreira_APL_1999,nielsen_PRB_2004} and polaron relaxation\cite{inoshita_PRB_1997,verzelen_PRB_2000,jacak_PRB_2002,seebeck_PRB_2005}. (We discuss the Auger and polaron models in Sec. \ref{Auger+polaron}.) \subsection{Thermal escape of carriers from the dot} Upon increasing temperature, the photoexcited electron and hole escape the confined states of the dot [Fig. \ref{Fig_1}(d)]. Thermal depopulation has been found to be significant at temperatures $T>100\;{\rm K}$.\cite{de_giorgi_APL_2001,urayama_APL_2002,norris_JPD_2005} However, Heitz and co-workers have found the onset to be at $200\;{\rm K}$.\cite{heitz_PhysicaB_1999} In {\em n}-doped InAs/GaAs dots, Bras and co-workers showed that thermal depopulation becomes significant above $70\;{\rm K}$ (Ref. \onlinecite{bras_APL_2002}). \section{Auger and polaron mechanisms for $P\rightarrow S$ inter-shell decay} \label{Auger+polaron} \subsection{Auger relaxation via electron-hole scattering} Figure \ref{Fig_1}(c) illustrates this process, whereby the hot electron decays by scattering a low-lying photoexcited hole into deep hole states like $h_k$. Scattering takes place via the electron-hole Coulomb interaction, so this relaxation process does not take place in the absence of a photoexcited hole. For the mechanism to be effective it requires energy conservation: The excess energy of the electron has to be elastically transferred to the hole [as sketched in Fig. \ref{Fig_1}, where ${\cal E}^{\,(e)}_1-{\cal E}^{\,(e)}_0={\cal E}^{\,(h)}_0-{\cal E}^{\,(h)}_k$]. 
On the other hand, electronic level broadening due to phonons effectively relaxes this stringent condition.\cite{sosnowski_PRB_1998} In (In,Ga)As/GaAs self-assembled quantum dots ${\cal E}^{(e)}_{P}-{\cal E}^{(e)}_S \sim 50\;{\rm meV}$, whereas in CdSe colloidal dots ${\cal E}^{(e)}_{P}-{\cal E}^{(e)}_S \sim 300\;{\rm meV}$. In the latter case the $P\rightarrow S$ decay via the Auger process is highly effective.\cite{wang_PRL_2003,klimov_JPCB_2000,klimov_PRL_1998,guyot-sionnest_PRB_1999} In fact, Hendry {\em et al.} \cite{hendry_PRL_2006} have demonstrated the validity of the electron-hole Auger mechanism for $P\rightarrow S$ relaxation in CdSe dots by measuring directly the hole thermalization time (Sec. II C) versus the electron excess energy. Moreover, in Ref. 48 Guyot-Sionnest and co-workers have shown that in CdSe dots the $P\rightarrow S$ relaxation of electrons is slowed down upon inducing hole trapping at the surface of the dots. This is strong evidence in favor of relaxation due to electron-hole Auger scattering. The effectiveness of the Auger mechanism for $P\rightarrow S$ relaxation in self-assembled dots has been previously addressed within model Hamiltonians only.\cite{jiang_PhysicaE_1998,ferreira_APL_1999} Here it will be calculated by using a fully atomistic approach. When the hole is absent (due to its capture by a hole-quencher, or when only an electron is injected into the dot) the Auger mechanism is not possible. In CdSe colloidal dots the alternative mechanism corresponds to the coupling of the electrons in the dot with virtual phonons of the environment.\cite{guyot-sionnest_JCP_2005} In (In,Ga)As/GaAs self-assembled dots the polaron decay has been proposed instead.\cite{inoshita_PRB_1997,verzelen_PRB_2000,seebeck_PRB_2005} \subsection{Polaron decay for a single excited electron (no hole)} This mechanism has been invoked to explain the electron relaxation to state $e_0$ in the {\em absence} of a hole. 
The confined electron states are assumed to be strongly coupled with the continuum of states arising from the phonon replicas of the localized states (e.g. $S$, $P$), thereby forming stable polaron states. In turn, these polaron states relax when the phonon component of the polaron relaxes due to phonon anharmonicity.\cite{verzelen_PRB_2000} Thus, assuming that the phonon component of the polaron originates from LO phonons, the phonon-bottleneck is circumvented by the emission of an LO and a TA phonon. This mechanism requires that the $P$-$S$ energy difference be of the order of the zone-center optical phonon energy. In colloidal dots ${\cal E}(P)-{\cal E}(S)\sim 200$-$500\;{\rm meV}$ for electrons while $\hbar\omega_{LO}\sim 30\;{\rm meV}$, so the polaron decay mechanism is not possible. On the other hand, for holes in colloidal dots ${\cal E}(P)-{\cal E}(S)\sim 10$-$30\;{\rm meV}$, which would make the polaron decay possible. In (In,Ga)As/GaAs self-assembled dots, ${\cal E}(P)-{\cal E}(S)\sim 50\;{\rm meV}$ for electrons and ranges from $5$-$20\;{\rm meV}$ for holes while $\hbar\omega_{LO}\sim 30\;{\rm meV}$, thus making the polaron decay feasible. In the case of the inter-shell $P\rightarrow S$ transition in (In,Ga)As/GaAs, the polaron state has been predicted to relax within a few picoseconds,\cite{verzelen_PRB_2000} leaving the excited electron in the $S$ state. This model explains the observed relaxation times in the absence of a hole (Sec. \ref{inter-shell_decay}).\cite{note-100} Further data that have been taken as evidence of the polaron model in (In,Ga)As/GaAs dots correspond to the anticrossings in the energies of allowed magneto-photoluminescence transitions as the field is swept.\cite{hameau_PRL_1999} The magnitude of the anticrossings ($\sim 3\;{\rm meV}$) present in the spectra is consistent with those predicted by the polaron model (Ref. \onlinecite{hameau_PRL_1999}). 
We note that in low-symmetry dots all states have the same $a_1$-symmetry even without phonon displacements, and therefore they would anticross in the presence of a magnetic field. Whether the reason for lowering the symmetry to $a_1$ is phonon coupling or simply the correct atomistic dot symmetry of the non-vibrating dot remains to be determined. \section{Calculation of Auger cooling due to electron-hole scattering} \label{Auger_calculation} We have calculated the Auger cooling lifetime of electrons in In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dots within a pseudopotential-based atomistic approach\cite{zunger_pssb_2001} in order to establish whether this mechanism leads to $P\rightarrow S$ decay times of the magnitude needed to explain low-excitation experiments in which a photoexcited hole is present. \subsection{Method of calculation} \label{method} We begin by calculating the single-particle ladders $\{e_0,\,e_1,\,e_2,\dots\}$ and $\{h_0,\,h_1,\,h_2,\dots\}$ of electron and hole states, respectively, of the (In,Ga)As/GaAs quantum dot. The wave function $\psi_j$ and energy ${\cal E}_j$ of these states are solutions of the atomistic single-particle Schr\"odinger equation \begin{equation} \label{SP.Shrodinger} \{-\frac{1}{2}\nabla^2+V_{SO}+\sum_{l,\alpha}\,v_{\alpha}({\bf R}-{\bf R}_{l,\alpha})\}\psi_j={\cal E}_j\,\psi_j. \end{equation} \noindent Here, the actual potential of the solid (dot+GaAs barrier) is described by a superposition of (semiempirical) screened pseudopotentials $v_{\alpha}$ for an atom of type $\alpha$ (In,Ga,As) with position ${\bf R}_{l,\alpha}$ within the dot or barrier, and a non-local pseudopotential $V_{SO}$ that accounts for the spin-orbit interaction.\cite{williamson_PRB_2001} To solve Eq. 
(\ref{SP.Shrodinger}), we expand $\psi_j$ in a linear combination of Bloch bands $u^{(M)}_{n,{\bf k}}({\bf R})$ of material $M$ (InAs, GaAs), with wave vector {\bf k} and band index $n$, subjected to strain $\tilde\varepsilon$:\cite{wang_PRB_1999} \begin{equation} \psi_j({\bf R})=\sum_{M}\sum_{n,{\bf k}}\,C^{(j)}_{n,{\bf k};M}\,u^{(M)}_{n,{\bf k};\varepsilon}({\bf R}) . \end{equation} \noindent This expansion has a major advantage over a plane-wave expansion: The Bloch bands $u^{(M)}_{n,{\bf k};\varepsilon}({\bf R})$ can be intuitively chosen, which reduces the computational demand significantly.\cite{wang_PRB_1999} To calculate the electron Auger cooling lifetime $\tau(P\rightarrow S)$ due to electron-hole scattering at low temperatures, we proceed in two steps. \subsubsection{Calculation of the Auger scattering rates for individual electron-hole configurations} We consider as initial electron-hole configurations $|e_{i}h_j\rangle$ those corresponding to the electron in the $P$-shell states $\{e_1,e_2\}$ and the hole in low-lying states $h_j$; and as the final scattering states those that correspond to an electron occupying the $S$-shell state $e_0$ and a hole in a deep state $h_k$ [Fig. \ref{Fig_1}(c)], i.e., $|e_0h_k\rangle$. Then, we calculate the net, characteristic Auger scattering rate of the transition $|e_{i}\rangle\rightarrow|e_0\rangle$ ($i=1,2$), with a hole in state $h_{j}$, by using Fermi's golden rule: \begin{equation} \label{Eq_1} \frac{1}{\tau_{j}(e_{i}\rightarrow e_0)}=\frac{2\pi}{\hbar}\sum_{k}\,|J^{\,(eh)}_{ij;0k}|^2\,\delta[E(i;j)-E(0;k)]. 
\end{equation} \noindent Here, $E(i;j)$ and $E(0;k)$ correspond to the many-particle energies of the initial and final states, respectively, calculated at the single-configuration level of approximation.\cite{note_02} The electron-hole Coulomb scattering matrix elements $J^{\,(eh)}_{ij;0k}$ are given by \begin{equation} \label{Coulomb_integrals} J^{\,(eh)}_{ij;0k}=\int\int\,d{\bf R}d{\bf R}'\, \frac{[\psi^{(h)}_{j}({\bf R})]^{*}[\psi^{(e)}_{i}({\bf R}')]^{*}\psi^{(e)}_{0}({\bf R}')\psi^{(h)}_{k}({\bf R})} {\epsilon({\bf R},{\bf R}')|{\bf R}-{\bf R}'|}, \end{equation} \noindent where $\epsilon({\bf R},{\bf R}')$ is the microscopic dielectric function derived by Resta.\cite{resta_PRB_1977} Note that in the actual computations we introduce a phenomenological broadening $\Gamma$ of the final states that allows us to replace $\delta(x)$ in Eq. (\ref{Eq_1}) with a Gaussian function $(\Gamma\sqrt{2\pi})^{-1}\,\exp(-x^2/2\Gamma^2)$. One should understand $\Gamma$ as a phenomenological way to account for the phonon-induced finite lifetime (phonon broadening) $\tau_h$ of the excited single-particle hole states: $\Gamma\sim 2\pi\hbar/\tau_h$. Considering that experimentally the relaxation of a hole in the wetting layer to $h_0$ takes about $0.6\;{\rm ps}$ (Ref. \onlinecite{sosnowski_PRB_1998}), we estimate a lower bound for $\Gamma$ of $10\;{\rm meV}$. The phenomenological parameter $\Gamma$ has been used in previous calculations (Refs. \onlinecite{jiang_PhysicaE_1998} and \onlinecite{wang_PRL_2003}). Figure \ref{Fig_2} shows the characteristic Auger relaxation lifetime $\tau_{h_0}(e_1\rightarrow e_0)$ calculated for two values of $\Gamma$ in two lens-shaped In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dots---D1 and D2---of size ($252\;$\AA, $35\;$\AA). These dots differ only in the random alloy disorder realization. 
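To make the rate evaluation concrete, the Gaussian-broadened golden-rule sum of Eq. (\ref{Eq_1}) can be sketched as follows. This is a minimal Python sketch under stated assumptions: the matrix elements and configuration energies passed to it are hypothetical placeholders (in meV), not the atomistically computed values used in this work.

```python
import math

HBAR_MEV_S = 6.582119569e-13  # hbar in meV*s

def gaussian(x, gamma):
    """Normalized Gaussian replacing delta(x); width gamma in meV."""
    return math.exp(-x**2 / (2.0 * gamma**2)) / (gamma * math.sqrt(2.0 * math.pi))

def auger_rate(J, E_init, E_final, gamma):
    """Golden-rule Auger rate (1/s) for one initial configuration.

    J[k]: Coulomb matrix elements J^(eh)_{ij;0k} (meV, placeholders);
    E_init: energy E(i;j) of |e_i h_j> (meV);
    E_final[k]: energies E(0;k) of the final configurations |e_0 h_k>."""
    s = sum(abs(Jk) ** 2 * gaussian(E_init - Ek, gamma)
            for Jk, Ek in zip(J, E_final))
    return 2.0 * math.pi / HBAR_MEV_S * s
```

The characteristic lifetime is the inverse of this rate; final hole states within roughly $\Gamma$ of resonance dominate the sum, which is why the density of deep hole states matters so much for $\tau(P\rightarrow S)$.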
For a phenomenological broadening $\Gamma=5\,{\rm meV}$, $\tau_{D1}(P\rightarrow S) \sim 20\;{\rm ps}$ and $\tau_{D2}(P\rightarrow S)\sim 35\;{\rm ps}$. This large difference shows that $\tau_{h_0}(e_1\rightarrow e_0)$ depends sensitively on the energy structure of the final states. For a more plausible value of the broadening, $\Gamma=10\;{\rm meV}$, $\tau_{h_0}(e_1\rightarrow e_0)\sim 5\;{\rm ps}$ for both dots.\cite{note-200} In addition, we find that $\tau_{h_0}(e_1\rightarrow e_0) \simeq \tau_{h_0}(e_2\rightarrow e_0)$; D2 presents a difference of $1.5\;{\rm ps}$ between these lifetimes. We also show, for comparison, $\tau_{h_0}(e_1\rightarrow e_0)$ for dot D1 under a hydrostatic pressure of $2.4\;{\rm GPa}$. Because this pressure does not change significantly the intraband energy structure of the confined states, but primarily increases the localization of their wave functions,\cite{narvaez_PRB_2005a} the characteristic relaxation lifetime is smaller than at ambient pressure. \begin{figure} \includegraphics[width=8.5cm]{./Fig_2.eps} \caption{{\label{Fig_2}}Electronic Auger cooling characteristic lifetime $\tau_{h_0}(e_1\rightarrow e_0)$ calculated with two phenomenological broadenings---$\Gamma=5\;{\rm meV}$ and $10\;{\rm meV}$---for dots of the same size (b,h)=($252\;$\AA,$35\;$\AA). Dots D1 (open squares) and D2 (solid circles) correspond to different random alloy disorder realizations.} \end{figure} \subsubsection{Solution of the rate equations describing the $P\rightarrow S$ electron relaxation} Once we have calculated the characteristic times $\tau_{j}(e_{i}\rightarrow e_0)$, we notice that (i) at low temperatures ($k_BT \ll {\cal E}^{(h)}_{1}-{\cal E}^{(h)}_{0}$) there are two relevant initial electron-hole configurations $|1\rangle=|e_1h_0\rangle$ and $|2\rangle=|e_2h_0\rangle$ that decay to a single scattering configuration $|s\rangle=|e_0h_k\rangle$. 
(ii) In addition, due to the $P\rightarrow P$ intra-shell relaxation, configuration $|2\rangle$ decays to $|1\rangle$ with a relaxation time $\tau(e_2\rightarrow e_1)=\tau_{21}$ between $15\,{\rm ps}$ and $35\,{\rm ps}$.\cite{zibik_PRB_2004} Thus, we find the time-dependent occupations $n_1$, $n_2$, and $n_s$ by numerically solving the following set of rate equations: \begin{eqnarray} \frac{d n_1}{d t}&=& -[\gamma^{(+)}+[\tau_{h_0}(e_1\rightarrow e_0)]^{-1}]n_1+\gamma^{(-)}n_2 \nonumber\\ \frac{d n_2}{d t}&=& -[\gamma^{(-)}+[\tau_{h_0}(e_2\rightarrow e_0)]^{-1}]n_2+\gamma^{(+)}n_1 \nonumber\\ \frac{d n_s}{d t}&=&[\tau_{h_0}(e_1\rightarrow e_0)]^{-1}n_1+[\tau_{h_0}(e_2\rightarrow e_0)]^{-1}n_2 \end{eqnarray} \noindent with initial conditions taken to be $n_1(0)=n_2(0)=1/2$ and $n_s(0)=0$. These initial conditions reflect the fact that an electron captured into the dot is equally likely to occupy $P_1$ or $P_2$ (see Sec. \ref{inter-shell_decay}). Here, $\gamma^{(+)}$ and $\gamma^{(-)}$ are the rates of the transitions $|1\rangle\rightarrow|2\rangle$ and $|2\rangle\rightarrow|1\rangle$, respectively, with \begin{equation} \gamma^{(+)}=\frac{1}{\tau_{21}}[\exp(\Delta E/k_B T)-1]^{-1} \end{equation} \noindent and \begin{equation} \gamma^{(-)}=\frac{1}{\tau_{21}}[1+(\exp(\Delta E/k_B T)-1)^{-1}]; \end{equation} \noindent where $\Delta E=E(2;0)-E(1;0)$. Finally, we extract the electron Auger relaxation time $\tau(P\rightarrow S)$ by fitting the time-dependence of the occupation probability $n_s$ to the expression $1-\exp[-t/\tau(P\rightarrow S)]$. For the characteristic times $\tau_{h_0}(e_1\rightarrow e_0)$ and $\tau_{h_0}(e_2\rightarrow e_0)$ calculated with $\Gamma=10\;{\rm meV}$, and $\tau_{21}=15\;{\rm ps}$, the fit is excellent. \begin{figure} \includegraphics[width=8.5cm]{./Fig_3.eps} \caption{{\label{Fig_3}}(Color online.) Auger cooling lifetime $\tau(P\rightarrow S)$ {\em vs} temperature for seven lens-shaped quantum dots of different sizes. 
The pair (b,h) indicates the base diameter and height of the dots. Data from Refs. [\onlinecite{norris_JPD_2005,siegert_PRB_2005,muller_APL_2003,sosnowski_PRB_1998}] are also shown.} \end{figure} \subsection{Predicted $\tau(P\rightarrow S)$ and comparison with data} Figure \ref{Fig_3} shows $\tau(P\rightarrow S)$ versus temperature for lens-shaped dots of different sizes [{\rm (base,$\,$height)}]. In these calculations the broadening $\Gamma=10\,{\rm meV}$ is larger than the average energy spacing of the relevant final states, and $\tau_{21}=15\;{\rm ps}$. Two features are prominent. (i) $\tau(P\rightarrow S)$ decreases with both increasing height at a fixed base and increasing base at a fixed height. (ii) The Auger cooling lifetime of the ($150\;$\AA,$75\;$\AA) dot is similar to that of the dots with size ($252\;$\AA,$35\;$\AA) due to their similar single-configuration exciton gap (see below). {\em Comparison with data:} In Fig. \ref{Fig_3} we also show data extracted from differential transmission spectroscopy experiments (Ref. \onlinecite{norris_JPD_2005}) and time-resolved photoluminescence experiments (Refs. [\onlinecite{siegert_PRB_2005,muller_APL_2003,sosnowski_PRB_1998}]) in (In,Ga)As/GaAs dots; these data appear as squares and diamonds. A comparison with our calculated values shows the following. (i) We find excellent agreement between our calculated $\tau(P\rightarrow S)$ for the ($252\;$\AA,$35\;$\AA) dot D1 and the value of $5.2\;{\rm ps}$ found by Sosnowski and co-workers in differential transmission spectroscopy in (In,Ga)As/GaAs dots with gap of $1.265\;{\rm eV}$ (Ref. \onlinecite{sosnowski_PRB_1998}). Dot D2 and the dot with size ($150\;$\AA, $75\;$\AA) also compare well with experiment. (ii) The value of $2.5\;{\rm ps}$ for $\tau(P\rightarrow S)$ at $5\;{\rm K}$ (Fig. \ref{Fig_3}) in InAs/GaAs dots with energy gap of $1.08\;{\rm eV}$ that has been derived by M\"uller {\em et al.} (Ref. 
\onlinecite{muller_APL_2003}) from pump-probe intraband spectroscopy is in satisfactory agreement with our predicted values for ($252\;$\AA, $50\;$\AA), ($252\;$\AA, $65\;$\AA), and ($200\;$\AA, $75\;$\AA) dots. (iii) Our results for the flat dots ($h=20\;${\AA} and $35\;${\AA}) compare well with the $\tau(P\rightarrow S)$ data of Norris {\em et al.} (Ref. \onlinecite{norris_JPD_2005}) at low temperatures. The data of Siegert {\em et al.} (Ref. \onlinecite{siegert_PRB_2005}) below $100\;{\rm K}$ are comparable to our low-temperature predicted values. Note that Norris {\em et al.} have found that above $100\;{\rm K}$ thermal escape of carriers [Fig. \ref{Fig_1}(e)] is important, which explains the large abrupt reduction of the Auger decay time seen in the data.\cite{norris_JPD_2005} \begin{figure} \includegraphics[width=8.5cm]{./Fig_4.eps} \caption{{\label{Fig_4}}Calculated Auger-cooling lifetime $\tau(P\rightarrow S)$ at $T=10\;{\rm K}$ versus the single-configuration exciton gap for several lens-shaped quantum dots.} \end{figure} \subsection{Trend of $\tau(P\rightarrow S)$ with exciton gap} Figure \ref{Fig_4} shows the calculated low-temperature ($10\,{\rm K}$) Auger relaxation lifetime as a function of the dot exciton gap for several In$_{0.6}$Ga$_{0.4}$As/GaAs quantum dots.\cite{note_01} Two important features emerge: (i) We find that $\tau(P\rightarrow S)$ ranges from $1$ to $7\;{\rm ps}$ and decreases as the gap of the dots decreases. As the $S$-$P$ splittings of the lens-shaped dots are nearly the same, we attribute the reduction of $\tau(P\rightarrow S)$ to the {\em increase} of the joint density of states \begin{equation} \label{JDOS} g[E(i_e;i_h)]=\sum_{k}\delta[E(i_e;i_h)-E(0;k)] \end{equation} \noindent that takes place as the gap of the dot {\em decreases}, due to the increase in the density of single-particle hole states. \subsection{Comparison with other calculations for (In,Ga)As/GaAs dots} We have compared our results with two {\em model} calculations.
(i) The 8-band ${\bf k}\cdot {\bf p}$ calculation of Jiang and Singh (Ref. \onlinecite{jiang_PhysicaE_1998}) and (ii) the parabolic, single-band effective-mass calculation of Ferreira and Bastard (Ref. \onlinecite{ferreira_APL_1999}). Our results agree well with the calculation in (i). Namely, Jiang and Singh show an increase of the characteristic Auger cooling lifetime with decreasing $\Gamma$. In addition, the results of Jiang and Singh compare satisfactorily (within a factor of two) with the value of $\tau(P\rightarrow S)$ observed by Sosnowski {\em et al.}\cite{sosnowski_PRB_1998} A direct comparison with (ii) is not fully applicable since Ferreira and Bastard consider different initial states than those considered here (Sec. \ref{method}). In particular, the starting electron-hole pair states correspond, in our language, to $|e_1h_1\rangle$ and $|e_1h_2\rangle$. However, it is interesting to see that Ferreira and Bastard find that the Auger-cooling lifetime lies between $0.1$ and $6\;{\rm ps}$. Moreover, depending on the choice of initial e-h states, this lifetime either increases as the gap decreases (in contrast to our predictions; Fig. \ref{Fig_4}) or vice versa. \subsection{Digression: Comparison with calculations and data for CdSe colloidal dots} Wang {\em et al.}\cite{wang_PRL_2003} have calculated $\tau(P\rightarrow S)$ for CdSe colloidal dots using the same methodology as in this paper---a pseudopotential-based atomistic approach---finding relaxation times of $0.6\;{\rm ps}$ and $0.2\;{\rm ps}$ for dots with radii of $29\;${\AA} and $38\;${\AA}, respectively. These results show that in contrast to In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs dots, $\tau(P\rightarrow S)$ increases with decreasing dot gap. Moreover, for In$_{0.6}$Ga$_{0.4}$As/GaAs dots, we predict $\tau(P\rightarrow S)$s that are about a factor of 10 slower.
The ${\bf k}\cdot{\bf p}$-based calculation of Efros and co-workers\cite{efros_SSC_1995} predicts Auger decay lifetimes in CdSe colloidal dots of $\sim 2\;{\rm ps}$ almost independently of dot size for radii between $20\;${\AA} and $40\;${\AA}. While the magnitude of $\tau(P\rightarrow S)$s that we find in (In,Ga)As/GaAs is comparable to that of Efros and co-workers, the gap dependence is strikingly different. On the other hand, bleaching experiments in CdSe colloidal quantum dots show that the Auger cooling lifetime of electrons is below a picosecond and {\em decreases} as the exciton gap {\em increases}.\cite{klimov_JPCB_2000} [Note that the calculations of Wang and co-workers\cite{wang_PRL_2003} reproduce these experimental findings.] We predict that $\tau(P\rightarrow S)\sim 1$--$7\;{\rm ps}$ in (In,Ga)As/GaAs self-assembled quantum dots and that it shows the opposite gap dependence [Fig. \ref{Fig_4}]. The gap dependence of $\tau(P\rightarrow S)$ in both colloidal and self-assembled dots is dictated by the gap (size) dependence of (i) the joint density of states [Eq. (\ref{JDOS})] and (ii) the magnitude of the Coulomb scattering integrals [Eq. (\ref{Coulomb_integrals})]. While in (In,Ga)As/GaAs self-assembled dots the changes with size in the joint density of states prevail, in CdSe colloidal dots the changes of the Coulomb integrals dictate the gap dependence of $\tau(P\rightarrow S)$. \section{Summary} We have discussed several dynamical processes that photoexcited electrons and holes undergo in (In,Ga)As/GaAs self-assembled quantum dots, and calculated the inter-shell $P$-to-$S$ electron decay lifetime in (In,Ga)As/GaAs self-assembled dots due to Auger electron-hole scattering. When only an electron (or only a hole) is present due to doping and this sole carrier is excited by a photon, its decay must involve a non-Auger mechanism (perhaps polaron decay).
But when both an electron and a hole are present, we show that Auger cooling takes place within picoseconds, which makes it an efficient inter-shell relaxation process compared to radiative recombination ($\sim {\rm ns}$). In addition, we predict that the lifetime $\tau (P\rightarrow S)$ increases with the exciton gap. Our pseudopotential-based calculations confirm earlier predictions of simplified, {\em model} calculations. The values we find for $\tau (P\rightarrow S)$ compare well with recent data in the presence of photoexcited holes. This finding, complemented by our review of the data in the literature, allows us to conclude that in the presence of a photoexcited hole there is no need to invoke the alternative polaron-decay mechanism for inter-shell electron relaxation. This conclusion could be tested in (In,Ga)As/GaAs dots by measuring the rate of hole thermalization versus the electron excess energy, or by measuring the electron relaxation rate after modifying the surface of the dot so as to cause hole trapping. Finally, a consistent picture of electron relaxation within quantum dots appears to demand two relaxation mechanisms: electron-hole Auger scattering and polaron decay. \begin{acknowledgments} The authors thank Alberto Franceschetti (NREL) for useful discussions. This work was funded by U.S. DOE-SC-BES-DMS under Contract No. DE-AC36-99GO10337 to NREL. \end{acknowledgments}
\section{Introduction} \label{sec:intro} \begin{figure*}[tp] \centering \includegraphics[width=0.99\linewidth]{figs/main.png} \caption{We propose new pairwise pixel loss functions that capture the spatial structure of segmentation. Given an image ({\bf a}), the task is to predict the ground-truth labeling ({\bf b}). When a deep neural net is trained with conventional softmax cross-entropy loss on individual pixels, the predicted segmentation ({\bf c}) is often based on visual appearance and oblivious of the spatial structure of each semantic class. Our work imposes an additional pairwise pixel label affinity loss ({\bf d}), matching the label relations among neighbouring pixels between the prediction and the ground-truth. We also learn the neighbourhood size for each semantic class, and our adaptive affinity fields result ({\bf e}) picks out both large bicycle shields and thin spokes of round wheels. } \label{fig:main} \end{figure*} Semantic segmentation of an image refers to the challenging task of assigning each pixel a categorical label, e.g., {\it motorcycle} or {\it person}. Segmentation performance is often measured in a pixel-wise fashion, in terms of mean Intersection over Union (mIoU) across categories between the ground-truth (\fig{main}b) and the predicted label map (\fig{main}c). Much progress has been made on segmentation with convolutional neural nets (CNN), mostly due to increasingly powerful pixel-wise classifiers, \eg VGG-16~\cite{simonyan2014very,long2015fully} and ResNet~\cite{he2016deep,wu2016high}, with the convolutional filters optimized by minimizing the average pixel-wise classification error over the image.
Even with big training data and with deeper and more complex network architectures, pixel-wise classification based approaches fundamentally lack the spatial discrimination power when foreground pixels and background pixels are close or mixed together: Segmentation is poor when the visual evidence for the foreground is weak, \eg glass motorcycle shields, or when the spatial structure is small, \eg thin radial spokes of all the wheels (\fig{main}c). There have been two main lines of effort at incorporating structural reasoning into semantic segmentation: Conditional Random Field (CRF) methods~\cite{krahenbuhl2011efficient,zheng2015conditional} and Generative Adversarial Network (GAN) methods \cite{goodfellow2014generative,luc2016semantic}. \begin{enumerate} \setlength{\parskip}{0pt}\setlength{\leftskip}{-0.5em}\setlength{\itemsep}{1mm} \item CRF enforces label consistency between pixels measured by the similarity in visual appearance (\eg raw pixel value). An optimal labeling is solved via message passing algorithms~\cite{chen2015learning,liu2017deep}. CRF is employed either as a post-processing step~\cite{krahenbuhl2011efficient,chen2016deeplab}, or as a plug-in module inside deep neural networks~\cite{zheng2015conditional,liu2015semantic}. Aside from its time-consuming iterative inference routine, CRF is also sensitive to visual appearance changes. \item GAN is a recent alternative for imposing structural regularity in the neural network output. Specifically, the predicted label map is tested by a discriminator network on whether it resembles ground truth label maps in the training set. GAN is notoriously hard to train, particularly prone to model instability and mode collapses \cite{radford2015unsupervised}. \end{enumerate} We propose a simpler approach, by learning to verify the spatial structure of segmentation {\it during training only}.
Instead of enforcing semantic labels on individual pixels and matching labels between neighbouring pixels using CRF or GAN, we propose the concept of {\it Adaptive Affinity Fields (AAF)} to capture and match the relations between neighbouring pixels in the label space. How the semantic label of each pixel is related to those of neighbouring pixels, \eg whether they are {\it same or different}, provides a distributed and pixel-centric description of semantic relations in the space and collectively they describe {\it Motorcycle wheels are round with thin radial spokes}. We develop new affinity field matching loss functions to learn a CNN that automatically outputs a segmentation respectful of spatial structures and small details. The pairwise pixel affinity idea has deep roots in perceptual organization, where local affinity fields have been used to characterize the intrinsic geometric structures in early vision~\cite{poggio1985early}, the grouping cues between pixels for image segmentation via spectral graph partitioning \cite{shi2000normalized}, and the object hypothesis for non-additive score verification in object recognition at the run time \cite{amir1998grouping}. Technically, affinity fields at different neighbourhood sizes encode structural relations at different ranges. Matching the affinity fields at a fixed size would not work well for all semantic categories, \eg thin structures are needed for {\it persons} seen at a distance whereas large structures are for {\it cows} seen close-up. One straightforward solution is to search over a list of possible affinity field sizes, and pick the one that yields the minimal affinity matching loss. However, such a practice would result in selecting trivial sizes which are readily satisfied.
For example, for large uniform semantic regions, the optimal affinity field size would be the smallest neighbourhood size of 1, and any pixel-wise classification would already get them right without any additional loss terms in the label space. We propose \textit{adversarial learning} for size-adapted affinity field matching. Intuitively, we select the right size by pushing the affinity field matching with different sizes to the extreme: Minimizing the affinity loss should be hard enough to have a real impact on learning, yet it should still be easy enough for the network to actually improve segmentation towards the ground-truth, {\it i.e.,} a best worst-case learning scenario. Specifically, we formulate our AAF as a $minimax$ problem where we simultaneously {\it maximize} the affinity errors over multiple kernel sizes and {\it minimize} the overall matching loss. Consequently, our adversarial network learns to assign a smaller affinity field size to {\it person} than to {\it cow}, as the person category contains finer structures than the cow category. Our AAF has a few appealing properties over existing approaches (Table~\ref{tab:comparison}). \begin{table}[h] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c|c|c|c} \Xhline{1pt} ~{\bf Method}~ & ~{\bf Structure Guidance}~ & ~{\bf Training}~ & ~{\bf Run-time Inference}~ & ~{\bf Performance}~ \\ \hline \hline ~CRF~\cite{krahenbuhl2011efficient}~ & input image & medium & yes & 76.53 \\ \hline ~GAN~\cite{goodfellow2014generative}~ & ground-truth labels & hard & no & 76.20 \\ \hline Our ~AAF~ & ~label affinity ~ & easy & no & {\bf 79.24} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Key differences between our method and other popular structure modeling approaches, namely CRF~\cite{krahenbuhl2011efficient} and GAN~\cite{goodfellow2014generative}. The performance (\% mIoU) is reported with PSPNet~\cite{zhao2016pyramid} architecture on the Cityscapes~\cite{Cordts2016Cityscapes} validation set. 
} \label{tab:comparison} \end{table} \begin{enumerate} \setlength{\parskip}{0pt}\setlength{\leftskip}{-0.5em}\setlength{\itemsep}{1mm} \item It provides a {\bf versatile representation} that encodes spatial structural information in distributed, pixel-centric relations. \item It is {\bf easier to train} than GAN and {\bf more efficient} than CRF, as AAF only impacts network learning during training, requiring no extra parameters or inference processes during testing. \item It is {\bf more generalizable to visual domain changes}, as AAF operates on the label relations, not on the pixel values, capturing desired intrinsic geometric regularities despite visual appearance variations. \end{enumerate} We demonstrate its effectiveness and efficiency with extensive evaluations on Cityscapes~\cite{Cordts2016Cityscapes} and PASCAL VOC 2012~\cite{everingham2010pascal} datasets, along with its remarkable generalization performance when our learned networks are applied to the GTA5 dataset~\cite{Richter_2016_ECCV}. \section{Related Works} \label{sec:work} Most methods treat semantic segmentation as a pixel-wise classification task, and those that model structural correlations provide a small gain at a large computational cost. \noindent \textbf{Semantic Segmentation.} Since the introduction of fully convolutional networks for semantic segmentation~\cite{long2015fully}, deeper~\cite{wu2016high,zhao2016pyramid,li2017not} and wider~\cite{noh2015learning,ronneberger2015u,yu2015multi} network architectures have been explored, drastically improving the performance on benchmarks such as PASCAL VOC~\cite{everingham2010pascal}. For example, Wu {\it et al. } \cite{wu2016high} achieved higher segmentation accuracy by replacing backbone networks with the more powerful ResNet~\cite{he2016deep}, whereas Yu {\it et al. } \cite{yu2015multi} tackled fine-detailed segmentation using atrous convolutions.
While the performance gain in terms of mIoU is impressive, these pixel-wise classification based approaches fundamentally lack the spatial discrimination power when foreground and background pixels are close or mixed together, resulting in unnatural artifacts in Fig.~\ref{fig:main}c. \noindent \textbf{Structure Modeling.} Image segmentation has highly correlated outputs among the pixels. Formulating it as an independent pixel labeling problem not only makes the pixel-level classification unnecessarily hard, but also leads to artifacts and spatially incoherent results. Several ways to incorporate structure information into segmentation have been investigated~\cite{krahenbuhl2011efficient,chen2015learning,zheng2015conditional,liu2015semantic,lin2016efficient,bertasius2016convolutional,mostajabi2018regularizing}. For example, Chen {\it et al. } \cite{chen2016deeplab} utilized denseCRF~\cite{krahenbuhl2011efficient} as post-processing to refine the final segmentation results. Zheng {\it et al. } \cite{zheng2015conditional} and Liu {\it et al. } \cite{liu2015semantic} further made the CRF module differentiable within the deep neural network. Pairwise low-level image cues, such as grouping affinity~\cite{maire2016affinity,liu2017learning} and contour cues~\cite{bertasius2016semantic,chen2016semantic}, have also been used to encode structures. However, these methods are sensitive to visual appearance changes, or require expensive iterative inference procedures. Our work provides another perspective to structure modeling by matching the relations between neighbouring pixels in the label space. Our segmentation network learns to verify the spatial structure of segmentation only during training; once it is trained, it is ready for deployment without run-time inference. \section{Our Approach: Adaptive Affinity Fields} \label{sec:method} We first briefly revisit the classic pixel-wise cross-entropy loss commonly used in semantic segmentation. 
The drawbacks of pixel-wise supervision lead to our concept of region-wise supervision. We then describe our region-wise supervision through affinity fields, and introduce an adversarial process that learns an adaptive affinity kernel size for each category. We summarize the overall AAF architecture in Fig.~\ref{fig:architecture}. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{figs/architecture.png} \caption{Method overview: Learning semantic segmentation with adaptive affinity fields. The adaptive affinity fields consist of two parts: the affinity field loss with multiple kernel sizes and corresponding categorical adversarial weightings. Note that the adaptive affinity fields are only introduced during training and there is no extra computation during inference.} \label{fig:architecture} \end{figure} \subsection{From Pixel-wise Supervision to Region-wise Supervision} Pixel-wise cross-entropy loss is most often used in CNNs for semantic segmentation~\cite{long2015fully,chen2016deeplab}. It penalizes pixel-wise predictions independently and is known as a form of {\it unary supervision}. It implicitly assumes that the relationships between pixels can be learned as the effective receptive field increases with deeper layers. Given the predicted categorical probability $\hat{y}_i(l)$ at pixel $i$ \wrt its ground truth categorical label $l$, the total loss is the average over all pixels of the cross-entropy loss at pixel $i$: \begin{equation} \mathcal{L}_\text{unary}^i = \mathcal{L}_\text{cross-entropy}^i = -\log \hat{y}_i(l). \end{equation} Such a unary loss does not take the semantic label correlation and scene structure into account. The objects in different categories interact with each other in a certain pattern. For example, cars are usually on the road while pedestrians are on the sidewalk; buildings are surrounded by the sky but never on top of it.
Also, some shapes of a certain category occur more frequently, such as rectangles in trains, circles in bikes, and straight vertical lines in poles. These inter-class and intra-class pixel relationships are informative and can be integrated into learning as structure reasoning. We are thus inspired to propose an additional region-wise loss to impose penalties on inconsistent unary predictions and encourage the network to learn such intrinsic pixel relationships. Region-wise supervision extends its pixel-wise counterpart from independent pixels to neighbourhoods of pixels, {\it i.e.,} the region-wise loss considers a patch of predictions and ground truth jointly. Such region-wise supervision $\mathcal{L}_\text{region}$ involves designing a specific loss function for a patch of predictions $\mathcal{N}(\hat{y_i})$ and the corresponding patch of ground truth $\mathcal{N}({y_i})$ centered at pixel $i$, where $\mathcal{N}(\cdot)$ denotes the neighbourhood. The overall objective is hence to minimize the combination of unary and region losses, balanced by a constant $\lambda$: \begin{equation} S^* = \argmin_S \mathcal{L} = \argmin_S \frac{1}{n}\sum_i\Big( \mathcal{L}_\text{unary}^i(\hat{y_i}, y_i) + \lambda \mathcal{L}_\text{region}^i\big(\mathcal{N}(\hat{y_i}), \mathcal{N}(y_i)\big) \Big), \end{equation} where $n$ is the total number of pixels. We omit index $i$ and averaging notations for simplicity in the rest of the paper. The benefits of the addition of region-wise supervision have been explored in previous works. For example, Luc {\it et al. } \cite{luc2016semantic} exploited GAN~\cite{goodfellow2014generative} as structural priors, and Mostajabi {\it et al. } \cite{mostajabi2018regularizing} pre-trained an additional auto-encoder to inject structure priors into training the segmentation network. However, their approaches require much hyper-parameter tuning and are prone to overfitting, resulting in very small gains over strong baseline models.
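A minimal NumPy sketch of the combined objective in Eqn. (2); the function names and toy inputs are our own, and the region term is passed in precomputed:

```python
import numpy as np

def unary_loss(y_hat, y):
    # Average pixel-wise cross-entropy: -log of the predicted
    # probability of each pixel's ground-truth class (Eqn. (1)).
    n = y_hat.shape[0]
    return -np.mean(np.log(y_hat[np.arange(n), y]))

def combined_loss(y_hat, y, region_loss, lam=1.0):
    # Eqn. (2): unary loss plus a lambda-weighted region loss.
    return unary_loss(y_hat, y) + lam * region_loss

# Toy example: two pixels, two classes, perfect predictions,
# so the unary term vanishes and only the region term remains.
y_hat = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1])
total = combined_loss(y_hat, y, region_loss=0.5)
```

With $\lambda=1.0$ (the value used in our implementation details), the total here reduces to the region term alone.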
Please see Table~\ref{tab:comparison} for a comparison. \subsection{Affinity Field Loss Function} \label{sec:affinity} Our affinity field loss function overcomes these drawbacks and is a flexible region-wise supervision approach that is also easy to optimize. The use of pairwise pixel affinity has a long history in image segmentation~\cite{shi2000normalized,stella2003multiclass}. The grouping relationships between neighbouring pixels are derived from the image and represented by a graph, where a node denotes a pixel and a weighted edge between two nodes captures the similarity between two pixels. Image segmentation then becomes a graph partitioning problem, where all the nodes are divided into disjoint sets, with maximal weighted edges within the sets and minimal weighted edges between the sets. We define pairwise pixel affinity based not on the image, but on the ground-truth label map. There are two types of label relationships between a pair of pixels: whether their labels are the same or different. If pixel $i$ and its neighbour $j$ have the same categorical label, we impose a grouping force which encourages network predictions at $i$ and $j$ to be similar. Otherwise, we impose a separating force which pushes apart their label predictions. These two forces are illustrated in Fig.~\ref{fig:details} left.
For pixel $i$ and its neighbour $j$, depending on whether the two pixels belong to the same category $c$ in the ground-truth label map $y$, we define a non-boundary term $\mathcal{L}_{\text{affinity}}^{i\bar{b}c}$ for the grouping force and a boundary term $\mathcal{L}_{\text{affinity}}^{ibc}$ for the separating force in the prediction map $\hat{y}$: \begin{equation} \mathcal{L}_{\text{affinity}}^{ic} = \begin{cases} \mathcal{L}_{\text{affinity}}^{i\bar{b}c} = D_{KL}(\hat{y}_j(c)|| \hat{y}_i(c)) & \text{if } y_i(c) = y_j(c) \\ \mathcal{L}_{\text{affinity}}^{ibc} = \max\{0, m - D_{KL}(\hat{y}_j(c) || \hat{y}_i(c))\} & \text{otherwise} \end{cases} \end{equation} $D_{KL}(\cdot)$ is the Kullback-Leibler divergence between two Bernoulli distributions $P$ and $Q$ with parameters $p$ and $q$, respectively: $D_{KL}(P||Q)= p\log \frac{p}{q}+\bar{p}\log \frac{\bar{p}}{\bar{q}}$ for the binary distributions $[p,1-p]$ and $[q,1-q]$, where $p,q \in [0,1]$. For simplicity, we abbreviate the notation as $D_{KL}(p||q)$. $\hat{y}_j(c)$ denotes the prediction probability of $j$ in class $c$. The overall loss is the average of $\mathcal{L}_{\text{affinity}}^{ic}$ over all categories and pixels. \noindent {\bf Discussion 1.} Our affinity loss encourages similar network predictions on two pixels of the same ground-truth label, regardless of what their actual labels are. The collection of such pairwise bonds inside a segment ensures that all the pixels achieve the same label. On the other hand, our affinity loss pushes network predictions apart on two pixels of different ground-truth labels, again regardless of what their actual labels are. The collection of such pairwise repulsions helps create clear segmentation boundaries. \noindent {\bf Discussion 2.} Our affinity loss may appear similar to CRF~\cite{krahenbuhl2011efficient} on the pairwise grouping or separating forces between pixels.
However, a crucial difference is that CRF models require iterative inference to find a solution, whereas our affinity loss only impacts the network training with pairwise supervision. A similar perspective is metric learning with contrastive loss~\cite{chopra2005learning}, commonly used in face identification tasks. Our affinity loss works better for segmentation tasks, because it penalizes the network predictions directly, and our pairwise supervision is {\it in addition to} and {\it consistent with} the conventional unary supervision. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{figs/details.png} \caption{{\bf Left:} Our affinity field loss separates predicted probabilities across the boundary and unifies them within the segment. {\bf Right:} The affinity fields can be defined over multiple ranges. Minimizing the affinity loss over different ranges results in trivial solutions which are readily satisfied. Our size-adaptive affinity field loss is achieved with adversarial learning: Maximizing the affinity loss over different kernel sizes selects the most critical range for imposing pairwise relationships in the label space, and our goal is to minimize this maximal loss -- i.e., use the best worst case scenario for most effective training. } \label{fig:details} \end{figure} \subsection{Adaptive Kernel Sizes from Adversarial Learning} \label{sec:aaf} Region-wise supervision often requires a preset kernel size for CNNs, where pairwise pixel relationships are measured in the same fashion across all pixel locations. However, we cannot expect one kernel size to fit all categories, since the ideal kernel size for each category varies with the average object size and the object shape complexity.
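As an illustration of Eqn. (3) from the previous subsection, a NumPy sketch of the pairwise affinity loss for a single class channel and a single pixel pair; the function names are ours, and $m$ follows the margin of $3.0$ used in our experiments:

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-8):
    # KL divergence D_KL(P||Q) between Bernoulli(p) and Bernoulli(q).
    p = np.clip(p, eps, 1.0 - eps)
    q = np.clip(q, eps, 1.0 - eps)
    return p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))

def affinity_loss(p_i, p_j, same_label, m=3.0):
    # Eqn. (3): the grouping (non-boundary) term pulls predictions
    # together for same-label pairs; the hinged separating (boundary)
    # term pushes them at least a margin m apart in KL divergence.
    kl = bernoulli_kl(p_j, p_i)
    return kl if same_label else max(0.0, m - kl)
```

Identical predictions incur zero grouping loss but the full margin penalty on a boundary pair, which is exactly the push-pull behaviour sketched in Fig.~\ref{fig:details} left.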
We propose a size-adaptive affinity field loss function, optimizing the weights over a set of affinity field sizes for each category in the loop: \begin{equation} \mathcal{L}_\text{multiscale} = \sum_c \sum_k w_{ck} \mathcal{L}_\text{region}^{ck} \text{ \ \ s.t. } \sum_k w_{ck} = 1 \text{\ and \ } w_{ck} \ge 0 \end{equation} where $\mathcal{L}_\text{region}^{ck}$ is a region loss defined in Eqn. (2), yet operating on a specific class channel $c$ with kernel size $k\times k$ and a corresponding weighting $w_{ck}$. If we just minimize the affinity loss with size weighting $w$ included, $w$ would likely fall into a trivial solution. As illustrated in Fig.~\ref{fig:details} right, the affinity loss would be minimal if the smallest kernels are highly weighted for non-boundary terms and the largest kernels for boundary terms, since nearby pixels are more likely to belong to the same object and far-away pixels to different objects. Unary predictions based on the image would naturally have such statistics, nullifying any potential effect from our pairwise affinity supervision. To optimize the size weighting without trivializing the affinity loss, we need to push the selection of kernel sizes to the extreme. Intuitively, we need to enforce pixels in the same segment to have the same label prediction as far as possible, and likewise to enforce pixels in different segments to have different predictions as close as possible. We use the best worst case scenario for most effective training. We formulate the adaptive kernel size selection process as optimizing a two-player minimax game: While the segmenter should always attempt to minimize the total loss, the weighting for different kernel sizes in the loss should attempt to maximize the total loss in order to capture the most critical neighbourhood sizes. Formally, we have: \begin{equation} S^* = \argmin_S \max_w \mathcal{L}_\text{unary} + \mathcal{L}_\text{multiscale}.
\end{equation} For our size-adaptive affinity field learning, we separate the non-boundary term $\mathcal{L}_\text{affinity}^{\bar{b}ck}$ and boundary term $\mathcal{L}_\text{affinity}^{bck}$ in Eqn. (3) since their ideal kernel sizes would be different. Our adaptive affinity field (AAF) loss becomes: \begin{align} S^* &= \argmin_S \max_w \mathcal{L}_\text{unary} + \mathcal{L}_\text{AAF},\\ \mathcal{L}_\text{AAF} &= \sum_c \sum_k (w_{\bar{b}ck} \mathcal{L}_\text{affinity}^{\bar{b}ck} + w_{bck} \mathcal{L}_\text{affinity}^{bck}),\\ \nonumber \text{s.t. } \sum_k w_{\bar{b}ck} &= \sum_k w_{bck}= 1 \text{\ and \ } w_{\bar{b}ck},w_{bck} \ge 0. \end{align} \section{Experimental Setup} \label{sec:exp_setup} \subsection{Datasets} \label{subsec:datasets} We compare our proposed affinity fields and AAF with other competing methods on the PASCAL VOC 2012~\cite{everingham2010pascal} and Cityscapes~\cite{Cordts2016Cityscapes} datasets. \noindent \textbf{PASCAL VOC 2012.} \label{sec:voc} The PASCAL VOC 2012~\cite{everingham2010pascal} segmentation dataset contains 20 object categories and one background class. Following the procedure of~\cite{long2015fully,zhao2016pyramid,chen2016deeplab}, we use augmented data with the annotations of~\cite{hariharan2011semantic}, resulting in 10,582, 1,449, and 1,456 images for training, validation, and testing. \noindent \textbf{Cityscapes.} \label{sec:cityscapes} Cityscapes~\cite{Cordts2016Cityscapes} is a dataset for semantic urban street scene understanding. 5000 high-quality, pixel-level finely annotated images are divided into training, validation, and testing sets with 2975, 500, and 1525 images, respectively. It defines 19 categories containing flat, human, vehicle, construction, object, nature, \etc \subsection{Evaluation Metrics} \label{subsec:metrics} All existing semantic segmentation works adopt \textbf{pixel-wise mIoU}~\cite{long2015fully} as their metric.
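For reference, a minimal sketch of pixel-wise mIoU over flattened label maps (our own illustration, not the official benchmark code):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    # Pixel-wise mIoU: per-class intersection over union,
    # averaged over the classes present in either label map.
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```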
To fully examine the effectiveness of our AAF on fine structures in particular, we also evaluate all the models using \textbf{instance-wise mIoU} and \textbf{boundary detection metrics}. \noindent \textbf{Instance-wise mIoU.} Since the pixel-wise mIoU metric is often biased toward large objects, we introduce the instance-wise mIoU to alleviate the bias, which allows us to evaluate the performance on smaller objects fairly. The per-category instance-wise mIoU is formulated as $\hat{U}_c = \frac{\sum_x n_{c,x} \times U_{c,x}}{\sum_x n_{c,x}},$ where $n_{c,x}$ and $U_{c,x}$ are the number of instances and IoU of class $c$ in image $x$, respectively. \noindent \textbf{Boundary detection metrics.} We compute semantic boundaries using the semantic predictions and benchmark the results using the standard benchmark for contour detection proposed by~\cite{amfm_pami2011}, which summarizes the results by precision, recall, and f-measure. \subsection{Methods of Comparison} \label{subsec:baseline_methods} We briefly describe other popular methods that are used for comparison in our experiments, namely, GAN's adversarial learning~\cite{goodfellow2014generative}, contrastive loss~\cite{chopra2005learning}, and CRF~\cite{krahenbuhl2011efficient}. \\ \noindent \textbf{GAN's Adversarial Learning.} We investigate a popular framework, Generative Adversarial Networks (GAN)~\cite{goodfellow2014generative}. The discriminator $D$ in GAN serves to inject priors on region structures. The adversarial loss is formulated as \begin{equation} \mathcal{L}_{\text{adversarial}}^i = \log D(\mathcal{N}(y_i)) + \log(1-D(\mathcal{N}(\hat{y_i}))). \end{equation} We simultaneously train the segmentation network $S$ to minimize $\log (1-D(\mathcal{N}(\hat{y_i})))$ and the discriminator to maximize $\mathcal{L}_{\text{adversarial}}$.
\\ \noindent \textbf{Pixel Embedding.} We study region-wise supervision over the feature map, which is implemented by imposing the contrastive loss~\cite{chopra2005learning} on the last convolutional layer before the softmax layer. The contrastive loss is formulated as \begin{equation} \mathcal{L}_{\text{contrast}}^{i} = \begin{cases} \mathcal{L}_{\text{contrast}}^{i\bar{e}} = \|f_j-f_i\|^2_2 & \text{if } y_i(c) = y_j(c) \\ \mathcal{L}_{\text{contrast}}^{ie} = \max\{0, m - \|f_j-f_i\|^2_2\} & \text{otherwise,} \end{cases} \end{equation} where $f_i$ denotes the $L_2$-normalized feature vector at pixel $i$, and $m$ is set to $0.2$. \\ \noindent \textbf{CRF-based Processing.} We follow the implementation of~\cite{chen2016deeplab} by post-processing the prediction with dense-CRF~\cite{krahenbuhl2011efficient}. We set $bi\_w$ to $1$, $bi\_xy\_std$ to $40$, $bi\_rgb\_std$ to $3$, $pos\_w$ to $1$, and $pos\_xy\_std$ to $1$ for all experiments. It is worth mentioning that the CRF takes an additional $40$ seconds to generate the final results on Cityscapes, while our proposed methods introduce no inference overhead. \subsection{Implementation Details} \label{subsec:imp_details} Our implementation follows that of the base architectures, namely PSPNet~\cite{zhao2016pyramid} in most cases or FCN~\cite{long2015fully}. We use the poly learning rate policy, where the current learning rate equals the base one multiplied by $(1-\frac{\text{iter}}{\text{max\_iter}})^{0.9}$. We set the base learning rate to $0.001$. The number of training iterations is $30K$ for all experiments on the VOC dataset and $90K$ on the Cityscapes dataset, though the performance can be further improved by increasing the iteration number. Momentum and weight decay are set to $0.9$ and $0.0005$, respectively. For data augmentation, we adopt random mirroring and random resizing between 0.5 and 2 for all datasets.
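The poly learning-rate policy described above can be sketched as a small helper (the name \texttt{poly\_lr} is ours, not from any particular framework):

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    # lr = base_lr * (1 - iter / max_iter)^power
    return base_lr * (1.0 - cur_iter / float(max_iter)) ** power
```

With \texttt{base\_lr = 0.001}, the rate decays smoothly from $0.001$ at iteration 0 to $0$ at \texttt{max\_iter}.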
We do not upscale the logits (prediction map) back to the input image resolution; instead, we follow the setting of~\cite{chen2016deeplab} by downsampling the ground-truth labels for training ($output\_stride=8$). PSPNet~\cite{zhao2016pyramid} shows that a larger ``cropsize'' and ``batchsize'' can yield better performance. In their implementation, ``cropsize'' can be up to $720\times 720$ and ``batchsize'' up to $16$ using $16$ GPUs. To speed up the experiments for validation on VOC, we downsize ``cropsize'' to $336\times 336$ and ``batchsize'' to $8$ so that a single GTX Titan X GPU is sufficient for training. We set ``cropsize'' to $480\times 480$ during inference. For testing on PASCAL VOC 2012 and all experiments on the Cityscapes dataset, we use $4$ GPUs to train the network. On the VOC dataset, we set ``batchsize'' to 16 and ``cropsize'' to $480 \times 480$. On Cityscapes, we set ``batchsize'' to 8 and ``cropsize'' to $720 \times 720$. For inference, we boost the performance by averaging scores from left-right flipped and multi-scale inputs ($scales = \{0.5,0.75,1,1.25,1.5,1.75\}$). For affinity fields and AAF, $\lambda$ is set to $1.0$ and the margin $m$ is set to $3.0$. We use ResNet101~\cite{he2016deep} as the backbone network and initialize the models with weights pre-trained on ImageNet~\cite{ILSVRC15}.
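The multi-scale, left-right-flipped score averaging used at inference can be sketched as below. Here \texttt{predict\_fn} is a hypothetical callable (an assumption of this sketch) that returns class scores at the original resolution for a given input and scale; any internal resizing is assumed to happen inside it.

```python
import numpy as np

def multiscale_flip_average(image, predict_fn,
                            scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    """Average (H, W, C) class scores over scaled and mirrored inputs."""
    total, n = None, 0
    for s in scales:
        for flip in (False, True):
            inp = image[:, ::-1] if flip else image  # left-right flip
            scores = predict_fn(inp, s)
            if flip:
                scores = scores[:, ::-1]  # un-flip scores before averaging
            total = scores if total is None else total + scores
            n += 1
    return total / n
```

The un-flip step is essential: scores produced from a mirrored input must be mirrored back so that all twelve score maps are spatially aligned before averaging.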
\begin{table*}[b] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mIoU \\ \hline \hline \rowcolor{Gray} FCN & 86.95 & 59.25 & 85.18 & 70.33 & 73.92 & 78.86 & 82.30 & 85.64 & 33.57 & 69.34 & 27.41 & 78.04 & 71.45 & 70.45 & 85.54 & 57.42 & 71.55 & 32.48 & 74.91 & 59.10 & 68.91 \\ PSPNet & 92.56 & 66.70 & 91.10 & 76.52 & 80.88 & 94.43 & 88.49 & 93.14 & 38.87 & 89.33 & 62.77 & 86.44 & 89.72 & 88.36 & 87.48 & 56.95 & 91.77 & 46.23 & 88.59 & 77.14 & 80.12 \\ \hline \hline \rowcolor{Gray} Affinity & 88.66 & 59.25 & 87.85 & 72.19 & 76.36 & 80.65 & 80.74 & 87.82 & 35.38 & 73.45 & 30.17 & 79.84 & 68.15 & 73.52 & 87.96 & 53.95 & 75.46 & 37.15 & 76.62 & 73.42 & 71.07 \\ \rowcolor{Gray} AAF & 88.15 & 67.83 & 87.06 & 72.05 & 76.45 & 85.43 & 80.58 & 88.33 & 35.47 & 72.76 & 31.55 & 79.68 & 67.01 & 77.96 & 88.20 & 50.31 & 73.16 & 42.71 & 78.14 & 73.87 & 71.95 \\ GAN & 92.36 & 65.94 & 91.80 & 76.35 & 77.70 & 95.39 & 89.21 & 93.30 & 43.35 & 89.25 & 61.81 & 86.93 & 91.28 & 87.43 & 87.21 & 68.15 & 90.64 & 49.64 & 88.79 & 73.83 & 80.74 \\ Emb. & 91.28 & 69.50 & 92.62 & 77.60 & 78.74 & 95.03 & 89.57 & 93.67 & 43.21 & 88.76 & 62.47 & 86.68 & 91.28 & 88.47 & 87.44 & 69.21 & 91.53 & 52.17 & 89.30 & 74.60 & 81.36 \\ Affinity & 91.52 & {\bf 74.74} & 92.09 & 78.17 & 80.73 & 95.70 & 89.52 & 92.83 & 43.29 & 89.21 & 60.33 & 87.50 & 90.96 & 88.77 & 88.88 & \textbf{71.00} & 88.54 & 50.61 & 89.64 & 78.22 & 81.80 \\ AAF & 92.97 & 73.68 & 92.49 & 80.51 & 79.73 & 96.15 & 90.92 & 93.42 & \textbf{45.11} & 89.00 & 62.87 & 87.97 & 91.32 & 90.28 & \textbf{89.30} & 69.05 & 88.92 & \textbf{52.81} & 89.05 & 78.91 & \textbf{82.39} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class results on Pascal VOC 2012 validation set. 
Gray colored background denotes using FCN as the base architecture.} \label{tab:voc} \end{table*} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & road & swalk & build. & wall & fence & pole & tlight & tsign & veg. & terrain & sky & person & rider & car & truck & bus & train & mbike & bike & mIoU \\ \hline \hline \rowcolor{Gray} FCN & 97.31 & 79.28 & 89.52 & 38.08 & 48.63 & 49.70 & 59.37 & 69.94 & 90.86 & 56.58 & 92.38 & 75.91 & 46.24 & 92.26 & 50.41 & 64.51 & 39.73 & 54.91 & 73.07 & 66.77 \\ PSPNet & 97.96 & 83.89 & 92.22 & 57.24 & 59.31 & 58.89 & 68.39 & 77.07 & 92.18 & 63.71 & 94.42 & 81.80 & 63.11 & 94.85 & 73.54 & 84.82 & 67.42 & 69.34 & 77.42 & 76.72 \\ \hline \hline \rowcolor{Gray} Affinity & 97.52 & 80.90 & 90.42 & 40.45 & 49.81 & 55.97 & 63.92 & 73.37 & 91.49 & 59.01 & 93.30 & 78.17 & 52.16 & 92.85 & 52.53 & 65.78 & 39.28 & 52.88 & 74.53 & 68.65 \\ \rowcolor{Gray} AAF & 97.58 & 81.19 & 90.50 & 42.30 & 50.34 & 57.47 & 65.39 & 74.83 & 91.54 & 59.25 & 93.11 & 78.65 & 52.98 & 93.15 & 53.10 & 67.58 & 38.40 & 51.57 & 74.80 & 69.14 \\ CRF & 97.96 & 83.82 & 92.14 & 57.16 & 59.28 & 57.48 & 67.71 & 76.61 & 92.09 & 63.67 & 94.35 & 81.62 & 62.98 & 94.81 & 73.59 & 84.81 & 67.49 & 69.22 & 77.28 & 76.53 \\ GAN & 97.95 & 83.59 & 92.01 & 56.92 & 60.17 & 58.63 & 68.37 & 77.36 & 92.28 & 62.70 & 94.42 & 81.59 & 62.27 & 94.94 & 78.09 & 82.79 & 56.75 & 69.19 & 77.78 & 76.20 \\ Affinity & 98.08 & 85.58 & 92.60 & 58.33 & 61.45 & \textbf{66.80} & \textbf{74.19} & 81.29 & 92.90 & 65.34 & 94.87 & 84.00 & 65.84 & 95.50 & 76.84 & 85.80 & 64.19 & 72.32 & 79.83 & 78.72 \\ AAF & 98.18 & 85.35 & 92.86 & 58.87 & 61.48 & 66.64 & 74.00 & 80.98 & 92.95 & 65.31 & 94.91 & \textbf{84.27} & \textbf{66.98} & 95.51 & 79.39 & 87.06 & 67.80 & 72.91 & 80.19 & \textbf{79.24} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class results on Cityscapes validation set. 
Gray colored background denotes using FCN as the base architecture.} \label{tab:cityscapes} \end{table*} \section{Experimental Results} \label{sec:exp_results} We benchmark our proposed methods on two datasets, PASCAL VOC 2012~\cite{everingham2010pascal} and Cityscapes~\cite{Cordts2016Cityscapes}. All methods are evaluated by three metrics: mIoU, instance-wise mIoU, and boundary detection recall. We include some visual examples to demonstrate the effectiveness of our proposed methods in Fig.~\ref{fig:results}. \subsection{Pixel-level Evaluation} \label{sec:pix_eval} \noindent \textbf{Validation Results.} For training on PASCAL VOC 2012~\cite{everingham2010pascal}, we first train on $train\_aug$ for 30K iterations and then fine-tune on $train$ for another 30K iterations with a base learning rate of $0.0001$. For Cityscapes~\cite{Cordts2016Cityscapes}, we only train on finely annotated images for 90K iterations. We summarize the mIoU results on the validation sets in Table~\ref{tab:voc} and Table~\ref{tab:cityscapes}, respectively. With FCN~\cite{long2015fully} as the base architecture, the affinity field loss and AAF improve the performance by $2.16\%$ and $3.04\%$ on VOC and by $1.88\%$ and $2.37\%$ on Cityscapes. With PSPNet~\cite{zhao2016pyramid} as the base architecture, the results also improve consistently: GAN loss, embedding contrastive loss, affinity field loss, and AAF improve the mean IoU by $0.62\%$, $1.24\%$, $1.68\%$ and $2.27\%$ on VOC; affinity field loss and AAF improve by $2.00\%$ and $2.52\%$ on Cityscapes. It is worth noting that large improvements over PSPNet on VOC are mostly in categories with fine structures, such as ``bike'', ``chair'', ``person'', and ``plant''. \noindent \textbf{Testing Results.} On PASCAL VOC 2012, the training procedure for PSPNet and AAF is the same: we first train the networks on $train\_aug$ and then fine-tune on $train\_val$.
We report the testing results on VOC 2012 and Cityscapes in Table~\ref{tab:voc_test} and Table~\ref{tab:cityscapes_test}, respectively. Our re-trained PSPNet does not reach the same performance as originally reported in the paper because we do not bootstrap the performance by fine-tuning on hard examples (like ``bike'' images), as pointed out in \cite{chen2017rethinking}. We demonstrate that our proposed AAF achieves $82.17\%$ and $79.07\%$ mIoU, outperforming PSPNet by $1.54\%$ and $2.77\%$, respectively, and competitive with the state-of-the-art performance.\\ \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mIoU \\ \hline \hline PSPNet & 94.01 & 68.08 & 88.80 & 64.87 & 75.87 & 95.60 & 89.59 & 93.15 & 37.96 & 88.20 & 72.58 & 89.96 & 93.30 & 87.52 & 86.65 & 61.90 & 87.05 & 60.81 & 87.13 & 74.65 & 80.63 \\ AAF & 91.25 & \textbf{72.90} & 90.69 & 68.22 & 77.73 & 95.55 & 90.70 & 94.66 & \textbf{40.90} & 89.53 & 72.63 & 91.64 & 94.07 & 88.33 & \textbf{88.84} & \textbf{67.26} & 92.88 & 62.62 & 85.22 & 74.02 & \textbf{82.17} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class results on Pascal VOC 2012 testing set.} \label{tab:voc_test} \end{table*} \begin{table*}[b!] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & road & swalk & build. & wall & fence & pole & tlight & tsign & veg.
& terrain & sky & person & rider & car & truck & bus & train & mbike & bike & mIoU \\ \hline \hline PSPNet & 98.33 & 84.21 & 92.14 & 49.67 & 55.81 & 57.62 & 69.01 & 74.17 & 92.70 & 70.86 & 95.08 & 84.21 & 66.58 & 95.28 & 73.52 & 80.59 & 70.54 & 65.54 & 73.73 & 76.30 \\ \hline AAF & 98.53 & 85.56 & 93.04 & 53.81 & 58.96 & \textbf{65.93} & 75.02 & 78.42 & 93.68 & 72.44 & 95.58 & 86.43 & 70.51 & 95.88 & 73.91 & 82.68 & 76.86 & 68.69 & 76.40 & \textbf{79.07} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class results on Cityscapes test set.} \label{tab:cityscapes_test} \end{table*} \begin{table*}[b!] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mIoU \\ \hline \hline PSPNet & 87.54 & 53.08 & 83.53 & 76.95 & 45.13 & 87.68 & 68.77 & 89.01 & 39.26 & 88.78 & 51.49 & 88.88 & 84.41 & 85.95 & 77.60 & 48.68 & 86.25 & 54.18 & 88.25 & 66.11 & 73.60 \\ \hline \hline Affinity & 89.42 & 61.72 & 84.64 & 79.86 & 57.57 & 88.81 & 71.74 & 88.91 & 44.78 & 89.55 & 52.55 & 91.22 & 86.12 & 87.40 & 81.10 & 58.33 & 85.15 & 60.61 & 88.47 & 68.86 & 76.73 \\ AAF & 89.76 & \textbf{61.74} & 84.40 & \textbf{81.87} & \textbf{58.04} & 89.03 & \textbf{73.68} & 90.46 & \textbf{46.67} & 89.65 & \textbf{55.63} & 91.33 & 85.85 & 88.36 & \textbf{81.93} & \textbf{59.84} & 84.52 & \textbf{62.67} & 89.35 & 68.80 & \textbf{77.54} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class instance-wise IoU results on Pascal VOC 2012 validation set.} \label{tab:voc_inst} \end{table*} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & road & swalk & build. & wall & fence & pole & tlight & tsign & veg. 
& terrain & sky & person & rider & car & truck & bus & train & mbike & bike & mIoU \\ \hline \hline PSPNet & 97.64 & 78.23 & 88.36 & 34.48 & 42.00 & 51.68 & 50.71 & 68.29 & 89.65 & 40.14 & 86.63 & 78.35 & 75.91 & 92.09 & 87.28 & 90.85 & 62.74 & 85.33 & 73.02 & 72.28\\ \hline \hline Affinity & 97.73 & 80.51 & 89.32 & 38.21 & 45.89 & \textbf{61.31} & \textbf{59.75} & 73.41 & 90.62 & 43.22 & 88.20 & 81.18 & \textbf{80.29} & 93.24 & 89.60 & 94.10 & 50.69 & 84.76 & 75.59 & 74.61\\ AAF & 97.86 & 80.40 & 89.44 & 38.38 & 46.33 & 61.19 & \textbf{59.75} & 73.55 & 90.63 & 42.51 & 88.48 & \textbf{81.27} & 80.08 & 93.18 & 89.47 & 93.73 & 60.74 & 86.40 & 75.84 & \textbf{75.22}\\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class instance-wise IoU results on Cityscapes validation set.} \label{tab:cityscapes_inst} \end{table*} \subsection{Instance-level Evaluation} \label{sec:inst_eval} We measure the instance-wise mIoU on the VOC and Cityscapes validation sets, as summarized in Table~\ref{tab:voc_inst} and Table~\ref{tab:cityscapes_inst}, respectively. In instance-wise mIoU, our AAF is higher than the base architecture by $3.94\%$ on VOC and $2.94\%$ on Cityscapes. The improvements on fine-structured categories are more prominent. For example, ``bottle'' is improved by $12.89\%$ on VOC, and ``pole'' and ``tlight'' are improved by $9.51\%$ and $9.04\%$ on Cityscapes. \subsection{Boundary-level Evaluation} \label{sec:boundary_eval} \begin{table*}[b!]
\centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mean \\ \hline \hline PSPNet & .694 & .420 & .658 & .417 & .624 & .626 & .562 & .667 & .297 & .587 & .279 & .667 & .608 & .513 & .554 & .235 & .547 & .413 & .551 & .512 & .527 \\ Affinity & .745 & \textbf{.573} & .708 & .524 & .693 & .678 & .627 & .690 & \textbf{.455} & .620 & .383 & .732 & .655 & .602 & \textbf{.648} & \textbf{.370} & .583 & .546 & .609 & \textbf{.635} & \textbf{.610} \\ AAF & .746 & .559 & .704 & .524 & .684 & .675 & .622 & .701 & .441 & .612 & .391 & .728 & .653 & .595 & .647 & .355 & .580 & .547 & .608 & .628 & .606 \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class boundary recall results on Pascal VOC 2012 validation set.} \label{tab:voc_boundary} \end{table*} \begin{table*}[b!] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} Method & road & swalk & build. & wall & fence & pole & tlight & tsign & veg. 
& terrain & sky & person & rider & car & truck & bus & train & mbike & bike & mean \\ \hline \hline PSPNet & .458 & .771 & .584 & .480 & .537 & .587 & .649 & .687 & .650 & .589 & .587 & .733 & .631 & .812 & .577 & .734 & .569 & .550 & .697 & .625 \\ \hline Affinity & .484 & .826 & .686 & .532 & .632 & \textbf{.760} & \textbf{.769} & \textbf{.780} & .754 & .663 & .655 & \textbf{.814} & \textbf{.748} & .852 & .627 & .792 & .589 & .651 & .798 & \textbf{.706}\\ AAF & .482 & .826 & .685 & .533 & .643 & .756 & .768 & .780 & .753 & .645 & .653 & \textbf{.814} & .746 & .851 & .644 & .789 & .590 & .642 & \textbf{.801} & .705\\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class boundary recall results on Cityscapes validation set.} \label{tab:cityscapes_boundary} \end{table*} Next, we quantitatively analyze the improvements in boundary localization. We include the boundary recall on VOC in Table~\ref{tab:voc_boundary} and Cityscapes in Table~\ref{tab:cityscapes_boundary}. We omit the precision table due to the smaller performance differences. The overall boundary recall is improved by $7.9\%$ and $8.0\%$ on VOC and Cityscapes, respectively. It is worth noting that the boundary recall is improved for every category. This result demonstrates that boundaries of all categories benefit from affinity fields and AAF. Among all, the improvements on categories with complicated boundaries, such as ``bike'', ``bird'', ``boat'', ``chair'', ``person'', and ``plant'', are significant on VOC. On Cityscapes, objects with thin structures are improved most, such as ``pole'', ``tlight'', ``tsign'', ``person'', ``rider'', and ``bike''. \subsection{Adaptive Affinity Field Size Analysis} \label{sec:aaf_analysis} We further analyze our proposed AAF methods with respect to: 1) the optimal affinity field size for each category, and 2) effective combinations of affinity field sizes.
\noindent \textbf{Optimal Adaptive Affinity Field Size.} We conduct experiments on VOC with our proposed AAF on three $k \times k$ kernel sizes where $k=3,5,7$. We report the optimal adaptive kernel size for the boundary term, calculated as $k^{b}_c=\sum_{k} w_{bck} \times k$, as summarized in Fig.~\ref{fig:aff_size}. As shown, ``person'' and ``dog'' benefit from smaller kernel sizes ($3.1$ and $3.4$), while ``cow'' and ``plant'' benefit from larger kernel sizes ($4.6$ and $4.5$). We display some image patches with the corresponding effective receptive field sizes. \noindent \textbf{Combinations of Affinity Field Sizes.} We explore the effectiveness of different selections of $k \times k$ kernels, where $k \in \{3,5,7\}$, for AAF. As summarized in Table~\ref{tab:aaf_ablation}, we observe that combinations of $3 \times 3$ and $5 \times 5$ kernels have the optimal performance. \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{figs/aff_size.png} \caption{{\bf Left:} The optimal weightings for different kernel sizes of the boundary term in AAF for each category on PASCAL VOC 2012 validation set.
{\bf Right:} Visualization of image patches with corresponding effective receptive field sizes, suggesting how kernel sizes capture the shape complexity in critical regions of different categories.} \label{fig:aff_size} \end{figure} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{c|c|c|c c c c c c c c c c c c c c c c c c c c|c} \Xhline{1pt} $k=3$ & $k=5$ & $k=7$ & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mIoU \\ \hline\hline $\checkmark$ & $\times$ & $\times$ & 89.02 & 68.86 & 90.05 & 73.52 & 77.87 & 94.04 & 86.94 & 91.04 & 40.85 & 85.82 & 54.08 & 84.31 & 89.12 & 84.91 & 86.72 & 67.52 & 85.56 & 52.55 & 87.60 & 73.78 & 79.00 \\ $\checkmark$ & $\checkmark$ & $\times$ & 90.19 & 68.48 & 89.87 & 76.91 & 77.56 & 93.84 & 89.08 & 91.45 & 40.67 & 85.82 & 57.23 & 85.33 & 89.77 & 85.97 & 86.93 & 65.68 & 85.12 & 52.22 & 87.25 & 74.07 & 79.45 \\ $\checkmark$ & $\checkmark$ & $\checkmark$ & 89.45 & 68.46 & 90.44 & 75.82 & 77.03 & 94.09 & 88.01 & 91.42 & 38.67 & 85.98 & 56.16 & 84.32 & 89.22 & 84.98 & 87.09 & 67.35 & 87.15 & 55.20 & 88.22 & 73.30 & 79.40 \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-category IOU results of AAF with different combinations of kernel sizes $k$ on VOC 2012 validation set. `$\checkmark$' denotes the inclusion of respective kernel size as opposed to `$\times$'.} \label{tab:aaf_ablation} \end{table*} \subsection{Generalizability} We further investigate the robustness of our proposed methods on different domains. We train the networks on the Cityscapes dataset~\cite{Cordts2016Cityscapes} and test them on another dataset, Grand Theft Auto V (GTA5)~\cite{Richter_2016_ECCV} as shown in Fig.~\ref{fig:results}. 
The GTA5 dataset is generated from the photo-realistic computer game \textit{Grand Theft Auto V}~\cite{Richter_2016_ECCV} and consists of 24,966 images with densely labelled segmentation maps compatible with Cityscapes. We test on GTA5 Part 1 (2,500 images). We summarize the performance in Table~\ref{tab:gtav}. Without fine-tuning, our proposed AAF outperforms the PSPNet~\cite{zhao2016pyramid} baseline model by $9.5\%$ in mean pixel accuracy and $1.46\%$ in mIoU, which demonstrates the robustness of our proposed methods against appearance variations. \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|c c c c c c c c c c c c c c c c c c c |c|c} \Xhline{1pt} Method & road & swalk & build. & wall & fence & pole & tlight & tsign & veg. & terrain & sky & person & rider & car & truck & bus & train & mbike & bike & mIoU & pix. acc \\ \hline \hline PSPNet & 61.79 & 34.26 & 37.30 & 13.31 & 18.52 & 26.51 & 31.64 & 17.51 & 55.00 & 8.57 & 82.47 & 42.73 & 49.78 & 69.25 & 34.31 & 18.21 & 25.00 & 33.14 & 6.86 & 35.06 & 68.78 \\ Affinity & 75.26 & 30.34 & 44.10 & 12.91 & 20.19 & 29.78 & 31.50 & 23.98 & 64.25 & 11.83 & 74.32 & 48.28 & 49.12 & 67.39 & 25.76 & 23.82 & 20.29 & 41.48 & 5.63 & \textbf{36.86} & 75.13 \\ AAF & 83.07 & 27.82 & 51.16 & 10.41 & 18.76 & 28.58 & 31.74 & 24.98 & 61.38 & 12.25 & 70.65 & 50.53 & 48.06 & 53.35 & 26.80 & 20.97 & 24.50 & 39.56 & 9.37 & 36.52 & \textbf{78.28} \\ \Xhline{1pt} \end{tabular}} \vspace{0.5pt} \caption{Per-class results on GTA5 Part 1.} \label{tab:gtav} \end{table*} \begin{figure*}[b] \centering \includegraphics[width=\linewidth]{figs/results.png} \caption{Visual quality comparisons on the VOC 2012~\cite{everingham2010pascal} validation set (the first four rows), Cityscapes~\cite{Cordts2016Cityscapes} validation set (the middle two rows) and GTA5~\cite{Richter_2016_ECCV} part 1 (the bottom row): (a) image, (b) ground truth, (c) PSPNet~\cite{zhao2016pyramid}, (d) affinity fields, and (e) adaptive
affinity fields (AAF).} \label{fig:results} \end{figure*} \section{Summary} \label{sec:conclusion} We propose adaptive affinity fields (AAF) for semantic segmentation, which incorporate geometric regularities into segmentation models and learn local relations with adaptive ranges through adversarial training. Compared to other alternatives, our AAF model is 1) effective (encoding rich structural relations), 2) efficient (introducing no inference overhead), and 3) robust (not sensitive to domain changes). Our approach achieves competitive performance on standard benchmarks and also generalizes well to unseen data. It provides a novel perspective towards structure modeling in deep learning.
\section{Introduction}\label{introduction} Neutron stars~(NSs) are a natural laboratory for matter at extreme densities, with their central densities extending to multiples of nuclear density ($\rho_\mathrm{nuc} = 2 \times 10^{14}\,\mathrm{g/cm}^3$). As such, observations in the gravitational wave~(GW) or electromagnetic wave~(EM) bands enable a better understanding of the equation of state~(EoS) of matter at such high densities. The science obtained from GW observations of the GW170817~\cite{Abbott:2017D} and GW190425~\cite{Abbott:2020uma} binary neutron star mergers by the LIGO/Virgo collaboration, along with EM follow-up of the former, has revealed many aspects of NSs, and results by NICER in the X-ray band have provided important estimates for both the mass and radius of a particular NS (PSR J0030+0451)~\cite{Miller:2019cac,Riley:2019yda,Bogdanov_2019,Bilous_2019}. NSs probe a region of the EoS that is not well understood. In particular, low densities can be studied in the lab with hadronic matter and extended to higher densities with theoretical extrapolations. At the highest densities, perturbative quantum chromodynamics~(pQCD), predicting free quark matter, is well accepted. However, between these two regimes there is significant uncertainty and, on fairly general grounds, reason to expect (at least one) phase transition~(PT). Binary neutron star mergers are of particular relevance to explore this region. Significant work has studied how such a PT inside a NS might manifest and impact potential observables. In particular, a number of groups have studied numerically the merger of two NSs described by various EoSs with PTs. Early work compared the DD2 EoS with a PT to a hyperon phase~\cite{Radice:2016rys}. Another EoS was studied with a conformally flat SPH code, and differences were found in the postmerger oscillation frequencies associated with a PT~\cite{Bauswein:2018bma,Bauswein:2019skm,PhysRevD.102.123023}.
A similar result, with higher postmerger frequencies, was found with a PT to quark matter using a fully nonlinear GR HD code~\cite{Most:2018eaw,Most:2019onn,Weih:2019xvw}. Yet another group studied BNS mergers within an EoS derived from holography, finding lower oscillation frequencies of post-merger remnants~\cite{Ecker:2019xrw}. These numerical investigations suggest that more precision in the postmerger regime is needed with GW observatories to observe these PTs, although Ref.~\cite{Chen:2019rja} argues that tens of detections may allow current technology to discern PTs. In contrast, Ref.~\cite{Annala:2019puf} makes an interesting argument from current observations that high-mass NSs likely contain a sizable quark matter core. In this work, with the goal of further identifying potentially relevant features associated with a PT in NSs, we consider one particular EoS in piecewise polytropic form to which we add PTs of different forms. These PTs are not motivated by some particular theory; instead we take a generic approach, subject to mass constraints. We employ them only within regimes in which they remain causal\footnote{Note that many oft-used EoSs extend into acausal regimes at high densities.}. We then study the dynamics of single stars (which can be rotating and/or magnetized) and the mergers of quasi-circular stellar binaries. This study adds to the body of work on the impacts of PTs in BNS mergers --in good agreement with the main qualitative features already observed-- and also considers some novel aspects. In particular, we examine the potential role of magnetic fields, increased accretion, and transitions into and out of the PT. \section{Setup}\label{details} We use the distributed adaptive mesh refinement code {\sc MHDuet} to evolve the CCZ4 formalism of the Einstein equations coupled to a magnetized perfect fluid, as described in detail in Ref.~\cite{Liebling:2020jlq}.
The code is generated with the platform {\it Simflowny}~\cite{arbona13,arbona18} to run under the {\tt SAMRAI} infrastructure \cite{hornung02,gunney16}, which allows excellent parallel scaling over thousands of processors. The code has been extensively tested, demonstrating the expected convergence rate of the solutions in different GR and MHD scenarios~\cite{Palenzuela:2018sly,2020PhRvD.101l3019V,Liebling:2020jlq}. {\sc MHDuet} employs the Method of Lines, with a fourth-order Runge-Kutta time integrator which ensures the stability and convergence of the solution for time steps smaller than a factor of $0.4$ times the grid spacing~\cite{Butcher:2008}. The space-time evolution equations are discretized in space using centered finite-difference, fourth-order-accurate operators for the derivatives, and sixth-order Kreiss-Oliger dissipation to filter the high-frequency modes unresolved in our grids. For the fluid, we employ High-Resolution Shock-Capturing (HRSC) methods~\cite{toro97} to deal with the possible appearance of shocks and to take advantage of the existence of weak solutions in the equations. The fluxes at the cell interfaces are calculated by combining the Lax-Friedrich flux splitting formula~\cite{shu98} with the fifth-order, monotonicity-preserving reconstruction method MP5~\cite{suresh97}. In our previous study of neutron star mergers~\cite{Liebling:2020jlq} one of the adopted EoSs describing NSs was the SLy EoS, which is still consistent with current observations~\cite{Abbott:2018exr,Miller:2019cac}. Here, our starting point is a modification to this EoS, which we describe with three different piecewise polytropes, as specified in Table~\ref{tab:eos} and displayed in Fig.~\ref{fig:eoses}. 
We make this modification so that the addition of a PT to the EoS requires only four polytropes, which is the default choice of the code, and refer to this fiducial EoS without the addition of a PT as ``SLy.'' Note that with these four polytropes it is already possible to parameterize a wide variety of first order phase transitions in the EoS, as shown in Fig.~\ref{fig:eoses}. \begin{table}[h] \centering % \caption{Characterization of the different EoSs used in this work. Each EoS is defined as a piecewise polytrope with $n=4$ segments and with $ K_0[CGS] = 3.59389\times 10^{13}$ and $\Gamma_0 = 1.35692$. Each segment is delineated by a transition density $\rho_i$ expressed in \texttt{cgs} units. Note that for the modified SLy, polytropes 1 and 2 have the same value of $\Gamma_i$ and only the lettered EoSs have a segment with $\Gamma_i=0$. } % \begin{tabular}{ccccclll} \hline EoS & $\Gamma_1$ & $\Gamma_2$ & $\Gamma_3$ & $\log_{10} \rho_0$ & $\log_{10} \rho_1$ & $\log_{10}\rho_2$ & $\Delta_\rho(\times 10^{14})$ \\ \hline SLy & 2.9965 & 2.9965 & 2.851 & 14.165 & 14.7 & 14.9367 & ---\\ A & 2.9965 & 0.0 & 2.851 & 14.165 & 14.9367 & 15.1 & 3.95\\ B & 2.9965 & 0.0 & 2.851 & 14.165 & 14.9367 & 15.3 & 11.3\\ C & 2.9965 & 0.0 & 2.851 & 14.165 & 14.95 & 14.97 & 0.42\\ \hline \end{tabular} \label{tab:eos} \end{table} A piecewise polytrope is chosen such that the pressure is given in terms of the density by \begin{equation} p_\mathrm{cold}(\rho) = K_i \rho^{\Gamma_i} \end{equation} where $i$ denotes the particular segment of the piecewise function and runs from 0 to 3. Enforcing continuity of the pressure at each transition density determines the constants $K_i$. The PT consists of a segment in which $\Gamma_2=0$, beginning at some onset density $\rho_1$ and extending to the density $\rho_1+\Delta_\rho$, similar to that adopted in Refs.~\cite{Lindblom:1998dp,Chen:2019rja}.
For this study, the stiffness of the EoS at densities above the transition ($\rho>\rho_1+\Delta_\rho$) remains the same as that for the SLy EoS above $\rho_1$, although in principle we are able to modify this as well. We consider here just a few such variations. The parameters for each EoS are listed in Table~\ref{tab:eos}, and we display them in Fig.~\ref{fig:eoses}. In particular, we show the pressure as a function of density, which defines a barotropic EoS. We also show in the right panel the family of isolated, spherically symmetric stellar solutions which each EoS generates from solving the TOV equations with \texttt{Magstar} from the \texttt{Lorene} package~\cite{lorene}. One generally expects a change in stability at the extrema of such mass-radius curves. Thus, stars near the region at which the PT occurs (near the cyan circle) are expected to be stable for EoSs A and C, but not B, and the evolutions discussed below (Section~\ref{sec:isolated}) are consistent with this expectation. Also shown is a cyan circle indicating the particular star whose evolution is discussed below in Section~\ref{sec:isolated}. Besides the piecewise polytrope, an additional {\em thermal} component of the EoS is included during the evolution to account for shock heating of the fluid, represented as an ideal fluid with $\Gamma_\mathrm{thermal}=1.75$. Therefore, the pressure and the internal energy $\epsilon$ have two components \begin{eqnarray} \label{piecewise_p_eps_cold} p &=& p_\mathrm{cold} (\rho) + (\Gamma_\mathrm{thermal} -1) \,\rho\, \epsilon_\mathrm{thermal} ~~,~~ \\ \epsilon &=& \epsilon_\mathrm{cold}(\rho) + \epsilon_\mathrm{thermal}. \end{eqnarray} At the initial time, $\epsilon_\mathrm{thermal}$ vanishes because the initial data relies only upon the piecewise polytrope with $p = p_\mathrm{cold}$. We monitor this thermal component in representative cases as another indication of the dynamics.
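The continuity matching that fixes the constants $K_i$, along with the resulting cold pressure and the relativistic sound speed $c_s^2=\Gamma_i\,p/h$, can be sketched as below. This is a minimal illustration using made-up values of $\Gamma_i$ and the transition densities, not the tabulated \texttt{cgs} values.

```python
import numpy as np

def build_piecewise_polytrope(K0, gammas, rho_breaks):
    """Return p_cold(rho) for segments p = K_i * rho**Gamma_i.

    Pressure continuity at each transition density rho_breaks[i]
    fixes K_{i+1} = K_i * rho_breaks[i]**(Gamma_i - Gamma_{i+1}).
    A segment with Gamma = 0 gives a constant-pressure phase transition.
    """
    Ks = [K0]
    for i, rb in enumerate(rho_breaks):
        Ks.append(Ks[i] * rb ** (gammas[i] - gammas[i + 1]))

    def p_cold(rho):
        i = int(np.searchsorted(rho_breaks, rho))  # pick the segment
        return Ks[i] * rho ** gammas[i]

    return p_cold

def sound_speed_sq(rho, eps, p, gamma):
    # c_s^2 = Gamma_i p / h, with enthalpy density h = rho (1 + eps) + p
    return gamma * p / (rho * (1.0 + eps) + p)
```

For instance, with illustrative values \texttt{gammas = [2.0, 0.0, 3.0]} and \texttt{rho\_breaks = [1.0, 2.0]}, the middle segment reproduces the flat (constant-pressure) PT region, and the pressure remains continuous across both transition densities.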
Along with the pressure and the thermal component, the speed of sound as a function of density is calculated for the initial data as \begin{equation} c_s^2 = \frac{\Gamma_i \, p}{h} \label{eq:csq} \end{equation} where $h = \rho \left( 1+\epsilon\right) + p$ is the enthalpy density. The sound speed is shown in Fig.~\ref{fig:eoses} in the same frame as the pressure. The effect of the phase transition is to decrease the maximum speed of sound attained. Dotted black lines in the left panel of Fig.~\ref{fig:eoses} indicate the region within which all the EoSs here are causal ($c^2_{s}\le 1$), and we note that our evolutions do not probe densities above this regime. Although the phase transition means the density is no longer a 1-to-1 function of the pressure, our inversion from conservative to primitive variables encounters no numerical problems. We note that the total pressure has both cold and thermal components and is still a uniquely defined function of the density and internal energy. In particular, the transcendental equation solved during the inversion involves only $p_\mathrm{cold}$ itself, never its inverse, and $p_\mathrm{cold}$ is unambiguously well defined. For the magnetized isolated stars studied in this paper, we assume an initially poloidal magnetic field confined to the stellar interior and calculated from the vector potential $A_{\phi} \propto r^2 (P - P_\mathrm{cut})$, where $P_\mathrm{cut}$ is a hundred times the pressure of the atmosphere (about $2\times 10^{32}$ dyn/cm$^2$) and $r$ is the distance to the rotation axis. The maximum magnetic field strength at the center is $6\times 10^{13}$ G. We then have the freedom to rotate this configuration by some angle, $\theta$, with respect to the rotational axis (here we choose $\theta=10^\circ$). \begin{figure} \includegraphics[width=3.0in]{figures/eoses/outputoverWcsq.pdf} \caption{Characterization of the EoSs studied here. \textbf{Left:} Standard plot of the pressure versus density for the variations of the base SLy EoS.
The pressure $p_0$ is the pressure at nuclear density as shown with gray dotted lines. The curve for EoS~C is difficult to distinguish from SLy because its $\Delta_\rho$ is so small. Also shown is the sound speed squared (calculated as in Eq.~\ref{eq:csq}) for the different EoSs (these are the curves in the bottom right of the panel, contrasting with line segments representing the pressure). Note that only those with a `strong' PT satisfy $c_s^2\le 1$, an argument that figures into the recent Ref.~\cite{Annala:2019puf}. \textbf{Right:} Family of isolated NS solutions (TOV stars) corresponding to each EoS showing the (gravitational) mass versus stellar radius. The cyan circle indicates the particular star evolved in Fig.~\ref{fig:singlestar}. } \label{fig:eoses} \end{figure} \section{Results} We discuss first the evolutions of isolated stars (spinning or not). Although observable effects are much more likely to arise from the dynamics of binaries than isolated stars, it is informative to study the dynamics of stars in isolation as a first step to fully understand the problem. We have performed evolutions with different resolutions which indicate that the results presented are consistent and within the convergent regime. For example, the finest level covered the isolated stars with approximately $77$, $116$, and $155$ points across the star. Mass was conserved with high accuracy, varying less than a percent during our simulations and improving with resolution. We follow this with a discussion of binary mergers. \subsection{Isolated NSs} \label{sec:isolated} Past studies of the evolution of individual stars generally observe stable stars to oscillate around their static solutions because of slight numerical perturbations, or, if unstable, to collapse to black holes. Indeed these two outcomes are observed here. 
However, to force more significant dynamics, we increase the artificial atmosphere (introduced as part of the standard method of solving for relativistic hydrodynamics) by six orders of magnitude above the usually small level chosen (roughly at the level of $0.3 \rho_\mathrm{nuclear}$). We stress that we do not consider such a scenario physically generic, but, in particular, this choice allows us to explore the extent to which possible accretion from a companion can trigger interesting dynamics in a star with an EoS allowing for a PT. This atmosphere induces strong accretion by the NS (estimated at $\dot M \simeq 10^{-8}$). Several important features are observed as a consequence: (i)~strong density waves are produced that, as they reach the stellar surface, both reflect an inward propagating wave and expel a thin layer of fluid, (ii)~this outer layer sweeps against the atmosphere, essentially halting further accretion for a significant fraction of the stellar dynamical time, and (iii)~the propagation speed of such waves is changed significantly in the region where the PT takes place, producing other waves at the interface. The combined impact of these effects plays a strong role in regulating the longer term behavior of the star, as we discuss below. In Fig.~\ref{fig:singlestar}, we characterize the dynamics of one particular non-rotating star for three different EoSs. The initial data for this star are indicated with a cyan circle on the right panel of Fig.~\ref{fig:eoses}, and, because its maximum density is below the PT onset density ($\rho_1$ for EoS A and C), it is a solution for all three EoSs. The top frame shows the maximum density as a function of time for the three evolutions. With no PT, the black curve shows the density oscillating as the star is `pushed' by the large atmosphere. In contrast, the two evolutions with a PT display large excursions in density consistent with undergoing a phase transition in the core.
For each of the EoSs with a PT, we show the density range over which the PT occurs by a horizontal band, magenta for EoS A and brown for EoS~C. These excursions generally carry the maximum density to a value slightly above each band. It is important to realize that only an inner core undergoes the PT, and we show in the bottom frames of Fig.~\ref{fig:singlestar} the radial profile of the density at three different times. The earliest such profile, when the densities are at their peaks, clearly shows the region containing non-hadronic matter, extending out to roughly 2\,km. In some of the later excursions, instead of a convex core of non-hadronic fluid, a small, irregular shell forms such that the density maximum is no longer at the center of the star. The first panel on the bottom of Fig.~\ref{fig:singlestar} might be misinterpreted as indicating that these stars have different masses despite beginning with the same initial data. A closer look at the total mass, calculated far away from the source, indicates that this is not the case. Furthermore, a simple model can explain how these profiles characterize stars with the same mass. Comparing the profiles for EoS~A and SLy, we assume: (i)~the non-hadronic core extends to a radius of roughly 2\,km while both stars have radius 8\,km, (ii)~the non-hadronic core has twice the density of the SLy star, and, finally, (iii)~the SLy star has some constant density everywhere, while the EoS~A star has a different constant density except in its core. It is straightforward to check that, setting their masses equal and under these assumptions, an overall decrease of roughly 2\% in density (if over the entire star) would allow the star to double the density of its inner core, and this difference is roughly consistent with what appears in the figure. It is interesting to contrast the dynamics of the SLy star with that of EoS A and to wonder why the PT excursions appear damped.
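This back-of-the-envelope mass budget is easy to verify directly; the following sketch (ours) encodes assumptions (i)-(iii) above:

```python
# Uniform-density toy model from the text: both stars have radius R = 8 km
# and equal masses; the EoS A star has a core of radius r_c = 2 km at twice
# the SLy density rho0, with a slightly lower constant density elsewhere.
R, r_c = 8.0, 2.0
x = (r_c / R) ** 3                      # core volume fraction = 1/64

# Mass balance (densities in units of rho0):
#   rho0 * V = 2*rho0 * x*V + rho_out * (1 - x)*V
rho_out = (1.0 - 2.0 * x) / (1.0 - x)   # = 62/63

decrease = 1.0 - rho_out                # fractional decrease outside the core
# decrease = 1/63, about 1.6%, i.e. the "roughly 2%" quoted in the text
assert 0.01 < decrease < 0.03
```

The percent-level density decrease this yields is indeed roughly consistent with the profiles shown in the figure.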
As already noted, the generation of thermal energy with EoS A does not appear to play a significant dynamical role given the similarities of the two solutions in the middle frame of Fig.~\ref{fig:singlestar} despite the fact that one has thermal energy present. One can also consider numerical dissipation, unavoidable in such evolutions. Running at increasing resolutions decreases the expected dissipation, and such tests indicate that dissipation is small and develops slowly in time. Indeed, the size of the non-hadronic core shrinks quickly with each excursion even in the higher resolution runs. Hence, it cannot explain the damped excursions at early times. If one looks at the radius of the stars as a function of time, one sees that the average radius for both stars increases by about $5\%$ in the first millisecond. Looking at the right panel of Fig.~\ref{fig:eoses}, this increase indicates that the star now oscillates around a slightly different star (slightly smaller gravitational mass) than the one providing the initial data. As the SLy star expands and contracts, its kinetic energy oscillates as well. The EoS A star, on the other hand, has both much less kinetic energy and a less coherent oscillation of it. Instead, the energy provided by the expansion of the star presumably ends up in the non-hadronic core. As the core gets smaller, so does the amount of kinetic energy liberated by exiting the PT. The dearth of kinetic energy at late times in the EoS A star can be seen in the late-time periods in which the maximum density is relatively constant. At these late times, the EoS A star lacks the coherent motion and kinetic energy present in the SLy star, presumably due to the increased occurrences of characteristics crossing (regions which are handled with a high-resolution shock capturing scheme) as a result of the continuing PTs and the resulting disparate propagation speeds. We note one final aspect about these non-rotating stars. 
Our discussion began with the use of a large atmosphere to force the dynamics, so here we note what is seen instead with the normal, small atmosphere for these stars. The stars once again oscillate, but the amplitudes of the oscillations in radius and maximum density are only a fraction of a percent. The dynamics for SLy, EoS A, and C are essentially the same with one exception. The maximum density of the star described by EoS A does demonstrate some excursions into the range $\rho_1 < \rho < \rho_1 + \Delta \rho$ but fails to exceed this range. These excursions occur only at intermediate times, once the amplitude of the oscillations has developed and before the star has settled. \begin{figure} \includegraphics[width=3.5in]{figures/singlestar/outputprofiles.pdf} \caption{Dynamics of isolated, non-rotating stars. The star represented by a cyan circle in Fig.~\ref{fig:eoses} is evolved with three different EoSs, and the maximum of the density as a function of time is shown (\textbf{top}). Also shown are two horizontal bands indicating the regions of the PT ($\rho_1 \le \rho \le \rho_1+\Delta \rho$) for EoS A (magenta) and EoS~C (brown). For the SLy EoS with no PT (solid black), the star oscillates as expected. For EoS A (dotted blue), as the density increases, it shoots past the oscillation amplitude of the star with no PT, indicative of the core changing phase. These excursions to the hybrid phase ultimately settle back to the hadronic star. At \textbf{bottom}, profiles of the density along the positive $x$-axis are shown for the three times indicated by vertical lines in the top frame. In the first of these, stars described by EoS A and C have non-hadronic cores. Also shown, with the scale on the right, is $\epsilon_\mathrm{thermal}$, representing the fractional thermal component of the internal energy.
These curves can be differentiated from the density both because they are all drawn with solid lines and because the thermal energy differs from zero only well inside the star. At early times, the internal energy consists entirely of the cold component. } \label{fig:singlestar} \end{figure} We also consider rotating, magnetized stellar evolutions by seeding a small poloidal magnetic field, misaligned with respect to the rotation axis by $10^\circ$, only in the interior of the star. These evolutions also incorporate the same large atmosphere and grid structure as the non-rotating solutions just discussed. As we discuss later, the onset of the PT introduces non-uniform rotation in the interior of the star which distorts the field topology. However, this distortion takes place deep inside the NS, and the matter pressure keeps it confined. As a result, there is essentially no impact on the field behavior within the outer envelope of the NS, which implies no direct observational consequences. In Fig.~\ref{fig:singlestarrotatingwB}, we once again show the maximum density, and we also show the central magnitude of the magnetic field for a star rotating at a frequency of $800\,\mathrm{Hz}$ and with initial central density a bit lower than the onset density. The plot demonstrates a significant difference in magnetic field strength at the center of the star. However, snapshots of the magnetic field on a meridional plane at four different times show that the changes within the core do not propagate to the surface. We compare the base EoS with no PT with EoSs A and C, which have a PT but are otherwise stable. When looking at the density, one sees behavior similar to that of the non-rotating case shown in Fig.~\ref{fig:singlestar}. That is, the density for each star with the PT has large vertical excursions as the stellar core undergoes the PT.
However, the oscillations periodically return the star to a purely hadronic state. The apparently dramatic excursions of maximum density at late times actually represent the small scale dynamics of the core, in which a small region reaches the onset density of the EoS. The cessation of the excursions around $t\approx 2$ ms, however, appears at roughly the time at which the maximum density drifts downward away from the onset density. This drift occurs as well for the star with no PT, suggesting that the large scale dynamics moves the star towards a different equilibrium. The middle panel of Fig.~\ref{fig:singlestarrotatingwB} shows significant differences in the central magnetic field magnitude. Although the magnetic field is likely not affecting the dynamics, it is responding to the changing stellar structure resulting from the transformation of the stellar core to a non-hadronic state. We also show the magnetic field lines along a meridional plane at a few different times. The changes in the magnetic field configuration apparent in the figure occur only in the inner part of the star and remain confined to that region as pressure overwhelms any potential propagating effect. On time scales longer than the milliseconds of these evolutions, say of order seconds, such differences might potentially reach the surface. However, the magnitudes of the differences, not so large even in these evolutions, would likely be diminished further as they reach the surface. Without changes to the surface magnetic field, the ability to observe a NS experiencing a PT electromagnetically is likely very limited. \begin{figure} \includegraphics[width=3.5in]{figures/singlestarrotatingwB/output.pdf}\\ \includegraphics[width=3.5in]{figures/singlestarrotatingwB/linesandcurl.png} \caption{Dynamics of a particular isolated, rotating, magnetized star.
This star has an initial central density a bit lower than the lowest density star shown in Fig.~\ref{fig:singlestar} and rotates at $800\,\mathrm{Hz}$. \textbf{Top:} Maximum density as a function of time. The star with no PT (black solid) shows the usual stable oscillation. The solid, horizontal, magenta line indicates the onset density of the PT. \textbf{Middle:} The magnitude of the magnetic field at the center as a function of time. \textbf{Bottom:} Snapshots of the magnetic field on a meridional plane at times 0.5, 1.0, 1.5, and 2.0 milliseconds (from left to right). The top row of these panels shows EoS~A and the bottom row shows the SLy star. The colors indicate the curl of the magnetic field all with the same colormap, and the black lines indicate the magnetic field. Dotted cyan vertical lines are shown in the upper panels indicating the times at which these snapshots are taken. Although the maximum of the magnetic field between the two EoSs looks similar, the structure in the core is significantly different. The passage of the EoS A star through the phase transition appears to contract the core and result in changes to the magnetic field in the core region. However, these changes do not appear to propagate to the surface. 
} \label{fig:singlestarrotatingwB} \end{figure} \subsection{Binary NSs} \begin{table*}[t]\centering \begin{ruledtabular} \begin{tabular}{lllllllllllll} EoS & $q$ & $M_{0}^{\rm ADM} $ & $m_{b}^{(1)}, m_{g}^{(1)}$ & $m_{b}^{(2)}, m_{g}^{(2)}$ & $R^{(1)}$ & $R^{(2)}$ & $C^{(1)}$ & $C^{(2)}$ & $k_2^{(1)}$ & $k_2^{(2)}$ & $\kappa_2^T$ & $f_{2}$ \\ & & ~[$M_{\odot}$] & ~~[$M_{\odot}$] & ~~[$M_{\odot}$] & [km] & [km] & & & & & & [kHz] \\ \hline SLy & 1.0 & 2.471 & 1.37, 1.20 & 1.37, 1.20 & 11.46 & 11.46 & 0.1607 & 0.1607 & 0.1028 & 0.1028 & 119.9 & 3.02 \\ C & & & & & & & & & & & & 3.27 \\ SLy & 0.92 & 2.373 & 1.37, 1.20 & 1.25, 1.10 & 11.46 & 11.49 & 0.1607 & 0.1480 & 0.1028 & 0.1122 & 154.4 & 3.08 \\ C & & & & & & & & & & & & 3.02 \\ SLy & 0.86 & 2.637 & 1.60, 1.37 & 1.35, 1.18 & 11.43 & 11.46 & 0.1850 & 0.1586 & 0.08484 & 0.1044 & 81.23 & 3.22 \\ C & & & & & & & & & & & & 3.56 \\ \end{tabular} \end{ruledtabular} \caption{Summary of the neutron star binaries studied here. The initial data were computed using the {\sc Bin star} solver from the {\sc Lorene} package~\cite{lorene}. All the binaries start from an initial separation of $37.7$~km. The outer boundary is located at $756$~km and the highest resolution level covers both stars with a resolution of $\Delta x_{\rm min}=100$ m. The table displays the mass ratio of the binary $q \equiv m_g^{(2)}/m_g^{(1)} \le 1$, the baryon (gravitational) mass of each star $m_b^{(i)}$ ($m_g^{(i)}$), its circumferential radius $R^{(i)}$, and its compactness $C^{(i)}$ (all evaluated with the stars at infinite separation), the tidal Love numbers of the individual stars, the polarizability parameter of the binary, and the main GW frequency $f_2$ of the post-merger remnant (displayed in Fig.~\ref{fig:binaries_f2_fit}). Note that binaries with EoSs~A and~B are not listed because they quickly collapse to a black hole. } \label{table:equal_mass} \end{table*} We study three particular binaries constructed by \texttt{Lorene} to be in a quasi-circular orbit.
These runs use six levels of resolution, the first five of which are fixed with a ratio of two between resolutions. The last level is dynamic, tracking the stars, with a refinement ratio of four. The domain extends along each axis over $\pm 752$ km. The finest resolution runs consist of $241^3$ points on the coarse level (with a finest level grid spacing of $\Delta x=98$ m) and others use $193^3$ (finest level $\Delta x=123$ m). To test that we were in the convergent regime, a few runs over shorter periods used up to $343^3$ points. We note that these binary simulations do not use the large atmosphere adopted for the isolated stars and do not have any magnetization. Certain other details about these binaries are summarized in Table~\ref{table:equal_mass}. Note that these binaries have different total masses, which complicates direct comparison among the evolutions. Future work includes constructing binaries maintaining certain parameters, but these variations in total mass allow us to scan a broad range of mass ratios. We construct an equal mass binary with stellar central densities a bit below the onset density for the EoSs considered here. Once again we consider evolutions with EoSs A, B, C, and SLy (without a PT). The initial data, being below the PT, is the same for all four runs. The total gravitational (or ADM) mass of the system is $2.47 M_\odot$ and the initial orbital angular velocity is $2190$~rad/s. Each star has a baryonic mass of $M_b=1.37 M_\odot$ and is initially separated from the other by $37.73$~km. The evolutions of this binary with different EoSs are displayed in Fig.~\ref{fig:binary_1.37}. The top frame shows the dominant strain mode, $h_{2,2}$, while the middle frame shows the phase difference for each case compared to the EoS with no PT. The bottom frame shows the maximum density, $\rho_\mathrm{max}$, as a function of time.
Within this frame are shown the density bands indicating the PT, thin brown for EoS~C and wide magenta for EoS A. This bottom frame indicates that the binaries apparently share the same evolution at times before merger because the maximum density fails to reach the densities at which the EoSs differ. The reader, however, may notice small differences in the strain in the top frame and small phase differences at these pre-merger times. These differences arise from the double time integration of $\psi_4$ over somewhat different time lengths and boundaries. As such, we consider phase differences of order 0.1 radians as something of a floor, and regard as physical, and potentially observable, only phase differences that exceed this floor value. The binary with the SLy EoS merges and forms a hyper-massive neutron star which remains stable for at least 10 ms after the collision. The cases with a PT depart from this behavior. It is not at all surprising that the remnant for EoS~B collapses promptly at merger given the stability properties of the EoS at high mass. This expectation can be explained by examining the TOV solutions associated with EoS~B as plotted in the right frame of Fig.~\ref{fig:eoses}. A local maximum occurs in the mass-versus-radius plot right near the region describing hybrid solutions and such an extremum indicates a change in stability; in contrast, the other EoSs do not have an extremum in that region. We also observe the binary described by EoS A collapsing quickly upon merger. Again, by examining the TOV solutions in the right frame of Fig.~\ref{fig:eoses} one can clearly see that EoS~A does not support stars as massive as EoS~C does, and one expects the remnant to be fairly massive.
The remnant under EoS~C survives the merger, and, once the maximum density increases after the stars merge (see bottom panel of Fig.~\ref{fig:binary_1.37}), the GW signal shows differences (see the middle panel showing the phase difference between the GW signals). However, not until a bit more than two milliseconds after merger does the phase difference between EoS~C and SLy begin to grow steadily. \begin{figure} \includegraphics[width=3.0in]{figures/binary_1.37/outputSTRAINS.pdf} \caption{Dynamics of an equal-mass, binary NS merger. \textbf{Top:} The real component of the wavestrain as a function of time. \textbf{Middle:} The difference in phase with respect to the SLy EoS. \textbf{Bottom:} The maximum density as a function of time. The horizontal bands indicate the density regions in which the PT occurs: wide magenta (EoS A) and thin brown (EoS~C). The vertical, cyan line shows the time at which the two stars touch. The mergers with EoSs A and B both result in prompt collapse at merger. The small phase differences at early times are likely due to the double time integration to get the strains and not to any physical effect. } \label{fig:binary_1.37} \end{figure} Because the GW differences occur post-merger, we analyze the frequency differences with a fast-Fourier-transform~(FFT) of just the post-merger region of the signal. We show this FFT and the signals in this region in Fig.~\ref{fig:binary_fft}. The remnant with the PT oscillates at a higher frequency, consistent with the results of Refs.~\cite{Most:2018eaw,Most:2019onn,Weih:2019xvw}. Such a behavior can be understood in terms of a simple model. In the equal mass case, the PT takes place at a central region in the remnant. There, the density is higher; since the mass of this region is approximately conserved, its moment of inertia decreases and its angular frequency goes up. In cases where such a region is sufficiently large, its contribution to gravitational waves from the system dominates.
Thus, there is a tendency towards higher frequencies due to the PT. \begin{figure} \includegraphics[width=3.0in]{figures/binary_fft/output.pdf} \caption{The FFT of the post-merger signals from the equal mass binary mergers shown in Fig.~\ref{fig:binary_1.37}. The power spectral density is shown normalized to the maximum. The remnant that undergoes a PT oscillates at a higher frequency than that without the PT, indicating a more compact remnant. } \label{fig:binary_fft} \end{figure} \begin{figure} \includegraphics[width=3.0in]{figures/binary_unequal/output.pdf} \caption{Dynamics of the $q=0.92$ unequal-mass, binary NS merger with masses $1.2 M_\odot$ and $1.1 M_\odot$. The vertical, cyan line shows the time at which the two stars touch. } \label{fig:binary_unequal} \end{figure} \begin{figure} \includegraphics[width=3.0in]{figures/binary_unequal_fft/output.pdf} \caption{The FFT of the post-merger signals from the $q=0.92$ unequal mass binary mergers shown in Fig.~\ref{fig:binary_unequal}. The power spectral density is shown normalized to the maximum. } \label{fig:binary_unequal_fft} \end{figure} We next consider unequal mass binaries. The study of asymmetric BNSs has become yet more relevant and interesting in light of the recent observation of a BNS with asymmetric mass ratio of $q=0.78$ via pulsar timing~\cite{Ferdman:2020huz}. First, we consider one with mass ratio\footnote{We define the mass ratio in terms of the two gravitational masses of the binary components with $M_g^{(1)} \ge M_g^{(2)}$ and $q\equiv M_g^{(2)}/M_g^{(1)} \le 1$.} $q=0.92$ shown in Fig.~\ref{fig:binary_unequal}. In this case, the overall qualitative behavior is very similar to the equal mass case. Importantly, however, the main frequency in the gravitational waves produced post-merger, as shown in Fig.~\ref{fig:binary_unequal_fft}, is quite close for both the purely hadronic and PT cases. Two factors help explain why the post-merger frequencies are closer together than in the $q=1$ case.
The first is that the non-hadronic core for this remnant is smaller than that formed in the equal mass case and therefore presumably contributes little to increasing the rotational frequency via conservation of angular momentum. This is related to the fact that this binary reaches the smallest post-merger densities of these binaries (see Fig.~\ref{fig:nearmerger} discussed below). The other factor is a bit more subtle and particular to the unequal mass case. Unlike the ``dumbbell'' formed in the early merger of the equal mass case, the remnant of an unequal mass merger is dominated by the more massive object, which happens to contain the non-hadronic core. As the core gets closer to the rotational axis, its contribution to GW production decreases and its own quadrupole moment begins to be the dominant effect. Next, we study a case with $q=0.86$. The idea here was to construct a binary with one hybrid star that has a maximum density above the PT onset density of the EoS while the other is a normal hadronic star with lower central density. This high mass star could potentially undergo the PT in the opposite sense to those described above. However, we were not able to generate such a binary with \texttt{Lorene}, and so instead we started with a purely hadronic, unequal mass binary. Note that this difficulty arises only for this mass ratio, not for the previously presented cases. However, we stress that the choice of a sufficiently massive star leads to the PT taking place dynamically, so that the resulting binary is of mixed type. Indeed, once evolved with EoS A or C, the higher mass star quickly undergoes the PT (see the inset of the bottom panel of Fig.~\ref{fig:binary_unequal6ab} showing the early time behavior of the maximum density). As shown in Fig.~\ref{fig:binary_unequal6ab}, the maximum density at early times shows large differences among the three EoSs. With EoS SLy, the binary evolves without any significant change to the maximum density.
In contrast, both EoS A and C show a quick rise in the maximum density indicating that the stars are undergoing their respective PTs. The high mass star with EoS A quickly collapses. As shown in Fig.~\ref{fig:binary_unequal6ab_fft}, the post-merger differences resemble those in the equal mass case with the EoS~C remnant oscillating at a higher frequency. Common to all three binaries studied here, the maximum density decreases just after the stars touch and just before merger (see the bottom panels just after the vertical cyan line in Figs.~\ref{fig:binary_1.37}, \ref{fig:binary_unequal}, and \ref{fig:binary_unequal6ab}). This decrease in maximum density for all the binaries considered here is also shown in Fig.~\ref{fig:nearmerger}. Such a decrease is already expected from post-Newtonian calculations (e.g.~\cite{PhysRevLett.76.4878}) and has potentially important consequences. For stars above the PT, the drop in maximum density does not appear significant enough for the star to drop through the PT to a purely hadronic star {\em before} the stars come into contact (behavior already indicated by PN arguments where $\delta \rho_c/\rho_c \lesssim 0.3\%$ in e.g. the equal mass case). However, the trend towards a decrease in central density continues up to $\delta \rho_c/\rho_c \simeq 2\%$ (for the equal mass case, see Fig.~\ref{fig:nearmerger}). Such a decrease could imply a transition back to a purely hadronic case for stars barely above the PT. After such transitory density minima, strong density oscillations could potentially have a correlated behavior ``in and out'' of the PT for some time (akin to the oscillations discussed before in the case of isolated neutron stars). However, this scenario might only arise within a narrow set of physical parameters. \begin{figure} \includegraphics[width=3.0in]{figures/binary_unequal6ab/output.pdf} \caption{Dynamics of a $q=0.86$, unequal-mass, binary NS merger with masses $1.37 M_\odot$ and $1.18 M_\odot$.
This binary lacks a PT in the initial data because \texttt{Lorene} seems unable to produce a binary with one hadronic star and one with a non-hadronic core. However, the density of the high mass star is such that the evolution quickly drives it through the PT, as shown in the inset which covers the period $0\le t \le 0.06$~ms. One is then evolving a binary with one hybrid star and one hadronic star. The vertical, cyan line shows the time at which the two stars touch. } \label{fig:binary_unequal6ab} \end{figure} \begin{figure} \includegraphics[width=3.0in]{figures/binary_unequal6ab_fft/output.pdf} \caption{The FFT of the post-merger signals from the $q=0.86$ unequal mass binary mergers shown in Fig.~\ref{fig:binary_unequal6ab}. The power spectral density is shown normalized to the maximum. } \label{fig:binary_unequal6ab_fft} \end{figure} Let us now take a closer look at the dependence of the main frequency, $f_2$, of the post-merger GW signal on the effective ``remnant tidal polarizability parameter,''\footnote{Also known as one of a family of ``dimensionless tidal parameters'' in the effective one body approach~\cite{2010PhRvD..81h4016D}.} $\kappa_2^T$, defined as \begin{equation} \kappa_2^T = 2 \left[ q \left( \frac{X^{(1)}}{C^{(1)}} \right)^5 k_2^{(1)} + \frac{1}{q} \left( \frac{X^{(2)}}{C^{(2)}} \right)^5 k_2^{(2)} \right] \end{equation} where $q=M^{(2)}/M^{(1)} \le 1$, $X^{(i)}=M^{(i)}/(M^{(1)} + M^{(2)})$ and $C^{(i)} = M^{(i)}/R^{(i)}$, and where $k_2^{(i)}$ are the individual tidal Love numbers of each star. A rather robust functional dependence has been found that relates these two quantities (e.g.,~\cite{PhysRevLett.112.201101,Lehner:2016lxy,Vretinaris:2019spn}). Fig.~\ref{fig:binaries_f2_fit} displays a particular fit of this frequency as a function of this polarizability parameter, obtained by extracting this value from the remnant of many binary neutron star simulations with different EoSs without a PT~\cite{2016PhRvD..93l4051R}.
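As a consistency check (our illustration), the value $\kappa_2^T=119.9$ quoted in Table~\ref{table:equal_mass} for the equal-mass SLy binary follows directly from the tabulated $C^{(i)}$ and $k_2^{(i)}$, and inserting it into the $f_2$ fit of Ref.~\cite{2016PhRvD..93l4051R} (quoted in the caption of Fig.~\ref{fig:binaries_f2_fit}) gives a frequency within about $0.1$~kHz of the measured $3.02$~kHz:

```python
# Equal-mass SLy binary from the binaries table: q = 1, X = 1/2,
# C = 0.1607, k2 = 0.1028 for both stars.
q, X, C, k2 = 1.0, 0.5, 0.1607, 0.1028

# kappa_2^T = 2 [ q (X1/C1)^5 k2^(1) + (1/q) (X2/C2)^5 k2^(2) ]
kappa2T = 2.0 * (q * (X / C) ** 5 * k2 + (1.0 / q) * (X / C) ** 5 * k2)
assert abs(kappa2T - 119.9) < 0.5

# f_2 fit of Ref. [2016PhRvD..93l4051R]: f_2 = 5.832 - 1.118 (kappa_2^T)^(1/5)
f2_fit = 5.832 - 1.118 * kappa2T ** 0.2     # kHz
# about 2.92 kHz, within roughly 0.1 kHz of the measured 3.02 kHz
```

The same arithmetic with the $q=0.92$ entries of the table reproduces the tabulated $\kappa_2^T=154.4$.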
We include in the figure our values for the modified SLy and those obtained with EoS~C, computed from the FFTs of the postmerger signals. Because we evolve our binaries just a few milliseconds after merger, we also include the frequency fit $f_{2,i}$ which represents the transient frequency just after merger. \begin{figure} \includegraphics[width=3.0in]{figures/binaries_f2_fit/output.pdf} \caption{Post-merger $f_2$ frequencies for the various binaries studied here along with fits to these frequencies. The fits $f_2 = 5.832-1.118 \left(\kappa_2^T\right)^{1/5}$ and $f_{2,i}= 6.401-1.299 \left(\kappa_2^T\right)^{1/5}$ come from Ref.~\cite{2016PhRvD..93l4051R}. The frequencies for EoS~C appear to follow a similar trend, and a fit to just these three points with the same form results in $f_{2,C}= 7.482-1.624 \left(\kappa_2^T\right)^{1/5}$. } \label{fig:binaries_f2_fit} \end{figure} The equal mass ($q=1$) and $q=0.86$ cases show significant differences in the post-merger frequencies, which could potentially identify the PT (see e.g. Ref.~\cite{Bauswein:2018bma}). As noted, the $q=0.92$ case shows a small difference; restricting the FFT to times after 13 ms shows that the $f_2$ frequency for EoS~C is arguably consistent with that of the case without the PT. It is not completely clear how complicated the dependence of the post-merger frequency may be when PTs are involved, nor what the impact of the mass ratio and total mass of the binary is. More thorough coverage of the parameter space would be needed to construct a model able to predict the expected values of $f_2$ in general cases. Despite this need for more coverage, the frequencies obtained for EoS~C do appear to follow a similar functional form as that obtained for EoSs with no PT, and we present that fit in the figure. \begin{figure} \includegraphics[width=3.0in]{figures/nearmerger/output.pdf} \caption{Near-merger behavior of maximum density for the various binaries studied here.
The times for all binaries have been shifted such that $t=0$ denotes the moment when the stars first touch. The maximum density dips soon after the stars touch. } \label{fig:nearmerger} \end{figure} Another aspect of the binary merger that may lead to differences due to the presence of a PT is the development of the $m=1$ mode~\cite{Paschalidis:2015mla,Lehner:2016wjg}. This mode grows more quickly for unequal mass binaries, and so the wide range of mass ratios in our simulations is well suited to probe for such differences. In Fig.~\ref{fig:m=1mode}, we compare the magnitude of the $\Psi_4{}^{2,1}$ mode for SLy and EoS~C with the $q=0.92$ mass ratio, our longest post-merger evolution and one that typifies the differences in the other two cases. As shown in the figure, the mode for each case grows qualitatively similarly, suggesting that the PT does not significantly affect the growth rate of this instability. \begin{figure} \includegraphics[width=3.0in]{figures/m=1mode/output.pdf} \caption{Comparison of the growth of the $m=1$ mode for the $q=0.92$ binary postmerger. The growth of this mode, although not identical between the two EoSs, is roughly comparable. } \label{fig:m=1mode} \end{figure} \section{Discussion} We compare evolutions of NSs and BNSs with EoSs that differ only in the presence of a somewhat generic PT. That is, the PT is arbitrary and not motivated by a particular theory of high density matter. Only a few different PTs are adopted among a large space of possibilities. With individual NSs, hadronic stars with core densities close to the PT can undergo the PT and oscillate between hadronic and hybrid states. The dynamics of this oscillation appears to involve a complex interplay of a few factors. The star expands and contracts, generates thermal energy, and all the while the fraction of the core becoming non-hadronic decreases.
This behavior arises from the accretion and pressure afforded by the artificially high atmosphere chosen here, which is likely unrealistic. However, this interesting behavior might instead be triggered astrophysically. For example, it would be interesting to assess whether the onset of this behavior might be induced by strong tidal interactions in an eccentric binary as explored in~\cite{Yang:2018bzx}. With rotation and magnetic field, the core is similarly dynamic. However, we observe no significant change to the surface magnetic field. Evolutions of binary mergers of hadronic stars with densities close to the onset density of a PT, with both equal and unequal masses, show a difference in their GW signatures at merger when the maximum density reaches the onset density. The phase difference increases as the post-merger regime continues, and the remnant oscillates at a higher frequency than the hadronic remnant. We expect that, at least in some cases, these differences may be observable, although one can imagine possible degeneracies with EOSs lacking a PT but that otherwise produce condensed cores at high densities. We also consider the novel scenario of a BNS composed of a hybrid star along with a hadronic star. Such a necessarily unequal mass binary, here with $q=0.86$, is particularly relevant given the recent detection via pulsar timing of a very unequal ($q=0.78$) BNS~\cite{Ferdman:2020huz} and the detections of asymmetric compact-object binaries by LIGO/Virgo. A binary such as this offers the possibility that the hybrid star decompresses and becomes more hadronic dynamically. Although our $q=0.86$ binary is composed of a hybrid and a hadronic star, this binary collapsed to a black hole at merger. However, had a PT occurring at a smaller density been chosen, one could construct such a scenario while avoiding collapse at merger. We also computed the dominant post-merger oscillation frequency of the remnant for these mergers.
Future generations of gravitational wave observatories are expected to have a bandwidth extending to the post-merger regime with the hope of differentiating a PT in the EoS via these frequency differences. Further theoretical analysis of the many outcomes will be required to guide detection efforts as well as development of efficient ways to search for such subtle observables, e.g.~\cite{Yang:2017xlf,Whitaker}. \bigskip \begin{acknowledgments} We would like to thank Will East for helpful discussions and Juan Calderon for drawing our attention to the effects of PT on the $m=1$ instability. This work was supported by the NSF under grants PHY-1912769 and PHY-2011383. CP acknowledges support from the Spanish Ministry of Economy and Competitiveness grants AYA2016-80289-P and PID2019-110301GB-I00 (AEI/FEDER, UE). LL was supported in part by NSERC through a Discovery Grant, and CIFAR. Computations were performed at XSEDE, Marenostrum and the Niagara supercomputer at the SciNet HPC Consortium. Computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center were obtained thanks to time granted through the $17^\mathrm{th}$ (project Tier-0 GEEFBNSM) and $20^\mathrm{th}$ (Proposal 2019215177) PRACE regular calls. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. Research at Perimeter Institute is supported by the Government of Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. \end{acknowledgments} \bibliographystyle{utphys}
\section{Introduction} Quantum computation may still be in its infancy, but new advances have allowed for the first experimental demonstrations of quantum computing in recent years \cite{google,blattion1,64IBM,chinesephotonic,2020demonstrationHoneywell}. Quantum computers themselves promise to aid in solving classically-intractable problems for such fields as quantum chemistry \cite{quantum_chem1,quantum_chem2,benjamin1,Bromley_2020}, quantum machine learning \cite{biamonte1,QMLreview}, and quantum cryptography \cite{quantumcrypto,PiraQCrypto}, among others. However, such promise comes with a catch: protecting the quantum states in a quantum computer from deleterious noise channels has proven to be a most difficult task, preventing scalability and implementations of most quantum algorithms \cite{PreskillNISQ}. Indeed, current prototypes of \textit{quantum processing units} (or QPUs) available from Rigetti, Honeywell, IBM, Google, and others are considered still too resource-constrained to be able to demonstrate full fault tolerance \cite{QEC1,2020egan-brown,Linke_2017}. In \textit{Noisy Intermediate-Scale Quantum} (or NISQ) technology, quantum computers are still rapidly evolving and severely limited: Firstly, quantum computing as a field has not yet settled on a particular technology approach \cite{fullstack}; some of the leading candidate prototypes are constructed from superconducting qubits \cite{superconducting1,blueprintsuper,superFT,chinesesuper,couplingsuper,sete-zeng-rigetti}, trapped-ion qubits \cite{ions,ions2,ions3,ions4}, as well as other contending proposals \cite{silicon,photons,Bourassa_2021,bartolucci2021fusionbased,chamberland2020buildingconcatenated}. Secondly, most of these devices exhibit finite- and fixed-connectivity constraints between neighboring qubits (a notable exception to this is trapped-ion technology, in which one can \textit{in principle} have "all-to-all" connectivity \cite{ions}). 
Thirdly, noise considerations have severely hampered developments in quantum devices \cite{knillNoise}. As such, efficient methods for executing quantum algorithms on first-generation quantum hardware require special attention. In light of these difficulties, the goal of efficient delegation of the finite resources in a QPU for usage with near-term quantum algorithms has become exigent and pervasive. Many different approaches exist for optimizing such systems, which range widely from full-stack compilation \cite{fullstack} and neural-network based approaches \cite{ML1,ML2}, to more theoretically-motivated methods such as quantum gate-synthesis techniques adapted for quantum hardware \cite{gate-synthesis,gate-synthesis2,wille1,wille2}. The present work focuses on a problem that has come to be known as the \textit{qubit-mapping problem}. The general problem can be stated as follows: Given the finite connectivity of a NISQ-era device and its accompanying noise statistics, what is the optimal way to assign the qubits from a virtual quantum algorithm to the physical qubits of a NISQ-era processor, with some type of maximized guarantees on fidelity and/or circuit runtime? Distinct methods of solving this problem have been addressed \cite{aliroquantum,hardware-aware,lao-mapping1,ML1,ML2,noisy1,temporal1,qubitproblem1}; in spite of this progress, little work has been done in order to better understand the capabilities and limitations of various qubit-mapping algorithms, especially in the case of noise awareness \cite{noisy1}. Due to the computational hardness of the mapping problem, one commonly resorts to heuristic solutions. It is known that in heuristic mapping algorithms, placement of the initial mapping is crucial for efficient execution of the quantum algorithm \cite{hardware-aware}; as such, we will consider only initial-mapping strategies in this work. For a review of the qubit-mapping problem, we refer the reader to references \cite{bandic,carmina_overview}. 
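To make the size of the search space concrete, the problem just stated can be phrased naively as an exhaustive search over all injective assignments of circuit qubits to physical qubits, scored by a caller-supplied metric. The following is a hypothetical, dependency-free sketch (the names are ours, not an algorithm from the literature); its factorial cost is precisely why one resorts to heuristics:

```python
from itertools import permutations

def exhaustive_map(circuit_qubits, device_qubits, score):
    """Try every injective assignment of circuit qubits to device qubits
    and return the mapping maximizing the supplied score function."""
    best_map, best_score = None, float("-inf")
    for perm in permutations(device_qubits, len(circuit_qubits)):
        mapping = dict(zip(circuit_qubits, perm))
        s = score(mapping)
        if s > best_score:
            best_map, best_score = mapping, s
    return best_map, best_score
```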
The purpose of this article is threefold: Firstly, we introduce a new heuristic mapping algorithm and benchmark it with several sets of quantum circuits. This heuristic mapping algorithm is \textit{noise-aware}, i.e. our mapper explicitly takes into account the error-rate statistics for two-qubit, single-qubit, and measurement operations. We numerically analyze our heuristic algorithm and provide effective upper and lower bounds for its performance. These effective bounds are calculated by using two additional qubit-mapping algorithms for comparison. The first of these algorithms utilizes a brute-force approach, which iterates successively through every possible mapping permutation and selects the best success rate (with respect to a metric defined in \cref{success-rate-metric-explain}). A trivial mapper is used as the lower bound and assigns virtual qubits from a quantum circuit to a physical device in accordance with the numbering scheme used on the QPU. Secondly, we observe that, for two benchmarks whose graph-theoretic representations (to be explained in \cref{background-qmapping-section}) admit the same vertex degree, the manner in which the two-qubit gates are distributed plays a significant role in the success rate measured, as does the actual number of vertex-to-vertex connections in the graph representation of the quantum circuit (we shall refer to these as \textit{interaction graphs} as discussed in \cref{background-qmapping-section}); specifically, our results concern the differences between adding nonlinear edges to interaction graphs (which correspond to non-nearest-neighbor two-qubit gate interactions) in both \textit{depth-first} and \textit{breadth-first} methods.
Thirdly, we investigate the scaling properties of our heuristic algorithm for mapping quantum algorithms with nearest-neighbor two-qubit structures (such circuit properties are shown to give rise to linear interaction graphs); we find that our heuristic mapper performs well versus the trivial solution when mapping linear algorithms to large quantum-device architectures, as long as less than $75\%$ of the device is occupied; this nontrivial performance gain is seen to not only be substantial, but to enlarge as the size of the processor itself grows, relative to the percentage of the QPU that is occupied. The structure of this paper is as follows: \cref{background-qmapping-section} introduces the qubit-mapping problem in detail; we provide an instructional example to motivate our work. \cref{description_algs-section} provides details on the structure of each of the algorithms employed in this work, in addition to an explanation of how we calculated the success rate for our simulations. We separate our results in \cref{results-intro-section} into several parts: We first describe the benchmarks utilized in our analysis in \cref{real-benchmarks-section}; next, we discuss the results obtained from incorporating non-nearest-neighbor two-qubit gates in the benchmarks we tested (\cref{nonlinear-section}), using and comparing a \textit{breadth versus depth} analysis of two-qubit gate additions; and finally, in \cref{large-linear-section} we examine the outcomes of our simulations with large-linear interaction graphs, scaled onto QPU coupling-graphs with dimensions of $n \times n$ qubits, where $n > 3$. In \cref{discussion-section}, we conclude our work, and discuss possible future directions for research. \begin{figure*} \centering \hskip-0.5cm \includegraphics[width=\textwidth]{example-circuit.png} \caption{a) depicts an example quantum circuit. 
As shown in b), this circuit is decomposed into a graph-theoretic version of the algorithm (also known as an \textit{interaction graph}) which illustrates the interaction via two-qubit gates of qubits in the original algorithm; edges represent two-qubit gates, while vertices represent qubits which are acted upon. Weights are added to vertices and edges in order to account for more gate invocations. In c) the geometric connectivity of the interaction graph is analyzed and SWAP gates are added for any interaction-graph edges which cannot be exactly mapped to the QPU coupling graph; for example, if we define a mapping $\{q_{1}\mapsto Q_{1}, q_{2}\mapsto Q_{2},q_{3} \mapsto Q_{3} \}$, then the interaction-graph edge between $q_{1}$ and $q_{4}$ cannot be explicitly mapped, since $Q_{1}$ and $Q_{4}$ do not share an edge connection in the QPU coupling graph shown in d). The modified quantum circuit is then mapped to the QPU in d); in this way, the appropriate vertices, in accordance with some metric to be defined, are assigned to the graph-theoretic object representing the quantum device (referred to as the \textit{coupling graph} of the QPU). In many types of qubit-mapping algorithms, qubits can then be arranged and mapped \textit{temporally}, as well as \textit{spatially} \cite{hardware-aware,lao-mapping1,qubitproblem1}; for this reason, the final arrow between c) and d) carries the designation \textit{Initial Mapping}.} \label{fig:circuit-mapping-example} \end{figure*} \section{Background on the Qubit-Mapping Problem}\label{background-qmapping-section} Most quantum circuits that are devised theoretically do not take into account the actual physical hardware constraints of a given quantum device. In order to accommodate NISQ-era hardware, quantum-programming frameworks such as Qiskit \cite{Qiskit} include supports which allow developers to write algorithms without explicitly taking into account hardware limitations. 
As such, quantum compilers must perform several steps in order to prepare the quantum algorithm for actual execution on a device. Broadly speaking, these steps are: $1)$ to decompose the abstract quantum gates into \textit{elementary gates}; $2)$ to map the qubits of the quantum circuit to the physical qubits of the quantum processor, and insert SWAP operations into the algorithm in order to satisfy the connectivity constraints of a given quantum device; and $3)$ to optimize the resultant quantum circuit, with an aim to minimize quantities such as the execution time and gate count, among other cost functions. The qubit-mapping problem consists of the second step of quantum compilation and will be the main focus of this article. In the context of the qubit-mapping problem, we consider two objects: an \textit{interaction graph}, which is a graph representation of the quantum circuit that we would like to execute (vertices represent qubits and single-qubit gates, edges two-qubit gates), and the \textit{coupling graph}, which is a graph representation of the QPU's geometric connectivity. Most realistic QPU layouts are accompanied with \textit{calibration and noise statistics} which are added to the graph; these data usually include two-qubit gate error rates, single-qubit error rates, execution times (gate length), relaxation energies, and decoherence characteristic times $T1$ and $T2$ \cite{hardware-aware}. The goal is to match the geometric connectivity of the interaction graph to that of the coupling graph as closely as possible while, in our case, also taking into account the noise characteristics of the device.
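As a dependency-free illustration of these two objects (the simulations later in this work use NetworkX; the helper names here are ours), an interaction graph can be built from the circuit's two-qubit gate list, and the coupling graph of an $n \times n$ lattice from its grid structure:

```python
from collections import defaultdict

def interaction_graph(two_qubit_gates):
    """Map each unordered qubit pair to the number of times it interacts;
    edge weights count repeated two-qubit gates between the same pair."""
    g = defaultdict(int)
    for a, b in two_qubit_gates:
        g[frozenset((a, b))] += 1
    return dict(g)

def grid_coupling_graph(n):
    """Edge set of an n x n square-lattice QPU, qubits labeled (row, col)."""
    edges = set()
    for r in range(n):
        for c in range(n):
            if r + 1 < n:
                edges.add(frozenset(((r, c), (r + 1, c))))
            if c + 1 < n:
                edges.add(frozenset(((r, c), (r, c + 1))))
    return edges
```

The edge weights mirror the weighting of repeated gate invocations shown in \cref{fig:circuit-mapping-example}b.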
In the present work, we do not directly utilize calibration statistics from a real quantum computer; instead, we have analyzed the statistics from several of the IBM quantum computers \cite{64IBM,Murali_2020software,noisy1}, and assume errors of the same order of magnitude (which are typically $\sim 10^{-3}$ for single-qubit errors, and $\sim 10^{-2}$ for two-qubit gate errors and measurement errors \cite{noisy1}). This procedure is discussed in detail in \cref{description_algs-section}. Additionally, several proposals show that qubits can be given \textit{initial mappings}, which later may be modified in timestep fashion as the execution of the algorithm progresses. As the basis for this work considers only the initial-mapping portion of the qubit-mapping problem, we refer the reader to \cite{temporal1,qubitproblem1,lao-mapping1,Tan_2020-medina-talk} for work involving \textit{time-scheduling techniques}. As an instructive example, consider the quantum algorithm in \cref{fig:circuit-mapping-example}a. Before assigning qubits from the quantum algorithm to the physical QPU, the corresponding circuit is itself decomposed into a graph-theoretic form referred to as the \textit{interaction graph}; such a decomposition is intended to visualize the action of mapping and fitting of an algorithm to a relevant portion of the QPU lattice such that the geometric constraints of the circuit are respected. As shown in \cref{fig:circuit-mapping-example}b, the resulting interaction graph is the \textit{complete graph} $K_{4}$, and cannot be exactly embedded into the QPU coupling graph shown in \cref{fig:circuit-mapping-example}d. In order to correctly map this algorithm, one may add a SWAP gate operation to the qubits $q_{1}$ and $q_{3}$, and then perform the required two-qubit gate between $q_{3}$ and $q_{4}$, as depicted in the modified algorithm of \cref{fig:circuit-mapping-example}c; other SWAP gates are added as well in \cref{fig:circuit-mapping-example}.
The SWAP gate itself degrades the final-state fidelity of the algorithm. Due to the disadvantageous characteristics of utilizing SWAP gates, many qubit-mapping algorithms explicitly attempt to minimize the number of SWAP gates employed \cite{bandic,lao-mapping1}. The strategy described above is not the only existing approach. Indeed, solutions to the qubit-mapping problem can be separated into two broad categories: $1)$ \textit{optimal} (or brute-force) optimization and $2)$ heuristic optimization algorithms \cite{bandic}. Additionally, most current studies of qubit-mapping focus on minimizing the number of SWAP gates \cite{lao-mapping1,bandic,Tan_2020-medina-talk,palerwille,carmina_overview}. However, our approach to the qubit-mapping problem is distinct because we specifically take into account NISQ-era error rates, building on the work pioneered by \cite{noisy1}. In the literature concerning the qubit-mapping problem, this scheme is known more broadly as \textit{noise-aware qubit mapping} \cite{bandic,carmina_overview}. We will discuss the cost function utilized in this paper in \cref{description_algs-section}. \begin{figure} \centering \includegraphics[scale=0.32]{simple-nonlinear.png} \caption{In this work, we introduce the notions of interaction-graph \textit{linearity} and \textit{nonlinearity}. a) displays a nearest-neighbor quantum circuit, with a linear interaction graph; b) shows that adding nonlinear edges to the interaction graph is equivalent to the addition of an extra two-qubit gate, which is not nearest-neighbor in the corresponding quantum circuit.} \label{fig:simple-nonlinear} \end{figure} Our work additionally introduces the concepts of interaction-graph \textit{linearity} and \textit{nonlinearity} in the context of the qubit-mapping problem. Two examples of this idea can be seen in \cref{fig:simple-nonlinear}.
In \cref{fig:simple-nonlinear}a, we take a four-vertex interaction graph with three edges (known as a \textit{Hamiltonian path} $P_{4}$) to be equivalent to a four-qubit quantum circuit exhibiting nearest-neighbor two-qubit gates. As is shown in \cref{fig:simple-nonlinear}b, if an extra edge is added, the interaction graph becomes a \textit{cycle graph} $C_{4}$, and corresponds to the addition of an extra two-qubit gate between the first and last qubits in the quantum circuit. In graph theory, the two objects in \cref{fig:simple-nonlinear} are well known; the problem of identifying suitable \textit{Hamiltonian-path} and \textit{cycle-graph} solutions in a simple undirected graph is related to the famous \textit{traveling salesman problem} and is known to be NP-Complete \cite{wilson1,combinatorial,2021deterministicnearestneighbor}. In this article, we shall focus primarily on the two differing cases described above: quantum algorithms whose interaction graphs admit $1)$ linear and $2)$ nonlinear forms (disjointed interaction graphs are briefly discussed in \cref{real-benchmarks-section}). The emphasis on linear interaction graphs is justified for two main reasons. Firstly, in fields such as quantum chemistry, the simulation of fermionic quantum systems can be carried out by encoding the qubits via a \textit{Jordan-Wigner transformation} \cite{quantum_chem1,benjamin1,quantum_chem2}; such an encoding scheme can give rise to circuits known as \textit{linear SWAP networks} \cite{quantum_chem3,quantum_chem4}, which exhibit linearity in their corresponding interaction graphs. Secondly, a quantum algorithm with a linear interacting-graph representation is, in a sense, hardware agnostic; such algorithms exhibit only nearest-neighbor two-qubit gate invocations, and are thus adaptable to any architecture. Designing a qubit-mapping algorithm for the goal of executing such algorithms was therefore paramount in our considerations. 
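The distinction is easy to verify programmatically: a linear interaction graph such as $P_{4}$ has two degree-one endpoints, while adding a single nonlinear edge yields $C_{4}$, in which every vertex has degree two. A dependency-free sketch (names ours):

```python
def degrees(edges):
    """Vertex-degree table of a simple undirected graph given as edge pairs."""
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

P4 = [(0, 1), (1, 2), (2, 3)]  # linear: nearest-neighbor gates only
C4 = P4 + [(3, 0)]             # one nonlinear edge closes the cycle
```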
Conversely, our investigation of \textit{graph nonlinearity} as it applies to the qubit-mapping problem is motivated not by realistic implementations \textit{per se}, but rather by the desire to understand the advantages and limitations concomitant with the design of our heuristic mapping algorithm. In this way, we qualify and quantify our greedy heuristic with respect to possible realistic implementations. In the following section, we will detail our approach for the qubit-mapping algorithm, specifically made with the previously-mentioned goals in mind. \section{Description of the Mapping Algorithms}\label{description_algs-section} All of the qubit-mapping algorithms utilized in this work generally function in the following manner: firstly, an $n \times n$ square lattice (representing the geometric connectivity of the QPU) is initialized, along with single-qubit, two-qubit, and measurement error rates, as a NetworkX object \cite{networkx} which will serve as an approximation for the QPU device's coupling graph. Next, a quantum algorithm written in cQASM \cite{cQASM} is parsed into a NetworkX object as well. The qubit-mapping algorithm is then called, a final-mapping solution is assigned, and the mapping is evaluated using a cost function that is described at the end of this section (we will refer to this cost function as the \textit{metric}). The interaction graphs used in this article for QPU simulations were mapped to coupling graphs which consist of $3 \times 3$ QPU lattice grids in the simulations from \cref{nonlinear-section}; for the large-linear heuristic simulations of \cref{large-linear-section}, we map to $n \times n$ grid-lattice QPUs, where $n > 3$.
For all of the mapping algorithms described in this section, we will refer to an interaction graph, comprising a set of vertices $V$ and a set of edges $E$, as $\tilde{C}(V,E)$; additionally, we shall refer to a QPU coupling graph with vertex set $V'$ and edge set $E'$ as $\tilde{Q}(V',E')$. For the algorithms that we designed, several assumptions were made: \begin{enumerate} \item If $\Delta(V) \geq \Delta(V')$ (where $\Delta(V)$ represents the \textit{maximal vertex degree} \cite{wilson1}), then SWAP gates will be introduced in order to implement the proper qubit-qubit interactions for $\tilde{C}(V,E)$. We mean here that if the degree of the quantum algorithm's interaction graph exceeds the degree of the QPU's coupling graph, SWAP gates are necessitated. This assumption \textit{does not exclude} the use of SWAP operations in the case that $\Delta(V) < \Delta(V')$; indeed, SWAP gates will almost certainly be required if the geometric connectivities of the interaction and coupling graphs do not exactly match (i.e., a graph homomorphism does not exist from the interaction graph to the QPU coupling graph \cite{wilson1}). \item We additionally assume that $|V| \leq |V'|$ (where $|V|$, $|V'|$ represent the total number of vertices in the interaction and coupling graphs, respectively), i.e. that the number of vertices in the interaction graph is smaller than or equal to the number of vertices in the coupling graph. If $|V| > |V'|$, then the mapping process aborts, and an error message is displayed. \end{enumerate} \subsection{The Heuristic Mapping Algorithm \& Traffic Coefficient}\label{heuristic-description-section} Our heuristic mapping algorithm functions as follows. After the initialization steps from the preceding paragraph are completed, our greedy-heuristic algorithm assesses the maximum-degree vertices on the interaction graph via calculation of the \textit{traffic coefficients}, which we will describe in more detail shortly.
Next, the coupling-graph edge with the lowest error rate is identified. Afterwards, the qubit with the lowest error rate out of the two nearest-neighbor qubits is identified, and the first interaction-graph qubit is assigned to this coupling-graph qubit (the "first" interaction-graph qubit is defined as the one with the largest \textit{maximal traffic coefficient} $\mathcal{T_{\text{traf}}}$). The maximal traffic coefficient is calculated as follows. First, for the $i^{\text{th}}$ interaction-graph qubit, we sum the total number of single- and two-qubit gate invocations; the \textit{frequency} of an interaction-graph qubit is subsequently labeled $f_{i}$, as shown in \cref{eq:eq1}. Here, we have weighted two-qubit gate invocations ($N_{d,i}$) with an extra linear multiplier of $2$ in order to weigh interaction-graph qubits which exhibit a large percentage of the two-qubit gates utilized in a given quantum algorithm more heavily than single-qubit gates; in this way, we account for \textit{both} the nontriviality of the single-qubit gate invocations ($N_{s,i}$) and the higher error rates of two-qubit gates, which are typically at least one order of magnitude worse than for single-qubit gates \cite{Murali_2020software,noisy1}. After a bit of algebra (\cref{eq:eq2,eq:eq3}), one sees that the frequency of the $i^{\text{th}}$ interaction-graph qubit is re-written as a \textit{traffic coefficient} $t_{i}$; these traffic coefficients are summed and normalized, such that $ct_{i}$ provides a "percentage-wise" overview of the total interactions for the $i^{\text{th}}$ interaction-graph qubit in an algorithm. We then take the \textit{maximal traffic coefficient} $\mathcal{T_{\text{traf}}}$ as corresponding to the first interaction-graph qubit to be mapped, as shown in \cref{eq:eq4}.
\begin{align} f_{i} = N_{s,i} + 2N_{d,i} \label{eq:eq1},\\ 1 - \frac{1}{f_{i}} = t_{i} \label{eq:eq2},\\ c \cdot \sum_{i} t_{i} = 1 \label{eq:eq3},\\ \max_{i} t_{i} = \mathcal{T_{\text{traf}}} \label{eq:eq4}, \end{align} The maximal traffic coefficient $\mathcal{T_{\text{traf}}}$ represents the percentage of interactions for the "most active" qubit in the algorithm, and is used as a way to ascertain which interaction-graph qubits must be prioritized for the best-connected, lowest error-rated portions of the QPU coupling graph via our greedy heuristic that is described in \cref{heuristic-description-section}. In order to map the rest of the interaction graph, a variant of Dijkstra's algorithm \cite{dijkstra} is utilized, which takes into account the error rates of two- and single-qubit gates (represented on the coupling graph as edges and vertices with assigned error rates) in order to find the "shortest path" to the next available qubit in the interaction graph (here, the term \textit{shortest path} refers to the particular sequence of gates which leads to the lowest accumulated error, since the edges are weighted by the two-qubit error rates); once the next candidate interaction-graph qubit is designated, the algorithm surveys the interaction-graph qubits that have already been mapped. Finally, our heuristic algorithm assigns the new candidate to the QPU coupling graph, as closely as possible (such that the least amount of error is generated) to the originally-mapped interaction-graph qubit. This process continues until the entire interaction graph has been mapped to the closest-possible qubits on the coupling graph.
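The computation in \cref{eq:eq1,eq:eq2,eq:eq3,eq:eq4} translates directly into code; the following is a minimal sketch (our illustration, not the implementation used in this work; it assumes at least one qubit has $f_{i} > 1$ so that the normalization is well defined):

```python
def traffic_coefficients(N_s, N_d):
    """N_s[i], N_d[i]: single-/two-qubit gate invocation counts for qubit i.
    Returns the normalized traffic coefficients c*t_i and the index of the
    qubit achieving the maximal traffic coefficient."""
    f = [ns + 2 * nd for ns, nd in zip(N_s, N_d)]  # Eq. (1)
    t = [1.0 - 1.0 / fi for fi in f]               # Eq. (2)
    c = 1.0 / sum(t)                               # Eq. (3): c * sum_i t_i = 1
    ct = [c * ti for ti in t]                      # "percentage-wise" overview
    first = max(range(len(t)), key=t.__getitem__)  # Eq. (4): argmax of t_i
    return ct, first
```

The returned index identifies the interaction-graph qubit achieving $\mathcal{T_{\text{traf}}}$, i.e. the first qubit placed by the greedy heuristic.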
\subsection{The Brute-Force and Trivial Mapping Algorithms} The \textit{brute-force} algorithm utilized in this work functions as follows: first, the lattice QPU is initialized, and the error rates for all quantities are defined; next, the brute-force search algorithm generates a list of all possible mapping permutations for a quantum-algorithm mapping solution. Each permutation is mapped, and is evaluated using the metric described. The preceding permutation's metric value is compared to the current iteration's, and the permutation with the highest success-rate metric value is kept, while the inferior one is discarded. This process continues until the best permutation is found. The \textit{trivial mapping} algorithm functions by sequentially assigning interaction-graph qubits to correspondingly-numbered coupling-graph qubits; a quantum algorithm with qubits $q_{1}\dots q_{r}$, where $r \leq n^{2}$ ($n^{2}$ qubits for an $n \times n$ lattice), will be mapped to a coupling graph of a QPU with qubits $Q_{1}\dots Q_{s}$, where $s \geq r$, by assigning interaction-/coupling-graph qubit pairs as $\{q_{1} \mapsto Q_{1},q_{2} \mapsto Q_{2},\dots, q_{r} \mapsto Q_{r}\}$. The success-rate metric is then subsequently evaluated, in order to compare with the other two mapping strategies. \subsection{Evaluation of the Success-Rate Metric}\label{success-rate-metric-explain} The cost function used to quantify the performance of all mapping algorithms in this work is described below. The purpose of this metric is to approximate the fidelity of the final quantum state after the quantum circuit is mapped and executed, with all gates invoked. Other metrics exist \cite{lao-mapping1,noisy1,qubitproblem1,temporal1}; however, we limit our attention here to a metric that is based on success-rate measures. The single-qubit gate, two-qubit gate, and SWAP-gate product metrics are calculated as shown below in \cref{eq:single-metric,eq:double-qubit-metric,eq:swap-product,eq:product_metric_total}.
These product metrics, as explained above, relate specifically to the coupling graphs that we utilize in this work. \begin{align} \sigma_{s} = \prod_{i}^{n'<n}(1-\xi_{s,i})^{N_{s,i}} \label{eq:single-metric},\\ \sigma_{d} = \prod_{i}^{\delta_{d}}(1-\xi_{d,i})^{N_{d,i}} \label{eq:double-qubit-metric}, \\ \sigma^{\text{SW}} = \prod_{j}^{\delta^{\text{SW}}}\prod_{i}^{l}(1-\xi^{\text{SW}}_{i})_{j}^{(2N^{\text{SW}}_{i})} \label{eq:swap-product}, \\ \sigma_{\text{total}} = \sigma_{s} \cdot \sigma_{d} \cdot \sigma^{\text{SW}} \label{eq:product_metric_total}, \end{align} where on the first line, $\sigma_{s},n',n,\xi_{s,i},N_{s,i}$ are: the total single-qubit gate metric value; the total number of qubits in the interaction graph; the total number of qubits in the coupling graph; the single-qubit gate error rate for each qubit; and the number of single-qubit gate invocations per qubit, respectively. On the second line, $\sigma_{d},\delta_{d},\xi_{d,i},N_{d,i}$ are: the total two-qubit gate metric value; the total number of edges on the NISQ device; the two-qubit gate error rate per edge; and the number of two-qubit gate invocations per edge, respectively. On the final line, $\sigma^{\text{SW}},\delta^{\text{SW}},l,\xi^{\text{SW}}_{i},2N^{\text{SW}}_{i}$ represent: the total SWAP-gate metric value; the total number of separate edges that need SWAP gates; the total number of edges which physically separate the SWAP-corrected pair of coupling-graph qubits; the two-qubit gate error rate per edge; and the number of two-qubit gate invocations, for which the error rates overall are squared (this takes into account the cost of moving the qubit both to and from the closest unoccupied region, using the SWAP gate), respectively. Finally, $\sigma_{\text{total}}$ represents the total metric calculated from the product of all other success rates, as mentioned above. 
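A minimal sketch of how the product metric of \cref{eq:product_metric_total} can be evaluated, and how the trivial and brute-force mappers of the previous subsection might use it, is given below. The data layout and all function names are our own simplifying assumptions; in particular, the SWAP term is collapsed to a single uniform error rate per corrected pair.

```python
from itertools import permutations

def success_rate(single, double, swap):
    """Product metric sigma_total = sigma_s * sigma_d * sigma_SW.

    single: (error_rate, n_invocations) per mapped qubit
    double: (error_rate, n_invocations) per used coupling-graph edge
    swap:   (error_rate, n_invocations, path_length) per SWAP-corrected
            pair; the exponent 2*n accounts for moving a qubit both to
            and from the closest unoccupied region.
    """
    sigma_s = 1.0
    for xi, n in single:
        sigma_s *= (1.0 - xi) ** n
    sigma_d = 1.0
    for xi, n in double:
        sigma_d *= (1.0 - xi) ** n
    sigma_sw = 1.0
    for xi, n, l in swap:
        sigma_sw *= (1.0 - xi) ** (2 * n * l)
    return sigma_s * sigma_d * sigma_sw

def trivial_mapping(r):
    """q_k -> Q_k for k = 1..r."""
    return {k: k for k in range(r)}

def brute_force_mapping(r, coupling_qubits, score):
    """Exhaustively place r logical qubits; keep the best-scoring placement."""
    best, best_score = None, -1.0
    for perm in permutations(coupling_qubits, r):
        mapping = dict(enumerate(perm))
        s = score(mapping)
        if s > best_score:
            best, best_score = mapping, s
    return best, best_score
```

In the brute-force search, `score` would be a closure that translates a candidate placement into the error-rate lists consumed by `success_rate`.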
For simplicity, we do not explicitly take into account how SWAP gates may be invoked on real QPU devices \cite{wille1,wille2}. Since the heuristic mapping algorithm itself functions by allocating interaction-graph qubits to coupling-graph qubits based on a measure of the success rate, one may ask how useful it is to utilize a mapping metric based on the same measures. Our reasoning is as follows: when considering the optimal solution for any mapping algorithm, there are a variety of different cost functions that one may utilize. However, \textit{any mapping metric} that is based on an exact calculation of the fidelity of the final quantum state after running the circuit on a physical QPU will give the best indication of performance for a mapping solution \cite{nishio-errors}. Since direct computations of the fidelity are computationally resource-intensive and scale exponentially with the size of the QPU \cite{PreskillNISQ,nielsen-chuang-qit}, we opt to utilize a function that may be regarded as related to the calculation of the fidelity (i.e.\ a success-rate measurement, based on the error rates as presented). In this way, we attempt to employ a cost function that is as relevant as possible, without the computational demands incurred by large-scale simulations. In the next section, we will discuss in full the results obtained from studying several realistic interaction-graph benchmarks, benchmarks whose interaction graphs exhibit a high degree of nonlinearity, and benchmarks which increase in size and sequentially occupy more and more of a given QPU coupling graph's qubits. \section{Results}\label{results-intro-section} The results are organized as follows: \cref{real-benchmarks-section} details the results obtained from realistic benchmark interaction graphs, mapped to a $3 \times 3$ coupling graph. 
\cref{nonlinear-section} discusses the results from mapping quantum benchmarks with interaction graphs that exhibit increasing amounts of non-nearest neighbor two-qubit gate combinations. The results from scaling the size of linear interaction graphs onto ever-increasing QPU coupling graphs are described in \cref{large-linear-section}. All benchmarks were tested on a Dell Latitude $7400$ laptop with a $1.9$ GHz Intel i7-8665U quad-core processor and $8.0$ GB of RAM. Each benchmark was mapped using our simulation 100, 100, and 1000 times in \cref{real-benchmarks-section,nonlinear-section,large-linear-section}, respectively; success rates were averaged over all trials. Simulations of Sequences I and II (as shown in \cref{sequences-graphs}) took approximately 60 hours of continuous runtime. \subsection{Realistic Interaction-Graph Benchmarks}\label{real-benchmarks-section} The benchmarks from \cite{noisy1} were utilized in this section. The corresponding interaction graphs for each of the benchmarks listed are shown in \cref{martonosi-coupling}. All of the benchmarks were tested on a $3 \times 3$ lattice coupling graph, and the results are reported in \cref{martonosi_raw}. In this section, we did not explicitly take into account the linearity or nonlinearity of the interaction graphs; linear and nonlinear interaction-graph benchmarks will be investigated and compared in detail in \cref{nonlinear-section,large-linear-section}. \begin{figure} \centering \includegraphics[width=\columnwidth]{martonosi-benchmark-graphs.png} \caption{Interaction graphs of several realistic benchmarks which were tested in \cref{real-benchmarks-section}; the benchmarks themselves were taken from \cite{noisy1}. 
a1)-a3) show the BV4, BV6, and BV8 benchmarks, respectively; b) represents the QFT and HS2 benchmark algorithms; the interaction graphs in c) and d) were used for the HS4 and HS6 algorithms; e) depicts the Fredkin, Or, Peres, and Toffoli algorithms; and f) displays the Adder benchmark.} \label{martonosi-coupling} \end{figure} The results obtained from the brute-force, heuristic-, and trivial mapping algorithms with respect to the benchmarks detailed in \cref{martonosi-coupling} are shown in \cref{martonosi_raw}. The benchmark results are grouped: the first three groupings of results pertain to the \textit{hidden-shift algorithmic} benchmarks; the second three groupings relate to the results obtained for the \textit{Bernstein-Vazirani} benchmarks; the next grouping contains the \textit{Toffoli, Or, Fredkin}, and \textit{Peres} benchmarks, which exhibit triangular interaction graphs; and finally, the \textit{Quantum Fourier Transform} (QFT) and \textit{Adder} benchmarks admit interaction graphs as shown in \cref{martonosi-coupling}b and \cref{martonosi-coupling}f, respectively. Red bars denote the results from the brute-force mapping algorithm; blue and green bars denote the results from the corresponding heuristic- and trivial mapping algorithms, respectively. The y-axis of \cref{martonosi_raw} displays the calculated success rates. As is evidenced in the graphs, the brute-force mapping algorithm provides an effective upper bound for the performance of the heuristic mapping algorithm (we use the term \textit{effective upper bound} with the view that better solutions may exist if one utilizes \textit{time-scheduling techniques}, as mentioned in \cref{background-qmapping-section}); additionally, the trivial mapping algorithm allows for an interpretation as an effective lower bound, with the heuristic algorithm falling between these two effective bounds. 
Our heuristic algorithm outperforms the trivial mapping algorithm in virtually all cases except for those involving disjoint interaction graphs, as indicated by the success-rate values of \cref{martonosi-coupling} a2) and \cref{martonosi-coupling} a3). \begin{figure} \centering \includegraphics[width=\columnwidth]{martonosi-benchmarks.png} \caption{Success rate for the benchmarks in \cite{noisy1}. Brute-force mapping results are shown in red, followed by blue and green bars, which represent the heuristic and trivial mapping results, respectively.} \label{martonosi_raw} \end{figure} One of the main reasons for this discrepancy can be seen in the design of the heuristic mapper itself: our heuristic mapping algorithm was designed for nearest-neighbor connectivity on the interaction graph. Conversely, one can see that most of the connected interaction graphs are mapped to the coupling graph with success rates that approach the \textit{effective upper-bounded} (brute-force) solution. Indeed, these observations indicate that a more systematic study of our heuristic mapping algorithm is warranted; in particular, one would like to have a more precise picture of how much \textit{interaction-graph nonlinearity} our heuristic mapper can tolerate, while still providing non-trivial improvements over a trivial mapping solution. In the following sections we will investigate these possibilities, with a focus on interaction-graph nonlinearity, as well as considering depth enlargement and the scaling of linear interaction graphs on large-$n$ QPU coupling graphs. We will show that, in particular, both the \textit{amount} and the \textit{distribution} of non-nearest neighbor two-qubit gates utilized in a quantum algorithm play a prominent role in determining the success rate of our heuristic mapper, relative to brute-force and trivial solutions. 
Moreover, we will show that our heuristic mapping algorithm outperforms the trivial mapper for $n \times n$ QPU lattices in the regime $n > 3$. \subsection{Mapping Nonlinear Interaction-Graph Benchmarks}\label{nonlinear-section} The results of this section detail a comparison between the heuristic mapping algorithm and the brute-force and trivial mapping algorithms. These two additional mapping algorithms provide \textit{effective} upper and lower bounds on the performance of the heuristic mapping algorithm, just as in the previous section. $4$-, $6$-, and $8$-qubit interaction graphs were used as benchmarks to be mapped onto a $3 \times 3$ lattice coupling graph. Two sequences of two-qubit gate additions for $6$- and $8$-qubit interaction graphs are utilized, and are shown in \cref{sequences-graphs}. We sequentially add nonlinear edges to interaction graphs, starting from their linear counterparts. Our aim is twofold: firstly, we wish to characterize and associate the performance of our heuristic algorithm with the \textit{amount of nonlinearity} in the corresponding interaction graph; secondly, we are interested in the \textit{particular way} that these nonlinear edges are distributed on the interaction graph. In this section, we added nonlinear edges in \textit{depth-first} as well as \textit{breadth-first} manners, and compared the two approaches. With both of these descriptors evaluated, we stand to gain a more comprehensive understanding of the advantages and limitations of our greedy heuristic as it relates to distinct interaction graphs. In the small-$n$ regime (for an $n \times n$ QPU coupling graph), brute-force algorithms can be utilized, as the sizes of the coupling and interaction graphs are sufficiently small. Even so, it must be stated that in all of our simulation results, our brute-force mapper requires several orders of magnitude more time to complete each trial than the heuristic or trivial mappers. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{sequence1-revised.png} \caption{Interaction graphs for the first and second benchmark sequences. Red edges depict newly-added nonlinear edges to the interaction graphs that were tested with our greedy heuristic. In the interaction graphs from Sequence I, nonlinear edges are appended such that every vertex degree is maximized before subsequently appending nonlinear edges to consecutively higher-numbered vertices; this process continues until a $K_{s}$ graph is attained for Sequence I (We refer to this technique as a \textit{depth-first} edge-assignment procedure). As for Sequence II, a \textit{breadth-first} edge-assignment procedure is utilized, as nonlinear edges are appended such that every vertex exhibits approximately the same degree until a $K_{s}$ graph is reached.} \label{sequences-graphs} \end{figure*} The sequences of nonlinear chords that were sequentially added to $s$-qubit linear-path interaction graphs are shown in \cref{sequences-graphs}, where $s \in \{4,6,8\}$ (only one sequence of nonlinear chords is possible for $4$-qubit interaction graphs, and is shown in \cref{sequences-graphs}). Nonlinear edges are added until reaching a complete $K_{s}$ graph. These edges are added as follows: in Sequence I (\cref{sequences-graphs}), a cycle is immediately created upon adding an edge; subsequently, chords are added to the $s$-qubit cycle such that the degree of every sequentially-numbered vertex is maximized before proceeding to add chords to the consecutively-numbered vertices. We refer to this style of adding edges as a \textit{depth-first} approach. Sequence II, however, exhibits a \textit{breadth-first} approach to nonlinear-edge addition, as chords are added in a manner such that the vertex degree of all vertices remain approximately equal. The red-highlighted edges represent newly-added chords to a particular sequence. 
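The two edge-assignment schedules can be sketched as follows; this is a simplified reconstruction of the orderings in \cref{sequences-graphs} (the exact sequences used in our simulations may differ in detail), with the depth-first schedule saturating low-numbered vertices first and the breadth-first schedule keeping vertex degrees approximately balanced:

```python
from itertools import combinations

def chord_sequence(s, style):
    """Order the (s-1)(s-2)/2 chords added to a linear path on s vertices
    until the complete graph K_s is reached."""
    path = {(i, i + 1) for i in range(s - 1)}
    chords = [e for e in combinations(range(s), 2) if e not in path]
    if style == "depth":
        # Sequence I: exhaust vertex 0's chords, then vertex 1's, and so on
        return sorted(chords)
    # Sequence II: shortest chords first, so all vertex degrees grow evenly
    return sorted(chords, key=lambda e: (e[1] - e[0], e))
```

For $s = 4$ only three chords exist, which is why a single sequence suffices for the $4$-qubit benchmarks.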
In this way, we examine and compare the effects of adding nonlinear edges (corresponding to non-nearest neighbor two-qubit gates) in two different fashions. \begin{figure*} \centering \includegraphics[width=\textwidth]{sequence1-nonlinear-revised.png} \caption{a)-c) show the results for the nonlinear benchmarks in \cref{sequences-graphs} for Sequence I (the depth-first benchmarks); the x-axis of the figure follows the sequence of nonlinear-edge additions. The average runtimes per trial for all of the benchmarks were approximately $60$ seconds, $0.5$ milliseconds, and $0.2$ milliseconds for the brute-force, heuristic, and trivial mapping algorithms, respectively. The total runtime for the entire simulation was approximately 60 hours. d)-e) show the results for the nonlinear benchmark interaction graphs in \cref{sequences-graphs} for Sequence II. These breadth-first benchmarks exhibited average runtimes per trial approximately equal to those stated for the depth-first benchmarks.} \label{nonlinear-seq1-results} \end{figure*} The results for $4$-, $6$-, and $8$-qubit interaction graphs that were mapped in accordance with the interaction graphs in Sequence I and II are depicted in \cref{nonlinear-seq1-results}. \cref{nonlinear-seq1-results}a, \cref{nonlinear-seq1-results}b and \cref{nonlinear-seq1-results}c represent the results procured from interaction graphs in Sequence I; \cref{nonlinear-seq1-results}d and \cref{nonlinear-seq1-results}e represent the Sequence II results. In each subfigure, brute-force, heuristic-, and trivial mapping algorithms are shown in red, blue, and green, respectively. The success rates of each mapping algorithm $\{\sigma_{\text{brute}},\sigma_{\text{heuristic}},\sigma_{\text{trivial}}\}$ are graphed as a function of the number of nonlinear edges that have been added from the original sequence start in Ia4), Ia6), and Ia8) in \cref{sequences-graphs}. 
The average runtime per trial is on the order of $60$ seconds for the brute-force algorithm, versus $0.5$ and $0.2$ milliseconds for the heuristic and trivial mappers, respectively. These figures largely stay the same for the Sequence II benchmarks, as the brute-force mapper exhibits an average solution time several orders of magnitude higher than those of the other two qubit-mapping algorithms. \cref{nonlinear-seq1-results}a shows that our greedy heuristic's success rate can approach the brute-force mapper's for $4$-qubit interaction graphs, no matter how much nonlinearity is added; indeed, it is observed that even a quantum algorithm representing a $K_{4}$ interaction graph can be effectively mapped to the $3 \times 3$ lattice coupling graph in our simulations. In contrast, \cref{nonlinear-seq1-results}b and \cref{nonlinear-seq1-results}c depict performance decreases for our heuristic optimization as more and more nonlinear edges are added to the interaction graph; \cref{nonlinear-seq1-results}c, however, is unique in that the heuristic mapping algorithm achieves performance roughly analogous to that of the trivial mapping algorithm, until finally, after $18$ nonlinear edges have been added, a "critical point" is reached. We define this critical point to be the point at which the heuristic mapping algorithm's success rate matches that of the trivial mapping algorithm. This behavior is not witnessed in the $6$-qubit interaction graphs; indeed, although the $6$-qubit benchmarks do exhibit steadily-decaying performance, our results indicate that this tendency is more prevalent with larger quantum circuits, such as the $8$-qubit benchmarks. The breadth-first benchmarks of Sequence II exhibit a different trend from the first-sequence interaction graphs. 
\cref{nonlinear-seq1-results}d largely appears to agree with \cref{nonlinear-seq1-results}b; however, the second-sequence interaction graphs in \cref{nonlinear-seq1-results}e exhibit much worse heuristic mapping success rates, compared to the first-sequence alternatives. In point of fact, the critical point occurs much earlier while following Sequence II: approximately $18$ nonlinear edges had to be added in the depth-first analysis of Sequence I, whereas the point at which the heuristic mapper's success rate equals that of the trivial mapper is observed after only approximately $7$-$8$ nonlinear edges have been added from Sequence II (breadth-first). Let us mention a few things here. Firstly, the data obtained from these simulations reveal that our heuristic mapper can adequately map interaction graphs which exhibit a low degree of nonlinear edges, provided that the quantum circuit in question is not very deep. As we increase the number of two-qubit gate invocations between interaction-graph qubits, it can be seen that our greedy heuristic can tolerate a small amount of interaction-graph nonlinearity, especially if this nonlinearity is concentrated on only a small subset of the total vertices. We also observed that, for a given number of nonlinear edges added to a linear interaction graph, the success rate varies significantly for our heuristic mapper, depending on the distribution of the nonlinear edges on the interaction graphs; again, this difference suggests that not only the number of nonlinear edges plays a role in our heuristic algorithm's calculated success rate, but also the \textit{particular manner} in which the nonlinear edges are distributed over the interaction graph. These same simulations were performed for the same benchmarks, with the only difference being that the number of total gates for each benchmark was multiplied by two and four. 
Our results largely conform with those shown in \cref{nonlinear-seq1-results}, as only the scaling of the success-rate metric changes. This observation is indeed expected, as the design of the experiment and the cost function themselves facilitate such a rescaling. Lastly, as shown in \cref{nonlinear-seq1-results}c and \cref{nonlinear-seq1-results}e, the heuristic's and trivial mapper's success rates are relatively close. One may surmise that this proximity is related to the high occupancy of the coupling-graph qubits during our simulations ($8/9$ of the available QPU qubits were utilized). The next section will provide information related to the large linear interaction graphs that were tested and how they scale in the regime $n > 3$ for a corresponding $n \times n$ QPU coupling graph. We specifically address the question of how the occupancy percentage of the QPU affects our mapper's solution. \subsection{Scaling Properties in the Regime $n > 3$}\label{large-linear-section} The benchmarks tested in this section are all linear, but vary in size so as to occupy different percentages of an $n \times n$ QPU coupling graph, where $n = 3,\dots,10$. A coupling graph of lattice dimensions $n \times n$ is initialized, and sequentially larger and larger linear interaction graphs are mapped to the coupling graph until it is $100\%$ occupied; as in the previous simulations, vertices of the coupling graph are taken to be qubits. Additionally, no explicit noise scaling was utilized as the coupling graph increases; the magnitudes of noise for two-qubit and single-qubit error rates, as well as measurements, are kept the same as described in \cref{description_algs-section}. As brute-force algorithms were not utilized for this section (becoming practically intractable if $n > 3$), success rates were instead compared between the heuristic and trivial mappers, as shown in \cref{largelinear_results}b and \cref{largelinear_results}c. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{LL-v2-no-noise.png} \caption{Differences between the success rates of the heuristic and trivial mappings $(\sigma_{\text{heuristic}}, \sigma_{\text{trivial}})$ taken from $n$-qubit linear interaction graphs which have been mapped to $n \times n$ QPU coupling graphs. a) displays the success-rate differences without any noise scaling. b) and c) detail the success rate $\sigma_{\text{total}}$ calculated for $n = 4$ and $n = 6$ QPU coupling graphs, respectively.} \label{largelinear_results} \end{figure*} The results from our study are displayed in \cref{largelinear_results}. \cref{largelinear_results}a shows the success-rate difference $(\sigma_{\text{heuristic}}-\sigma_{\text{trivial}})$, calculated between our greedy heuristic and the trivial mapper, with respect to linear interaction graphs that progressively fill the entire QPU coupling graph. The different colors in \cref{largelinear_results}a denote differing $n$-values for the lattice dimensions of the coupling graph. \cref{largelinear_results}b and \cref{largelinear_results}c display the actual success rates calculated without noise scaling, for $4 \times 4$ and $6 \times 6$ coupling-graph dimensions, respectively. In these graphs, one can observe several trends: firstly, as the $n$-value of the coupling graph increases, the success-rate difference becomes steadily larger for linear interaction graphs that occupy the same percentage of the coupling graph when fully mapped, until about $75\%$ of the QPU is occupied. Next, after approximately $75\%$ of the coupling graph has been filled, the success-rate difference becomes negligible, implying that the heuristic- and trivial mapping algorithms perform essentially equivalently in this regime. 
Lastly, after roughly $85-90\%$ of the lattice coupling graph is occupied, the success-rate difference is not only negligible, but starts to dip below zero, the main observation here being that the greedy heuristic cannot find a solution that outperforms the success rate measured from the trivial mapper. We will discuss the consequences of this observation in \cref{discussion-section}. Additionally, we performed the same tests for coupling graphs with increasing $n$-values under a uniform noise increase of a factor of $4$ relative to the noise levels used in \cref{largelinear_results}, as well as under exponentially increasing noise. The purpose of these rescaled-noise simulations was to investigate potentially more realistic noise, as device error rates are expected to increase due to crosstalk as quantum processors become larger \cite{Hangleiter2020crosstalkdiagnosis,Murali_2020software}. Our results from these simulations largely confirm the same trends that were mentioned in the preceding paragraphs, albeit with steeper linear decays. In the last section, we will summarize our main results. Particular emphasis will be placed on interpreting the observations noted in the previous sections. We will then close by highlighting some future research directions. \section{Discussion \& Future Work}\label{discussion-section} \begin{figure}[h] \centering \hskip-0.5cm \includegraphics[width=\columnwidth]{heuristic-bad.png} \caption{An example scenario in which our heuristic mapping algorithm could perform worse in comparison to a trivial mapping algorithm. On the $5 \times 5$ coupling graph pictured above, a) and b) represent possible final-mapping solutions for the heuristic- and trivial mapping algorithms, respectively. 
In a), one may initiate the mapping process in a highly-connected region of the coupling graph; however, as one proceeds using the algorithm to consecutively map the $15$ qubits in our quantum-circuit example, one may encounter a situation in which the heuristic-based solution may involve several more SWAP gates than normally anticipated. This situation is shown as a dotted line in blue (tracing out the nearest-neighbor assignment path of our heuristic) which essentially runs into a corner in the lower-right portion of the QPU coupling graph, stopping at the vertex shown in blue. From this point forward, the next shortest-path distance would not be a nearest-neighbor vertex, and SWAPs will undoubtedly be needed in order to realize such a mapping solution (shown by the dotted line in magenta, which terminates at the red-labelled vertex $q_{14}$). In b), a trivial mapping solution would better utilize the space and connectivity available for a QPU when fewer choices are available.} \label{fig:heuristic-bad} \end{figure} In this article, we have introduced a heuristic qubit-mapping algorithm for the purpose of exploring the advantages and limitations of mapping different types of interaction graphs. The heuristic algorithm itself is \textit{noise-aware}, and its success rate is high, relative to a brute-force and trivial mapper, which serve to effectively bound the performance of our heuristic from above and below. For small, low-depth quantum circuits, our heuristic is shown to provide a significant performance gain over a trivial mapping solution, and its performance in some cases even approaches that of a brute-force mapping algorithm for realistic benchmarks. As the main motivation for this work concerned investigations of \textit{connected interaction graphs}, we did not explicitly consider ways to correct the low success rates observed for disjoint interaction graphs; we reserve such explorations for future work. 
With this work, we have taken the first step towards characterizing interaction graphs of quantum circuits which admit amenable mapping solutions, specifically for a particular mapping algorithm. We have accomplished this in two main stages: Firstly, we have investigated the performance of our brute-force, heuristic, and trivial mappers with low-depth quantum circuits which admit nonlinear interaction graphs. Two particular sequences of non-nearest neighbor two-qubit gate additions were considered, both of which highlight the inherent limitations of the heuristic mapping algorithm that we devised. The manner in which nonlinear edges are added to the circuit (i.e. depth-first or breadth-first), as well as the number of nonlinear edges utilized, were both found to play a role in our greedy heuristic's calculated success rate. These two observations are evidenced by an analysis of \cref{nonlinear-seq1-results}c and \cref{nonlinear-seq1-results}e: although our heuristic mapper performs better than the trivial mapping for most of the depth-first edge additions (Sequence I), we see that the performance is \textit{not much better}, implying that our heuristic can tolerate only a relatively low amount of interaction-graph nonlinearity when the number of qubits in the interaction graph is relatively high. This observation comes as no surprise, seeing as the algorithm was designed to accommodate linear interaction graphs. Furthermore, for two interaction graphs with the same number of nonlinear edges, it was found that our heuristic mapper's performance largely depends on the particular manner in which the nonlinear interaction-graph edges are distributed. We verified that this fact is reinforced for deeper algorithms by running simulations with larger-depth interaction graphs of the same form as described in \cref{sequences-graphs}. 
Secondly, we investigated the scaling behavior of our heuristic mapping algorithm in the regime $n > 3$ for $n \times n$ QPU coupling graphs. In this regime, our brute-force solutions to the mapping problem become intractable; as such, our heuristic mapper was compared to the trivial mapper described in \cref{description_algs-section}. The results indicate that the greedy heuristic scales well for quantum circuits with linear interaction graphs, as long as less than approximately $75\%$ of the QPU coupling graph is filled (in comparison to our trivial mapper). If one occupies more of the available space on the QPU, one can expect only a marginal benefit from utilizing our heuristic mapper, until approximately $85\%$ of the processor is allocated; beyond this point, performance losses can be expected (with respect to our trivial mapper), as our results indicate. Additionally, for comparable occupancy percentages of the coupling graph, one can expect higher success rates relative to the trivial mapping solution as $n$ is steadily increased. These same simulations were additionally performed under uniformly and exponentially increasing noise parameters, concomitant with observations that larger QPU devices experience more problems with error rates due to crosstalk \cite{Hangleiter2020crosstalkdiagnosis,Murali_2020software}. Our results confirm and reinforce the remarks stated above. A few final comments are in order here. In reference to the results from \cref{large-linear-section}, one may ask why our heuristic mapping algorithm tends to underperform as we occupy larger percentages of a QPU coupling graph. An answer to this question can be seen when one considers that, as the coupling graph is filled up, fewer and fewer nearest-neighbor choices are left for the heuristic algorithm to evaluate. 
As the algorithm itself functions by selecting first the minimum error-rate vertex of the minimal error-rate edge attached to the highest-degree vertex, the heuristic-based solution will necessitate more SWAP gates as the processor is further allocated. In this regime (i.e.\ when over $75\%$ of the QPU is occupied), the trivial mapping algorithm used in this study would be expected to outperform our heuristic mapper for quantum circuits that give rise to linear interaction graphs. An example of just such a difficulty is explained in more detail in \cref{fig:heuristic-bad}; indeed, in a) and b), one sees the result of a greedy shortest-path mapper applied to a $5 \times 5$ coupling graph. In \cref{fig:heuristic-bad}a, we may start with a high-degree vertex (shown in green); as the algorithm proceeds to find nearest-neighbor solutions, the algorithm's attempt to map the $15$-qubit linear interaction graph in our example may run into a portion of the device that is less highly connected (shown by the blue-dotted line which terminates at the blue-labelled vertex). When such an event happens, our heuristic will continue searching for the nearest possible neighbor that is free; unfortunately, in this case several SWAP gates would be needed in order to realize the mapping shown (denoted by the magenta-dotted line that terminates at the red vertex $q_{14}$). In \cref{fig:heuristic-bad}b, we see an example in which our trivial mapping algorithm would make better use of the available space for the $15$-qubit quantum circuit. One solution to this issue with our heuristic may be to utilize \textit{look-ahead} or \textit{look-behind} techniques, which would serve to match not only the nearest neighbor on the coupling graph, but to additionally analyze several interaction-graph vertices \cite{palerwille,will-lookahead,2018tackling}. Such a qubit-mapping algorithm may improve overall success rates; in any case, we leave further discussion of this possibility to future work. 
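For concreteness, the error-weighted nearest-free-qubit search described above can be sketched as follows. The graph representation and function names are our own illustrative choices, with edge weights $-\log(1-\xi)$ so that summed path weights correspond to products of per-edge success rates:

```python
import heapq
import math

def nearest_free_qubit(adj, start, free):
    """Dijkstra variant: among unoccupied coupling-graph qubits, return the
    one reachable from `start` with the highest accumulated success rate.

    adj:  {vertex: [(neighbor, two_qubit_error_rate), ...]}
    free: set of unoccupied coupling-graph vertices
    """
    dist, heap, visited = {start: 0.0}, [(0.0, start)], set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        if v in free:
            return v, math.exp(-d)          # convert back to a success rate
        for u, err in adj.get(v, []):
            nd = d - math.log(1.0 - err)    # additive weight per edge
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return None, 0.0
```

On a nearly full coupling graph, the path returned by such a search grows long, which is precisely the regime in which the accumulated SWAP cost erodes the heuristic's advantage over a trivial placement.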
As a final comment, it was observed that our brute-force mapping algorithm scaled badly after $n = 3$ for the coupling graphs tested; this does not necessarily imply that \textit{any exact simulation} is intractable. Although finding an exact solution to the qubit-mapping problem is NP-hard, one may be able to write an exact mapping algorithm which does in fact scale better than ours for larger $n$-values. Admittedly, there are still many avenues left for possible investigation in the future, as much work needs to be done in order to better characterize and/or match quantum algorithms to prospective qubit mappers. One may investigate more sophisticated heuristic and exact algorithms, as there is certainly room for improvement over the heuristic mapping algorithm that we have presented here. Additionally, one may analyze more realistic noise conditions, in addition to more realistic QPU coupling graphs, as the authors in \cite{bandic} propose. Various other qubit-mapping algorithm proposals exist \cite{lao-mapping1,temporal1,hardware-aware,qubitproblem1,aliroquantum,ML1,ML2}; many of these could be analyzed and classified as well. We intend to investigate these possibilities in future work. \section{Acknowledgements} We thank Jens Eisert for fruitful discussions. MS would like to thank HQS Quantum Simulations GmbH, where a portion of this work was completed. \clearpage
\section{Introduction} The history of how dark matter was ``discovered'' and how the term Dark Matter itself was coined has been told elsewhere~\cite{Sanders:2010cle,Bertone:2016nfn}, and it is beyond the scope of these proceedings to delve further into it. There is, however, a turning point somewhere between the 1930's, when the first indications of an anomaly in the rotation curves of galaxies were attributed to ``extinguished stars, dark clouds, meteors, comets and so on''~\cite{Lundmark:30aa}, and the present day, when we know that dark matter cannot be made up of normal, that is baryonic, matter. Several microlensing experiments during the 90s and 2000s~\cite{Alcock:2000ph,Tisserand:2006zx,Wyrzykowski:2011tr} found that the amount of non-luminous objects in the halo of the Milky Way lies at the few percent level, insufficient to account for the additional matter needed to explain the rotation curve of the galaxy. Around the same time, precise estimates of the amount of primordial light elements from big-bang nucleosynthesis set a very precise limit on the total amount of baryons in the Universe (see~\cite{Cyburt:2015mya} for a review). Further, the first measurements of the Cosmic Microwave Background (CMB) radiation from space with the COBE mission in the 90s~\cite{Smoot:1998jt}, improved later with WMAP~\cite{Hinshaw:2012aka} and Planck~\cite{Aghanim:2018eyx}, showed the need for a non-baryonic component in the matter budget of the universe. The problem then became, and remains, to identify and detect this new form of matter. The easiest solution, at least from the point of view of a particle physicist, is to introduce a new particle species that needs to be heavy (we want it to play a role in gravitational structure formation), weakly interacting (we do not want it to disturb the evolution of the Universe) and stable (its effects must persist today).
Other solutions avoiding dark matter, like modifying the classical gravitational interaction for low accelerations, Modified Newtonian Dynamics (MOND)~\cite{Famaey:2011kh}, or its relativistic extensions, although able to explain remarkably well the rotation curves of galaxies, face difficulties when trying to become global explanations, that is, to explain the dark matter distribution at the galaxy-cluster level. We will not describe these approaches here, but concentrate on the standard weakly--interacting new particle paradigm. Any dark matter candidate particle, $\chi$, will be in thermal equilibrium in the early universe as long as the reaction $\chi\chi \rightarrow \mathrm{SM}\,\mathrm{SM}$ holds, SM being any relevant Standard Model particle that couples to the $\chi$. As the universe expands, the equilibrium will last until the $\chi$'s are diluted so that the above reaction becomes unlikely, leaving a relic density of $\chi$'s, whose value depends on the cross section that drives the equilibrium condition. The relic density of $\chi$'s, $\Omega_{\chi}$, as a function of their annihilation cross section, $\sigma_{\chi\chi}$, can be expressed as $\Omega_{\chi} h^2 \propto 10^{-26}\,\mathrm{cm^3/s} \, / \left < \sigma_{\chi\chi} v \right >$~\cite{Garrett:2010hd}. The strength of the needed cross section is of the order of the weak interaction, so the particle solution to the dark matter problem provides an intriguing link between the Standard Model of Particle Physics (SM) and the $\Lambda$CDM model, the standard model of cosmology. $\Lambda$CDM needs the known elementary particles and forces from the SM, a weakly-interacting, stable, cold dark matter candidate (or candidates) and a cosmological constant. With these ingredients, numerical simulations accurately predict the growth of fluctuations from the early universe into a large-scale structure of galaxies compatible with observations~\cite{Planelles:2014zaa}, a triumph of $\Lambda$CDM.
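The freeze-out scaling quoted above can be evaluated numerically. In the sketch below (our own illustration), the proportionality prefactor is taken to be $\sim 3\times 10^{-27}\,\mathrm{cm^3/s}$, a commonly quoted order-of-magnitude value rather than a number from the text; plugging in a weak-scale annihilation cross section of $\sim 3\times 10^{-26}\,\mathrm{cm^3/s}$ then lands close to the observed $\Omega_\chi h^2 \approx 0.12$, the so-called ``WIMP miracle''.

```python
def relic_density_h2(sigma_v_cm3s, c=3e-27):
    """Freeze-out estimate Omega_chi h^2 ~ c / <sigma v>, with the
    prefactor c ~ 3e-27 cm^3/s (an order-of-magnitude assumption)."""
    return c / sigma_v_cm3s

# A weak-scale thermal cross section gives roughly the observed abundance:
omega_h2 = relic_density_h2(3e-26)   # ~0.1
```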
A key point worth mentioning is that the same numerical simulations, but without a dark-matter component, fail to reproduce the large-scale universe as we know it, which can be taken as additional ``evidence'' for dark matter. Similarly, N-body simulations of galaxy formation tell us that galaxies are embedded in clumpy, complex dark matter halos that extend well beyond the visible galaxy. Understanding halos is key to interpreting experimental results, as we will mention below. There have been several proposals to describe the dark matter density distribution in galactic halos as a function of the distance from the center of the galaxy~\cite{Navarro:1995iw,Einasto:1965czb,Moore:1999gc,Kravtsov:1997dp}, but the different models can be parameterized with a single generic function~\cite{Zhao:1995cp}, \begin{equation} \rho_{DM}(r)\,=\,\frac{\rho_0}{\left ( \delta + \frac{r}{r_s} \right )^\gamma \cdot \left ( 1 + (\frac{r}{r_s})^\alpha \right )^{(\beta-\gamma)/\alpha} } \label{eq:profiles} \end{equation} where $r$ is the distance from the galactic center, $\rho_0$ is a normalization constant, and the parameters $\alpha$, $\beta$ and $\gamma$ determine the shape of the halo. Different combinations of these parameters can recover the standard halo profiles proposed in~\cite{Navarro:1995iw,Einasto:1965czb,Moore:1999gc,Kravtsov:1997dp}, and shown in Figure~\ref{fig:profiles}. But equation~(\ref{eq:profiles}) can easily incorporate new parametrizations. These profiles describe the smooth distribution of dark matter around galaxies. A possible clumpy component must be added on top in order to describe the outcome of N-body simulations. \begin{figure}[t] \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/dm_density_profiles.pdf} \caption{The density of dark matter as a function of distance to the galactic center in commonly assumed dark matter halo density profiles for the Milky Way.
Reprinted with permission from~\cite{Abbasi:2011eq}.} \label{fig:profiles} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \vspace{-0.4cm} \includegraphics[width=\textwidth]{plots/detection_techniques_final.png} \vspace{1.cm} \caption{Current approaches to dark matter detection.} \label{fig:techniques} \end{minipage} \end{figure} Figure~\ref{fig:profiles} shows that there is consensus on the dark matter distribution at the location of the Solar System (about 8 kpc from the galactic center), but predictions diverge considerably at distances near the center of the galaxy. This is of course also true when considering other galaxies and assuming the same type of profile applies universally. \section{Dark matter detection techniques and results} Figure~\ref{fig:techniques} shows schematically the different experimental techniques currently used, or under development, to detect dark matter. We can distinguish three generic complementary approaches: direct detection, indirect detection and production in the laboratory. Within each group one can identify a wealth of different detection methods, targets or types of signals. To describe each of them lies outside the scope of these proceedings. In what follows we will just touch upon a few distinct aspects of each group and show the current experimental status. \subsection{Direct searches} Direct searches are based on looking for nuclear recoils from dark matter-nucleus interactions in a suitable target. The expected rate of recoils (i.e., interactions) of a dark matter particle of mass $m_{\chi}$ on a target made of particles of mass $m_A$ is given by~\cite{Schumann:2019eaa} \begin{equation} \frac{dR}{dE}(E,t)\,=\,\frac{\rho_0}{m_{\chi} m_A}\, \int v\cdot f(v) \frac{d\sigma}{dE}(E,v) d^3 v, \label{eq:recoils} \end{equation} where $\rho_0$ is the local dark matter density, $f(v)$ is the dark matter velocity distribution and $\sigma$ is the dark matter-nucleus cross section.
Equation~(\ref{eq:recoils}) has too many unknowns to be useful out of the box: $\rho_0$, $f(v)$, $m_{\chi}$ and $\sigma$. So a model for the astrophysics input, $\rho_0$ and $f(v)$, needs to be assumed, and results are then expressed in terms of $m_{\chi}$ versus $\sigma$. The choice of target ($m_A$) is important because, along with the location of the detector (chosen to reduce background), it is the only handle an experimentalist has on the above equation. The choice of target also influences which kind of interactions an experiment is more sensitive to. The cross section can be divided into a spin-dependent component, $\sigma^{\mathrm{SD}}_{\chi\mathrm{-}p}$, reflecting the coupling from axial-vector terms of the Lagrangian, and a spin-independent component, $\sigma^{\mathrm{SI}}_{\chi\mathrm{-}p}$, from scalar and vector couplings in the Lagrangian. Targets with large angular momentum provide sensitivity to the spin-dependent part, while larger targets favour the spin-independent part, which is just proportional to the number of nucleons in the system. Note also that both the quark content of the nucleon and the nucleon distribution of target nuclei play an essential role in calculating observables and interpreting experimental results, and they can be a source of uncertainty in the quoted limits or in comparisons with other experiments~\cite{Bottino:1999ei,Bottino:2001dj,Ellis:2008hf,deAustri:2013saa,Hoferichter:2018acd}. The background in a nuclear recoil experiment arises from radioactivity in the surroundings or from the materials of the detector itself. The experimental efforts in the last couple of decades have been focused on achieving extremely stable, radio-pure, shielded detectors, with an energy threshold as low as possible (typically a few keV). As detectors increase sensitivity to lower recoil energies, the background from elastic neutrino scattering on target electrons, or coherent neutrino scattering on target nuclei, becomes an issue.
This is an irreducible background since even underground sites in deep mines are subject to a continuous flux of atmospheric and solar neutrinos. Usually called the ``neutrino floor'', this background can be dealt with by developing detectors sensitive to the direction of the recoil, and therefore to the direction of the incoming dark matter particle. This is a relatively new approach and there is a wealth of R\&D in this direction~\cite{Mayet:2016zxu}. Directional sensitivity will give a handle to reject events from the direction of the Sun, most likely induced by a solar neutrino, and will give the possibility to better exploit the expected annual variations in the recoil rate due to the relative velocity of the Earth in the dark matter halo. Depending on the location of the detector, a daily modulation in the recoil rate (day-night effect), induced by the variation of the relative velocity with the dark matter as the Earth rotates, can also be expected. \\ The assumed dark matter velocity distribution can also have other consequences in the interpretation of direct detection results~\cite{Kuhlen:2009vh,Necib:2018iwb,Wu:2019nhd}. Note that direct detection experiments are sensitive to the high-velocity tail of the (really unknown) $f(v)$ distribution (high-energy particles produce stronger recoils in the target, easier to detect). So, different assumptions on $f(v)$ bear on the final result, as shown in Figure~\ref{fig:fv}. The figure shows expected limits on the DM-nucleon spin-independent scattering cross section assuming a standard Maxwell-Boltzmann velocity distribution in equation~\ref{eq:recoils}, as compared with a distribution derived from recent Gaia and Sloan Digital Sky Survey (SDSS) data (see~\cite{Necib:2018iwb} for details of underlying assumptions on detector performance). As expected from simple kinematics, suppressing the high energy tail of the assumed $f(v)$ results in a worse sensitivity of the detector to low DM masses.
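The sensitivity of the rate to the tail of $f(v)$ can be seen in a toy evaluation of the velocity integral in equation~(\ref{eq:recoils}). The sketch below is our own illustration, not any experiment's analysis code: for a contact interaction $d\sigma/dE \propto 1/v^2$ above a kinematic threshold $v_{\rm min}(E)$, the whole $f(v)$ dependence collapses into the mean inverse speed $\eta(v_{\rm min})$, evaluated here for a Maxwell-Boltzmann distribution with an assumed dispersion of $220\,\mathrm{km/s}$ and no escape-velocity cutoff.

```python
import math

V0 = 220e3        # m/s, Maxwell-Boltzmann dispersion parameter (assumed)

def mb_speed_pdf(v):
    """Isotropic Maxwell-Boltzmann speed distribution (no escape cutoff)."""
    return 4 * math.pi * v**2 * (math.pi * V0**2) ** -1.5 * math.exp(-(v / V0) ** 2)

def eta(v_min, v_max=3e6, steps=20000):
    """Mean inverse speed <1/v> over v > v_min, which carries the entire
    velocity-distribution dependence of dR/dE for a 1/v^2 cross section."""
    dv = (v_max - v_min) / steps
    return sum(mb_speed_pdf(v_min + (i + 0.5) * dv) / (v_min + (i + 0.5) * dv)
               for i in range(steps)) * dv

# Harder recoils need faster particles, so the rate falls with v_min;
# trimming the high-velocity tail hits the low-mass (high v_min) end first.
assert eta(400e3) < eta(100e3)
```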
\begin{figure}[t] \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/fv_effect_on_limit.pdf} \caption{95\% background-free C.L. limits on the DM-nucleon spin-independent scattering cross section as a function of DM mass. A background-free, Xenon target experiment with an exposure of 1 kton x year and a 4.9 keV energy threshold for the nuclear recoil was assumed as benchmark. ``SHM'' stands for Standard Halo Model, while ``Total'' assumes the new velocity distribution extracted from Gaia and SDSS data. Figure from~\cite{Necib:2018iwb}. Copyright AAS. Reproduced with permission.} \label{fig:fv} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/DD_results_current_Sep2019_Marc_Schumann.pdf} \caption{Current experimental limits on spin-independent dark matter-nucleon cross section. Parameter combinations above the lines (i.e., the green shaded area) are disfavoured at 90\% confidence level. The dashed line represents the neutrino floor. The regions labeled ``DAMA'' mark the preferred parameter space if the annual modulation seen by DAMA/LIBRA~\cite{Bernabei:2018yyw} would be interpreted as originating from dark matter interactions. Figure courtesy of M. Schumann. Updated from~\cite{Schumann:2019eaa}.} \label{fig:DD_limits} \end{minipage} \end{figure} Figure~\ref{fig:DD_limits} shows a summary of results from current direct detection experiments. The plot shows the limits on the spin-independent dark matter-nucleon cross section as a function of dark matter mass. Regions above the curves are disfavoured at 90\% confidence level. The dashed line represents the neutrino floor, the level where neutrino coherent scattering on the detector target nuclei becomes an irreducible background. The regions labeled ``DAMA'' mark the preferred parameter space if the annual modulation seen by DAMA/LIBRA~\cite{Bernabei:2018yyw} would be interpreted as originating from dark matter interactions. 
These regions are however disfavoured by every single one of the other experiments with sensitivity to that region, so accommodating the DAMA claim that the detected annual modulation is due to dark matter is challenging. It would require extremely ad-hoc interactions between dark matter and baryons in order for the signal to have escaped all other experiments. The measurement remains unexplained, though several ideas have been proposed. The plot shows that current experiments quickly lose sensitivity for very low dark matter masses due to threshold effects. This is where the field faces a technological challenge in the coming years. Using targets and detection techniques that provide sensitivity to electron recoils (in e.g., semiconductors, noble liquids, carbon nanotubes), rather than to nuclei, is a way forward to increase sensitivity to lower dark matter masses~\cite{Crisler:2018gci,Cavoto:2017otc}. Electron recoils have more complex kinematics than nuclear recoils and the detection is challenging. Other proposals rely on producing sub-GeV dark matter particles in dedicated beam-dump accelerator experiments, using missing momentum and energy techniques (see section~\ref{sec:colliders} below). \subsection{Indirect searches} Indirect searches for dark matter focus on detecting an anomalous flux of photons, neutrinos or cosmic rays produced in annihilations or decay of dark matter particles gravitationally accumulated in heavy objects, like galaxies, the Sun or the Earth. Detecting the different signatures requires very different types of detectors: air shower arrays, Cherenkov telescopes, neutrino telescopes or particle detectors in balloons or satellites. Note that these kinds of detectors were not originally intended to search for dark matter but have proven to be unique complementary tools to the direct search efforts.
For one thing, they can probe a different side of the velocity distribution of galactic dark matter (low velocity dark matter particles are more likely to be captured in the Galactic center, the Sun or the Earth). For another, they are sensitive to different backgrounds and systematics than direct search experiments. They are also sensitive to the signatures of dark matter decay, unlike direct searches. There are two sources of possible dark matter signatures where only neutrino telescopes are of use: annihilations in the center of the Sun or Earth. Among the dark matter annihilation products, only neutrinos will escape the dense interiors of these objects. These are also among the most background-free searches possible since neither the Sun nor the Earth is expected to be a source of high energy (above GeV) neutrinos (except for neutrinos produced in cosmic ray interactions in the atmosphere of the Sun, which can constitute in principle a background to dark matter searches~\cite{Seckel:1991ffa,Moskalenko:1993ke,Ingelman:1996mj,Ng:2017aur,Edsjo:2017kjk}). Assuming equilibrium between capture and annihilation, the neutrino flux from the Sun, $d\Phi_{\nu}/dE_{\nu}$, is proportional to the annihilation rate of dark matter, $\Gamma_A$, which in turn can be related to the capture cross section, that is, the dark matter-nucleon cross section~\cite{Jungman:1995df}. \begin{equation} \frac{d\Phi_{\nu}}{dE_{\nu}}\,=\,\frac{\Gamma_A}{4\pi D^2}\,\frac{dN_{\nu}}{dE_{\nu}} \label{eq:indirect_sun} \end{equation} where $dN_{\nu}/dE_{\nu}$ is the neutrino spectrum from the annihilations and $D$ is the distance to the source. \begin{figure}[t] \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/3yearSolar_SDcrosssection.pdf} \caption{Limits on the spin-dependent dark matter-proton cross-section ($\sigma^{\mathrm{SD}}_{\chi\mathrm{-}p}$), compared to results from other neutrino detectors and direct detection experiments.
Various points corresponding to neutralinos from a scan of the phenomenological minimally supersymmetric standard model (pMSSM) are also shown, colour coded by their leading annihilation channel. Points close to the red end of the spectrum annihilate into harder channels such as $\tau^+\tau^-$ and can be excluded by the red line from IceCube. Figure from~\cite{Aartsen:2016zhm}.} \label{fig:solar_limits} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/HESS_limits.pdf} \caption{95\% C. L. upper limits on $\left < \sigma_{\chi\chi} \textrm{v} \right >$ as a function of dark matter mass from the H.E.S.S. experiment, assuming dark matter annihilation into photons. The result is based on observations of the inner 300 pc of the galactic center region. Observed limits (red dots) and mean expected limit (black solid line) are shown together with the 1$\sigma$ (green band) and 2$\sigma$ (yellow band) containment bands. Figure from~\cite{Rinchiuso:2019rrh}.} \label{fig:HESS_limits} \end{minipage} \end{figure} Since the Sun is essentially a target made of protons, it is the spin-dependent cross section that can be measured (or constrained) from a detection (or non-detection) of an anomalous neutrino flux from the Sun. Figure~\ref{fig:solar_limits} shows the limits obtained by IceCube, Super-K and ANTARES on the spin-dependent dark matter-nucleon cross section as a function of dark matter mass, assuming full annihilation to $b\bar{b}$, $W^+W^-$ and $\tau^+\tau^-$~\cite{Aartsen:2016zhm}. Since the Earth is much younger than the Sun, it is not obvious that equilibrium between capture of surrounding dark matter particles and annihilation in its interior has been reached. In this case one can still constrain the dark-matter nucleon cross section, but under the assumption of a given value for the annihilation cross section.
Since the most abundant isotopes in the Earth inner core, mantle and crust are spin-0 nuclei ($Fe$, $Si$ and $O$), it is the spin-independent dark matter-nucleon cross section that is probed in dark matter searches from the Earth with neutrino telescopes. Things are a bit different in searches for dark matter from the galactic center or halo, or other galaxies. The neutrino, gamma or cosmic-ray flux from dark matter annihilations in those cases depends on the thermally averaged product of the dark matter self-annihilation cross-section times the dark matter velocity, $\left < \sigma_{\chi\chi} v \right >$, and the so-called $J$-factor, the integral of the square of the dark matter density along the line of sight to the object under consideration, \begin{equation} \frac{d\Phi_x}{dE_x}\,=\,\frac{1}{4\pi}\,\frac{\left < \sigma_{\chi\chi} v \right >}{2 m^2_{\chi}}\,\frac{dN_x}{dE_x} \times \int_{l.o.s.} \rho^2_{DM}(r) dr d\Omega \label{eq:indirect_galaxy} \end{equation} where $x$ stands for neutrinos, gamma rays or cosmic rays. The $J$-factor depends on the halo profile chosen (see equation (\ref{eq:profiles})), and results from indirect dark matter searches are commonly given under the assumption of a specific halo model. Assuming a particle physics model which gives the expected particle spectrum, $dN_x/dE_x$, and a halo model, an experimental measurement of $d\Phi_x/dE_x$ can be used to probe $\left < \sigma_{\chi\chi} v \right >$ versus the dark matter mass, $m_{\chi}$. An example of a search for dark matter annihilations into photons in the galactic center is shown in Figure~\ref{fig:HESS_limits}, which corresponds to a recent search by the H.E.S.S. collaboration~\cite{Rinchiuso:2019rrh}. The uncertainty introduced by the choice of halo model in indirect dark matter searches is well illustrated in Figure~\ref{fig:halo_model}.
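How the halo-profile choice enters equation~(\ref{eq:indirect_galaxy}) can be made explicit with a small numerical sketch. The code below is our own illustration: it evaluates the line-of-sight integral of $\rho^2_{DM}$ for the generic profile of equation~(\ref{eq:profiles}), with default parameters ($\alpha=1$, $\beta=3$, $\gamma=1$, $\delta=0$) that reduce it to the NFW form; the normalization, scale radius, and integration range are illustrative assumptions, and the result is in arbitrary units.

```python
import math

R_SUN = 8.0      # kpc, distance of the Sun to the galactic center
RHO_0 = 0.4      # normalization constant (illustrative value)
R_S   = 20.0     # kpc, scale radius (illustrative value)

def rho_dm(r, alpha=1.0, beta=3.0, gamma=1.0, delta=0.0):
    """Generic halo profile of eq. (1); the defaults reduce it to NFW."""
    x = r / R_S
    return RHO_0 / ((delta + x) ** gamma * (1 + x ** alpha) ** ((beta - gamma) / alpha))

def j_factor(psi, l_max=60.0, steps=20000, **profile):
    """Line-of-sight integral of rho^2 toward an angle psi (radians)
    away from the galactic center, per unit solid angle (arbitrary units)."""
    dl = l_max / steps
    total = 0.0
    for i in range(steps):
        l = (i + 0.5) * dl
        # distance from the galactic center at path length l along the l.o.s.
        r = math.sqrt(R_SUN**2 + l**2 - 2 * R_SUN * l * math.cos(psi))
        total += rho_dm(r, **profile) ** 2 * dl
    return total

# The J-factor is strongly peaked toward the galactic center (psi -> 0),
# which is why inner-halo profile differences matter so much in Fig. 7:
assert j_factor(math.radians(1)) > j_factor(math.radians(30))
```

Swapping the profile keyword arguments (e.g. a cored profile via a larger `delta` or `gamma=0`) changes the small-angle $J$-factor by large factors while leaving the local density essentially untouched.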
The figure shows the sensitivity of two planned facilities, the Cherenkov Telescope Array (CTA)~\cite{CTA:2019aaa} and the Southern Gamma-ray Survey Observatory (SGSO)~\cite{Albert:2019afb}, under the assumption of two different halo models. The difference can be orders of magnitude. \begin{figure}[t] \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/sigmaV_CTA_SGSO_BUR_tau_v2.pdf} \caption{95\% C.L. sensitivity on the thermally averaged cross section for dark matter annihilation into $\tau^+\tau^-$ as a function of dark matter mass, for both Einasto and Burkert profiles of the Galactic halo. See~\cite{Viana:2019ucn} for details.} \label{fig:halo_model} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/AMS_positrons.pdf} \caption{Positron flux as a function of energy measured by AMS~\cite{Aguilar:2019owu}. The red data points show the measured positron flux. The data can be fitted with a diffuse low-energy contribution, explained by positrons produced in the collisions of cosmic rays with interstellar gas, and a yet-unidentified contribution at higher energies denoted ``source term'' in the plot.} \label{fig:ams_positrons} \end{minipage} \end{figure} Cosmic rays have so far given room for speculation since the detection of a positron excess by PAMELA~\cite{Adriani:2008zr} (with some weak indications from previous experiments) and confirmed by AMS~\cite{Aguilar:2019owu}. The positron spectrum measured by AMS is shown in Figure~\ref{fig:ams_positrons} and can be explained as a low energy part originating from the collisions of cosmic rays in the interstellar medium, and a high energy part of unclear origin.
Explanations involving dark matter annihilations have been proposed (the literature is too vast here, but see, e.g.,~\cite{Bergstrom:2008gr,Kopp:2013eka}), but conventional explanations based on positron production by astrophysical sources such as pulsars or supernova remnants are also possible~\cite{Yuan:2013eja,Kohri:2015mga}. Or not~\cite{Abeysekara:2017old}. Use Occam's razor at your discretion. It has to be pointed out that any non-conventional explanation of the positron excess in cosmic rays has to agree with the fact that the antiproton flux seems to be quite well understood (see however~\cite{Lin:2016ezz,Cuoco:2016eej}), making the dark matter explanation of the positron excess quite contrived. We indeed need a complete picture of the antimatter in cosmic rays, including precise measurements of $He$ and $d$ with the spectra extending beyond the TeV region, to be able to assess whether there is any cut-off in the spectra. \subsection{Collider searches} \label{sec:colliders} Can we eventually find dark matter in colliders? Strictly speaking: not really. We can find new particles that could do as dark matter, but we would need external confirmation from Astroparticle experiments to really determine that they are the stuff that holds galaxies together. Still, the idea of producing the dark matter particles in the controlled environment of an accelerator and being able to measure their properties is appealing. There is an active program of searches for physics beyond the Standard Model, which include dark matter, at the LHC~\cite{Aaboud:2017phn,Aaboud:2019rtt}, and there has recently been increased interest in dedicated fixed-target experiments, with proposals being considered at CERN, JLAB, FNAL and SLAC~\cite{Akesson:2018vlm,Boyce:2012ym,Battaglieri:2017qen,Banerjee:2016tad}. These searches are tricky since we are trying to detect a particle that practically does not interact with matter.
Collider experiments tag the production of potential dark matter candidates by missing energy plus initial-state radiation, missing energy plus ``recoil'' visible particles, or by looking for resonances at the mass of the new particle. Still, the parameter space is large, with the possibility of vector, axial-vector or scalar types of interactions, with their respective couplings as free parameters. The dark matter mass and the mass of the mediator of the interaction are further free parameters. Assumptions on some of these parameters are unavoidable, so the results presented by accelerator experiments need to be always understood under the assumptions taken. This is important when comparing results with Astroparticle experiments, for example using limits on the $\sigma^{\mathrm{SD}}_{\chi\mathrm{-}p}$ or $\sigma^{\mathrm{SI}}_{\chi\mathrm{-}p}$ cross sections. Collider experiments do not have direct access to these quantities, which are usually derived under some model assumptions, while Astroparticle experiments measure fluxes of particles that can be cast in a more direct way in terms of those quantities. There are different model dependencies and systematics in both approaches and comparisons must be made with care. Figure~\ref{fig:ATLAS_DM} shows the limit on the spin-independent dark matter-nucleon cross section obtained from an analysis of ATLAS data, compared with results from direct detection experiments. Collider searches can be competitive at very low dark matter masses, but the figure also shows that model dependencies can be strong (compare the blue and red lines, which correspond to two different assumptions on the nature of the dark matter). \begin{figure}[t] \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/ATLAS_DM.pdf} \caption{Comparison of the upper limits at 90\% C.L. on the $\sigma^{\mathrm{SI}}_{\chi\mathrm{-}N}$ obtained by ATLAS, compared with several direct detection results.
The regions above the contours are excluded. See~\cite{Aaboud:2019rtt} for details.} \label{fig:ATLAS_DM} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \includegraphics[width=\textwidth]{plots/Belle_exclusion.pdf} \caption{Sensitivity of the Belle-II detector to the axion-photon coupling as a function of the axion mass, compared with disfavoured regions obtained with other techniques. Figure courtesy of S. Cunliffe. Adapted from~\cite{Dolan:2017osp}.} \label{fig:BELLE_ALPS} \end{minipage} \end{figure} Electron-positron colliders provide in general a cleaner interaction environment, and this is also true for dark matter searches. Belle has developed a program of searches for Axion-like particles (ALPs)~\cite{Dolan:2017osp}, where an ALP produced in $e^+e^-$ annihilations decays into a detectable photon pair. Figure~\ref{fig:BELLE_ALPS} shows that Belle has complementary sensitivity to other search techniques for ALPs. \subsection{Outlook} The search for dark matter is a complex, multidisciplinary, experimentally driven field, since many possibilities on its nature remain open. We have seen in the last couple of decades an impressive technological development in direct detection techniques, which has increased the sensitivity of the experiments to the dark matter cross section by several orders of magnitude. After a necessary exploratory era, direct detection experiments are converging to a few well established techniques, and collaborations are joining efforts for the next generation of large volume detectors. The challenge in direct detection is lowering thresholds and coping with the so-called neutrino floor. Direction-sensitive detectors will be able to deal with the background of nuclear recoils induced by elastic coherent solar neutrino scattering, and provide sensitivity to probe the very low dark matter mass region.
Rather than being a hard limit, the neutrino floor will become a swampland, difficult to navigate due to the additional background, but possible with the right equipment. Indirect dark matter searches with photons, neutrinos or cosmic rays from annihilation or decay of dark matter in the cosmos provide a complementary approach, subject to different systematics, and are sensitive to different variables. In some cases indirect searches are competitive since they can probe different mass regions or different dark matter properties. Yet another complementary search technique for dark matter is performed in the controlled environment of particle physics labs. They do not have the ability to determine the lifetime of a potential signal, but benefit from the potential to precisely measure particle couplings and masses. In the big picture the question remains: is the dark sector really as simple as one stable particle, while the visible sector comprises several fundamental particles and families? We probably need to start considering more complex scenarios, keeping complementary search techniques open. Theory can lead the way here, and there are already models assuming more complex scenarios than the standard one-stable-particle-freeze-out-from-thermal-equilibrium mechanism.
\section{Introduction} High angular resolution techniques for astronomical imaging have matured rapidly in recent years (see e.g. Beichman \& Ridgway \cite{beich}) and have been applied to a variety of Galactic and extra-galactic sources. For ground-based near-IR imaging, adaptive optics techniques have been very successful in approaching the theoretical diffraction limit of the telescope (Rigaut et al. \cite{rigaut}). The systems allow on-line correction for atmospheric perturbations using either a natural guide star (part of the object under study, or an unrelated star in the near vicinity) or a laser guide star (see e.g. Lloyd-Hart et al. \cite{lloyd}). A natural extension of this technique is to achieve two-dimensional polarimetric observations in the near-infrared with the, in principle, simple provision of a polarizer in the beam. The combination of both techniques gives access to the detailed polarization of an extended source, or determination of the individual polarization of close multiple sources. It may be applied to reflection nebulae for determining the position of embedded illuminating sources, for study of the line of sight geometry of dust scattering regions and for the orientation of magnetic fields in star forming regions or quasar jets. The extinction of interstellar dust peaks in the UV and declines to longer wavelengths (e.g. Mathis \cite{mathis}), but the continuum emission from grains at typical temperatures of a few hundred K in regions heated by starlight increases strongly above 2$\mu$m. In addition molecular emission and absorption bands are stronger above 3$\mu$m. The 1-2$\mu$m region therefore provides an ideal window for the study of the close environment of dust embedded sources, such as regions around proto-stars or emerging young stars.
For typical interstellar grains the low extinction in the near-IR gives access to the scattering properties of the grains, and to the study of scattering regions which have high optical extinction. Near-IR polarimetry is thus entirely analogous to optical polarimetry but can be extended to more embedded environments. At longer wavelengths the grain emission dominates and any polarization of the radiation is controlled by anisotropic emission mechanisms such as aligned non-spherical grains (Davis \& Greenstein \cite{dg}). Examples of IR polarimetry include: detection of extended dust disks in young stellar objects (see e.g. Piirola et al. \cite{piscalco}, for observational results and Berger \& M\'enard \cite{jeanphi}, for theoretical work); dust structures in AGB envelopes (e.g. Sahai et al. \cite{sahai} for CRL 2688); detection of dust in interstellar jets (e.g. Hodapp 1984); magnetic field structure in star forming regions (e.g. Whittet et al. \cite{whitgera}); and polarization in galaxies (e.g. Jones \cite{jones}) and quasars (Sitko \& Yudong \cite{sitko}). Extending polarimetry to the IR also brings the potential of high spatial resolution, both through the dependence of diffraction on wavelength and the decrease in atmospheric seeing size with wavelength. In the near-IR, the dominant contribution to polarization is therefore from scattering of radiation by grains, and their size relative to the wavelength requires that Mie theory be used to predict the scattering properties. However the optical properties of typical interstellar grains are fairly well represented by models based on laboratory and observational data (Draine \& Lee \cite{draine}), so that the scattering properties of interstellar grains in the near-IR can be predicted.
Whilst polarization data naturally provide geometric information on the location of illuminating sources, the scattering efficiency with scattering angle is required to derive geometric information about the line of sight location of the scatterers (White et al. \cite{white}). For high dust column optical depths, multiple scattering may occur and has then to be modelled using Monte Carlo methods (cf. e.g. Witt \cite{witt}; Warren-Smith \cite{warren}; Whitney \& Hartmann \cite{whitney}; Fischer, Henning \& Yorke \cite{olaf}; Code \& Whitney \cite{code}). As part of a programme to study the nature of the dust in the Homunculus nebula around the massive star (or stars) $\eta$ Carinae and determine information about the 3-D structure of the reflection nebula, near-IR imaging polarimetry data were obtained with the ESO ADONIS system. $\eta$ Car and the Homunculus is an ideal source for adaptive optics since the central point source is very bright and the nebula is not so extended that off-axis anisoplanatism becomes an important effect. The present paper is devoted to the details and subtleties of the data collection and the removal of the instrumental signature vital to the derivation of a polarization map. A following paper will present the scientific results on the high resolution near-IR polarization of $\eta$ Car and the Homunculus. Sect. 2 is devoted to a brief description of the ADONIS instrument; Sect. 3 then considers the observational strategy. The fundamentals of the data reduction are described in Sect. 4 and the polarimetric calibration of the instrument in Sect. 5. Sect. 6 describes the different deconvolution techniques applied to the data and their resulting effects on the polarization maps. \section{The ADONIS adaptive optics system} \subsection{The adaptive optics system} ADONIS is the ADaptive Optics Near INfrared System (see e.g. Beuzit \& Hubin \cite{BH} or Beuzit et al.
\cite{beuzit}) supported by ESO for common users since December 1994 at the F/8.1 Cassegrain focus of the La Silla 3.6 m telescope. Fig.~\ref{layout} shows the optical layout of the ADONIS adaptive optics (AO) system. A tip-tilt mirror and a 64-element deformable mirror correct the distortions of the image in real time, and a Shack-Hartmann wavefront sensor (WFS) provides the difference signal for the deformable mirror using a bright reference star close to the object. The detector for the Shack-Hartmann sensor can be chosen as either an intensified Reticon for bright sources (m$_v$ $<$ 8 mag.) or an electron bombarded CCD for fainter sources (8 $<$ m$_v$ $<$ 13 mag., 25 to 200 Hz sampling). Both detectors are sensitive in the visible wavelength region. An off-axis tiltable mirror allows the sky background, in a field of radius $\leq$30$''$, to be chopped with the on-source image. The output F/45 focus delivers the image to a near-IR detector - either a Rockwell 256$^{2}$ HgCdTe array (SHARP II for 1-2.5$\mu$m, Hofmann et al. \cite{hofmann}) or a LIR HgCdTe 128$^{2}$ anti-blooming CCD (COMIC for 1-5$\mu$m, Marco et al. \cite{marco}). \subsection{The camera} The SHARP II camera was selected for the near-IR polarimetric observations. This camera has a fast shutter at the internal cold Lyot stop, allowing integration times as short as 20 msec. The present observations were made with the standard J, H, K filter set and a narrow band 2.15$\mu$m continuum filter, with a width of 0.017$\mu$m, denoted hereafter K$_c$. \subsection{The polarizer} \label{polarizer} The polarizer, from Graseby Inc., is a wire grid of 0.25$\mu$m period on a CaF$_2$ substrate. It is especially designed to work in the spectral range 1 to 9$\mu$m and has a transmission of 83\% perpendicular to the wire grid at 1.5$\mu$m. It is remotely rotated by the ADOCAM control system to any desired absolute position angle within tolerances of 0.1$^\circ$.
This polarizer, a pre-focal instrument, is inserted into the beam in front of the camera and is not cooled. However, since the polarizer is not oriented perfectly perpendicular to the optical axis, there is a small image motion on the detector when rotating the polarizer (see Sect.~\ref{derivpol} and Fig.~5). \section{Observational technique} \label{obsmode} The magnification giving a pixel scale of 0.05$''$ was selected to ensure adequate sampling of the PSF at H band. The field was thus 12.8$\times$12.8$''$; for the study of extended sources larger than the field size it is obviously necessary to employ several pointings and mosaic the resulting images after basic data reduction. ADONIS has a limit of 30$''$ for the radial extent of the offset sky, so values less than [30 - half detector size] ($''$) must be employed in order to have unvignetted sky background frames. Special care has been taken in the selection of the offset sky position to avoid any overlap with the extended object observed. For all sources, object and chopped sky images were obtained at each position of the polarizer. A data cube of 256$\times$256 spatial pixels $\times$ M frames, where M is the number of object and sky frames, was acquired. Table \ref{tab-pol-standards} lists the details of the ADONIS polarimetry observations of the science and calibration sources. Sets of chopped images were obtained at nine different positions of the polarizer, each 22.5$^{\circ}$ apart, from 0 to 180$^\circ$. The minimum number of frames required to determine the linear polarization and its position angle is 3 (spanning more than 90$^{\circ}$ in position angle). By effectively oversampling the polarization curve (viz. the variation of detected signal with polarizer rotation angle) one can at least hope to average out shorter term variations in atmospheric transmission in order to improve the quality of the polarization measurement.
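As a hedged illustration of this measurement principle (a sketch, not the ADONIS pipeline; all names below are ours), the signal behind an ideal linear analyser at position angle $\theta$ for light with Stokes parameters (I, Q, U) is $S(\theta) = (I + Q\cos 2\theta + U\sin 2\theta)/2$, so the nine polarizer positions oversample the $\cos(2\theta)$ curve and the normalized Stokes parameters follow directly from the 0/90 and 45/135$^\circ$ pairs:

```python
import numpy as np

# Illustrative signal model for an ideal linear analyser:
# S(theta) = 0.5 * (I + Q*cos(2 theta) + U*sin(2 theta))
def analyser_signal(theta_deg, I, Q, U):
    t = np.deg2rad(theta_deg)
    return 0.5 * (I + Q * np.cos(2 * t) + U * np.sin(2 * t))

# Simulate the nine polarizer positions, 22.5 deg apart from 0 to 180 deg.
angles = np.arange(0.0, 181.0, 22.5)
I_true, p_true, pa_true = 1000.0, 0.08, 30.0          # 8% polarized at PA 30 deg
Q_true = p_true * I_true * np.cos(2 * np.deg2rad(pa_true))
U_true = p_true * I_true * np.sin(2 * np.deg2rad(pa_true))
signals = analyser_signal(angles, I_true, Q_true, U_true)

# Direct Stokes recovery from the 0/90 and 45/135 deg pairs.
S = dict(zip(angles, signals))
I_est = S[0.0] + S[90.0]
q = (S[0.0] - S[90.0]) / I_est
u = (S[45.0] - S[135.0]) / I_est
p_est = 100.0 * np.sqrt(q**2 + u**2)                  # per cent
pa_est = 0.5 * np.degrees(np.arctan2(u, q))           # degrees
```

Note that in this noiseless model the 0 and 180$^\circ$ signals are identical, which is exactly the photometric consistency check used in the observations.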
Expressed in terms of the Stokes parameters I, Q and U (see e.g. Azzam \& Bashara \cite{azzb}), I depends on the total signal whilst Q and U depend on the differences in signals between images taken at polarizer angles of 0, 90, 45 and 135$^\circ$. Then the linear polarization p is given by $ p(\%) = 100 \times \sqrt{q^2 + u^2} $, where q=Q/I and u=U/I. The position angle of linear polarization is $ \theta(^\circ) = 28.648 \times \tan^{-1}(U/Q) $ (Serkowski \cite{serk}). Determining the polarization from $\geq$ twice as many images as necessary leads to an improvement in polarization accuracy provided that any photometric variations are on timescales different from the exposure time of individual images at each polarizer angle. The worst case scenario is when photometric variations occur on a timescale similar to the exposure times, so that the measured difference signals vary wildly - the polarization determined by fitting a cosine curve then approaches zero. The chosen exposure times per polarizer angle were in the range 1 to 50 s depending on the source brightness (see Table \ref{tab-pol-standards}). Observing a polarized source at the 0 and 180$^{\circ}$ polarizer positions should give the same detected counts and is therefore a direct way to monitor the photometric variations during the observational sequence. Column 8 of Table \ref{tab-pol-standards} lists half the difference (in percentage) between integrated counts in the star profile for the 0 and 180$^\circ$ images (i.e. rms on the mean of the 0 and 180$^\circ$ signal values). For R Monocerotis, the semi-stellar peak of the reflection nebula NGC~2261, the aperture covers the central extended source (full extent 8$''$), whilst for OH~0739-14, a reflection nebula around an embedded young star, an area 10$''$ in size was used for the statistics. \begin{table*} \begin{tabular}{lcccrrrl} Source & Type & Date & Band & No. & T$_{exp}$ & No. Poln. & 0-180$^\circ$ \\ & & & & Frms.
& (ms)~~ & sequence & semi-difference (\%) \\ \hline HD~93737 & Low poln. & 1996 Mar 02 & K & 20 & 50 & 3 & 0.76,0.65,0.22 \\ & standard. & 1996 Mar 03 & H & 20 & 50 & 2 & 2.32,1.01 \\ & & 1996 Mar 04 & J & 20 & 40 & 1 & 0.15 \\ \hline HD~64299 & Low poln. & 1996 Mar 03 & J & 10 & 5000 & 1 & 0.11 \\ & star & & H & 10 & 3000 & 1 & 1.01 \\ & & & K & 10 & 3000 & 1 & 0.89 \\ & & 1996 Mar 04 & J & 3 & 10000 & 1 & 0.05 \\ & & & H & 3 & 6000 & 1 & 0.05 \\ & & & K & 5 & 10000 & 1 & 0.65\\ \hline HD~94510 & Low poln. & 1996 Mar 04 & K$_c$ & 30 & 40000 & 1 & 0.32 \\ & star & & & & & & \\ \hline OH~0739-14 & Extended IR & 1996 Mar 02 & J & 4 & 30000 & 1 & 0.45 \\ & poln. source & & H & 4 & 5000 & 1 & 0.48 \\ & & & K & 4 & 5000 & 1 & 0.20 \\ \hline R Monocerotis & Extended IR & 1996 Mar 03 & J & 10 & 1000 & 1 & 1.26 \\ & poln. source & & H & 20 & 400 & 1 & 0.24 \\ & & & K & 30 & 100 & 1 & 0.18 \\ \hline $\eta$ Carinae & Polarized & 1996 Mar 02 & K & 200 & 50 & 4 & \\ & source & 1996 Mar 03 & H & 200 & 50 & 2 & \\ & & & H & 100 & 50 & 1 & \\ & & 1996 Mar 04 & J & 200 & 50 & 2 & \\ & & & K$_c$ & 100 & 50 & 2 & \\ \hline \end{tabular} \caption{List of polarization sources observed. The exposure time (T$_{exp}$) is given per frame.} \label{tab-pol-standards} \end{table*} \section{Data reduction} The data reduction applied to AO polarimetry data consists of the removal of the detector signature and sky subtraction, which is common to IR imaging in general, followed by registration and derivation of the polarization parameters. \subsection{Removal of the detector signature} \label{flatfield} The basic data reduction steps were performed with the `eclipse' package (Devillard \cite{nico}). Flat-fields were acquired on the twilight sky at the beginning of each night of observation, in exactly the same way as for the targets, at the nine angles of the polarizer. The integration times were 7, 10 and 20 s for the J, H and K bands respectively; no flat field was taken with the K$_c$ filter.
The flat field images must first be processed to flag bad pixels, caused by either permanently dead pixels or pixels whose sensitivity undergoes large fluctuations during the exposure. Two methods have been employed depending on the number of frames available in a cube: sky variation or median threshold. The `sky variation' method works on a data cube, with preferably many planes ($\gtrsim$20) in order to obtain reliable statistics on the variations. The standard deviation ($\sigma$) with frame number is computed for each pixel in the frame. A histogram of the standard deviations has a Gaussian shape representing the response to the, assumed constant, sky signal. All pixels whose response is too low (dead) or too high (noisy), compared to a central $\pm \sigma/2$ interval, are rejected. The `median threshold' method can be applied to a small number of input frames (such as flat field data) and detects the presence of spikes above or below the local mean in each individual image independently. If the signal is assumed to be smooth enough, bad pixels are found by computing the difference between the image and its median filtered version, and thresholding it. This latter method is not as stringent as using the temporal variation, but is the only possibility when there is an insufficient number of images to calculate reliable statistics. Some bad pixels may nevertheless remain in the images after applying the bad pixel correction by either method; their number is small, however, and they can be manually added to the bad pixel map. Slightly different bad pixel maps were found for the different positions of the polarizer; this could be explained by a polarization sensitivity of the pixels ($\sim$1\%), since the NICMOS detector sensitivity is slightly polarization dependent, or simply by the random variation of hot pixels.
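The two flagging methods can be sketched as follows (a hedged reading of the text, not the `eclipse' implementation; the robust width estimate and default thresholds are our assumptions):

```python
import numpy as np

def bad_pixels_sky_variation(cube, width=5.0):
    """'Sky variation' method: the standard deviation with frame number is
    computed for each pixel of a cube of (assumed constant) sky frames, and
    pixels whose sigma falls outside a central interval around the typical
    value are flagged.  The MAD-based window is our assumption; the paper
    quotes a central +/- sigma/2 interval."""
    sig = cube.std(axis=0)
    centre = np.median(sig)
    spread = 1.4826 * np.median(np.abs(sig - centre))   # robust sigma (MAD)
    return (sig < centre - width * spread) | (sig > centre + width * spread)

def bad_pixels_median_threshold(image, nsigma=5.0):
    """'Median threshold' method: difference between an image and its 3x3
    median-filtered version, thresholded; usable on a single frame."""
    padded = np.pad(image, 1, mode="edge")
    shifts = [padded[r:r + image.shape[0], c:c + image.shape[1]]
              for r in range(3) for c in range(3)]
    resid = image - np.median(shifts, axis=0)
    return np.abs(resid) > nsigma * resid.std()
```

Both functions return boolean maps (True = bad) that can be merged, and to which remaining bad pixels can be added by hand as described above.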
Once corrected for the bad pixels, the twilight flats were normalized, then multiple exposures were averaged for the same position of the polarizer to derive the flat field maps. The target data cubes were corrected with the bad pixel map derived using the `sky variation' method from the background sky frames and divided by the flat-field to give flat-fielded, cleaned images, from which the sky contribution was still to be subtracted. All these operations were performed independently for the nine positions of the polarizer. \subsection{Sky subtraction} \label{skysub} The sky background can be bright in the IR and may also be polarized, so it is critical in the case of polarimetry to ensure that the uncertainties introduced by sky subtraction are minimized. Several tests were performed to determine the impact of the method of sky subtraction, in conjunction with the bad pixel correction, on the data. The first method considers one sky and one bad pixel map for each position of the polarizer; the second method a single averaged sky (all polarizer positions combined) but individual bad pixel maps for each position; whilst the third method uses the same averaged sky and bad pixel map for all polarizer angles. All three methods were tested (Ageorges \cite{nancy}) and the results demonstrated that the largest modification of pixel values, and therefore photometry, comes from the bad pixel map used. The third method produced the largest discrepancies from the expected $\cos(2\theta)$ curve, where $\theta$ is the polarizer position angle. The first method is clearly to be preferred since the effect of any polarization of the sky signal on the target data is correctly removed and any short term variation in sky background is subtracted. It was found, from the sky background level in the polarization calibrator data, that the sky subtraction was successful to better than 1\% (rms noise of 3.5 ADUs).
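A minimal sketch of the adopted scheme (one sky frame and one bad pixel map per polarizer position; the function name and the 3$\times$3 patching of flagged pixels are our assumptions, not the actual pipeline):

```python
import numpy as np

def reduce_frame(raw, flat, sky, bad):
    """Per-polarizer-angle reduction sketch: replace flagged pixels by the
    mean of their good 3x3 neighbours, divide by the (normalized) flat
    field, then subtract a sky frame assumed to have been reduced in the
    same way."""
    img = raw.astype(float).copy()
    for r, c in zip(*np.nonzero(bad)):
        r0, r1 = max(r - 1, 0), min(r + 2, img.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, img.shape[1])
        patch, good = img[r0:r1, c0:c1], ~bad[r0:r1, c0:c1]
        if good.any():
            img[r, c] = patch[good].mean()
    return img / flat - sky
```

In the first method above, this would be applied nine times, once per polarizer position, each time with that position's own sky and bad pixel map.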
For the 0 and 180$^{\circ}$ data, a further test of the quality of the sky subtraction was performed: the skies were exchanged, i.e. `sky 0' was used for the data taken at PA 180$^{\circ}$ and conversely. This resulted in `photometric' variations of less than 0.05\%, giving us further confidence in our sky subtraction method. \subsection{Photometric quality} \label{photometry} The photometric quality of the data can be checked in two different ways: either by comparing the photometry of an object when acquired at 0$^{\circ}$ and at 180$^{\circ}$, or by plotting the measured signal against the polarizer angle, where a $\cos(2\theta)$ form should be obtained for polarized data. The latter is illustrated in Fig.~\ref{new-intens}, for J band data of the NE lobe of the Homunculus nebula around $\eta$ Carinae. The signal is plotted with time as the polarizer was rotated from 0 to 180$^\circ$; every ensemble of 200 points (within the dashed vertical lines) corresponds to frames acquired at the same position of the polarizer. The spread of points at a given polarizer angle gives a measure of the photometric variation. The images used to create this plot were deliberately overexposed in order to get as much signal as possible on the faint nebula. The central region of the images was thus obtained outside the linear regime of the CCD. The intensity variation over this image has therefore been recalculated avoiding a 30$\times$30 pixel area centered on $\eta$ Car. This is shown in Fig.~\ref{new-intens} together with a plot of the intensity variation over a 50$\times$50 pixel area centered on a lobe of the nebula, away from $\eta$ Car and thus obtained in the linear regime of the CCD; the spread observed above is then reduced by a factor of 2. Fig.~3, representing the photometric variation of frames acquired at 0 and 180$^{\circ}$, clearly illustrates the fact that the night of these observations was not photometric: there is a 0.3 mag.
extinction of the data acquired at 0$^{\circ}$ compared to that at 180$^{\circ}$. In Fig.~\ref{new-intens} it is clear that there is a discrepant point, at 157.5$^{\circ}$, since it does not fit into the smooth $\cos(2\theta)$ progression of the curve. This problem, found for every source observed, was attributed to a technical problem of unknown origin; it appears from the figure that the polarizer may actually have been at an angle of 45$^\circ$. All maps taken at this polarizer angle were ignored in the subsequent derivation of polarization parameters, thus reducing the number of independent polarizer angles to 7 (0 and 180$^{\circ}$ being equivalent). \subsection{Derivation of polarization maps} \label{derivpol} The polarization degree for each pixel, binned pixel area or aperture was determined by fitting a $\cos(2\theta)$ curve to the variation of signal with polarizer rotation angle $\theta$ for the eight signal values (excluding the value at 157.5$^\circ$). A least-squares procedure was used with linearization of the fitting function and weighting by the inverse square of the errors (Bevington \cite{bev}). The error on the polarization was determined from the inverted curvature matrix, and the error on the position angle either by the classical expression (Serkowski \cite{serk}) $ \sigma_{\theta}(^\circ) = 28.648 \, (\sigma_{p}/p) $ when the signal-to-noise ratio $p/\sigma_{p}$ was $\geq$8, or from the error distribution of $\theta$ given by Naghizadeh-Khouei \& Clarke (\cite{nag}) when $p/\sigma_{p} < 8$. The errors on the individual points in the images at each polarizer rotation angle take into account the number of images averaged, the read-out noise and the sky background contribution. Since the detector offset is not fixed per image it was necessary to bootstrap for the value of the sky level. A series of polarization maps were made with increasing sky contribution at a fixed polarization error per pixel.
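The linearized weighted fit can be sketched as follows (a sketch, not the original code): writing $S(\theta) = a + b\cos 2\theta + c\sin 2\theta$ turns the $\cos(2\theta)$ fit into a linear least-squares problem, and the simplified error propagation for $\sigma_p$ below is our assumption.

```python
import numpy as np

def fit_pol_curve(theta_deg, signal, sigma):
    """Weighted linear least-squares fit of S = a + b cos(2t) + c sin(2t).
    Returns p (per cent), PA (deg), sigma_p and sigma_theta; sigma_theta
    uses the classical 28.648 * sigma_p / p expression (high S/N limit).
    The error propagation for sigma_p is a simplifying approximation."""
    t = np.deg2rad(np.asarray(theta_deg, float))
    A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    w = 1.0 / np.asarray(sigma, float)
    coef, *_ = np.linalg.lstsq(A * w[:, None], np.asarray(signal) * w,
                               rcond=None)
    a, b, c = coef
    cov = np.linalg.inv(A.T @ (A * (w**2)[:, None]))   # parameter covariance
    p = 100.0 * np.hypot(b, c) / a
    pa = (0.5 * np.degrees(np.arctan2(c, b))) % 180.0
    sigma_p = 100.0 * np.sqrt(0.5 * (cov[1, 1] + cov[2, 2])) / a
    return p, pa, sigma_p, 28.648 * sigma_p / p
```

With the eight retained polarizer angles the design matrix is comfortably overdetermined, which is what allows the photometric scatter to be averaged down.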
The sky signal was adopted when it produced polarization vectors which began to deviate from the expected centrosymmetric pattern (e.g. to the NE of R Mon - see Fig.~\ref{RMon}) in the regions of lowest signal. Thus the polarization errors are not absolute errors. Applying a polarization error cut-off to the maps produces maps consistent with the expected structure (which can also be partially checked by binning the data). Fig.~4 shows a typical fit to the $\cos(2\theta)$ curve for an 8$\times$8 pixel binned region of the R Monocerotis H band image (see Table \ref{tab-pol-standards} and Fig.~\ref{RMon}). The error bars on the individual points arise from the photon statistics on the object and sky frames, with read-out noise considered. It was noted in Sect.~\ref{polarizer} that the rotation of the polarizer induces an image shift on the detector. Fig.~5 is an illustration of the displacement observed, for images of $\eta$ Carinae in K$_c$, while rotating the polarizer from 0$^{\circ}$ to 180$^{\circ}$ in steps of 22.5$^{\circ}$ (see Sect.~\ref{obsmode} for details of the observation procedure). Since the PSF is variable in time, reproducibility is not guaranteed. However the displacements were found to agree with those in Fig.~5 for different targets (mostly unpolarized standard stars - see Table \ref{tab-pol-standards}), and in different filters, to better than 0.5 pixel, and so were adopted to register the images at different polarizer angles. For a point source, where only the integrated polarization is of interest, the exact position of the source is not relevant provided all the signal is included in the summing aperture. However for extended sources, such as $\eta$ Carinae and the Homunculus nebula, a polarization map which exploits the available spatial resolution is desired. It is therefore extremely important to ensure that the data are centered on the same position for all position angles observed, to avoid smearing of the information.
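Applying such fixed sub-pixel registration shifts can be done in several ways; the Fourier phase-ramp below is our illustrative choice (the text does not state which interpolation method was actually used):

```python
import numpy as np

def fourier_shift(image, dy, dx):
    """Shift an image by (dy, dx) pixels via a Fourier-domain phase ramp.
    Exact for integer shifts of periodic images; sub-pixel shifts are
    handled without explicit spatial interpolation."""
    ky = np.fft.fftfreq(image.shape[0])[:, None]
    kx = np.fft.fftfreq(image.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * ramp))
```

Because the transform only multiplies phases, the total flux (the DC term) is preserved, which matters when the shifted frames are subsequently combined into Stokes images.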
For unsaturated stellar images, the centroid of the point source can be used as a fiducial to shift the images to a common centre. In the case of saturated images it proved possible to obtain reliable centering by using a very large aperture for the centroid; this is then weighted by the outer (unsaturated) regions of the PSF. However if the source is polarized, and in particular if there is polarization structure across the point source, then the centroids at particular angles will be dependent on the source polarization. It was found that if the images were shifted to match the centroids at the 8 polarizer angles for the R Mon data, then a map with uniform, almost zero, polarization was derived, in contradiction to the known (aperture) polarimetry of this source (e.g. Minchin et al. \cite{minchin}). In such a case the set of image shifts derived from unpolarized point sources (Fig.~5) were applied to the data and the polarization maps were determined. Fig.~\ref{RMon} shows the resulting J, H and K polarization vector maps superposed on logarithmic intensity plots; the raw data have been binned 4$\times$4 pixels, i.e. 0.2$''$. The shifts applied are closer to reality than those determined from the centroid of R Mon, but good only to within $\pm$0.5 pixel. This might explain the difference in structure between our H band map and that of Close et al. (\cite{close}), in whose map considerable structure across the central (almost point) source is evident. The cut-off of the maps is determined by the value of the 1$\sigma$ polarization error (4, 4 and 6\% respectively for J, H and K). The structures seen in the J, H \& K band maps (Fig.~\ref{RMon}) change with wavelength, which might be an optical depth effect of the inclined disk. The striking difference between the maps in Fig.~\ref{RMon} and the one reproduced in Ageorges \& Walsh (\cite{agewal}) comes from the calibration of the data.
Indeed the latter were preliminary results and the first polarization maps derived with ADONIS. \section{Calibration of the polarimetric data} \label{calib} In order to determine the intrinsic polarization of the source and its position angle, several corrections are necessary. The instrument possesses an instrumental polarization which must be vectorially subtracted from the measured polarization; the instrumental contribution is derived from the observation of unpolarized standards. The interstellar medium between the source and the observer also imposes a polarization which must be corrected for. Typical ISM polarization values are $\le$2\% and can be neglected when observing highly polarized sources. If the ISM polarization is not negligible, then it must be determined from measurements of stars in the neighbourhood of the source (see e.g. Vrba et al. \cite{vrba}); alternatively the distance dependence of the ISM polarization must be determined from measurements of many stars. The zero point of the polarization position angle is checked by observing non-variable polarized standards, or polarized sources with reliable measurements. The latter also offer an excellent check on the polarizing efficiency of the instrument (i.e. the response to a 100\% polarized source should be 100\%). \subsection{Sky polarization} \label{skypol} In the optical during dark time the sky polarization is typically 3-4\% (Scarrott, private communication). On the nights of our measurements, the sky polarization was found to be consistent with zero within the error bars (typically $\leq$0.5\%). Since it is the ratio of polarized intensity between the source and the sky that matters most, and since the sky background has been carefully subtracted (see Section~\ref{skysub}), the sky contribution has been ignored in processing the data.
\subsection{Instrumental calibration} \label{instcal} \subsubsection{Choice of the polarization calibrators} \label{polcal} Despite extensive polarization observations in the optical, there is a distinct lack of polarization standards in the IR. The polarized reflection nebulae OH~0739-14 and NGC~2261 (illuminated by R Monocerotis) were observed because of their extensive IR polarization data (Heckert \& Zeilik \cite{heckert} and Shure et al. \cite{shure} for OH~0739-14; Minchin et al. \cite{minchin} and Close et al. \cite{close} for R Mon), although neither can be claimed as a true, non-varying standard. Since the observations are achieved using an adaptive optics system, the polarization standard could also be used as a PSF calibrator. Since the correction is optimized continuously, the resulting PSF is variable in time. Any point source observed as PSF calibrator needs to be close ($<$ 10$^{\circ}$) to the target and be as similar as possible in terms of visible magnitude and spectral type, to ensure identical correction efficiency. Owing to the lack of polarization standards in the infrared, the polarization calibrators were chosen to be as close as possible to the source and bright enough to be used as reference for the wavefront sensor. In two cases, for HD~64299 and HD~94510, which have respectively a B band polarization of 0.151\% (Turnshek et al. \cite{turnshek}) and a V band polarization of 0.004\% (Tinbergen \cite{tinbergen}), it was assumed that the IR polarization is negligible, although no measurements exist at these wavelengths. In reducing the data taken on 1996 March 02 it was found that the derived polarization for any source (even OH~0739-14) was consistent with zero polarization and, in addition, did not exhibit the expected shift of image centroid with polarizer angle (Fig.~5).
Either the photometric conditions were exceptionally poor (this is not borne out by large discrepancies between the 0 and 180$^\circ$ signal values - see Table \ref{tab-pol-standards}) or, more probably, an instrumental problem, such as the polarizer not rotating to the requested angle, was present. The polarization information was therefore discarded for this night. However the K band image of $\eta$ Carinae had excellent spatial resolution and was retained (Walsh \& Ageorges \cite{walage}). \subsubsection{ADONIS instrumental polarization} \label{instpol} For the unpolarized (actually low polarization) standards, the integrated counts within a circular aperture including all the flux from the star profile (radius typically 2$''$) above the sky background were measured for each angle of the polarizer and a $\cos(2\theta)$ curve fitted to the data. Table \ref{pol-stan-data} lists the results. HD~93737 has a measured V band polarization of 1.07\% at position angle 122.4$^\circ$ (Mathewson \& Ford \cite{matfor}). Given the typical shape of the interstellar polarization curve (the `Serkowski law', see e.g. Whittet \cite{whittet}), the probable values of the interstellar polarization for this star, assumed to have a typical Galactic interstellar extinction, are 0.5, 0.3 and 0.2\% at J, H and K respectively. The position angle is usually similar between the visible and IR (see e.g. Whittet et al. \cite{whitgera}). For the purposes of computing the instrumental polarization it was assumed that the stellar polarization was zero. The first two sets of data on HD~93737 on 1996 Mar 02 (see Table \ref{tab-pol-standards}) are not included on account of the problem with the data on that first night (see Sect.~\ref{instcal}). In addition the first sequence of H band data on HD~93737 had poor photometry (see Table \ref{tab-pol-standards}) and was not considered.
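A sketch of such a Serkowski-law extrapolation (our assumed parameterization: $K = 1.66\,\lambda_{\rm max} + 0.01$ in the Whittet et al. 1992 form and $\lambda_{\rm max} = 0.55\,\mu$m for a typical Galactic sightline; neither value is given in the text):

```python
import numpy as np

def serkowski(lam_um, p_max, lam_max_um=0.55):
    """Serkowski-law wavelength dependence of interstellar polarization,
    p(lambda) = p_max * exp(-K * ln^2(lambda_max / lambda)), with
    K = 1.66 * lambda_max + 0.01 (Whittet et al. 1992 form).  The choice
    of K and lambda_max is an assumption for illustration."""
    K = 1.66 * lam_max_um + 0.01
    return p_max * np.exp(-K * np.log(lam_max_um / lam_um) ** 2)

# HD 93737: p_V = 1.07 per cent; rough extrapolations to J, H and K.
p_jhk = {band: serkowski(lam, 1.07)
         for band, lam in (("J", 1.25), ("H", 1.65), ("K", 2.2))}
```

With these assumptions the predicted values come out close to the 0.5, 0.3 and 0.2 per cent quoted above for J, H and K.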
There is a spread in the values, indicating typical errors of $\pm$0.3\% in linear polarization and $\pm$15$^\circ$ in position angle. Given the errors, the J, H and K values are consistent with an instrumental polarization of 1.7\%. Adopted values are listed in the last row of Table \ref{pol-stan-data}. Given that only a single measurement was performed at K$_c$, it is probably not significant that the instrumental polarization in this band is higher and that the position angle differs from the K band measurement. \begin{table*} \begin{tabular}{lcrrrr} Target & Date & \multicolumn{1}{c}{J} & \multicolumn{1}{c}{H} & \multicolumn{1}{c}{K} & \multicolumn{1}{c}{K$_c$} \\ & & \multicolumn{4}{c}{Linear poln. (\%) \& PA ($^\circ$)} \\ \hline HD~93737 & 1996 Mar 02 & & & 1.71, ~88 & \\ & 1996 Mar 03 & & 1.59, ~89 & & \\ HD~64299 & 1996 Mar 03 & 1.51, 111 & 1.99, ~86 & 2.16, 136 & \\ & 1996 Mar 04 & 1.67, ~97 & 1.48, 112 & 1.71, ~89 & \\ HD~94510 & 1996 Mar 04 & 2.12, 104 & & & 2.05, 140 \\ \hline Mean & - & 1.74, 105 & 1.69, ~96 & 1.86, 104 & \\ Adopted & - & 1.7, ~~105 & 1.7, ~~~90 & 1.7, ~~~90 & 2.0, ~~140 \\ \hline \end{tabular} \caption{Polarization of low polarization stars - instrumental polarization measurement} \label{pol-stan-data} \end{table*} Once the instrumental polarization (intensity and angle) is determined, this correction can be applied to the polarization maps point-by-point. Goodrich (\cite{goodrich}, in the Appendix) describes the application of the instrumental correction. \subsubsection{Position angle calibration} \label{pacalib} On producing polarization maps for the Homunculus nebula around $\eta$ Carinae, it was noticed that the polarization vectors did not point back to the position of $\eta$ Carinae. There is no reason for such behaviour since the Homunculus is known to be a reflection nebula. If the illumination were by an extended source then the offset should not be one of simple rotation.
A novel method was used to determine the single offset required to align all the polarization vectors in a centrosymmetric pattern around the position of $\eta$ Carinae. A least-squares problem was solved to minimize, by application of a single rotation, the impact parameters at the position of $\eta$ Carinae of the perpendiculars to all the polarization vectors in the Homunculus. A consistent value of 18$\pm$1$^\circ$ was found for the J, H and K images. In order to verify that this was not an artifact of the $\eta$ Carinae nebula and of the fact that the central point source was saturated, the 18$^\circ$ correction was applied to the polarization maps of NGC~2261. It was found that the vectors in the high polarization spur to the NE were well aligned with the direction expected for illumination by the peak of R Mon. Thus the calibration of the absolute position angle can be made without reference to a polarized standard. \begin{table*} \begin{tabular}{lrrr} Data source & \multicolumn{3}{c}{Polarization (\%) \& PA ($^\circ$)} \\ & J~~~~~ & H~~~~~ & K~~~~~ \\ \hline This work (PA uncorrected) & 10.6, ~77 & 11.1, ~74 & 8.1, ~77 \\ Minchin et al. \cite{minchin} & 11.1, 100 & ~8.5, 103 & 5.6, 102 \\ \hline \end{tabular} \caption{JHK polarization of R Monocerotis in an 8$''$ aperture} \label{tab-rmon-pol} \end{table*} \section{Results on restoration of polarization images} \label{restore} In order to measure polarization structure in the vicinity of a bright point source, it is necessary to deconvolve the point source response from the data frames taken at each position angle of the polarizer and then to form the polarization maps from the deconvolved images. The aim here is to detect polarization structure within an offset distance of a few times the diffraction limit from the point source.
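The single-rotation alignment described in Sect. 5 can be sketched as a one-dimensional search (our simplified flat-sky version with illustrative names; the original least-squares implementation is not given in the paper):

```python
import numpy as np

def pa_offset(positions, pa_deg, star):
    """Find the single rotation that best restores a centrosymmetric
    pattern: for each trial offset, sum the squared impact parameters,
    with respect to `star`, of the perpendiculars to the rotated
    polarization vectors, and keep the minimizing offset (deg)."""
    r = np.asarray(positions, float) - np.asarray(star, float)
    pa = np.asarray(pa_deg, float)
    best, best_cost = 0.0, np.inf
    for d in np.arange(-90.0, 90.0, 0.1):
        ang = np.deg2rad(pa + d + 90.0)          # perpendicular direction
        cross = r[:, 0] * np.sin(ang) - r[:, 1] * np.cos(ang)
        cost = np.sum(cross ** 2)                # squared impact parameters
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

For a perfect centrosymmetric pattern the cost reaches zero at the true offset, and since the impact parameter is periodic over 180$^\circ$ the minimum is unique within the search range.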
Several different approaches to restoration have been attempted in order to obtain detailed information on the fine structure of the Homunculus nebula close to the central source $\eta$ Carinae. This was motivated by the need to detect and measure the polarization of the three knots found in the 0.4$''$ vicinity of $\eta$ Car by speckle imaging in the optical (Weigelt \& Ebersberger \cite{weiebe} and Falcke et al. \cite{falcke}). The polarization data for $\eta$ Car will be used to exemplify these experiments; the scientific conclusions will be reported in Walsh \& Ageorges (\cite{walage}). A preliminary discussion of restoration of these images, without considering the polarization, has been given by Ageorges \& Walsh (\cite{agewalspie}). \subsection{Image restoration trial} Two deconvolution techniques have been applied to the data: Richardson-Lucy (R-L) iterative deconvolution (Lucy \cite{lucy}, Richardson \cite{richard}) and blind deconvolution (`IDAC', Jefferies \& Christou \cite{jeff}, Christou et al. \cite{christou1}). The major difference between these methods is related to the treatment of the point spread function (PSF). With the Richardson-Lucy method, a PSF is required a priori to deconvolve the data, while for blind deconvolution, the PSF is determined from variations in the target object data. The blind deconvolution method uses an initial estimate, which can be a Gaussian for example. Since the adaptive optics PSF changes with time and is not spatially invariant (see e.g. Christou et al. \cite{christou2}), blind deconvolution should be better suited than the Richardson-Lucy method, which assumes a PSF constant in time. The exact spatial variation of the AO PSF is not known. However, in the present case, this is a minor problem since the source itself ($\eta$ Car) has been used as wavefront sensor reference star.
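The R-L iteration itself is compact. A one-dimensional toy version (Python; this is the basic multiplicative update with an a priori PSF, not the IRAF `plucy' implementation used later):

```python
def convolve(a, k):
    """'Same'-size 1-D convolution with zero padding; the kernel is assumed
    centred, and for the symmetric PSFs used here convolution and
    correlation coincide."""
    n, half = len(a), len(k) // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(len(k)):
            idx = i + j - half
            if 0 <= idx < n:
                s += a[idx] * k[j]
        out.append(s)
    return out

def richardson_lucy(data, psf, n_iter=50, eps=1e-12):
    """Basic Richardson-Lucy multiplicative update; estimates stay non-negative."""
    psf_mirror = psf[::-1]
    est = [1.0] * len(data)
    for _ in range(n_iter):
        blurred = convolve(est, psf)
        ratio = [d / (b + eps) for d, b in zip(data, blurred)]
        correction = convolve(ratio, psf_mirror)
        est = [e * c for e, c in zip(est, correction)]
    return est
```

The update multiplies the current estimate by the back-projected ratio of data to re-blurred estimate, which is what makes non-negativity automatic.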
Moreover, with the pixel scale chosen, all the valuable information in the short band data is enclosed in the isoplanatic angle; the spatial variation of the PSF is thus negligible over the area of the $\eta$ Carinae images, which is not the case for the time variation. A comparison of the Richardson-Lucy method and IDAC -- `Iterative Deconvolution Algorithm in C', i.e. the blind deconvolution algorithm used -- was made using the K$_c$ data on $\eta$ Car (Table 2). The aim was to test the reality of structures revealed in the near environment of the central star of this reflection nebula. For the R-L restoration, the Lucy-Hook algorithm (Hook \& Lucy \cite{hook}), in its software implementation under IRAF (`plucy'), was employed. The principle is the same as for the Richardson-Lucy method, except that it restores in two channels, one for the point source and the other for the background (considered smooth at some spatial scale). The estimated position of the point source is provided and the initial guess for the background is flat. K$_c$ data taken at polarizer angles of 0 and 180$^\circ$ were restored (called K$_c$0 and K$_c$180). For the K$_c$0 image, blind deconvolution was also performed. It should be noted that although the polarizer angles are effectively identical, the Strehl ratio is not identical between the two data sets (K$_c^1$ \& K$_c^2$) and is higher for K$_c^1$ (27.9\% against 22.1\%). Although this could be considered an advantage, it has a drawback since the four bumps around the PSF (see Fig.~\ref{psf-ima} for the appearance of the PSF) are more pronounced. These bumps (`waffle pattern') correspond to a null mode of the wavefront sensor as a result of an inadequacy in the control loop. The problem of the four bumps distributed symmetrically around the source is that, although they are in the PSF, they do not vary; they are fixed in time and position and are therefore not removed from the image as part of the PSF.
There is, however, a way to overcome this problem, and that is by forcing them to be in the PSF. Fig.~\ref{deconv} presents the deconvolution results obtained with both methods on the two separate data sets (Table \ref{tab-pol-standards}) and Fig.~\ref{psf-ima} shows the PSF derived from blind deconvolution. The `plucy' deconvolved data have been restored to convergence and then convolved with a Gaussian of 3 pixels FWHM. The blind deconvolved data were not restored to convergence but limited to 1000 iterations to be comparable, in terms of number of iterations, with the Lucy deconvolution. The resulting image thus appears noisier than the Lucy deconvolved ones. Note that neither of the methods used succeeded in removing the four bumps from the K$_c$ images of the first observational sequence (K$_c^1$0). The data acquired at the polarizer position angles of 0 and 180$^{\circ}$, deconvolved with the same algorithm (`plucy'), both show identical structures (upper row of Fig.~\ref{deconv}). This example serves to illustrate the stability of the `plucy' method when applied to AO data while using a reasonable PSF estimate. The image from the first polarizer sequence, polarizer angle 0$^\circ$, deconvolved using IDAC is shown as the lower right image in Fig.~\ref{deconv} and is to be compared with the upper left image deconvolved with `plucy'. It is clear that similar structures appear in both restorations and that there are no significant features in one restoration which do not appear in the other. The differences in the images are mainly due to the fact that the blind deconvolution has been stopped before fully converging, and the final image is thus noisier. Moreover, the presence of the four bumps is enhanced in this image. The major difficulty in this deconvolution is that these noise structures are convolved with extended emission from the Homunculus nebula.
Since the bumps lie in the middle of the nebula, the flux identified on them is a convolved product of the waffle pattern and the extended structure of the nebula. It is thus very difficult for the program to isolate these four `point sources' and properly recover the true shape of the nebula at these positions. In order to fully compare the different deconvolution techniques, blind deconvolution has been pushed to convergence for K$_c$0 (data sets 1 \& 2). The results (Fig.~\ref{rec-ima}) are to be compared with the right hand side of Fig.~\ref{deconv}. The structures close to $\eta$ Car emphasized by the two deconvolution processes, excluding the four bumps, confer a degree of confidence in the scientific results which will be presented in Walsh \& Ageorges (\cite{walage}). \subsection{Polarimetry restoration trial} In the case of polarimetric data, the deconvolution problem is more severe since the photometry must be preserved in the restored images in order to derive a polarization map. The Richardson-Lucy algorithm is superior to blind deconvolution in that it should preserve flux. Experiments were performed on the K$_c$ $\eta$ Car data set, restoring each of the nine polarizer images with the PSF derived from the unpolarized standard at the same polarizer angle. The results were poor even when the restored image was convolved with a Gaussian of 3 pixels FWHM. They illustrate the effect of the variable PSF and thus the difficulty of recovering polarization data at high angular resolution so close to the star. Huge fluctuations in the value of the polarization were seen in the vicinity of $\eta$ Car. The differing PSFs of the unpolarized star and of $\eta$ Car (the AO correction was much better for the $\eta$ Car images than for the standard star) produced restored images with large differences in flux at a given pixel in the different polarization images.
At present there is no known method to recover the true PSF from the data and conserve the flux through restoration. A possible (although computer-intensive) solution is to determine the PSF from blind deconvolution and use the result for the PSF in another algorithm known to preserve the flux. This has been performed here: the PSF determined by blind deconvolution has been used with both the Richardson-Lucy and Lucy-Hook algorithms. Since the IDAC blind deconvolution algorithm normalised the input image at the beginning of the iterations, the final image was rescaled back to the original total count to allow error estimation of the polarization image. Polarization maps for the three methods (`IDAC' alone and combined with the R-L and `plucy' methods) have been created and compared after reconvolution with a 3 pixel Gaussian. From the high resolution restored images an attempt has been made to derive the polarization map. Fig.~\ref{highp} illustrates the result obtained while using the PSF determined by the blind deconvolution with the R-L algorithm (30 iterations with the accelerated version), after reconvolution with a Gaussian of 3 pixel FWHM. The overall centro-symmetric pattern of polarization observed at larger scale and resolution is recognisable here as well. The major deviation from this pattern at $\Delta \alpha$ and $\Delta \delta$ zero (i.e. east-west and north-south through the image of $\eta$ Car) is due to the spider of the telescope. This feature is hard to identify in the underplotted intensity map, but is clearly present at this position in the original (undeconvolved) data. Fig.~\ref{highpcomp} shows the vectorial difference between the results obtained with Lucy deconvolution and blind deconvolution. Special care has been taken to prevent the vector differences from adding when the position angles were separated by close to 2$\pi$. Some vectors at the border of the noise cut-off (e.g.
at $\Delta \alpha \approx -3.0''$) detected in the Lucy map but not in the other are not represented here, to avoid confusion with the differential vectors plotted. Major differences can be found at approximately 0.5$''$ from the center and correspond to differences in the deconvolution due to the wings of $\eta$ Carinae. At $\Delta \alpha$ = 0 and $\Delta \delta$ = 0 $\pm$ 0.3$''$, the large difference between the two reconstructed polarization maps is not meaningful, since these positions correspond to the spider of the telescope and the data are poorly restored here. \section{Conclusions} The process of data acquisition and reduction for polarization observations taken with the ESO ADONIS adaptive optics system has been described. Whilst certain precautions both in the observing method and in data reduction are required for imaging polarimetry and adaptive optics separately, several further problems arise from the combination of the two methods. \begin{itemize} \item Since the PSF varies in time, the wavefront sensor reference star should be as similar as possible to the target object in terms of brightness (since the achieved Strehl ratio strongly depends on the reference star magnitude) and spectral characteristics, to ensure similar AO correction. Since the PSF varies across the field of view (depending on the isoplanatic angle), it is also preferable to select the WFS reference star as close as possible to the target object. In practice this is rarely achieved when the target itself cannot be used as the WFS reference star. However a good estimate of the PSF provided by the reference star allows accurate deconvolution of the target without the introduction of artifacts arising from the differing PSFs. \item Imaging polarimetry requires good photometric conditions. By oversampling the cos(2$\theta$) polarization curve at more than three position angles of the polarization analyser, an averaging over the photometric conditions is achieved.
However, depending on the time period of the photometric variations, the averaging can result in zero polarization even for a substantially polarized source. In principle the use of an AO system should not compromise the photometric quality of the observations. \item The two polarization calibrations that are required impose the observation of an unpolarized source, to determine the instrumental polarization, and of a target with known polarization, to calibrate the angle of polarization. In the IR there is a very distinct lack of unpolarized and polarized standards. Stars with known very low optical polarization are suitable as IR unpolarized standards, since the Serkowski interstellar polarization law shows that the polarization is much less in the IR than in the optical. However, polarized standards typically owe their high polarization to a circumstellar origin, and the value in the IR cannot be predicted. Many of the reflection nebulae around Young Stellar Objects have variable polarization and are therefore not ideal polarized standards. \end{itemize} Several strategies have been described for flat fielding and sky subtraction, and it was shown how deviations from the expected cos(2$\theta$) curve can give an indication of the photometric conditions at the time of observation and allow any discrepant polarizer angles to be discarded, as was found for the ADONIS polarizer at PA 157.5$^\circ$. The instrumental polarization for ADONIS was determined to be 1.7\% over the J, H and K range. Polarization maps have been successfully produced for the reflection nebula around $\eta$ Carinae (the Homunculus). By using the PSFs determined from blind deconvolution at the same polarizer angles as the data, it has been shown that polarization structure can be revealed as close as two times the diffraction limit to a point source. The interpretation of the ADONIS AO polarization results on $\eta$ Carinae will be presented in a forthcoming paper (Walsh \& Ageorges \cite{walage}).
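The cos(2$\theta$) oversampling mentioned above reduces to a linear least-squares problem. A schematic fit (Python; the function names and the tiny normal-equations solver are illustrative, not part of any ADONIS pipeline):

```python
import math

def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    m = [row[:] + [vi] for row, vi in zip(m, v)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_polarization(angles_deg, intensities):
    """Least-squares fit of I(theta) = a + b cos 2theta + c sin 2theta;
    returns p = sqrt(b^2 + c^2)/a and PA = 0.5 atan2(c, b) in [0, 180) deg."""
    rows = [(1.0, math.cos(2 * math.radians(t)), math.sin(2 * math.radians(t)))
            for t in angles_deg]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, intensities)) for i in range(3)]
    a, b, c = solve3(ata, aty)
    return math.hypot(b, c) / a, 0.5 * math.degrees(math.atan2(c, b)) % 180.0
```

With more than three polarizer angles the system is overdetermined, and the residuals at individual angles flag discrepant measurements of the kind found at PA 157.5$^\circ$.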
\section{Acknowledgements} We would like to thank the ESO ADONIS team for their advice and help during the development of the data reduction strategy. S. M. Scarrott is also acknowledged for useful comments on imaging polarimetry. \newpage
\section{Introduction} Surface growth is ubiquitous in a plethora of phenomena, from epitaxial growth to superconductors to many applications in biology \citep{Vicsek,HHZ,Barabasi,Krug}. There is a family of ``standard'' stochastic growth equations which describe different classes of surface growth \citep{Barabasi,Krug}. Perhaps the most famous of them is the Kardar-Parisi-Zhang (KPZ) equation \citep{KPZ}, which describes fluctuations of the height of a growing surface resulting from random deposition, surface relaxation and nonlinearity. In the KPZ equation, and in many other growth models, the interface exhibits self-affine properties, and earlier work mostly dealt with dynamic scaling behavior of global measures such as the interface roughness \citep{Vicsek,HHZ,Barabasi,Krug}. Recently, the focus of research on the KPZ equation shifted toward studies of the complete one-point probability distribution of the interface height at a finite time. Several groups have achieved remarkable progress in finding exact representations for this probability distribution in $1+1$ dimensions for several classes of initial conditions, see Refs. \citep{Corwin,Quastel2015,HHT,Spohn2016} for reviews. In (much simpler) linear models, the finite-time one-point height distribution in $1+1$ dimension is well-defined for such well-known equations as the Edwards-Wilkinson (EW) equation \cite{EW1982,Krug} and the Mullins-Herring equation with conserved or non-conserved noise \cite{Krug,Mullins}. What happens in higher dimensions and/or for other surface growth models? Here we mostly address this and related questions for a class of prototypical linear stochastic growth models of the type \cite{Krug}: \begin{equation} \label{eq:generalized_EW} \partial_{t}h=-\left(-\nu\nabla^{2}\right)^{m}h+\sqrt{D}\,\eta\left(\vect{x},t\right).
\end{equation} where $h(\vect{x},t)$ is the height of the interface growing on an infinite $d$-dimensional substrate, $\nu$ is the diffusivity, $m=1,2,\dots$ is a positive integer, and $D$ is the noise magnitude. The term $\eta\left(\vect{x},t\right)$ describes a Gaussian noise with the correlation function \begin{equation} \label{eq:noise} \!\!\!\!\left\langle \eta\left(\vect{x},t\right)\eta\left(\vect{x}',t'\right)\right\rangle =\left(-\nabla^{2}\right)^{\alpha}\delta\left(\vect{x}-\vect{x}'\right)\delta\left(t-t'\right). \end{equation} For the non-conserved noise, $\alpha=0$, $\eta$ is a white noise both in space and in time. For the conserved noise, $\alpha=1$, $\eta$ can be written as \begin{equation} \label{eq:noise_divergence} \eta\left(\vect{x},t\right)=\nabla\cdot\vect{\xi}\left(\vect{x},t\right), \end{equation} where $\vect{\xi}$ is a white noise: \begin{equation} \left\langle \xi_{i}\left(\vect{x},t\right)\xi_{j}\left(\vect{x}',t'\right)\right\rangle =\delta_{ij}\delta\left(\vect{x}-\vect{x}'\right)\delta\left(t-t'\right). \end{equation} The systems described by Eq.~(\ref{eq:generalized_EW}) differ by their relaxation mechanism (diffusion, surface diffusion, \textit{etc}.), character of noise (non-conserved or conserved), and the dimension of space. For concreteness, we will assume a flat initial condition, $h\left(\vect{x},t=0\right)=0$. Because of the translational invariance of the substrate, we can study the probability distribution ${\mathcal{P}}\left(h_{0},t\right)$ of observing $h(\vect{x},t)=h_0$ at a finite time $t$ at \emph{any} point: for example, at $\vect{x}=0$. In Sec. II we will find the critical dimension, at or above which the variance of ${\mathcal{P}}\left(h_{0},t\right)$ is infinite, and so ${\mathcal{P}}\left(h_{0},t\right)$ is ill-defined unless the model is regularized at small scales. For example, for the non-conserved EW equation \citep{EW1982}, where $m=1$ and $\alpha=0$, the variance is infinite at $d\geq 2$.
For the conserved Mullins-Herring equation ($m=2,\,\alpha=1$) \cite{Krug,Mullins} the variance of ${\mathcal{P}}\left(h_{0},t\right)$ is also infinite at $d\geq 2$, whereas for the conserved EW equation ($m=\alpha=1$) it is infinite in \emph{all} physical dimensions. The divergence of the finite-time variance in this class of models has the character of an ultraviolet (UV) catastrophe. When encountering divergences like this, one usually resorts to a microscopic cutoff for regularization \cite{Krug}. A small price to pay for such a regularization is a partial loss of universality, as the variance now explicitly depends on the microscopic cutoff, which is different for different models belonging to the same universality class. The difference only affects the \emph{amplitude} of the power-law dependence of the variance on time. Still, one can think of a simple and robust alternative that does not require a small-scale cutoff. Here we suggest characterizing local height fluctuations by the probability distribution ${\mathcal P}[\bar{h}(t)]$ of \emph{local average height} at time $t$, defined by averaging the surface height $h(\vect{x},t)$ over a small but macroscopic $d$-dimensional domain $\Omega$ of volume $v$: \begin{equation} \label{eq:hbar_general_dim} \bar{h}\left(t\right)=\frac{1}{v}\int_{\Omega} h\left(\vect{x},t\right)d\vect{x} . \end{equation} For models~(\ref{eq:generalized_EW}) the $\bar{h}$-distribution is well-defined in arbitrary dimension. An additional advantage of this local measure is that, at fixed time, it exhibits a crossover from a time-independent (equilibrium or steady-state) asymptote, obtained for very small $v$, to a far-from-equilibrium, time-dependent asymptote for sufficiently large $v$. The local average height (\ref{eq:hbar_general_dim}) has been previously used for studying local roughness distributions \citep{Halpin2014,Almeida2014,Reis2015, Carrasco2016}.
To our knowledge, the probability distribution of $\bar{h}(t)$ itself has not been previously considered, unlike the distribution of the \emph{global} average height, which has been studied for the KPZ equation \citep{Lee2006,Kelling2011}. An important additional goal of this work is to show how one can use the weak-noise theory \cite{Fogedby1998,Fogedby1999,Fogedby2009,KK2007,KK2008,MV2016,MKV_PRL2016,KMS2016,Janas2016} to determine both the probability distribution of $\bar{h}$ and the ``optimal path'' of the interface height: the most likely time history of $h(\vect{x},t)$ conditioned on reaching a specified value of $\bar{h}$ at a specified time. As a simple test case, in Sec. III we will calculate the variance of $\bar{h}(t)$ for the one-dimensional stochastic EW equation with conserved noise, \begin{equation} \label{eq:EW} \partial_{t}h=\nu\partial_{x}^{2}h+\sqrt{D}\,\partial_{x}\xi\left(x,t\right),\quad |x|<\infty, \end{equation} which describes surface relaxation in the absence of deposition and desorption \citep{Kim1999}. This is a particular case of Eq.~(\ref{eq:generalized_EW}) with $m=\alpha=d=1$, and we choose it because it is the simplest one for which the finite-time one-point distribution is ill-defined. In Sec. IV we will develop the weak-noise theory (WNT) for Eq.~(\ref{eq:generalized_EW}), and solve the WNT equations explicitly for Eq.~(\ref{eq:EW}). Sec. V deals with a closely related problem of the finite-time height-difference distribution for the \emph{non-conserved} EW equation in $1+1$ dimension. In Sec. VI we go beyond the linear models and discuss the properties of the finite-time one-point height statistics for the KPZ equation in $2+1$ dimensions. We summarize our results in Sec. VII.
\section{One-point height distribution and local average height distribution} Consider the height-height correlation function \begin{equation} C\left(\vect{x}_{1},\vect{x}_{2},t\right)=\left\langle h\left(\vect{x}_{1},t\right)h\left(\vect{x}_{2},t\right)\right\rangle. \label{corr} \end{equation} When $h\left(\vect{x}_{1},t\right)$ is governed by Eq.~(\ref{eq:generalized_EW}), a standard calculation (that we present, for completeness, in Appendix \ref{appendix:single_point}) yields \begin{eqnarray} \label{eq:correlation_function} C\left(\vect{x}_{1},\vect{x}_{2},t\right) &=& \frac{D}{\left(2\pi\right)^{d}}\int d\vect{k} \, e^{i\vect{k}\cdot\left(\vect{x}_{1}-\vect{x}_{2}\right)} \nonumber\\ &\times& \frac{k^{2\alpha-2m}}{2\nu^{m}}\left[1-e^{-2\left(k^{2}\nu\right)^{m}t}\right]. \end{eqnarray} As one can see, $C\left(\vect{x_{1}}\neq \vect{x_{2}},t>0\right)$ is well-defined because the integral over $\mathbf{k}$ converges. To show it, we can set $\vect{x_{1}} = 0$, because the system is homogeneous in space. The correlator $C\left(0,\vect{x},t\right)$ can depend only on the distance $x = \left|\vect{x}\right|$, because the system is isotropic. Correspondingly, it is convenient to evaluate the integral (\ref{eq:correlation_function}) in the ($d$-dimensional) spherical coordinates. Integrations over all the angles give a function of $x$, and only a single integral, over $k= \left|\vect{k}\right|$, remains. At $\vect{x}\neq 0$ this integral converges at $k\to \infty$ due to the oscillatory term $e^{i\vect{k}\cdot\vect{x}}$ in the original integrand. The convergence at $k=0$ is guaranteed, at finite $t$, by the time-dependent factor inside the square brackets under the integral. Now we can address the finite-time interface-height variance at a point $\vect{x}$. This quantity is immediately obtained from $C\left(\vect{x}_{1},\vect{x}_{2},t\right)$: \begin{equation} \label{eq:variance_def} \!\! 
\text{Var}\left[h\left(\vect{x},t\right)\right]= \left\langle h\left(\vect{x},t\right)^{2}\right\rangle =C\left(\vect{x},\vect{x},t\right)=C\left(0,0,t\right); \end{equation} it is independent of $\vect{x}$. It is easily seen from Eq.~(\ref{eq:correlation_function}) that, at finite $t$, the variance (\ref{eq:variance_def}) is finite if and only if the space dimension is smaller than the critical dimension: \begin{equation} \label{eq:critical_dimension} d<d_{c}=2m-2\alpha. \end{equation} For $d \ge d_c$, the variance (\ref{eq:variance_def}) diverges at $k\to \infty$. This divergence -- a UV catastrophe -- is present both in infinite and in finite systems. For example, for the EW equation with non-conserved noise ($m=1, \alpha=0$) the critical dimension (\ref{eq:critical_dimension}) is $d_c=2$, while for conserved noise ($\alpha=1$) it is $d_c=0$. For the Mullins-Herring equation with non-conserved noise ($m=2, \alpha=0$), the critical dimension is $d_c=4$, while for conserved noise ($\alpha=1$) it is $d_c=2$. The UV catastrophe at $d \ge d_c$ is not unique to the finite-time one-point height distribution. In finite systems, describable by Eqs.~(\ref{eq:generalized_EW}) and (\ref{eq:noise}) in the absence of a small-scale regularization, the finite-time interface width \begin{equation} \label{eq:widthglobal} W=\left\langle \frac{1}{V}\int d\vect{x_{1}}\,\left[h\left(\vect{x_{1}},t\right)-\frac{1}{V}\int d\vect{x_{2}}h\left(\vect{x_{2}},t\right)\right]^{2}\right\rangle ^{1/2} \end{equation} (where the spatial integration is over the entire system, and $V$ is the system's volume) also diverges, at $d \ge d_c$, due to the divergence of the term $\left\langle h\left(\vect{x_{1}},t\right)^{2}\right\rangle$. In this context the UV catastrophe is well known to experts. For example, for the non-conserved noise, when $d_c=2m$, it is evident from Eq.~(3.28) of the review \citep{Krug}. 
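The criterion (\ref{eq:critical_dimension}) can be read off by power counting in the large-$k$ tail of Eq.~(\ref{eq:correlation_function}) at coinciding points: in $d$-dimensional spherical coordinates,

```latex
\text{Var}\left[h\right]\propto\int^{\infty}dk\,k^{d-1}\,
\frac{k^{2\alpha-2m}}{2\nu^{m}}\left[1-e^{-2\left(k^{2}\nu\right)^{m}t}\right],
```

and, since the factor in the square brackets tends to $1$ as $k\to\infty$ at any finite $t$, convergence at the upper limit requires $d-1+2\alpha-2m<-1$, that is $d<2m-2\alpha$.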
In practice, the UV catastrophe is usually avoided by introducing a small-scale cutoff such as the lattice constant, a finite correlation length of the noise, \textit{etc}. This leads to a partial loss of universality, as explained in the Introduction. A finite correlation length can also cause difficulties in attempts at an exact solution. As a possible alternative that keeps the noise white in space, we suggest characterizing local fluctuations of the interface by the distribution of the local average height (\ref{eq:hbar_general_dim}). Let us assume, for concreteness, that the spatial average in Eq.~(\ref{eq:hbar_general_dim}) is performed over a $d$-dimensional hypercube $\left[-L,L\right]^{d}$. Since Eqs.~(\ref{eq:generalized_EW}) and (\ref{eq:hbar_general_dim}) are linear in $h$, the fluctuations of $\bar{h}$ are Gaussian, and it suffices to evaluate their variance. See Appendix \ref{appendix:local_average_height} for a brief derivation. The result is \begin{eqnarray} \label{eq:variance_exact} \text{Var}\left[\bar{h}\left(t\right)\right] &=& \frac{D}{\left(2\pi\right)^{d}L^{2d}}\int d\vect{k}\,\prod_{i=1}^{d}\frac{\sin^{2}\left(k_{i}L\right)}{k_{i}^{2}} \nonumber\\ & \times & \frac{k^{2\alpha-2m}}{2\nu^{m}}\left[1-e^{-2\left(k^{2}\nu\right)^{m}t}\right]. \end{eqnarray} It is straightforward to show that the integral in Eq.~(\ref{eq:variance_exact}) converges in all dimensions, so for the models~(\ref{eq:generalized_EW}) and (\ref{eq:noise}) this quantity is well-defined. \section{EW equation with conserved noise} \label{sec:EW_fourier} As a simple illustration, we consider Eq.~(\ref{eq:EW}).
Formally, it can be viewed as a particular case of the Langevin equation \begin{equation} \partial_{t}\rho=\nabla\cdot\left[{\mathcal D}\left(\rho\right)\nabla\rho+\sqrt{\sigma\left(\rho\right)} \, \xi\right] \end{equation} which provides a coarse-grained description of a family of diffusive lattice gases with density $\rho\left(x,t\right)$, diffusivity ${\mathcal D}\left(\rho\right)$ and mobility $\sigma\left(\rho\right)$ \cite{Spohn1991}. In this particular case, the diffusivity and mobility of the ``lattice gas'' are both density-independent. For diffusive lattice gases, the equilibrium can be described in terms of a free energy density $\mathcal{F}\left(\rho\right)$ which satisfies the fluctuation-dissipation relation \citep{Spohn1991, Derrida2007} \begin{equation} \mathcal{F}^{\prime\prime}\left(\rho\right)=\frac{2{\mathcal D}\left(\rho\right)}{\sigma\left(\rho\right)}. \end{equation} For Eq.~(\ref{eq:EW}) this gives \begin{equation} \label{eq:free_energy} \mathcal{F}\left(h\right)=\frac{\nu h^{2}}{D}. \end{equation} For $d=1$, and in the limit of $t\to\infty$, Eq.~(\ref{eq:correlation_function}) yields: \begin{equation} C(x_1,x_2,t\to \infty) =\frac{D}{2 \nu}\delta\left(x_{1}-x_{2}\right). \label{CEWcorrelator} \end{equation} Indeed, the interface height at thermal equilibrium is delta-correlated, which is consistent with the UV catastrophe of the one-point height variance. Equation~(\ref{CEWcorrelator}) also directly follows from Eq.~(\ref{eq:free_energy}) \cite{Spohn1991}. According to Eq.~(\ref{eq:critical_dimension}), the critical dimension for this model is zero, so the finite-time one-point height distribution of this model is ill-defined in all physical dimensions. Let us determine the distribution of the local average height (\ref{eq:hbar_general_dim}). Rescale time $t$ by the observation time $T$, the spatial coordinate $x$ by $\sqrt{\nu T}$ and the interface height $h$ by $\kappa=D^{1/2}\nu^{-3/4}T^{-1/4}$.
The resulting rescaled conserved EW equation is parameter-free: \begin{equation} \label{eq:langevin_dimensionless} \partial_{t}h=\partial_{x}^{2}h+\partial_{x}\xi\left(x,t\right). \end{equation} The local average height (\ref{eq:hbar_general_dim}) at $t=1$, in the rescaled variables, is \begin{equation} \label{eq:qbar_def} \bar{h}\left(t=1\right)=\frac{1}{2\ell}\int_{-\ell}^{\ell}h\left(x,1\right)dx, \end{equation} where $\ell=L/\sqrt{\nu T}$. Equation~(\ref{eq:variance_exact}) yields \begin{equation} \label{eq:variance_dimnesionless_exact} \text{Var}\left[\bar{h}\left(t=1\right)\right]=\frac{1}{4\ell^{2}}\left[\sqrt{\frac{2}{\pi}}\left(1-e^{-\frac{\ell^{2}}{2}}\right)+\ell \, \text{erfc}\left(\frac{\ell}{\sqrt{2}}\right)\right], \end{equation} where $\text{erfc} \,z = 1-\text{erf}\,z = (2/\sqrt{\pi}) \int_z^{\infty} e^{-\zeta^2}\,d\zeta$. In the physical units, \begin{eqnarray} \label{eq:variance_physical_units} \text{Var}\left[\bar{h}\left(t=T\right)\right] &=& \frac{D}{4L^{2}}\sqrt{\frac{T}{\nu}}\left[\sqrt{\frac{2}{\pi}}\left(1-e^{-\frac{L^{2}}{2\nu T}}\right)\right. \nonumber\\ &+&\left.\frac{L}{\sqrt{\nu T}} \, \text{erfc}\left(\frac{L}{\sqrt{2\nu T}}\right)\right]. \end{eqnarray} Because of the term including $\text{erfc}$, these expressions diverge at $\ell\to 0$, or $L\to 0$, as expected. We now examine the long- and short-time behaviors of the variance. In the long-time limit, $\ell \ll 1$, the leading-order asymptote is \begin{equation} \label{eq:variance_small_l_limit} \text{Var}\left[\bar{h}\left(t=1\right)\right]\simeq \frac{1}{4\ell}. \end{equation} Correspondingly, the local average height distribution is \begin{equation} \label{eq:distribution_small_l_limit} P\left[\bar{h}\left(t=1\right)\right]\simeq\sqrt{\frac{2\ell}{\pi}} \, e^{-2\ell\bar{h}\left(1\right)^{2}}.
\end{equation} In the physical variables, the distribution is \begin{equation}\label{eq:distribtion_small_l_phys} {\mathcal P}\left[\bar{h}\left(t=T\right)\right]\simeq \left(\frac{2\nu L}{\pi D}\right)^{1/2} \, e^{-\frac{2\nu L \bar{h}^2}{D}}; \end{equation} it is independent of $T$, as expected for an equilibrium distribution. Furthermore, if we assume that \begin{equation} h\left(x,t=T\right)\simeq\begin{cases} \bar{h}, & \left|x\right|<L,\\ 0, & \left|x\right|>L, \end{cases} \label{table} \end{equation} then the term $$ \frac{2 \nu L \bar{h}\left(t=T\right)^{2}}{D} = \frac{\nu \bar{h}\left(t=T\right)^{2}}{D} \times 2L $$ describes the increase of the free energy of the interface compared with the flat state $h=0$, see Eq.~(\ref{eq:free_energy}). The optimal interface history, which we will determine shortly, fully supports this interpretation. Accounting for the subleading correction to Eq.~(\ref{eq:variance_small_l_limit}), we obtain \begin{equation} \text{Var}\left[\bar{h}\left(1\right)\right]\simeq\frac{1}{4\ell}\left(1-\frac{\ell}{\sqrt{2\pi}}\right). \end{equation} The correction is negative, so the probability of observing the same $\bar{h}(t=1)$ is smaller than in equilibrium, as expected on physical grounds. In the short-time limit, $\ell \gg 1$, Eq.~(\ref{eq:variance_dimnesionless_exact}) becomes \begin{equation} \label{eq:variance_large_l_limit} \text{Var}\left[\bar{h}\left(t=1\right)\right]\simeq\frac{1}{2\sqrt{2\pi} \, \ell^{2}}. \end{equation} The probability of observing a given $\bar{h}$ at short times is strongly suppressed, as expected.
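Both limits are easy to verify numerically from Eq.~(\ref{eq:variance_dimnesionless_exact}). A short check (Python, standard library only; `var_hbar` is our name for the right-hand side of that equation):

```python
import math

def var_hbar(ell):
    """Rescaled variance of the local average height at t = 1 for the
    conserved EW equation, Eq. (eq:variance_dimnesionless_exact)."""
    bracket = (math.sqrt(2.0 / math.pi) * (1.0 - math.exp(-ell * ell / 2.0))
               + ell * math.erfc(ell / math.sqrt(2.0)))
    return bracket / (4.0 * ell * ell)
```

For $\ell\ll 1$ the product $4\ell\,\text{Var}$ approaches $1$, and for $\ell\gg 1$ the product $2\sqrt{2\pi}\,\ell^{2}\,\text{Var}$ approaches $1$, reproducing the long- and short-time asymptotes.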
To better understand Eq.~(\ref{eq:variance_large_l_limit}), let us calculate the variance of the total rescaled ``mass'', $2\ell\bar{h}\left(1\right)$, which enters the interval $\left[-\ell,\ell\right]$ for $\bar{h}>0$, or exits this interval for $\bar{h}<0$: \begin{equation} \label{varnoell} \text{Var}\left[2\ell\bar{h}\left(t=1\right)\right] = 4\ell^{2}\text{Var}\left[\bar{h}\left(t=1\right)\right]\simeq \sqrt{\frac{2}{\pi}}. \end{equation} This quantity does not depend on $\ell$, and the reason for this will become clear when we determine the optimal interface history in this limit. \section{Optimal interface history} \subsection{General} Now we return to the more general Eq.~(\ref{eq:generalized_EW}) and show how one can use the weak-noise theory (WNT) of stochastic surface growth \citep{Fogedby1998,Fogedby1999,Fogedby2009,KK2007,KK2008,MV2016,MKV_PRL2016,KMS2016,Janas2016} to determine the optimal history $h\left(\vect{x},t\right)$ of the interface profile, conditioned on a given value of $\bar{h}$ at a specified time $t=T$. For nonlinear evolution equations, the leading-order calculations of the WNT enable one to evaluate the distribution of $\bar{h}(T)$ only up to pre-exponential factors. For linear equations, like Eq.~(\ref{eq:generalized_EW}), the expected distributions are Gaussian. Therefore, the pre-exponential factors can be found from normalization, and the WNT yields exact results. The WNT equations can be obtained via a saddle-point evaluation of the action integral for Eq.~(\ref{eq:generalized_EW}), see Appendix \ref{appendix:WNT}.
They can be written as Hamilton's equations for the optimal path $h(x,t)$ and the conjugate ``momentum density'' $p(x,t)$: \begin{eqnarray} \partial_{t}h &=& \delta H/\delta p=-\left(-\nu\nabla^{2}\right)^{m}h+\left(-\nabla^{2}\right)^{\alpha}p, \label{eq:q_general_dim} \\ \partial_{t}p &=& -\delta H/\delta h=\left(-\nu\nabla^{2}\right)^{m}p, \label{eq:p_general_dim} \end{eqnarray} where the Hamiltonian is \begin{equation}\label{H1general} \!\!H=\int_{-\infty}^{\infty}\!\!\!\!dx\,\mathcal{H},\quad\mathcal{H}= -h\left(-\nu\nabla^{2}\right)^{m}p+\frac{1}{2}\left(\nabla^{\alpha}p\right)^{2}. \end{equation} The initial condition for the flat interface is \begin{equation} h\left(\vect{x},t=0\right)=0. \end{equation} An additional condition, at $t=T$, stems from the integral constraint (\ref{eq:hbar_general_dim}). As shown in Appendix \ref{appendix:WNT}, it has the form \begin{equation} \label{eq:initial_condition_p_general} p\left(\vect{x},t=T\right)=\begin{cases} \Lambda, & \vect{x}\in\Omega ,\\ 0, & \vect{x}\notin\Omega , \end{cases} \end{equation} where $\Lambda$ is an a priori unknown Lagrange multiplier whose value is ultimately set by Eq.~(\ref{eq:hbar_general_dim}). Once Eqs.~(\ref{eq:q_general_dim}) and (\ref{eq:p_general_dim}) are solved, one can evaluate the probability $\mathcal{P}$ of observing a specified value of $\bar{h}(T)$ from the relation $-\ln{\mathcal{P}}\simeq S/D$, where \begin{equation} S=\frac{1}{2}\int_{0}^{T}dt\int d\vect{x}\,\left(\nabla^{\alpha}p\right)^{2} \end{equation} is the action evaluated along the optimal path. For the KPZ equation in one dimension (which includes, as a simple limit, the non-conserved EW equation), the WNT was employed in Refs. \cite{KK2007,MKV_PRL2016,KMS2016,Janas2016} for determining the finite-time one-point height distribution for different initial conditions. The WNT was also used for the stochastic Mullins-Herring equation with conserved and non-conserved noise \citep{MV2016}.
We now proceed to solve the WNT equations for the one-dimensional conserved EW equation when the process is conditioned on a given local average height. As the conserved EW equation can be formally viewed as a lattice gas, here the WNT equations represent a particular case of the macroscopic fluctuation theory of lattice gases \citep{Bertini2015}. \subsection{Conserved EW Equation in 1+1 Dimensions} Upon the rescaling of $x$, $t$ and $h$ leading to Eq.~(\ref{eq:langevin_dimensionless}), Eqs.~(\ref{eq:q_general_dim}) and (\ref{eq:p_general_dim}) become \begin{eqnarray} \partial_{t}h &=& \delta H/\delta p \; = \partial_{x}^{2}h - \partial_{x}^{2}p, \label{eq:q} \\ \partial_{t}p &=& \! -\delta H/\delta h= -\partial_{x}^{2}p,\label{eq:p} \end{eqnarray} where $p$ is rescaled by $\nu\kappa=D^{1/2}\nu^{1/4}T^{-1/4}$. The rescaled Hamiltonian is \begin{equation}\label{H1} H=\int_{-\infty}^{\infty} \!\!\! dx\,\mathcal{H},\quad\mathcal{H}=\partial_{x}p\left(-\partial_{x}h+\frac{1}{2}\partial_{x}p\right), \end{equation} while the rescaled action, $s=S/D$, is \begin{equation} \label{action} s=\int_{0}^{1} \! dt\int_{-\infty}^{\infty} \!\!\! dx\,\left(p\partial_{t}h-\mathcal{H}\right)=\frac{1}{2}\int_{0}^{1} \! dt\int_{-\infty}^{\infty} \!\!\! dx\,\left(\partial_{x}p\right)^{2} . \end{equation} Integrating by parts and using Eq.~(\ref{eq:p}), we obtain a convenient expression for $s$ which does not involve integration over time: \begin{eqnarray} \label{eq:action_space_integral} s \!\!&=& \! -\frac{1}{2}\int_{0}^{1} \!\! dt\int_{-\infty}^{\infty} \!\!\!\! dx\,p\,\partial_{x}^{2}p=\frac{1}{2}\int_{-\infty}^{\infty} \!\!\!\! dx\int_{0}^{1} \!\! dt\,p\,\partial_{t}p \nonumber\\ &=& \! \frac{1}{4} \! \int_{-\infty}^{\infty} \!\!\!\!\! dx \! \int_{0}^{1} \!\!\! dt\,\partial_{t}\left(p^{2}\right) = \frac{1}{4} \! \int_{-\infty}^{\infty} \!\!\!\!\! 
dx\left[p^{2}\left(x,1\right)-p^{2}\left(x,0\right)\right].\nonumber\\ \end{eqnarray} As follows from Eq.~(\ref{H1}), there are two invariant zero-energy manifolds. The manifold $\partial_x p=0$ corresponds to the deterministic EW equation $\partial_t h = \partial_x^2 h$. The second zero-energy manifold, \begin{equation}\label{eqmanifold} p = 2h , \end{equation} describes thermal equilibrium. Indeed, Eqs.~(\ref{eq:q}) and (\ref{eqmanifold}) yield the time-reversed deterministic EW equation \begin{equation}\label{timereversed} \partial_t h = -\partial_x^2 h. \end{equation} Therefore, an activation trajectory at equilibrium coincides with the time-reversed relaxation trajectory, as is to be expected \cite{Onsager}. In the limit $\ell \ll 1$, the system has sufficient time to explore equilibrium fluctuations in order to reach the specified local average height. In this limit the action must be equal to the difference between the free energies of the final and initial states. Indeed, evaluating the action, using Eq.~(\ref{eq:action_space_integral}), on the equilibrium manifold~(\ref{eqmanifold}), we obtain \begin{equation} \label{eq:action_thermal_equilibrium} s=\int_{-\infty}^{\infty} \!\!\! dx \, h^{2}\left(x,t=1\right), \end{equation} which is the (rescaled) free-energy cost (\ref{eq:free_energy}) of the height profile $h\left(x,t=1\right)$. This cost must be minimized with respect to all possible height profiles $h\left(x,t=1\right)$ with local average height $\bar{h}$ (\ref{eq:qbar_def}). As a result, the minimum is achieved on a discontinuous height profile: \begin{equation} \label{eq:optimal_qfinal_in_equilibrium} h\left(x,t=1\right)=\begin{cases} \bar{h}, & \left|x\right|<\ell,\\ 0, & \left|x\right|>\ell, \end{cases} \end{equation} and so $s=2\ell\bar{h}^{2}$, in agreement with Eq.~(\ref{eq:distribution_small_l_limit}).
For finite $\ell$ the system does not live on the equilibrium manifold, and we must solve Eqs.~(\ref{eq:q}) and (\ref{eq:p}) explicitly, with boundary conditions $h\left(x,t=0\right)=0$ and \begin{equation} p\left(x,t=1\right)=\begin{cases} \lambda, & \left|x\right|<\ell,\\ 0, & \left|x\right|>\ell, \end{cases} \label{p1} \end{equation} with an a priori unknown $\lambda$. Solving Eq.~(\ref{eq:p}) backward in time with the initial condition (\ref{p1}), we obtain \begin{equation} \label{eq:sol_p} \!\! p\left(x,t\right) \! = \! \frac{\lambda}{2} \! \left[\text{erf}\left( \! \frac{x+\ell}{\sqrt{4\left(1-t\right)}}\right) \! -\text{erf}\left( \! \frac{x-\ell}{\sqrt{4\left(1-t\right)}}\right) \! \right] \! . \end{equation} Next, we introduce the auxiliary field \begin{equation} \label{eq:r_def} r\left(x,t\right)\equiv h\left(x,t\right) - \frac{1}{2}p\left(x,t\right). \end{equation} Using Eqs.~(\ref{eq:q})--(\ref{eq:p}), one can see that this field satisfies the diffusion equation \begin{equation} \label{eq:r} \partial_{t}r=\partial_{x}^{2}r. \end{equation} Using the flat initial condition for $h$ and evaluating $p\left(x,t=0\right)$ from Eq.~(\ref{eq:sol_p}), we can solve Eq.~(\ref{eq:r}): \begin{equation} \label{eq:sol_r} \!\!\! r \! \left(x,t\right) \! = \! - \frac{\lambda}{4} \! \left[\text{erf}\left( \! \frac{x+\ell}{\sqrt{4\left(1+t\right)}}\right) \!\! - \! \text{erf}\left( \! \frac{x-\ell}{\sqrt{4\left(1+t\right)}}\right) \! \right] \! . \end{equation} Plugging Eqs.~(\ref{eq:sol_p}) and (\ref{eq:sol_r}) into Eq.~(\ref{eq:r_def}), we obtain the optimal interface height profile: \begin{eqnarray} \label{eq:sol_q} h\left(x,t\right) \! &=& \! - \frac{\lambda}{4}\left[\text{erf}\left(\frac{x-\ell}{\sqrt{4\left(1-t\right)}}\right)- \text{erf}\left(\frac{x+\ell}{\sqrt{4\left(1-t\right)}}\right)\right.\nonumber\\ &+& \! \left.\text{erf}\left(\frac{x+\ell}{\sqrt{4\left(1+t\right)}}\right)- \text{erf}\left(\frac{x-\ell}{\sqrt{4\left(1+t\right)}}\right)\right]. 
\end{eqnarray} The Lagrange multiplier $\lambda$ is then found from Eq.~(\ref{eq:qbar_def}): \begin{equation} \label{sol_lambda} \lambda=\frac{2\ell\bar{h}}{\ell+\sqrt{2/\pi} \, \left(1-e^{-\ell^{2}/2}\right)-\ell \, \text{erf}\left(\ell/\sqrt{2}\right)} \, . \end{equation} \begin{figure} \includegraphics[width=0.4\textwidth,clip=]{fig1a.eps} \includegraphics[width=0.4\textwidth,clip=]{fig1b.eps} \includegraphics[width=0.4\textwidth,clip=]{fig1c.eps} \caption{The optimal path of the interface, conditioned on reaching a rescaled local average height $\bar{h}=1$ at $t=T$. Shown is the height $h(x,t)$, rescaled by $\kappa=D^{1/2}\nu^{-3/4}T^{-1/4}$, as a function of the rescaled coordinate $x/\sqrt{\nu T}$, for $\ell = 10$ (a), $\ell = 1$ (b) and $\ell = 0.1$ (c). The initial ($t=0$) and final ($t=T$) profiles are marked by (1) and (2), respectively. On panels (a) and (b) the height is plotted at times $t=0$, $T/4$, $T/2$, $3T/4$, and $T$. On panel (c) the times are $t=0$, $0.9T$, $0.97T$, $0.99T$, and $T$.} \label{fig:optimal_history} \end{figure} Figure \ref{fig:optimal_history} shows the optimal paths $h(x,t)$ for $\ell=10$, $1$ and $0.1$. For large $\ell$, or short times, the optimal interface dynamics are localized at the boundaries $x=\pm \ell$. This is the reason why $\ell$ drops out of Eq.~(\ref{varnoell}). For $\ell \to 0$ the optimal profile at $t=T$ approaches the equilibrium one, described by Eq.~(\ref{eq:optimal_qfinal_in_equilibrium}). Furthermore, the profile stays flat in this case, $h(x,t)\simeq 0$, for most of the dynamics, and the optimal fluctuation only develops towards the end. Expanding Eqs.~(\ref{eq:sol_q}) and (\ref{sol_lambda}) for very small $\ell$, we find that the final interface profile is approximated by \begin{equation} \label{eq:final_profile_approx} \! h\left(x,1\right)\simeq\bar{h}\left[\theta\left(x+\ell\right)-\theta\left(x-\ell\right)- \! \frac{\ell}{\sqrt{2\pi}}\exp\left( \! -\frac{x^{2}}{8}\right) \! \right].
\end{equation} The first two terms in Eq.~(\ref{eq:final_profile_approx}) are the thermal equilibrium terms, while the last term is the leading non-equilibrium correction. We now turn to the evaluation of the rescaled action (\ref{action}). Using Eqs.~(\ref{eq:sol_p}) and (\ref{sol_lambda}) in Eq.~(\ref{eq:action_space_integral}), and performing the integral over $x$ (see Appendix \ref{appendix:erf_integral}), we obtain \begin{equation} \label{eq:S_sol} s=\frac{2\ell^{2}\bar{h}^{2}}{\ell\,\text{erfc}\left(\ell/\sqrt{2}\right)+\sqrt{2/\pi}\,\left(1-e^{-\ell^{2}/2}\right)} \, . \end{equation} As expected, this corresponds to a Gaussian distribution ${\mathcal P}(\bar{h})$ whose variance, $\bar{h}^2/\left(2s\right)$, coincides with the exact result (\ref{eq:variance_dimnesionless_exact}). In the physical units, the action $S=sD$ coincides with the result from Eq.~(\ref{eq:variance_physical_units}). A graph of $s/\bar{h}^2$ as a function of $\ell$ is plotted in Fig.~\ref{fig:action}. \begin{figure} \includegraphics[width=0.42\textwidth,clip=]{fig2.eps} \caption{The rescaled action divided by the local average height squared (solid line), alongside the small-$\ell$ asymptote (dashed) and the large-$\ell$ asymptote (dot-dashed).} \label{fig:action} \end{figure} It is interesting that, similarly to other non-equilibrium problems that are exactly soluble in the framework of a weak-noise theory \cite{KMSvoid}, one can define the ``non-equilibrium free energy'' \begin{equation} F\left[p\left(x\right)\right]=\frac{1}{4}\int_{-\infty}^{\infty}\!\!\!dx\,p^{2}\left(x\right), \end{equation} so that the dimensionless action (\ref{eq:action_space_integral}) is the difference between the values of $F$ at $t=1$ and $t=0$. This simplification, however, does not spare us the need to solve the dynamical problem in order to evaluate the momentum density $p\left(x,t\right)$ at $t=1$ and $t=0$.
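A useful consistency check---ours, not part of the paper---is that, with $\lambda$ given by Eq.~(\ref{sol_lambda}), the $t\to 1$ limit of the optimal profile (\ref{eq:sol_q}) indeed has local average $\bar{h}$ over $\left[-\ell,\ell\right]$. The Python sketch below encodes the two closed-form expressions (the discretization and function names are ours):

```python
# Consistency check (ours): with lambda from Eq. (sol_lambda), the t -> 1
# limit of the optimal profile, Eq. (sol_q), must have local average hbar
# over [-l, l]. At t = 1 the erf((x +- l)/sqrt(4(1-t))) terms become sign
# functions, giving a step of height lambda/2 inside the interval.
from math import erf, erfc, exp, sqrt, pi

def lam(ell, hbar):
    """Lagrange multiplier, Eq. (sol_lambda)."""
    denom = ell*erfc(ell/sqrt(2.0)) + sqrt(2.0/pi)*(1.0 - exp(-ell**2/2.0))
    return 2.0*ell*hbar/denom

def h_final(x, ell, hbar):
    """Optimal profile at t = 1, i.e. the t -> 1 limit of Eq. (sol_q)."""
    step = 2.0 if abs(x) < ell else 0.0          # erf((x +- l)/0+) -> sgn
    tail = erf((x + ell)/sqrt(8.0)) - erf((x - ell)/sqrt(8.0))
    return 0.25*lam(ell, hbar)*(step - tail)

def local_average(ell, hbar, n=20000):
    """Midpoint-rule estimate of (1/2l) * integral of h_final over [-l, l]."""
    dx = 2.0*ell/n
    return sum(h_final(-ell + (i + 0.5)*dx, ell, hbar)
               for i in range(n))*dx/(2.0*ell)

for ell in (0.1, 1.0, 5.0):
    assert abs(local_average(ell, hbar=1.0) - 1.0) < 1e-6
```

The constraint is satisfied for small, moderate and large $\ell$, which also cross-checks the denominators of Eqs.~(\ref{sol_lambda}) and (\ref{eq:S_sol}) against each other.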
\section{EW equation with non-conserved noise} \subsection{Distribution of the Height Difference} Now let us consider the nonconserved EW equation \citep{EW1982} \begin{equation} \label{eq:EW_nc} \partial_{t}h=\nu\partial_{x}^{2}h+\sqrt{D_0}\,\xi\left(x,t\right). \end{equation} Taking the spatial derivative of Eq.~(\ref{eq:EW_nc}) gives the equation \begin{equation} \label{eq:EW_f} \partial_{t}f=\nu\partial_{x}^{2}f+\sqrt{D_0}\, \partial_x \xi\left(x,t\right), \end{equation} where $f=\partial h/\partial x$. Eq.~(\ref{eq:EW_f}) is mathematically equivalent to Eq.~(\ref{eq:EW}). Furthermore, there is a simple connection between the local average of $f$ and the height difference of $h$: \begin{equation} \bar{f}\left(t\right)=\frac{1}{2L}\int_{-L}^{L}f\left(x,t\right)dx=\frac{1}{2L}\int_{-L}^{L}\frac{\partial h\left(x,t\right)}{\partial x}dx=\frac{\Delta}{2L}, \end{equation} where $\Delta=h\left(L,t\right)-h\left(-L,t\right)$. The probability distribution of $\Delta$ at a specified time $T$ is therefore immediately obtained from the distribution of $\bar{f}$; it is a Gaussian distribution whose variance is given by \begin{equation} \text{Var}\left[\Delta\left(T\right)\right]=4L^{2}\text{Var}\left[\bar{f}\left(T\right)\right]. \end{equation} Using Eq.~(\ref{eq:variance_physical_units}), we obtain: \begin{equation} \label{eq:Var_Delta_nonconserving} \text{Var}\!\left(\Delta\right)\!=\!D_{0}\sqrt{\frac{T}{\nu}}\!\left[\!\sqrt{\!\frac{2}{\pi}}\!\left(1\!-e^{-\frac{L^{2}}{2\nu T}}\right)\!+\!\frac{L}{\sqrt{\nu T}}\,\text{erfc}\!\left(\!\frac{L}{\sqrt{2\nu T}}\right)\!\right]\!. \end{equation} This result has been known for a long time, see Eqs.~(2.19) and (2.21) of Ref.~\cite{Natterman1992,Footnote2}. As shown below, the WNT gives an additional insight into the problem by providing us with the optimal path of the system conditioned on a specified height difference.
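The two limits of Eq.~(\ref{eq:Var_Delta_nonconserving}) are easy to check numerically: at long times the variance approaches the equilibrium equipartition value $D_{0}L/\nu$, independent of $T$, while at short times it grows as $\sqrt{T}$ and is independent of $L$. A brief Python sketch (ours, not the paper's; names are illustrative):

```python
# Check (ours) of the limits of Var(Delta), Eq. (Var_Delta_nonconserving):
# long times  (nu*T >> L^2): Var -> D0*L/nu (equilibrium, T-independent);
# short times (nu*T << L^2): Var -> D0*sqrt(2T/(pi*nu)) (L-independent).
from math import sqrt, exp, erfc, pi

def var_delta(D0, nu, L, T):
    """Height-difference variance in physical units."""
    ell = L/sqrt(nu*T)
    return D0*sqrt(T/nu)*(sqrt(2.0/pi)*(1.0 - exp(-ell**2/2.0))
                          + ell*erfc(ell/sqrt(2.0)))

D0, nu, L = 2.0, 0.5, 1.0
# Long-time limit: equilibrium equipartition result.
assert abs(var_delta(D0, nu, L, T=1e6)/(D0*L/nu) - 1.0) < 1e-2
# Short-time limit: sqrt(T) growth, no dependence on L.
T = 1e-4
assert abs(var_delta(D0, nu, L, T)/(D0*sqrt(2.0*T/(pi*nu))) - 1.0) < 1e-6
```

Both assertions pass, illustrating the crossover at $\nu T\sim L^{2}$.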
\subsection{Optimal History Conditioned on Height Difference} The optimal history $h\left(x,t\right)$ of the non-conserved one-dimensional EW equation (\ref{eq:EW_nc}), conditioned on a given height difference $\Delta$, is obtained by integrating Eq.~(\ref{eq:sol_q}) with respect to $x$ with the boundary conditions $h\left(x\to\pm\infty,t\right)\to0$. It is given by: \begin{eqnarray} \label{eq:hsol_nonconserving} h\left(x,t\right)\!\!&=&\!\!-\frac{\lambda}{4}\!\!\sum_{j_{1},j_{2}=\pm1}\!\!\!\!j_{1}j_{2}\!\left\{ \!\sqrt{\frac{4\left(1+j_{2}t\right)}{\pi}}\exp\!\left[\!-\frac{\left(x+j_{1}\ell\right)^{2}}{4\left(1+j_{2}t\right)}\right]\right. \nonumber\\ &+& \left.\left(x+j_{1}\ell\right)\text{erf}\left[\frac{x+j_{1}\ell}{\sqrt{4\left(1+j_{2}t\right)}}\right]\right\}, \end{eqnarray} where $t$ and $x$ are rescaled as in Eq.~(\ref{eq:langevin_dimensionless}), but the interface height $h$ is rescaled by $\mu = D_0^{1/2}\nu^{-1/4}T^{1/4}$. The value of $\lambda$ is found from Eq.~(\ref{sol_lambda}) with $\bar{h}$ replaced by $\Delta\sqrt{\nu T}/\left(2\mu L\right)$. Figure \ref{fig:optimal_history_conserved} depicts the optimal interface height histories $h(x,t)$ for the non-conserved EW equation, conditioned on a given height difference $\Delta$. In the short-time limit, $\ell \gg 1$, the optimal interface dynamics~(\ref{eq:hsol_nonconserving}) are localized at the boundaries $x=\pm\ell$. Around each of the boundaries, the optimal profile is well approximated by the solution to the \emph{one-point} problem, in which the process is conditioned on reaching the height $\pm\Delta /2$, respectively, at $t=T$. This can be seen by comparing the optimal path (\ref{eq:hsol_nonconserving}) around $x=\pm\ell$ with the optimal path of the one-point problem, see Eq.~(33) in Ref. \citep{MKV_PRL2016}.
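Since Eq.~(\ref{eq:hsol_nonconserving}) was obtained by integrating Eq.~(\ref{eq:sol_q}) over $x$, differentiating it with respect to $x$ must recover Eq.~(\ref{eq:sol_q}). This can be verified by a central-difference check in Python (our sketch; the parameter values are arbitrary):

```python
# Cross-check (ours): d/dx of the nonconserved optimal path,
# Eq. (hsol_nonconserving), should recover f = dh/dx given by Eq. (sol_q)
# for the conserved EW equation. Verified with a central difference.
from math import erf, exp, sqrt, pi

def h_nonconserved(x, t, ell, lam):
    """Eq. (hsol_nonconserving), the nonconserved-EW optimal path."""
    s = 0.0
    for j1 in (1.0, -1.0):
        for j2 in (1.0, -1.0):
            a = 4.0*(1.0 + j2*t)
            u = x + j1*ell
            s += j1*j2*(sqrt(a/pi)*exp(-u**2/a) + u*erf(u/sqrt(a)))
    return -0.25*lam*s

def f_conserved(x, t, ell, lam):
    """Eq. (sol_q), the conserved-EW optimal path."""
    am, ap = sqrt(4.0*(1.0 - t)), sqrt(4.0*(1.0 + t))
    return -0.25*lam*(erf((x - ell)/am) - erf((x + ell)/am)
                      + erf((x + ell)/ap) - erf((x - ell)/ap))

ell, lam, t, dx = 1.0, 1.3, 0.5, 1e-5
for x in (-2.0, -0.7, 0.0, 0.4, 1.5):
    num = (h_nonconserved(x + dx, t, ell, lam)
           - h_nonconserved(x - dx, t, ell, lam))/(2.0*dx)
    assert abs(num - f_conserved(x, t, ell, lam)) < 1e-8
```

The agreement holds at all sampled points, confirming the integration constant implied by the boundary conditions $h(x\to\pm\infty,t)\to 0$.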
Taking the limit $L\to\infty$ in Eq.~(\ref{eq:Var_Delta_nonconserving}) we obtain the action in the physical units: \begin{equation} S=\frac{\Delta^{2}}{2\text{Var}\left(\Delta\right)}\simeq\frac{\Delta^{2}}{2D_{0}}\sqrt{\frac{\pi\nu}{2T}}. \end{equation} It is equal to twice the action evaluated on the solution to the one-point problem conditioned on reaching height $\Delta/2$, see Eq.~(39) in Ref. \cite{MKV_PRL2016}. In the long-time limit, $\ell \ll 1$, the final optimal profile can be approximated, close to the origin, by \begin{equation} \label{eq:optimal_final_profile_nonconserving} h\left(x,T\right)\simeq\begin{cases} +\frac{\Delta}{2}, & x>L,\\ \frac{\Delta x}{2L}, & \left|x\right|\le L,\\ -\frac{\Delta}{2}, & x<-L, \end{cases}\quad\left|x\right|,L\ll\sqrt{\nu T} \end{equation} (in the physical units). As one can easily check, this profile minimizes the free energy \begin{equation} \label{eq:free_energy_nonconserved} F_{\text{EW}}\left[h\right]=\frac{\nu}{D_{0}}\int dx\left(\partial_{x}h\right)^{2} \end{equation} of the nonconserved EW equation, as is to be expected in thermal equilibrium. In turn, the action in this limit, \begin{equation}\label{equilactionEW} S=\frac{\nu\Delta^2}{2 D_0L}, \end{equation} coincides with the free energy (\ref{eq:free_energy_nonconserved}) evaluated on the optimal profile~(\ref{eq:optimal_final_profile_nonconserving}). These long-time results also hold for the KPZ equation in one dimension, since the free energy~(\ref{eq:free_energy_nonconserved}) yields the stationary probability distribution of height profiles in the KPZ equation as well \citep{HHZ,Barabasi}. \begin{figure}[ht] \includegraphics[width=0.4\textwidth,clip=]{fig3a.eps} \includegraphics[width=0.4\textwidth,clip=]{fig3b.eps} \includegraphics[width=0.4\textwidth,clip=]{fig3c.eps} \caption{The optimal path conditioned on reaching a rescaled height difference $\Delta=2$ at $t=T$ for the nonconserved EW equation (\ref{eq:EW_nc}).
The interface height $h(x,t)$ is rescaled by $D_0^{1/2}\nu^{-1/4}T^{1/4}$, $x$ is rescaled by $\sqrt{\nu T}$. $\ell = 10$ (a), $\ell = 1$ (b) and $\ell = 0.1$ (c). The initial ($t=0$) and final ($t=T$) profiles are marked by (1) and (2), respectively. On panels (a) and (b) the height is plotted at times $t=0$, $T/4$, $T/2$, $3T/4$, and $T$. On panel (c) the times are $t=0$, $0.9T$, $0.97T$, $0.99T$, and $T$.} \label{fig:optimal_history_conserved} \end{figure} \section{KPZ Equation} Until now we have been dealing with linear interface growth models described by Eqs.~(\ref{eq:generalized_EW}) and (\ref{eq:noise}). However, in the absence of a small-scale cutoff, the ill-posedness of the finite-time one-point height distribution also appears in nonlinear growth models. As an important example, let us consider the KPZ equation \cite{KPZ} \begin{equation} \label{eq:2DKPZ} \partial_{t}h=\nu \nabla^{2} h+\frac{\lambda}{2}\left(\nabla h\right)^2 +\sqrt{D}\,\eta\left(\vect{x},t\right), \end{equation} which generalizes the nonconserved EW equation ($m=1$, $\alpha=0$) by accounting for an important nonlinearity which breaks the up-down symmetry. In view of the forthcoming results let us introduce finite spatial correlations of the Gaussian noise by replacing Eq.~(\ref{eq:noise}) (with $\alpha=0$) with the following one: \begin{equation}\label{corrnoise} \langle\eta(\mathbf{x}_{1},t_{1})\eta(\mathbf{x}_{2},t_{2})\rangle = C(|\mathbf{x}_{1}-\mathbf{x}_{2}|)\delta(t_{1}-t_{2}). \end{equation} We will represent the spatial correlator as $C(r) = \delta^{-d} c(r/\delta)$, where the volume integral of $C(r)$ is equal to $1$. Sending the correlation length $\delta$ to zero, one restores the limit of the delta-correlated noise. Let us consider $d=2$. As previously, we start from a flat interface at $t=0$ and study the probability distribution to observe the interface height $H$ at the origin at time $t=T$. 
The rescaling transformation \begin{equation}\label{rescalingKPZ} t/T\to t, \quad x/\sqrt{\nu T} \to x, \quad \lambda h/\nu\to h \end{equation} brings Eq.~(\ref{eq:2DKPZ}) to the following form: \begin{equation}\label{KPZrescaledd} \partial_{t}h=\nabla^2 h+\frac{1}{2}\left(\nabla h\right)^2+\sqrt{\epsilon} \,\eta(\mathbf{x},t), \end{equation} where $\epsilon=D\lambda^2/\nu^3$, and we have assumed, without loss of generality, that $\lambda>0$. The rescaled correlation length is $\delta/\sqrt{\nu T}$. The (normalized to unity) one-point height distribution at the origin at time $T$ can depend only on three dimensionless parameters: $\tilde{H}=\lambda H /\nu$, $\epsilon$ and $\nu T/\delta^2$: \begin{equation}\label{exactscaling} {\mathcal P}(H,T) = \frac{\lambda}{\nu}\,f_{\epsilon}\left(\frac{\lambda H}{\nu},\frac{\nu T}{\delta^2}\right). \end{equation} Importantly, and exclusively for $d=2$, the time-dependence in the right-hand side of the exact Eq.~(\ref{exactscaling}) appears only through the combination $\nu T/\delta^2$ which includes the correlation length of the noise \cite{units}. In the weak-coupling regime, $\epsilon\ll 1$, the body of the height distribution is Gaussian, as described by the EW equation, until exponentially long times \cite{Natterman1992}. As explained in Sec. II, the height-distribution variance diverges here as $\delta \to 0$. The height-distribution tails are also ill-defined at $\delta \to 0$ at $d\geq 2$, as follows from the WNT of the KPZ equation \cite{KK2008}. What happens in the strong-coupling regime, $\epsilon\gg 1$? In the absence of analytic results for ${\mathcal P}(H,T)$, Halpin-Healy \cite{Halpin2012,Halpin2013} performed extensive numerical simulations of this regime for $d=2$. He simulated both the KPZ equation itself and several discrete models believed to belong to the KPZ universality class. He did this for different initial conditions (including the flat one), which do not introduce a macroscopic length scale.
He clearly observed, for all these models (and at long but not too long times, when the system size is still irrelevant), a universal self-similar distribution of the typical fluctuations of the interface height. According to Refs. \cite{Halpin2012,Halpin2013}, this self-similar distribution can be represented as \begin{equation}\label{HHscaling} {\mathcal P} (H,T)= \frac{1}{c_1\, T^{\beta}}\, g \left(\frac{H-c_2 T}{c_1\,T^{\beta}} \right), \end{equation} where $\beta \simeq 0.240$ in agreement with earlier simulations, and the constant coefficients $c_1$ and $c_2$ are model-dependent. As observed in Refs.~\cite{Halpin2012,Halpin2013}, for a proper choice of $c_1$ and $c_2$, the function $g$ is universal. The self-similar behavior of the height distribution (\ref{HHscaling}) is of the same type as the one rigorously established (at long times, for typical fluctuations, and for several types of initial conditions) at $d=1$ \citep{Corwin,Quastel2015,HHT,Spohn2016}. Equations~(\ref{exactscaling}) and (\ref{HHscaling}) are compatible only if, for typical fluctuations, \begin{equation}\label{jointscaling} {\mathcal P}(H,T) = \frac{\lambda}{\nu\, C_1(\epsilon)}\, \left(\frac{\delta^2}{\nu T}\right)^\beta\, F \left[\frac{\frac{\lambda H}{\nu}-C_2(\epsilon) \frac{\nu T}{\delta^2}}{C_1(\epsilon)\, \left(\frac{\nu T}{\delta^2}\right)^{\beta}}\right], \end{equation} where $F(w)$ is a universal function of its single argument, and $C_1$ and $C_2$ are universal functions of $\epsilon$ up to numerical coefficients that can be model-dependent. According to Eq.~(\ref{jointscaling}), the standard deviation of height from its mean behaves as \cite{largeepsilon} \begin{equation}\label{varKPZ} \sqrt{\left\langle h^{2}\left(0,T\right)\right\rangle -\left\langle h\left(0,T\right)\right\rangle ^{2}} \sim \frac{\nu C_1(\epsilon)}{\lambda} \left(\frac{\nu T}{\delta^2}\right)^{\beta}. 
\end{equation} This quantity clearly diverges, and the one-point height distribution becomes ill-defined, in the limit $\delta \to 0$, that is, for spatially white noise. As one can see from Eq.~(\ref{varKPZ}), the KPZ nonlinearity \emph{amplifies} the UV catastrophe of the EW equation at $d=2$: the divergence becomes a power-law (rather than logarithmic) in $\delta$. The systematic interface velocity \cite{largeepsilon}, \begin{equation}\label{VKPZ} \mathcal{V}=\frac{\nu^2 C_2(\epsilon)}{\lambda \delta^2}, \end{equation} which results from the rectification of the noise by the nonlinearity, also diverges as $\delta \to 0$, but a similar divergence occurs already at $d=1$, see \textit{e.g.} Ref. \cite{Hairer}. In numerical simulations there is always a small-scale cutoff, such as the grid size in numerical integration schemes, the lattice constant in discrete models, etc. Still, one should remember that the \emph{amplitudes} of the scaling relations, stemming from Eq.~(\ref{jointscaling}), such as Eq.~(\ref{varKPZ}), are non-universal: they are determined by a system-dependent small-scale cutoff. The far tails of the distribution, not necessarily described by Eq.~(\ref{HHscaling}), also depend on the small-scale cutoff. It would be interesting to explore whether the local-average-height statistics provides a viable alternative. \section{Summary and Discussion} For every interface growth model, described by Eqs.~(\ref{eq:generalized_EW}) and~(\ref{eq:noise}), there is a critical dimension $d_c$ (\ref{eq:critical_dimension}), at or above which the finite-time one-point height distribution is ill-defined because of a UV catastrophe. Here we introduced a macroscopic regularization of this catastrophe, by shifting the attention to the local average height (\ref{eq:hbar_general_dim}). For Eqs.~(\ref{eq:generalized_EW}) and~(\ref{eq:noise}) the distribution of this quantity is well-defined in any dimension without the need for a regularization of the model at small scales.
We calculated the variance of this (Gaussian) distribution for all models described by Eq.~(\ref{eq:generalized_EW}) and for all dimensions, see Eq.~(\ref{eq:variance_exact}). In addition, we formulated the weak-noise theory (WNT) which allows one to determine the optimal path of the system: the most probable history of the interface conditioned on a given value of the local average height $\bar{h}$ at a specified time. We performed explicit calculations for the simple case of the conserved EW equation in 1+1 dimensions (\ref{eq:EW}). We then used these results to study the distribution of the height difference in the nonconserved EW equation, and to determine the optimal path given such a height difference. The ill-posedness of the finite-time one-point height distribution at $d\geq d_c$ also appears in nonlinear interface models without a small-scale cutoff, for example, for the KPZ equation in $2+1$ dimensions. As we argue, the amplitudes of scaling relations, uncovered in numerical simulations at $d\geq d_c$, depend on an (explicit or implicit) small-scale cutoff. Moreover, the nonlinearity significantly changes the cutoff dependence of the amplitudes compared with the non-conserved EW equation in $2+1$ dimensions. It would be interesting to explore whether, and under which conditions, the local average height provides a viable regularization alternative to the small-scale cutoff in nonlinear models. When the noise is \emph{typically} weak, the statistics of the local average height can be probed using the WNT. For nonlinear models the WNT equations are much harder to solve than the simple linear equations that we analyzed here. Still, useful analytic asymptotics for the optimal path and the action can be found in different limits and for different initial conditions, as has been shown for $d=1$ in Refs. \citep{KK2007,KK2008,MKV_PRL2016,KMS2016,Janas2016,KK2009}.
Also, an efficient numerical algorithm for solving the WNT equations is available \citep{CS,Grafke}. It would be interesting to use the WNT for determining the far tails of the distribution of the local average height in the KPZ equation at $d\ge 2$ and small $\epsilon$. \section*{Acknowledgments} We are very grateful to Joachim Krug for valuable advice and to Arkady Vilenkin for discussions. N.S. and B.M. acknowledge financial support from the Israel Science Foundation (grant No. 807/16) and from the United States-Israel Binational Science Foundation (BSF) (grant No. 2012145). B.M. also acknowledges support from the University of Cologne through the Center of Excellence ``Quantum Matter and Materials.'' \bigskip\bigskip
\section{Introduction} Nowadays there are many approaches \cite{app,carlip} to quantum gravity, but so far none is fully successful. Therefore it is still worthwhile to take the risk of developing a new approach. It seems that the Teleparallel Equivalent of General Relativity (TEGR) was never used as a point of departure for a construction of a model of quantum gravity, and therefore we would like to check whether it is possible to quantize gravity in this formulation (for the latest review of TEGR see \cite{mal-rev}). More precisely, we would like to check whether it is possible to quantize TEGR using the method of canonical quantization or, if needed, a modification of the method. Since TEGR is a background independent theory, we would like to quantize it in a background independent manner. TEGR in its canonical formulation is a constrained system (see e.g. \cite{maluf-1,maluf,oko-tegr,bl}). Therefore it is quite natural to attempt to apply the Dirac strategy of canonical quantization of such systems, which requires two steps to be carried out: $(i)$ first one neglects constraints and constructs a space of kinematic quantum states, that is, quantum states corresponding to all classical states constituting the whole phase space; $(ii)$ then among the kinematic quantum states one distinguishes physical quantum states as those corresponding to classical states satisfying all the constraints. The space of kinematic quantum states is usually a Hilbert space, and to carry out the second step one tries to find operators on the Hilbert space corresponding to the constraints and singles out physical quantum states as those annihilated by the operators (this procedure is valid if all the constraints are of the first class). In this paper we construct a space of kinematic quantum states for TEGR treated as a theory of cotetrad fields on a four-dimensional manifold.
More precisely, the construction is valid for any theory of cotetrad fields the phase space of which coincides with that of TEGR---an example of such a theory is the Yang-Mills-type Teleparallel Model (YMTM) considered in \cite{itin,os}. The space of quantum states for TEGR, which from now on will be denoted by ${\cal D}$, will be constructed according to a method presented in \cite{q-stat} combined with some Loop Quantum Gravity (LQG) techniques \cite{acz,cq-diff,rev,rev-1}. This method, being a generalization of a construction by Kijowski \cite{kpt}, provides us with a space of quantum states which is not a Hilbert space but rather {\em a convex set of quantum states}---these states can be seen as algebraic states (i.e. linear positive normed functionals) on a $C^*$-algebra which can be thought of as an algebra of some quantum observables. We will also show that spatial diffeomorphisms act naturally on the space ${\cal D}$, which allows one to hope that ${\cal D}$ can be used as an element of a background independent quantization of TEGR. The construction of ${\cal D}$ is similar to a construction of a space of quantum states for the degenerate Pleba\'nski gravity (DPG) \cite{q-stat}, and the descriptions of both constructions follow the same pattern. It may be helpful to first study the construction in \cite{q-stat} since it is simpler than that of ${\cal D}$. Let us mention that, apart from the space ${\cal D}$, it is possible to construct other spaces of kinematical quantum states for TEGR---in this paper we will briefly describe the other spaces and comment on their possible application to quantization of TEGR. To proceed further with quantization of TEGR it is necessary to single out physical quantum states in the space ${\cal D}$, that is, to carry out the second step of the Dirac strategy. Since ${\cal D}$ is not a Hilbert space, the standard procedure mentioned above by means of which one distinguishes physical quantum states has to be suitably modified.
At this moment we are not able to present a satisfactory and workable modification of the procedure (some remarks on this very important issue can be found in \cite{q-stat}), but we hope that this problem will be solved in the future. The paper is organized as follows: Section 2 contains preliminaries, in Section 3 the space of quantum states for TEGR is constructed, in Section 4 we define an action of spatial diffeomorphisms on ${\cal D}$, Section 5 contains a short description of the other spaces of quantum states, and in Section 6 we discuss the results. Finally, in Appendix A we show that the space ${\cal D}$ is identical to one of the other spaces. \section{Preliminaries} \subsection{Cotetrad fields} Let $\mathbb{M}$ be a real four-dimensional oriented vector space equipped with a scalar product $\eta$ of signature $(-,+,+,+)$. We fix an orthonormal basis $(v_A)$ $(A=0,1,2,3)$ of $\mathbb{M}$ such that the components $(\eta_{AB})$ of $\eta$ given by the basis form the matrix ${\rm diag}(-1,1,1,1)$. The matrix $(\eta_{AB})$ and its inverse $(\eta^{AB})$ will be used to, respectively, lower and raise capital Latin letter indices $A,B,C,D\in\{0,1,2,3\}$. Denote by $\mathbb{E}$ the subspace of $\mathbb{M}$ spanned by the vectors $\{v_1,v_2,v_3\}$. The scalar product $\eta$ induces on $\mathbb{E}$ a positive definite scalar product $\delta$. Its components $(\delta_{IJ})$ in the basis $(v_1,v_2,v_3)$ form the matrix ${\rm diag}(1,1,1)$. The matrix $(\delta_{IJ})$ and its inverse $(\delta^{IJ})$ will be used to, respectively, lower and raise capital Latin letter indices $I,J,K,L,M\in\{1,2,3\}$. In some formulae we will use the three-dimensional permutation symbol, which will be denoted by $\varepsilon_{IJK}$.
\subsection{Phase space \label{phsp}} The goal of this paper is to construct a space of quantum states for theories with a particular phase space consisting of some fields defined on a three-dimensional {\em oriented} manifold $\Sigma$---a point in the phase space consists of: \begin{enumerate} \item a quadruplet of one-forms $(\theta^{A})$, $A=0,1,2,3$, on $\Sigma$ such that the metric \begin{equation} q:=\eta_{AB}\theta^A\otimes\theta^B \label{q} \end{equation} is Riemannian (positive definite); \item a quadruplet of two-forms $(p_B)$, $B=0,1,2,3$, on $\Sigma$. \end{enumerate} $p_A$ is the momentum conjugate to $\theta^A$. The set of all $(\theta^A)$ satisfying the assumption above will be called a {\em Hamiltonian configuration space} and denoted by $\Theta$, while the set of all $(p_A)$ will be called a {\em momentum space} and denoted by $P$. Thus the phase space is the Cartesian product $P\times\Theta$. The Poisson bracket between two functions $f_1$ and $f_2$ on the phase space is given by the following formula \begin{equation} \{f_1,f_2\}=\int_\Sigma\Big(\frac{\delta f_1}{\delta {\theta}^A}\wedge\frac{\delta f_2}{\delta p_A}-\frac{\delta f_2}{\delta {\theta}^A}\wedge\frac{\delta f_1}{\delta p_A}\Big) \label{poiss-0} \end{equation} ---a definition of the variational derivative with respect to a differential form can be found in \cite{os}. As shown in, respectively, \cite{oko-tegr} and \cite{os}, both TEGR and YMTM possess such a phase space. It turns out \cite{q-suit} that it is possible to construct quantum states via the method presented in \cite{q-stat} starting from the phase space description above (which in a sense is a natural description)---see Section \ref{other}. However, as was argued in \cite{q-suit}, a space of these quantum states possesses an undesirable property. Therefore the space of quantum states ${\cal D}$ will be constructed starting from another description \cite{ham-nv} of the phase space.
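For later convenience let us record a simple instance of the bracket \eqref{poiss-0} (an illustrative example; the smearing forms are arbitrary and we assume the variational-derivative conventions of \cite{os}). Let
\[
f_1(\theta):=\int_\Sigma \sigma_A\wedge\theta^A, \qquad f_2(p):=\int_\Sigma \tau^A\wedge p_A,
\]
where $(\sigma_A)$ are fixed two-forms and $(\tau^A)$ are fixed one-forms on $\Sigma$. Then $\delta f_1/\delta\theta^A=\sigma_A$ and $\delta f_2/\delta p_A=\tau^A$, so \eqref{poiss-0} gives
\[
\{f_1,f_2\}=\int_\Sigma \sigma_A\wedge\tau^A,
\]
a constant function on the phase space. The elementary degrees of freedom introduced later in this paper are smeared functions of a similar sort.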
Let $\iota$ be a function defined on the space of all global coframes on $\Sigma$ valued in $\{-1,1\}$. Since for every $(\theta^A)=(\theta^0,\theta^I)\in\Theta$ the triplet $(\theta^I)$ is a global coframe on the manifold \cite{q-suit}, $\iota$ can be regarded as a function on $\Theta$. Every function $\iota$ which is constant on every path-connected subset of $\Theta$ defines new variables on the phase space \cite{ham-nv} which provide a new description of the space. According to it a point in the phase space consists of: \begin{enumerate} \item a collection $(\xi^I_\iota,\theta^J)\equiv \theta$, where $\xi_\iota^I$, $I=1,2,3$, is a real function (a zero-form) on $\Sigma$ and $(\theta^J)$, $J=1,2,3$, are one-forms on $\Sigma$ constituting a global coframe; \item a collection $(\zeta_{\iota I},r_J)\equiv p$, where $\zeta_{\iota I}$, $I=1,2,3$, is a three-form on $\Sigma$ and $r_J$, $J=1,2,3$, is a two-form on the manifold. \end{enumerate} $\zeta_{\iota I}$ is the momentum conjugate to $\xi^I_\iota$ and $r_J$ is the momentum conjugate to $\theta^J$. Thus all the $(\xi^I_\iota,\theta^J)$ constitute the Hamiltonian configuration space $\Theta$ while all the $(\zeta_{\iota I},r_J)$ constitute the momentum space $P$. The Poisson bracket \eqref{poiss-0} now reads as follows \begin{equation} \{f_1,f_2\}=\int_\Sigma\Big(\frac{\delta f_1}{\delta {\xi}_\iota^I}\wedge\frac{\delta f_2}{\delta \zeta_{\iota I}}+\frac{\delta f_1}{\delta {\theta}^I}\wedge\frac{\delta f_2}{\delta r_I}-\frac{\delta f_2}{\delta {\xi}_\iota^I}\wedge\frac{\delta f_1}{\delta \zeta_{\iota I}}-\frac{\delta f_2}{\delta {\theta}^I}\wedge\frac{\delta f_1}{\delta r_I}\Big).
\label{poiss} \end{equation} Regarding the relation of the latter description to the former, let us first express the dependence of $(p_A,\theta^B)$ on $(\zeta_{\iota I},r_J,\xi_\iota^K,\theta^L)$ \cite{ham-nv}: \begin{equation} \begin{aligned} &p_0=\iota(\theta^K)\sqrt{1+\xi_{\iota J}\xi_\iota^J}\,\vec{\theta}^I\lrcorner\,\zeta_{\iota I},&& p_I=r_I-\xi_{\iota I}\,\vec{\theta}^J\lrcorner\,\zeta_{\iota J},\\ &\theta^0=\iota(\theta^J)\frac{\xi_{\iota I}}{\sqrt{1+\xi_{\iota K}\xi_\iota^K}}\,\theta^I, & & \theta^I=\theta^I. \end{aligned} \label{old-new} \end{equation} Here $\vec{\theta}^I$ is a vector field on $\Sigma$ obtained from $\theta^I$ by raising its index by the metric inverse to the metric $q$---in a local coordinate frame $(x^i)$ on $\Sigma$ \[ \vec{\theta}^I:=q^{ij}\theta^I_j\partial_{x^i}. \] Since \cite{ham-nv} \begin{equation} q=\Big(\delta_{IJ}-\frac{\xi_{\iota I}\xi_{\iota J}}{1+\xi_{\iota K}\xi_\iota^K}\Big)\theta^I\otimes\theta^J \label{q-xi} \end{equation} the vector field $\vec{\theta}^I$ is a function of both $\xi_\iota^J$ and $\theta^L$. The inverse dependence, that is, the dependence of $(\zeta_{\iota I},r_J,\xi_\iota^K,\theta^L)$ on $(p_A,\theta^B)$, reads \cite{ham-nv} \begin{equation} \begin{aligned} &\zeta_{\iota I}=\iota(\theta^K)\sqrt{\det (q_{MN})}\,q_{IJ}\,\theta^J\wedge p_0, \\ &r_I= \frac{\sqrt{\det (q_{MN})}}{2}\sgn(\theta^L)*(\theta^0\wedge\theta^J\wedge\theta^K)\,\varepsilon_{IJK}\,p_0+p_I,\\ &\xi^I_\iota=\frac{1}{2}\frac{\iota(\theta^L)}{\sgn(\theta^L)}*(\theta^0\wedge\theta_J\wedge\theta_K)\,\varepsilon^{IJK},\\ &\theta^I=\theta^I. \end{aligned} \label{new-old} \end{equation} Here $*$ is the Hodge operator defined by the metric $q$, and $(q_{IJ})$ are the components of $q$ in the basis $(\theta^J)$. Let us emphasize that in \eqref{new-old} $q$ is treated as a function of $(\theta^A)$ (see \eqref{q}).
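As a simple consistency check of the two descriptions (a short verification added for clarity), note that \eqref{q-xi} can be recovered directly from \eqref{q} and \eqref{old-new}: since $(\eta_{AB})={\rm diag}(-1,1,1,1)$ and $\iota(\theta^J)^2=1$,
\[
q=-\theta^0\otimes\theta^0+\delta_{IJ}\,\theta^I\otimes\theta^J=\Big(\delta_{IJ}-\frac{\xi_{\iota I}\xi_{\iota J}}{1+\xi_{\iota K}\xi_\iota^K}\Big)\theta^I\otimes\theta^J,
\]
which is exactly \eqref{q-xi}.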
Moreover, \begin{equation} \sgn(\theta^I):= \begin{cases} 1 & \text{if $(\theta^I)$ is compatible with the orientation of $\Sigma$}\\ -1 & \text{otherwise} \end{cases}. \label{sgn-th} \end{equation} \section{Construction of quantum states for a theory of the phase space $P\times\Theta$} \subsection{Choice of variables} The construction of a space of quantum states for TEGR we are going to present in this section can be successfully carried out starting from any of the variables $(\zeta_{\iota I},r_J,\xi_\iota^K,\theta^L)$. However, as proved in \cite{ham-nv}, unless $\iota=\sgn$ or $\iota=-\sgn$, where $\sgn$ is given by \eqref{sgn-th}, the constraints of TEGR found in \cite{oko-tegr} and the constraints of YMTM found in \cite{os} cannot be imposed on the resulting space of quantum states. Therefore it is reasonable to restrict ourselves to the variables \begin{align*} &(\zeta_{s I},r_J,\xi_s^K,\theta^L), &&(\zeta_{-s I},r_J,\xi_{-s}^K,\theta^L) \end{align*} defined by, respectively, $\iota=\sgn$ and $\iota=-\sgn$. Actually, we will construct the space ${\cal D}$ using the variables $(\zeta_{s I},r_J,\xi_s^K,\theta^L)$, and then we will show that a space ${\cal D}_{-s}$ built from the variables $(\zeta_{-s I},r_J,\xi_{-s}^K,\theta^L)$ coincides with ${\cal D}$. From now on we will use a simplified notation according to which \begin{equation} (\zeta_{I},r_J,\xi^K,\theta^L)\equiv(\zeta_{s I},r_J,\xi_{s}^K,\theta^L). \label{simp-n} \end{equation} \subsection{Outline of the construction \label{out}} Following \cite{q-stat} we will first choose $(i)$ a special set $\cal K$ of real functions on $\Theta$ and call the functions {\em configurational elementary degrees of freedom} and $(ii)$ a special set $\cal F$ of real functions on $P$ and call the functions {\em momentum elementary degrees of freedom}. The configurational d.o.f. will then be used to define functions on $\Theta$ of a special sort called {\em cylindrical functions}. Next, each momentum d.o.f.
will define via the Poisson bracket \eqref{poiss} or its regularization a linear operator on the space of cylindrical functions. Thus we will obtain a {\em linear space} $\hat{\cal F}$ spanned by operators associated with elements of $\cal F$. In the next step of the construction we will choose a set $\Lambda$ such that each element of it is a pair $(\hat{F},K)$, where $\hat{F}$ is a finite dimensional linear subspace of $\hat{\cal F}$ and $K$ is a finite set of configurational elementary d.o.f.. Then we will define on $\Lambda$ a relation $\geq$ equipping it with the structure of a directed set and show that $(\Lambda,\geq)$ satisfies some special assumptions. This will complete the construction since at this point we will be able to refer to \cite{q-stat} where it was shown that from each directed set satisfying the assumptions one can build a space of quantum states. The construction of the space of quantum states from such a directed set $(\Lambda,\geq)$ proceeds as follows. Given $(\hat{F},K)\equiv\lambda\in \Lambda$, one uses elements of $K$ to reduce the ``infinite-dimensional'' space $\Theta$ to a space $\Theta_K$ of finite dimension. Next, one defines $(i)$ a Hilbert space ${\cal H}_\lambda$ as a space of functions on $\Theta_K$ square integrable with respect to a natural measure on $\Theta_K$ and $(ii)$ a space ${\cal D}_\lambda$ of all density operators on the Hilbert space (i.e. positive operators of trace equal to $1$). It turns out that the assumed properties of the set $(\Lambda,\geq)$ unambiguously induce on the set $\{{\cal D}_\lambda\}_{\lambda\in\Lambda}$ the structure of a projective family. The space of quantum states is then defined as the projective limit of the family. Let us emphasize that our choice of elementary d.o.f. as well as the application of graphs, cylindrical functions and the operators defined on them by the Poisson bracket is motivated by LQG methods---see \cite{acz,cq-diff,rev,rev-1} and references therein.
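Let us also sketch how the projective structure on $\{{\cal D}_\lambda\}_{\lambda\in\Lambda}$ looks (a schematic description only; for the precise definitions we refer the reader to \cite{q-stat}): if $\lambda'\geq\lambda$ then the Hilbert space ${\cal H}_{\lambda'}$ admits a factorization ${\cal H}_{\lambda'}\cong{\cal H}_\lambda\otimes\tilde{\cal H}$ for an auxiliary Hilbert space $\tilde{\cal H}$, and the projection $\pi_{\lambda\lambda'}:{\cal D}_{\lambda'}\to{\cal D}_\lambda$ is given by the partial trace,
\[
\pi_{\lambda\lambda'}(\rho):={\rm Tr}_{\tilde{\cal H}}\,\rho.
\]
An element of the projective limit is then a family $(\rho_\lambda)_{\lambda\in\Lambda}$ of density operators such that $\pi_{\lambda\lambda'}(\rho_{\lambda'})=\rho_\lambda$ whenever $\lambda'\geq\lambda$.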
\subsection{Submanifolds of $\Sigma$} Each elementary d.o.f. we are going to use will be associated with a submanifold of $\Sigma$. Following the LQG methods, from now until the end of this paper we will assume that the manifold $\Sigma$ is {\em real analytic}\footnote{Equally well we could assume that the manifold is {\em semi-analytic}---see e.g. \cite{lost,fl} for the definition of semi-analyticity.}. An {\em analytic edge} is a one-dimensional connected analytic embedded submanifold of $\Sigma$ with two-point boundary. An {\em oriented} one-dimensional connected $C^0$ submanifold of $\Sigma$ given by a finite union of analytic edges will be called an {\em edge}. The set of all edges in $\Sigma$ will be denoted by $\cal E$. Given an edge $e$ with two-point boundary, its orientation allows us to call one of its endpoints {\em a source} and the other {\em a target} of the edge; if an edge is a loop then we distinguish one of its points and treat it simultaneously as the source and the target of the edge. An edge $e^{-1}$ is called an {\em inverse} of an edge $e$ if $e^{-1}$ and $e$ coincide as un-oriented submanifolds of $\Sigma$ and differ by their orientations. We say that an edge $e$ is a composition of the edges $e_1$ and $e_2$, $e=e_2\circ e_1$, if $(i)$ $e$ as an oriented manifold is a union of $e_1$ and $e_2$, $(ii)$ the target of $e_1$ coincides with the source of $e_2$ and $(iii)$ $e_1\cap e_2$ consists solely of some (or all) endpoints of $e_1$ and $e_2$. We say that two edges are {\em independent} if the set of their common points is either empty or consists solely of some (or all) endpoints of the edges. A {\em graph} in $\Sigma$ is a finite set of pairwise independent edges.
Any finite set of edges can be described in terms of edges of a graph \cite{al-hoop}: \begin{lm} For every finite set $E=\{e_1,\ldots,e_N\}$ of edges there exists a graph $\gamma$ in $\Sigma$ such that every $e_j\in E$ is a composition of some edges of $\gamma$ and the inverses of some edges of the graph. \label{E-gamma} \end{lm} The set of all graphs in $\Sigma$ is naturally a directed set: $\gamma'\geq\gamma$ if each edge of the graph $\gamma$ is a composition of some edges of the graph $\gamma'$ and the inverses of some edges of $\gamma'$. Let $S$ be a two-dimensional embedded submanifold of $\Sigma$. Assume that $S$ is $(i)$ analytic, $(ii)$ oriented and $(iii)$ of a compact closure. We moreover require $S$ to be such that every edge $e\in{\cal E}$ can be {\em adapted} to $S$ in the following sense \cite{area}: $e$ can be divided into a finite number of edges $\{e_1,\ldots,e_N\}$, i.e. \[ e=e_N\circ e_{N-1}\circ\ldots\circ e_2\circ e_1, \] each of which \begin{enumerate} \item is contained in the closure $\overline{S}$; \item has no common points with $S$; \item has exactly one common point with $S$, this point being one of its two distinct endpoints. \end{enumerate} We will call such a submanifold a {\em face}. The set of all faces in $\Sigma$ will be denoted by $\cal S$. A three-dimensional submanifold $V$ of $\Sigma$ of a compact closure and of an orientation inherited from $\Sigma$ will be called a {\em region}. The set of all regions in $\Sigma$ will be denoted by $\cal V$. \subsection{Elementary degrees of freedom} Note that the variables $(\xi^I,\theta^J)$ and $(\zeta_K,r_L)$ parameterizing the phase space $P\times\Theta$ are, respectively, zero-forms (functions), one-forms, three-forms and two-forms which can be naturally integrated over submanifolds of $\Sigma$ of appropriate dimensions. Thus every point $y\in\Sigma$ defines naturally a function on $\Theta$: \begin{equation} \Theta\ni \theta\mapsto \kappa^I_y(\theta):=\xi^I(y)\in\mathbb{R}.
\label{k-y} \end{equation} Similarly, every edge $e$ defines a function on $\Theta$: \begin{equation} \Theta\ni \theta\mapsto \kappa^J_e(\theta):=\int_e\theta^J\in\mathbb{R}. \label{k-e} \end{equation} We choose the set ${\cal K}$ of configurational elementary d.o.f. as follows \[ {\cal K}:=\{\ \kappa^I_y, \kappa^J_e \ | \ I,J=1,2,3, \, y\in\Sigma, \, e\in{\cal E} \ \}. \] It is easy to realize that the functions in ${\cal K}$ separate points in $\Theta$. Note that for every $I=1,2,3$, every $e\in{\cal E}$ and every pair of edges $e_1,e_2\in{\cal E}$ for which the composition $e_2\circ e_1$ makes sense \begin{align} &\kappa^I_{e^{-1}}=-\kappa^I_{e}, && \kappa^I_{e_2\circ e_1}=\kappa^I_{e_2}+\kappa^I_{e_1}. \label{keke} \end{align} Every region $V$ defines a function on $P$: \begin{equation} P\ni p\mapsto \varphi^V_{I}(p):=\int_V\zeta_I\in\mathbb{R}. \label{phi-V} \end{equation} Similarly, every face $S$ defines a function on $P$: \begin{equation} P\ni p\mapsto \varphi^S_{J}(p):=\int_S r_J\in\mathbb{R}. \label{phi-S} \end{equation} We choose the set ${\cal F}$ of momentum elementary d.o.f. as follows \[ {\cal F}:=\{ \ \varphi^V_{I}, \varphi^S_{J}\ | \ I,J=1,2,3, \, V\in{\cal V}, \, S\in{\cal S} \ \}. \] It is not difficult to check that the functions in ${\cal F}$ separate points in $P$. \subsection{Finite sets of configurational elementary d.o.f. \label{fin-sets}} Let $K=\{\kappa_{1},\ldots,\kappa_N\}\subset{\cal K}$ be a finite set of elementary d.o.f.. We say that $\theta\in \Theta$ is $K$-related to $\theta'\in \Theta$, \[ \theta\sim_{K} \theta', \] if for every $\kappa_{\alpha}\in {K}$ \[ \kappa_{\alpha}(\theta)=\kappa_{\alpha}(\theta'). \] Clearly, the relation $\sim_{K}$ is an equivalence relation. Therefore it defines the quotient space \begin{equation} \Theta_{K}:=\Theta/\sim_{K}.
\label{quot} \end{equation} Note now that there exist $(i)$ a canonical projection from $\Theta$ onto $\Theta_K$: \begin{equation} \Theta\ni \theta\mapsto{\rm pr}_{K}(\theta)=[\theta]\in \Theta_{K} \label{pr-K} \end{equation} and $(ii)$ an {\em injective} map\footnote{Note that each set $K$ is unordered, thus to define the map $\tilde{K}$ one has to order the elements of $K$. However, every choice of the ordering is equally well suited for our purposes and nothing essential depends on the choice. Therefore we will neglect this subtlety in what follows.} from $\Theta_{K}$ into $\mathbb{R}^N$: \begin{equation} \Theta_{K}\ni[\theta]\mapsto\tilde{K}([\theta]):=(\kappa_{1}(\theta),\ldots,\kappa_{N}(\theta))\in\mathbb{R}^N, \label{k-inj} \end{equation} where $N$ is the number of elementary d.o.f. constituting $K$ and $[\theta]$ denotes the equivalence class of $\theta$ defined by the relation $\sim_{K}$. We will say that the elementary d.o.f. in ${K}=\{\kappa_{1},\ldots,\kappa_{N}\}$ are {\em independent} if the image of $\tilde{K}$ is an $N$-dimensional submanifold of $\mathbb{R}^N$. A quotient space $\Theta_K$ given by a set $K$ of independent d.o.f. will be called a {\em reduced configuration space}. \begin{lm} Let $u=\{y_1,\ldots,y_M\}$ be a finite collection of points in $\Sigma$ and $\gamma=\{e_1,\ldots,e_N\}$ be a graph such that either $u$ or $\gamma$ is not an empty set ($N,M\geq0$ but $N+M>0$). Then for every $(z^I_{i},x^J_{j})\in\mathbb{R}^{3M}\times \mathbb{R}^{3N}$ there exists $\theta\in\Theta$ such that \begin{align*} &\kappa^I_{y_{i}}(\theta)=z^I_{i},&&\kappa^J_{e_{j}}(\theta)=x^J_{j} \end{align*} for every $I,J=1,2,3$, $i=1,\ldots,M$ and $j=1,2,\ldots,N$. \label{lm-Kug-xi} \end{lm} \noindent This lemma, proven in \cite{q-suit}, guarantees that if \begin{equation} {K}_{u,\gamma}:=\{ \ {\kappa}^I_{y_1},\ldots,{\kappa}^I_{y_M},\kappa^J_{e_1},\ldots,\kappa^J_{e_N} \ | \ I,J=1,2,3 \ \}
\label{K-ug} \end{equation} then \begin{equation} \Theta_{K_{u,\gamma}}\cong\mathbb{R}^{3M}\times\mathbb{R}^{3N}, \label{TKug-RN} \end{equation} under the map $\tilde{K}_{u,\gamma}$, i.e. $\tilde{K}_{u,\gamma}$ is a bijection. This means in particular that the d.o.f. constituting $K_{u,\gamma}$ are independent and $\Theta_{K_{u,\gamma}}$ is a reduced configuration space. We are also allowed to conclude that if $K$ is a one-element subset of ${\cal K}$ then $\tilde{K}$ is a bijection and consequently $K$ is a set of independent d.o.f. and $\Theta_K$ is a reduced configuration space. Consider now a finite set $K$ of configurational elementary d.o.f. containing some (possibly none) d.o.f. \eqref{k-y} and some (possibly none) d.o.f. \eqref{k-e}. Let $u$ be the set of points defining elements of $K$ of the type \eqref{k-y} and let $E$ be the set of edges defining elements of $K$ of the type \eqref{k-e}. Let $\gamma$ be a graph related to $E$ as stated in Lemma \ref{E-gamma}. Since every $e\in E$ is a composition of some edges of $\gamma$ and their inverses, we can apply Equations \eqref{keke} to each $\kappa^I_e\in K$ to conclude that $\kappa^I_e$ is a linear combination of d.o.f. in $K_{u,\gamma}$. \begin{cor} For every finite set $K$ of configurational elementary d.o.f. there exist a finite set $u$ of points of $\Sigma$ and a graph $\gamma$ such that every d.o.f. in $K$ is a linear combination of d.o.f. in $K_{u,\gamma}$. \label{K-lc-Kug} \end{cor} Note now that if $\Theta_K$ is a reduced configuration space then the map $\tilde{K}$ can be used to define a differential structure on the space. It may happen that a set $K'$ of independent d.o.f. distinct from $K$ defines the same space: $\Theta_K=\Theta_{K'}$, i.e. $[\theta]=[\theta]'$ for every $\theta\in\Theta$, where $[\theta]'$ denotes the equivalence class of $\theta$ defined by the relation $\sim_{K'}$.
Assume for the moment that the differential structures on $\Theta_K=\Theta_{K'}$ given by $\tilde{K}$ and $\tilde{K}'$ coincide (we will prove shortly that this {\em is} the case). Then, following \cite{al-hoop}, we can introduce the notion of cylindrical functions: \begin{df} We say that a function $\Psi:\Theta\to\mathbb{C}$ is a cylindrical function compatible with the set ${K}$ of independent d.o.f. if \begin{equation} \Psi={\rm pr}^*_{K}\,\psi \label{Psi-cyl} \end{equation} for some smooth function $\psi:\Theta_{K}\to\mathbb{C}$. \end{df} \noindent Note that each configurational elementary d.o.f. $\kappa$ is a cylindrical function compatible with $K=\{\kappa\}$. Denote by ${\rm Cyl}$ the complex linear space spanned by all cylindrical functions on $\Theta$. Let $\mathbf{K}$ be the set of all sets of independent d.o.f.. The following important proposition holds \cite{q-stat}: \begin{pro} Suppose that there exists a subset $\mathbf{K}'$ of $\mathbf{K}$ such that for every finite set $K_0$ of configurational elementary d.o.f. there exists $K'_0\in\mathbf{K}'$ satisfying the following conditions: \begin{enumerate} \item the map $\tilde{K}'_0$ is a bijection; \item each d.o.f. in $K_0$ is a linear combination of d.o.f. in $K'_0$. \end{enumerate} Then \begin{enumerate} \item for every set $K\in\mathbf{K}$ the map $\tilde{K}$ is a bijection. Consequently, $\Theta_K\cong \mathbb{R}^N$ with $N$ being the number of elements of $K$ and the map $\tilde{K}$ defines a linear structure on $\Theta_K$ being the pull-back of the linear structure on $\mathbb{R}^N$; if $\Theta_{K}=\Theta_{K'}$ for some other set $K'\in\mathbf{K}$ then the linear structures defined on the space by $\tilde{K}$ and $\tilde{K}'$ coincide.
\item if a cylindrical function $\Psi$ compatible with a set $K\in\mathbf{K}$ can be expressed as \[ \Psi={\rm pr}^*_{K'}\psi', \] where $K'\in\mathbf{K}$ and $\psi'$ is a complex function on $\Theta_{K'}$ then $\psi'$ is smooth and consequently $\Psi$ is compatible with $K'$; \item for every element $\Psi\in{\rm Cyl}$ there exists a set $K\in\mathbf{K}'$ such that $\Psi$ is compatible with $K$. \end{enumerate} \label{big-pro} \end{pro} \noindent It follows from Lemma \ref{lm-Kug-xi} and Corollary \ref{K-lc-Kug} that the subset of $\mathbf{K}$ consisting of all sets $K_{u,\gamma}$, where $u$ runs through all finite subsets of $\Sigma$ and $\gamma$ runs through all graphs in $\Sigma$, satisfies the requirement imposed on the set $\mathbf{K}'$ by the proposition. Thus, according to Assertion 1 of the proposition, on every reduced configuration space $\Theta_K$ there exists a natural {\em linear structure} and, consequently, a natural differential structure. This means that the space ${\rm Cyl}$ introduced just above the proposition is well defined, Assertion 2 holds and by virtue of Assertion 3 for every element $\Psi\in {\rm Cyl}$ there exist a finite set $u$ of points in $\Sigma$ and a graph $\gamma$ such that $\Psi$ is compatible with $K_{u,\gamma}$. A simple but useful consequence of the results above is that on every reduced configuration space $\Theta_K$, where $K=\{\kappa_1,\ldots,\kappa_N\}$, one can define a linear coordinate frame $(x_1,\ldots,x_N)$: \begin{equation} \Theta_K\ni [\theta]\mapsto x_{\alpha}([\theta]):=\kappa_{\alpha}(\theta)\in \mathbb{R}, \label{lin-coor-0} \end{equation} in other words, \begin{equation} (\,x_1([\theta]),\ldots,x_N([\theta])\,)=\tilde{K}([\theta]). \label{x-tilK} \end{equation} The frame \eqref{lin-coor-0} will be called the {\em natural coordinate frame on} $\Theta_K$.
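A concrete example of a cylindrical function may be helpful here (an illustrative example; the choice of the edge and of the exponential is arbitrary): fix an edge $e\in{\cal E}$ and set
\[
\Psi(\theta):=\exp\Big(i\int_e\theta^1\Big)=\exp\big(i\kappa^1_e(\theta)\big).
\]
Since $K=\{\kappa^1_e\}$ is a set of independent d.o.f. with $\Theta_K\cong\mathbb{R}$, we have $\Psi={\rm pr}^*_K\psi$ with $\psi(x_1)=e^{ix_1}$ expressed in the natural coordinate frame on $\Theta_K$, and $\psi$ is smooth, so $\Psi$ is a cylindrical function compatible with $K$.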
\subsection{Operators corresponding to momentum d.o.f.} Consider a finite collection $u=\{y_1,\ldots,y_M\}$ of points in $\Sigma$ and a graph $\gamma=\{e_1,\ldots,e_N\}$ such that either $u$ or $\gamma$ is not an empty set ($N,M\geq0$ but $N+M>0$). Let us introduce a special notation for natural coordinates \eqref{lin-coor-0} defined on a reduced configuration space $\Theta_{K_{u,\gamma}}$: we will denote the coordinates by $(z^I_{i},x^J_{j})$, $I,J=1,2,3$, $i=1,\ldots,M$ if $M>0$ and $j=1,\ldots,N$ if $N>0$, where \begin{align} &z^I_{i}([\theta]):=\kappa^I_{y_{i}}(\theta), && x^J_{j}([\theta]):=\kappa^J_{e_{j}}(\theta), \label{lin-coor} \end{align} (here $[\theta]\in\Theta_{K_{u,\gamma}}$). The coordinates define vector fields \[ \{\ \partial_{z^I_{i}}, \partial_{x^J_{j}}\ \} \] on $\Theta_{K_{u,\gamma}}$---these vector fields will be used to express operators defined on ${\rm Cyl}$ by the momentum d.o.f. \eqref{phi-V} and \eqref{phi-S}. \subsubsection{Operators corresponding to d.o.f. \eqref{phi-V}} Using the Poisson bracket \eqref{poiss} we define an operator \begin{equation} {\rm Cyl}\ni\Psi\mapsto\hat{\varphi}^V_I\Psi:=\{\varphi^V_I,\Psi\}\in{\rm Cyl}. \label{hat-zeta-0} \end{equation} We know already (see the discussion just below Proposition \ref{big-pro}) that $\Psi$ is compatible with a set $K_{u,\gamma}$, i.e. $\Psi={\rm pr}^*_{K_{u,\gamma}}\psi$ for a function $\psi$ defined on $\Theta_{K_{u,\gamma}}$. Assume that $u=\{y_1,\ldots,y_M\}$. Then \begin{equation} \hat{\varphi}^V_I\Psi=\sum_{L=1}^3\sum_{l=1}^M {\rm pr}^*_{K_{u,\gamma}}(\partial_{z^L_{l}}\psi)\{\varphi^V_I,\kappa^L_{y_{l}}\}, \label{hat-zeta-1} \end{equation} where $\{\partial_{z^L_{l}}\}$ are vector fields on $\Theta_{K_{u,\gamma}}$ defined by the natural coordinates \eqref{lin-coor}. 
To find an explicit expression for $\{\varphi^V_I,\kappa^L_{y_{l}}\}$ note that \begin{align*} &\varphi^V_I(p)=\int_\Sigma {\cal I}_V \zeta_I, && \kappa^L_y(\theta)=\int_\Sigma\delta_y\xi^L, \end{align*} where ${\cal I}_V$ is the characteristic function of the region $V$ and $\delta_y$ is the Dirac distribution supported at $y\in\Sigma$. Hence \[ \{\varphi^V_I,\kappa^L_{y}\}=-\delta^L{}_I\int_\Sigma {\cal I}_V\delta_y. \] Let \begin{equation} \varepsilon(V,y):= \begin{cases} -1& \text{if $y\in V$}\\ 0 & \text{otherwise} \end{cases}. \label{Vy} \end{equation} Then \begin{equation} \{\varphi^V_I,\kappa^L_{y}\}=\hat{\varphi}^V_I\kappa^L_{y}=\delta^L{}_I\,\varepsilon(V,y). \label{ze-xi-const} \end{equation} Let us emphasize that $\hat{\varphi}^V_I\kappa^L_{y}$ is a {\em constant} real cylindrical function which from now on will be treated as a real number. Thus finally \begin{equation} \hat{\varphi}^V_I\Psi=\sum_{l=1}^M \varepsilon(V,y_{l})\,{\rm pr}^*_{K_{u,\gamma}}(\partial_{z^I_{l}}\psi), \label{hat-zeta} \end{equation} which means that $\hat{\varphi}^V_I$ preserves the space ${\rm Cyl}$. Thus $\hat{\varphi}^V_I$ is a linear operator on the space. \subsubsection{Operators corresponding to d.o.f. \eqref{phi-S}} With every elementary d.o.f. $\varphi^S_J\in{\cal F}$ we associate a flux operator $\hat{\varphi}^S_J$ \cite{acz}---it is a linear operator on ${\rm Cyl}$ defined via a suitable regularization of the Poisson bracket $\{{\varphi}^S_J,\Psi\}$, where $\Psi\in{\rm Cyl}$. Again, we express the cylindrical function as $\Psi={\rm pr}^*_{K_{u,\gamma}}\psi$ for some set $K_{u,\gamma}$ and a function $\psi$ on $\Theta_{K_{u,\gamma}}$. Assume that $\gamma=\{e_1,\ldots,e_N\}$.
Then the operator $\hat{\varphi}^S_J$ acts on $\Psi$ as follows: \begin{equation} \hat{\varphi}^S_J\Psi:=\sum_{j=1}^N\epsilon(S,e_{j})\, {\rm pr}_{K_{u,\gamma}}^*(\partial_{x^J_{j}}\psi)\in{\rm Cyl}, \label{hphi_S} \end{equation} where $\{\partial_{x^J_{j}}\}$ are vector fields on $\Theta_{K_{u,\gamma}}$ given by the coordinate frame \eqref{lin-coor} and each $\epsilon(S,e_{j})$ is a certain real number. To define the number $\epsilon(S,e_j)$ we adapt the edge $e_{j}$ to $S$, obtaining thereby a set of edges $\{e_{j1},\ldots,e_{jn}\}$, and define a function $\epsilon$ on this set: $\epsilon(e_{ja})=0$ if $e_{ja}$ is contained in $\bar{S}$ or has no common points with $S$; in the remaining cases \begin{enumerate} \item $\epsilon(e_{ja}):=\frac{1}{2}$ if $e_{ja}$ is either 'outgoing' from $S$ and placed 'below' the face or is 'incoming' to $S$ and placed 'above' the face; \item $\epsilon(e_{ja}):=-\frac{1}{2}$ if $e_{ja}$ is either 'outgoing' from $S$ and placed 'above' the face or is 'incoming' to $S$ and placed 'below' the face. \end{enumerate} Here the terms 'outgoing' and 'incoming' refer to the orientation of the edges (which is inherited from the orientation of $e_{j}$) while the terms 'below' and 'above' refer to the orientation of the normal bundle of $S$ defined naturally by the orientations of $S$ and $\Sigma$. Then we define \[ \epsilon(S,e_{j}):=\sum_{a=1}^n \epsilon(e_{ja}). \] It is not difficult to realize that for every edge $e\in{\cal E}$ \begin{equation} \hat{\varphi}^S_J\kappa^L_{e}=\delta^L{}_J\,\epsilon(S,e) \label{hphiS-ke} \end{equation} which means that $\hat{\varphi}^S_J\kappa^L_{e}$ is a {\em constant} real cylindrical function which from now on will be regarded as a real number. \subsubsection{Linear space of the operators} Let us introduce a space $\hat{{\cal F}}$ as the real linear space spanned by all operators \eqref{hat-zeta-0} and \eqref{hphi_S}: \[ \hat{\cal F}:=\spn_{\mathbb{R}}\{\ \hat{\varphi}\ | \ \varphi\in{\cal F}\ \}.
\] Thus an element $\hat{\varphi}$ of $\hat{{\cal F}}$ is of the following form \[ \hat{\varphi}=\sum_{Ii} A^I_{i}\hat{\varphi}^{V_{i}}_I+\sum_{Jj} B^J_{j}\hat{\varphi}^{S_{j}}_J, \] where $A^I_{i},B^J_{j}$ are real numbers and both sums are finite. Let $\Psi={\rm pr}^*_{K_{u,\gamma}}\psi$ be a cylindrical function compatible with $K_{u,\gamma}$. Then \begin{multline} \hat{\varphi}\Psi={\rm pr}^*_{K_{u,\gamma}}\Big(\sum_{Iil} A^I_{i}\,\varepsilon(V_{i},y_{l})\,\partial_{z^I_{l}}\psi+\sum_{Jjn} B^J_{j}\,\epsilon(S_{j},e_{n})\,\partial_{x^J_{n}}\psi\Big)=\\=\sum_{I l} \Big({\rm pr}^*_{K_{u,\gamma}}\partial_{z^I_{l}}\psi\Big)\,\hat{\varphi}\kappa^I_{y_{l}}+\sum_{Jn} \Big({\rm pr}^*_{K_{u,\gamma}}\partial_{x^J_{n}}\psi\Big)\,\hat{\varphi}\kappa^J_{e_{n}}, \label{hphi-Psi} \end{multline} where in the first step we used \eqref{hat-zeta} and \eqref{hphi_S} and in the second one we applied \eqref{ze-xi-const} and \eqref{hphiS-ke}. \subsection{A directed set $(\Lambda,\geq)$} All the considerations above were preparatory steps towards the crucial one, which is the choice of a directed set $(\Lambda,\geq)$---once such a set is {\em chosen properly} the prescription described in \cite{q-stat} can be used to build from it a unique space of quantum states. \subsubsection{General assumptions imposed on $(\Lambda,\geq)$ \label{ad-as}} Recall that $\mathbf{K}$ denotes the set of all sets of independent d.o.f.. Let $\hat{\mathbf{F}}$ be the set of all finite dimensional linear subspaces of $\hat{{\cal F}}$. A directed set $(\Lambda,\geq)$, where $\Lambda\subset\hat{\mathbf{F}}\times \mathbf{K}$, is chosen properly if it satisfies the following {\bf Assumptions} \cite{q-stat}: \begin{enumerate} \item \begin{enumerate} \item for each finite set $K_0$ of configurational elementary d.o.f. there exists $(\hat{F},K)\in\Lambda$ such that each $\kappa\in K_0$ is a cylindrical function compatible with $K$; \label{k-Lambda} \item for each finite set $F_0$ of momentum elementary d.o.f.
there exists $(\hat{F},K)\in\Lambda$ such that $\hat{\varphi}\in\hat{F}$ for every $\varphi\in F_0$; \label{f-Lambda} \end{enumerate} \item \label{RN} if $(\hat{F},K)\in\Lambda$ then the image of the map $\tilde{K}$ given by \eqref{k-inj} is $\mathbb{R}^N$ (where $N$ is the number of elements of $K$)---in other words, $\tilde{K}$ is a bijection and consequently \[ \Theta_K\cong\mathbb{R}^N. \] \item if $(\hat{F},K)\in\Lambda$, then \begin{enumerate} \item for every $\hat{\varphi}\in \hat{{\cal F}}$ and for every cylindrical function $\Psi={\rm pr}_K^*\psi$ compatible with $K=\{\kappa_1,\ldots,\kappa_N\}$ \[ \hat{\varphi}\Psi=\sum_{\alpha=1}^N\Big({\rm pr}^*_K\partial_{x_{\alpha}}\psi\Big)\hat{\varphi}\kappa_{\alpha}, \] where $\{\partial_{x_{\alpha}}\}$ are vector fields on $\Theta_K$ given by the natural coordinate frame \eqref{lin-coor-0}; \label{comp-f} \item for every $\hat{\varphi}\in \hat{{\cal F}}$ and for every $\kappa\in K$ the cylindrical function $\hat{\varphi}\kappa$ is a real {\em constant} function on $\Theta$; \label{const} \end{enumerate} \item if $(\hat{F},K)\in\Lambda$ and $K=\{\kappa_{1},\ldots,\kappa_{N}\}$ then $\dim\hat{F}=N$; moreover, if $(\hat{\varphi}_1,\ldots,\hat{\varphi}_N)$ is a basis of $\hat{F}$ then the $N\times N$ matrix $G=(G_{\beta\alpha})$ of components \[ G_{\beta\alpha}:=\hat{\varphi}_{\beta}\kappa_{\alpha} \] is {\em non-degenerate}. \label{non-deg} \item if $(\hat{F},K'),(\hat{F},K)\in\Lambda$ and $\Theta_{K'}=\Theta_{K}$ then $(\hat{F},K')\geq(\hat{F},K)$; \label{Q'=Q} \item if $(\hat{F}',K')\geq(\hat{F},K)$ then \begin{enumerate} \item each d.o.f. in $K$ is {\em a linear combination} of d.o.f. in $K'$; \label{lin-comb} \item $\hat{F}\subset\hat{F}'$. \label{FF'} \end{enumerate} \end{enumerate} \subsubsection{Speckled graphs \label{speckl}} In the considerations above an important role was played by the sets $\{K_{u,\gamma}\}$.
Therefore one may try to use these sets to define a set $\Lambda$ as one consisting of pairs $(\hat{F},K_{u,\gamma})$ which satisfy all Assumptions listed in the previous section. However, we will not use all sets $K_{u,\gamma}$ to define $\Lambda$ but will restrict ourselves to some of them. To justify our decision let us refer to the general construction presented in \cite{q-stat} (see its outline in Section \ref{out}). According to it, to every $(\hat{F},K_{u,\gamma})\equiv\lambda\in\Lambda$ we will have to associate a Hilbert space ${\cal H}_\lambda$ of some square integrable functions on $\Theta_{K_{u,\gamma}}$ (then density operators on all such Hilbert spaces will be used to build the space ${\cal D}$). It seems to us that it would be highly desirable if one could define on each Hilbert space ${\cal H}_\lambda$ a sort of quantum geometry related to the Riemannian geometry of $\Sigma$. But to achieve this we have to guarantee that the d.o.f. in $K_{u,\gamma}$ can be used to extract some consistent information about the geometry. Since the geometry is given by the metric $q$ we have to require that the d.o.f. provide some consistent information about the metric. Let us now analyze this issue more carefully. The metric $q$ is defined by \eqref{q} in terms of the variables $(\theta^0,\theta^J)$, \begin{equation} q=-\theta^0\otimes\theta^0+\delta_{IJ}\theta^I\otimes\theta^J. \label{q-0J} \end{equation} Thus we should require the d.o.f. constituting $K_{u,\gamma}$ to give us information about both the one-form $\theta^0$ and the other forms $(\theta^J)$. Of course, information about $(\theta^J)$ is given by the d.o.f. $\{\kappa^J_e\}$ defined as integrals \eqref{k-e} of the forms along edges of the graph $\gamma$. Therefore, to achieve consistency, we should be able to approximate integrals of $\theta^0$ along the edges by means of the d.o.f.. Consider $K_{u,\gamma}$ defined by a set $u=\{y\}$ and a graph $\gamma=\{e\}$.
Thus $K_{u,\gamma}=\{\kappa^I_y,\kappa^J_e\}$. Suppose now that $y\in e$. Because \[ \theta^0=\sgn(\theta^J)\frac{\xi_I}{\sqrt{1+\xi_K\xi^K}}\,\theta^I \] (see \eqref{old-new}) the integral \begin{equation} \int_e\theta^0 \label{e-th0} \end{equation} can be approximated {\em modulo the factor} $\sgn(\theta^I)$ by \begin{equation} \frac{\xi_I(y)}{\sqrt{1+\xi_L(y)\xi^L(y)}}\int_e\theta^I=\frac{\kappa_{Iy}(\theta)\kappa^I_e(\theta)}{\sqrt{1+\kappa_{Ly}(\theta)\kappa^L_y(\theta)}}, \label{t0-app} \end{equation} where $\theta=(\theta^0,\theta^J)=(\xi^I,\theta^J)\in\Theta$. If, however, $y\not\in e$ then in general we cannot obtain from the d.o.f. $\{\kappa^I_{y},\kappa^J_e\}$ a good approximation of the integral \eqref{e-th0}. Thus we conclude that to define the set $\Lambda$ we should use sets $\{K_{u,\gamma}\}$ such that each point $y\in u$ belongs to an edge of $\gamma$ and for each edge $e$ of $\gamma$ the intersection $e\cap u$ consists of exactly one point. However, this conclusion may seem a bit premature because, while drawing it, we neglected the absence of the factor $\sgn(\theta^I)$ in the formula \eqref{t0-app}. It turns out that, given $K_{u,\gamma}$, the d.o.f. in it do not contain any information about the factor---this fact is a consequence of the following lemma \cite{q-suit}: \begin{lm} Let $\gamma=\{e_1,\ldots,e_N\}$ be a graph. Then for every $(x^I_{i})\in\mathbb{R}^{3N}$ there exists a global coframe $(\theta^I)$ on $\Sigma$ compatible (incompatible) with the orientation of the manifold such that \[ \int_{e_{i}}\theta^I=x^I_{i} \] for every $I=1,2,3$ and $i=1,2,\ldots,N$. \label{theta-x3} \end{lm} \noindent The lemma means that for every $\theta\equiv(\xi^I,\theta^J)\in\Theta$ the equivalence class $[\theta]$ defined by $K_{u,\gamma}$ contains points of $\Theta$ given by global coframes compatible as well as global coframes incompatible with the orientation of $\Sigma$.
Hence no function on $K_{u,\gamma}$ can be an approximation of $\sgn(\theta^I)$. Is the impossibility of approximating $\sgn(\theta^I)$ by functions on $K_{u,\gamma}$ a problem? It could be, if relevant quantities such as those describing the geometry of $\Sigma$ as well as constraints and Hamiltonians depended on $\sgn(\theta^I)$. Note that the metric $q$ is a quadratic function of $\theta^0$. Therefore the sign $\sgn(\theta^I)$ is irrelevant for the metric---see the formula \eqref{q-xi} expressing the metric in terms of $(\xi^I,\theta^J)$. Consequently, geometric quantities on $\Sigma$ including the Hodge operator $*$ given by $q$ and the orientation of $\Sigma$ do not depend on $\sgn(\theta^I)$. Regarding the constraints (and the Hamiltonians\footnote{The Hamiltonians of TEGR and YMTM are sums of constraints.}) of TEGR and YMTM, an important observation is that they are {\em quite specific} functions of $(\theta^A,p_B)$ and a variable $\xi^A$ defined as a solution of the following system of equations\footnote{Clearly, $\xi^A$ is a function of $\theta^A$---see \cite{os}.} \cite{nester}: \begin{align*} &\xi_A\theta^A=0&& \xi_A\xi^A=-1. \end{align*} Namely, these three variables appear in the constraints exclusively in the form of either $(i)$ a contraction with respect to the index $A$, e.g. $\xi^Adp_A$, or $(ii)$ scalar products defined by $\eta$ (or its inverse), e.g. $\eta_{AB}d\theta^A\wedge*d\theta^B$ (or $\eta^{AB}p_A\wedge*p_B)$. Since the matrix $(\eta_{AB})$ (and its inverse) is diagonal, two time-like components (that is, components with $A=0$) of the variables always multiply each other, e.g. \[ \xi^Adp_A=\xi^0dp_0+\xi^Idp_I\quad\quad\text{or} \quad\quad \eta_{AB}d\theta^A\wedge*d\theta^B=-d\theta^0\wedge*d\theta^0+d\theta^I\wedge*d\theta^I.
\] On the other hand, every time-like component of the variables under consideration is proportional to $\sgn(\theta^I)$ \cite{ham-nv}: \begin{align*} &p_0=\sgn(\theta^L)\sqrt{1+\xi_J\xi^J}\,\vec{\theta}^I\lrcorner\,\zeta_I,&&\theta^0=\sgn(\theta^J)\frac{\xi_I}{\sqrt{1+\xi_L\xi^L}}\,\theta^I, && \xi^0=\sgn(\theta^I)\sqrt{1+\xi_J\xi^J}, \end{align*} and the space-like components (that is, components with $A\in\{1,2,3\}$) of the variables are independent of the factor. Thus the factor appears in the constraints exclusively as $[\sgn(\theta^I)]^2\equiv 1$ and, consequently, the constraints expressed in terms of $(\zeta_I,r_J,\xi^K,\theta^L)$ are independent of $\sgn(\theta^I)$---see \cite{ham-nv} for explicit expressions of them. Thus (at least at this stage) the impossibility of expressing $\sgn(\theta^I)$ by functions on $K_{u,\gamma}$ does not seem to cause any problem. Let us return to the conclusion placed just below the formula \eqref{t0-app}. It motivates us to introduce a special kind of graph: \begin{df} A speckled graph $\dot{\gamma}$ in $\Sigma$ is a pair $(u,\gamma)$, where $u$ is a finite set of points in $\Sigma$ and $\gamma$ is a graph, for which there exists a surjective map $\chi:\gamma\to u$ such that $\chi(e)\in e$ for every $e\in\gamma$. \end{df} Let $\dot{\gamma}=(u,\gamma)$ be a speckled graph. We will denote \[ K_{u,\gamma}\equiv K_{\dot{\gamma}}. \] Now the conclusion mentioned above can be reformulated: to define a set $\Lambda$ for TEGR we should use sets $\{K_{\dot{\gamma}}\}$ given by all speckled graphs in $\Sigma$. Let us now take a closer look at these graphs. \subsubsection{Properties of speckled graphs} Consider now a pair $(u,\gamma)$, where $u$ is a finite set of points and $\gamma$ is a graph.
Of course, $(u,\gamma)$ may not be a speckled graph because it may happen that $(i)$ there exist elements of $u$ which do not belong to any edge of $\gamma$; $(ii)$ there are edges of $\gamma$ to which no point in $u$ belongs; or $(iii)$ there are edges of $\gamma$ such that two or more distinct points of $u$ belong to each of the edges. Note however that $(u,\gamma)$ can be easily transformed to a speckled graph as follows: in the case of a point $y$ in $u$ of the sort $(i)$ one can choose an edge $e$ such that $y$ is the only point of $u$ which belongs to $e$ and $\gamma\cup \{e\}$ is a graph; in the case of an edge of the sort $(ii)$ one can single out a point in the edge and add it to $u$; in the case of an edge of the sort $(iii)$ one can divide it into smaller edges such that each smaller one contains exactly one point belonging to $u$. \begin{cor} For every pair $(u,\gamma)$, where $u$ is a finite set of points and $\gamma$ is a graph, there exists a speckled graph $\dot{\gamma}'=(u',\gamma')$ such that $u'\supset u$ and $\gamma'\geq\gamma$. \label{ug-dg} \end{cor} We will say that a speckled graph $\dot{\gamma}'$ is {\em greater than or equal} to a speckled graph $\dot{\gamma}$, \begin{equation} \dot{\gamma}'=(u',\gamma')\geq\dot{\gamma}=(u,\gamma), \label{spg->} \end{equation} if $u'\supset u$ and $\gamma'\geq\gamma$. \begin{lm} The set of all speckled graphs in $\Sigma$ equipped with the relation \eqref{spg->} is a directed set. \end{lm} \begin{proof} The relation \eqref{spg->} is obviously transitive. Let us then show that for every two speckled graphs $\dot{\gamma}'=(u',\gamma'),\dot{\gamma}=(u,\gamma)$ there exists a speckled graph $\dot{\gamma}''$ such that $\dot{\gamma}''\geq\dot{\gamma}'$ and $\dot{\gamma}''\geq\dot{\gamma}$. Let $u_0:=u'\cup u$ and let $\gamma_0$ be a graph such that $\gamma_0\geq\gamma',\gamma$.
Due to Corollary \ref{ug-dg} there exists a speckled graph $\dot{\gamma}''=(u'',\gamma'')$ such that $u''\supset u_0$ and $\gamma''\geq\gamma_0$. Thus $\dot{\gamma}''$ is the desired speckled graph. \end{proof} \begin{lm} For every finite set $K$ of configurational elementary d.o.f. there exists a speckled graph ${\dot{\gamma}}$ such that each d.o.f. in $K$ is a linear combination of d.o.f. in $K_{\dot{\gamma}}$. \label{K-Kdg} \end{lm} \begin{proof} By virtue of Corollary \ref{K-lc-Kug} each d.o.f. in $K$ is a linear combination of d.o.f. in $K_{u,\gamma}$ given by a pair $(u,\gamma)$ consisting of a finite set $u$ of points in $\Sigma$ and a graph $\gamma$. On the other hand, Equations \eqref{keke} and Corollary \ref{ug-dg} allow us to conclude that there exists a speckled graph $\dot{\gamma}'$ such that each d.o.f. in $K_{u,\gamma}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}'}$. \end{proof} Due to Lemmas \ref{lm-Kug-xi} and \ref{K-Kdg} a set of all sets $K_{\dot{\gamma}}$, where $\dot{\gamma}$ runs through all speckled graphs in $\Sigma$, meets the requirement satisfied by the set $\mathbf{K}'$ in Proposition \ref{big-pro}. This means that for every $\Psi\in{\rm Cyl}$ there exists a speckled graph $\dot{\gamma}$ such that $\Psi$ is a cylindrical function compatible with $K_{\dot{\gamma}}$. \begin{lm} $\dot{\gamma}'\geq\dot{\gamma}$ if and only if each d.o.f. in $K_{\dot{\gamma}}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}'}$. \label{g'g-lin} \end{lm} \begin{proof} If $\dot{\gamma}'\geq\dot{\gamma}$ then using Equations \eqref{keke} we can easily conclude that each d.o.f. in $K_{\dot{\gamma}}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}'}$. Let $\dot{\gamma}'=(u',\gamma')$ and $\dot{\gamma}=(u,\gamma)$, where $u=\{y_1,\ldots,y_{M}\}$. Suppose now that each d.o.f. in $K_{\dot{\gamma}}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}'}$.
Taking into account the formula \eqref{k-y} we see that then each $\kappa^I_{y_{i}}$ belonging to $K_{\dot{\gamma}}$ belongs to $K_{\dot{\gamma}'}$. Thus $u'\supset u$. Now let us show that $\gamma'\geq \gamma$. To this end consider a set $\Omega$ of one-forms on $\Sigma$ defined as follows: a one-form $\varpi$ belongs to $\Omega$ if there exist one-forms $\theta^2,\theta^3$ such that $(\varpi,\theta^2,\theta^3)$ form a global coframe on $\Sigma$. Then for any real functions $\{\xi^I\}$, $I=1,2,3$, on $\Sigma$ the collection $\theta=(\xi^I,\varpi,\theta^2,\theta^3)$ is an element of $\Theta$. Given $e\in{\cal E}$, we define a real function $\kappa_e$ on $\Omega$, \[ \kappa_{e}(\varpi):=\kappa^1_{e}(\theta)=\int_e\varpi, \] and apply Lemma \ref{lm-Kug-xi} to conclude that for every graph $\gamma_0=\{e_1,\ldots,e_{N_0}\}$ and for each $(x_1,\ldots,x_{N_0})\in\mathbb{R}^{N_0}$ there exists $\varpi\in\Omega$ such that $\kappa_{e_{i}}(\varpi)=x_{i}$. Let us come back to the supposition that each d.o.f. in $K_{\dot{\gamma}}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}'}$, where $\dot{\gamma}=(u,\gamma)$ and $\dot{\gamma}'=(u',\gamma')$. Obviously, a combination describing $\kappa^1_e$ defined by an edge $e$ of $\gamma$ cannot contain d.o.f. $\{\kappa^J_{y'}\}$ given by points $\{y'\}=u'$. Thus \[ \kappa^1_e=A^i\kappa^1_{e'_i}+B^i\kappa^2_{e'_i}+C^i\kappa^3_{e'_i}, \] where $A^i,B^j,C^k$ are constant coefficients and $\gamma'=\{e'_1,\ldots,e'_{N'}\}$. Given $\theta=(\xi^I,\theta^J)\in\Theta$, consider a family $\{\theta_t=(\xi^I,\theta^1,t\theta^2,\theta^3)\}$ of elements of $\Theta$, where $t>0$. Differentiating with respect to $t$ both sides of the equation \[ \kappa^1_e(\theta_t)=A^i\kappa^1_{e'_i}(\theta_t)+B^i\kappa^2_{e'_i}(\theta_t)+C^i\kappa^3_{e'_i}(\theta_t) \] we obtain \[ 0=B^i\kappa^2_{e'_i}(\theta), \] hence $B^i=0$ by virtue of Lemma \ref{lm-Kug-xi}. Similarly we show that $C^i=0$.
We conclude that each $\kappa^1_{e}\in K_{\dot{\gamma}}$ is a linear combination of d.o.f. $\{\kappa^1_{e'_{j}}\}\subset K_{\dot{\gamma}'}$ only. Thus each function in $\{\kappa_{e_1},\ldots,\kappa_{e_N}\}$ associated with edges of the graph $\gamma$ is a linear combination of functions $\{\kappa_{e'_1},\ldots,\kappa_{e'_{N'}}\}$ associated with edges of $\gamma'$. Now, to conclude that $\gamma'\geq\gamma$ it is enough to apply the following lemma \cite{q-stat}: \begin{lm} Let $\Omega$ be a set of one-forms on $\Sigma$ such that for every graph $\gamma_0=\{e_1,\ldots,e_{N_0}\}$ and for each $(x_1,\ldots,x_{N_0})\in\mathbb{R}^{N_0}$ there exists $\varpi\in\Omega$ such that \[ \kappa_{e_{i}}(\varpi)=x_{i}. \] Then $\gamma'=\{e'_1,\ldots,e'_{N'}\}\geq\gamma=\{e_1,\ldots,e_N\}$ if and only if each function in $\{\kappa_{e_1},\ldots,\kappa_{e_N}\}$ is a linear combination of functions $\{\kappa_{e'_1},\ldots,\kappa_{e'_{N'}}\}$. \end{lm} Thus $\gamma'\geq\gamma$ and, taking into account the previous result $u'\supset u$, we see that $\dot{\gamma}'\geq\dot{\gamma}$. \end{proof} \subsubsection{Choice of a directed set $\Lambda$} Consider an element $\hat{F}$ of $\hat{\mathbf{F}}$ and an element $K=\{\kappa_1,\ldots,\kappa_{N}\}$ of $\mathbf{K}$. We say that a pair $(\hat{F},K)$ is {\em non-degenerate} if $\dim\hat{F}=N$ and the $(N\times N)$-matrix $G=(G_{\beta\alpha})$ of components \begin{equation} G_{\beta\alpha}:=\hat{\varphi}_{\beta}\kappa_{\alpha}, \label{matr-G} \end{equation} where $(\hat{\varphi}_1,\ldots,\hat{\varphi}_{N})$ is a basis of $\hat{F}$, is non-degenerate. \begin{df} The set $\Lambda$ is a set of all non-degenerate pairs $(\hat{F},K_{\dot{\gamma}})\in\hat{\mathbf{F}}\times \mathbf{K}$, where $\dot{\gamma}$ runs through all speckled graphs in $\Sigma$. \label{df-Lambda} \end{df} \begin{lm} For every speckled graph $\dot{\gamma}$ in $\Sigma$ there exists $\hat{F}\in\hat{\mathbf{F}}$ such that $(\hat{F},K_{\dot{\gamma}})\in\Lambda$.
\label{every-g} \end{lm} \begin{proof} Let $\dot{\gamma}=(u,\gamma)$, where $u=\{y_1,\ldots,y_M\}$ and $\gamma=\{e_1,\ldots,e_N\}$ ($M\leq N$). There exist regions $\{V_1,\ldots,V_M\}$ such that $V_{j}\cap u=\{y_{j}\}$. Consequently, \[ \varepsilon(V_{j},y_{i})=-\delta_{ji} \] and introducing multi-labels $\alpha=(i,I)$ and $\beta=(j,J)$ we can write \[ G^1_{\beta\alpha}:=-\hat{\varphi}^{V_{j}}_J\kappa^I_{y_{i}}=\delta^{I}{}_J\delta_{ji}=\delta_{\beta\alpha}. \] The independence of the edges $\{e_1,\ldots,e_N\}$ of the graph $\gamma$ implies that there exists a set $\{S_1,\ldots,S_N\}$ of faces such that $e_{i}\cap S_{j}$ is empty if $i\neq j$ and consists of exactly one point distinct from the endpoints of $e_{i}$ if $i=j$. The orientations of the faces can be chosen in such a way that \[ \epsilon(S_{j},e_{i})=\delta_{ji}. \] Using the multi-labels $\alpha=(i,I)$ and $\beta=(j,J)$ we can write \[ G^2_{\beta\alpha}=\hat{\varphi}^{S_{j}}_J\kappa^I_{e_{i}}=\delta^{I}{}_J\delta_{ji}=\delta_{\beta\alpha}. \] Since \[ -\hat{\varphi}^{V_{j}}_J\kappa^I_{e_{i}}=0=\hat{\varphi}^{S_{j}}_J\kappa^I_{y_{i}} \] the matrix $G$ given by \eqref{matr-G} for $K_{\dot{\gamma}}$ and \[ F_0:=\{\ -\hat{\varphi}^{V_{i}}_I,\hat{\varphi}^{S_{j}}_J\ | \ I,J=1,2,3;\, i=1,\ldots,M;\, j=1,\ldots,N\ \} \] is of the following form \[ G= \begin{pmatrix} G^1 & \mathbf{0}\\ \mathbf{0} & G^2 \end{pmatrix}=\mathbf{1} \] and, being the unit $3(M+N)\times3(M+N)$ matrix, is obviously non-degenerate. Thus if \[ \hat{F}={\rm span}_{\mathbb{R}}\,F_0 \] then elements of $F_0$ constitute a basis of $\hat{F}$ and $(\hat{F},K_{\dot{\gamma}})\in\Lambda$. \end{proof} Now let us define a relation $\geq$ on $\Lambda$: \begin{df} Let $(\hat{F}',K_{\dot{\gamma}'}),(\hat{F},K_{\dot{\gamma}})\in\Lambda$. Then $(\hat{F}',K_{\dot{\gamma}'})\geq (\hat{F},K_{\dot{\gamma}})$ if and only if \begin{align*} &\hat{F}'\supset\hat{F} && \text{and} && \dot{\gamma}'\geq\dot{\gamma}.
\end{align*} \label{df-Lambda->} \end{df} \begin{lm} $(\Lambda,\geq)$ is a directed set. \label{Lambda-dir} \end{lm} Regarding a proof of the lemma, it would perhaps be enough to refer to the proof of an analogous lemma in \cite{q-stat} concerning a set $\Lambda$ constructed for DPG, and to say that a proof of Lemma \ref{Lambda-dir} is a modification of that proof. However, taking into account the importance of Lemma \ref{Lambda-dir}, to avoid any doubt we decided to present the proof explicitly. Before proving the lemma let us state some facts which will be used in the proof. Let $\bld{\Psi}$ be a subset of ${\rm Cyl}$. Then operators in $\hat{{\cal F}}$ restricted to $\bld{\Psi}$ are maps from $\bld{\Psi}$ into ${\rm Cyl}$. Since both ${\rm Cyl}$ and $\hat{{\cal F}}$ are linear spaces, the restricted operators are maps valued in a linear space and the space of all the restricted operators is a linear space. Consequently, the notion of linear independence of the restricted operators is well defined---below this notion will be called {\em linear independence of the operators on} $\bld{\Psi}$. \begin{lm} Let ${\rm Cyl}_K$ be a set of all cylindrical functions compatible with a set $K$ of independent d.o.f.. Assume that operators $\{\hat{\varphi}_1,\ldots,\hat{\varphi}_M\}\subset \hat{{\cal F}}$ act on elements of ${\rm Cyl}_K$ according to the formula in Assumption \ref{comp-f}. If $\{\hat{\varphi}_1,\ldots,\hat{\varphi}_M\}\subset \hat{{\cal F}}$ are linearly independent on a subset $\bld{\Psi}$ of ${\rm Cyl}_K$ then they are linearly independent on $K$. \label{cyl-K} \end{lm} \begin{pro} Let $\Lambda$ be a subset of $\hat{\mathbf{F}}\times\mathbf{K}$ which satisfies Assumptions \ref{k-Lambda} and \ref{comp-f}. Then for every finite set $\{\hat{\varphi}_1,\ldots,\hat{\varphi}_M\}\subset\hat{\cal F}$ of linearly independent operators there exists an element $(\hat{F},K)\in\Lambda$ such that the operators are linearly independent on $K$.
\label{Lambda-pr} \end{pro} \noindent Both the lemma and the proposition are proven in \cite{q-stat}. \begin{proof}[Proof of Lemma \ref{Lambda-dir}] The transitivity of the relation $\geq$ is obvious. Thus we have to prove only that for any two elements $\lambda',\lambda\in\Lambda$ there exists $\lambda''\in\Lambda$ such that $\lambda''\geq\lambda'$ and $\lambda''\geq\lambda$. To achieve this we will refer to Lemma \ref{cyl-K} and Proposition \ref{Lambda-pr}. Therefore first we have to show that we are allowed to use them. By virtue of Lemmas \ref{K-Kdg} and \ref{every-g} the set $\Lambda$ satisfies Assumption \ref{k-Lambda}. On the other hand, Equation \eqref{hphi-Psi} guarantees that every $\hat{\varphi}\in \hat{{\cal F}}$ acts on cylindrical functions compatible with $K_{\dot{\gamma}}$ according to the formula in Assumption \ref{comp-f} hence $\Lambda$ meets the assumption. Let us fix $\lambda'=(\hat{F}',K_{\dot{\gamma}'})$ and $\lambda=(\hat{F},K_{\dot{\gamma}})$. We define $\hat{F}_0$ as a linear subspace of $\hat{{\cal F}}$ spanned by elements of $\hat{F}'\cup\hat{F}$ and choose a basis $(\hat{\varphi}_1,\ldots,\hat{\varphi}_M)$ of $\hat{F}_0$. Proposition \ref{Lambda-pr} and Definition \ref{df-Lambda} of $\Lambda$ guarantee that there exists a speckled graph $\dot{\gamma}_0$ such that the operators $(\hat{\varphi}_1,\ldots,\hat{\varphi}_M)$ remain linearly independent when restricted to $K_{\dot{\gamma}_0}$. Let $\dot{\gamma}''=(u'',\gamma'')$ be a speckled graph such that $(i)$ the number $3N$ of elements of $K_{\dot{\gamma}''}$ is greater than $\dim \hat{F}_0=M$ and $(ii)$ $\dot{\gamma}''\geq \dot{\gamma}_0,\dot{\gamma}',\dot{\gamma}$. By virtue of Lemma \ref{g'g-lin} d.o.f. in $K_{\dot{\gamma}_0}$ are cylindrical functions compatible with $K_{\dot{\gamma}''}$ and, according to Lemma \ref{cyl-K}, the operators $(\hat{\varphi}_1,\ldots,\hat{\varphi}_M)$ are linearly independent on $K_{\dot{\gamma}''}$. 
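As an aside, the remaining part of the argument is pure finite-dimensional linear algebra: a rectangular matrix of components with full row rank is completed to a non-degenerate square matrix. This step can be mimicked numerically on a toy example (Python/NumPy; the matrices below are arbitrary illustrative choices, not data coming from the formalism):

```python
import numpy as np

# Toy stand-in for G0: an M x 3N matrix of full row rank M
# (its entries play the role of components of operators on the d.o.f.).
M, threeN = 2, 4
G0 = np.array([[1.0, 2.0, 0.0, 1.0],
               [0.0, 1.0, 1.0, 0.0]])
assert np.linalg.matrix_rank(G0) == M

# Row operations (here: R1 -> R1 - 2*R2) bring G0 to the form [ 1 | G' ]
G1 = np.array([[1.0, 0.0, -2.0, 1.0],
               [0.0, 1.0,  1.0, 0.0]])
assert np.allclose(G1[:, :M], np.eye(M))

# Completing with rows ( 0 | 1' ) yields a square block-triangular matrix
G = np.vstack([G1,
               np.hstack([np.zeros((threeN - M, M)), np.eye(threeN - M)])])
assert G.shape == (threeN, threeN)
assert abs(np.linalg.det(G)) > 1e-12   # non-degenerate completion
```

The block-triangular shape makes the non-degeneracy of the completed matrix manifest, exactly as in the proof below.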
Consider now a matrix $G^0$ of components \[ G^0_{\beta\alpha}:=\hat{\varphi}_{\beta}\kappa_{\alpha}, \] where $\{\kappa_1,\ldots,\kappa_{3N}\}=K_{\dot{\gamma}''}$. Clearly, the matrix has $M$ rows and $3N$ columns and, because $(\hat{\varphi}_1,\ldots,\hat{\varphi}_M)$ are linearly independent on $K_{\dot{\gamma}''}$, its rank is equal to $M<3N$. Using the following operations: $(i)$ multiplying a row by a non-zero number, $(ii)$ adding to a row a linear combination of other rows, $(iii)$ reordering the rows and $(iv)$ reordering the columns, we can transform the matrix $G^0$ to a matrix $G^1$ of the following form \[ G^1= \begin{pmatrix} \mathbf{1} & G' \end{pmatrix}, \] where $\mathbf{1}$ is the $M\times M$ unit matrix and $G'$ is an $M\times(3N-M)$ matrix. Note that the first three operations used to transform $G^0$ to $G^1$ correspond to a transformation of the basis $(\hat{\varphi}_1,\ldots,\hat{\varphi}_M)$ to another basis $(\hat{\varphi}'_1,\ldots,\hat{\varphi}'_M)$ of $\hat{F}_0$, while the fourth operation corresponds to renumbering the d.o.f. in $K_{\dot{\gamma}''}$: $\kappa_{\alpha}\mapsto \kappa'_{\alpha}:=\kappa_{\sigma(\alpha)}$, where $\sigma$ is a permutation of the sequence $(1,\ldots,3N)$. Thus \[ G^1_{\beta\alpha}=\hat{\varphi}'_{\beta}\kappa'_{\alpha}. \] Let $\{\hat{\varphi}^0_1,\ldots,\hat{\varphi}^0_{3N}\}$ be operators constructed with respect to $K_{\dot{\gamma}''}$ exactly as in the proof of Lemma \ref{every-g}. Then \[ \hat{\varphi}^0_{\beta}\kappa'_{\alpha}=\delta_{\beta\alpha}.
\] Thus if \[ \Big(\hat{\varphi}''_1,\ldots,\hat{\varphi}''_{3N}\Big):=\Big(\hat{\varphi}'_1,\ldots,\hat{\varphi}'_{M},\hat{\varphi}^0_{M+1},\ldots,\hat{\varphi}^0_{3N}\Big) \] then the $3N\times3N$ matrix $G=(G_{\beta\alpha})$ of components \[ G_{\beta\alpha}:=\hat{\varphi}''_{\beta}\kappa'_{\alpha} \] is of the following form \[ G= \begin{pmatrix} \mathbf{1} & G'\\ \mathbf{0} & \mathbf{1}' \end{pmatrix}, \] where $\mathbf{0}$ is the zero $(3N-M)\times M$ matrix and $\mathbf{1}'$ is the unit $(3N-M)\times(3N-M)$ matrix. The matrix $G$ is obviously non-degenerate, which means in particular that the operators $(\hat{\varphi}''_1,\ldots,\hat{\varphi}''_{3N})$ are linearly independent. To finish the proof it is enough to define \[ \hat{F}'':={\rm span}_{\mathbb{R}} \{\hat{\varphi}''_1,\ldots,\hat{\varphi}''_{3N}\} \] and $\lambda'':=(\hat{F}'',K_{\dot{\gamma}''})$. \end{proof} \subsubsection{Checking Assumptions} Now we have to check whether the directed set $(\Lambda,\geq)$ just constructed satisfies all Assumptions listed in Section \ref{ad-as}. While proving Lemma \ref{Lambda-dir} we showed that $\Lambda$ satisfies Assumption \ref{k-Lambda}. Regarding Assumption \ref{f-Lambda}, consider a set $F_0=\{\varphi_1,\ldots,\varphi_N\}$ of momentum elementary d.o.f.. Let us fix $\varphi_i\in F_0$. Suppose that it is of the sort \eqref{phi-V}, i.e. $\varphi_i=\varphi^{V_i}_{I_i}$ for some region $V_i$ and some $I_i\in\{1,2,3\}$. Then using a construction similar to that applied in the proof of Lemma \ref{every-g} one can find configurational d.o.f. $\{\kappa^{I}_{y_i}\}$ such that \[ \hat{\varphi}^{V_i}_I\kappa^J_{y_i}=-\delta^J{}_I \] for every $I,J=1,2,3$. Let $e_i$ be an edge such that $y_i\in e_i$. Then $\dot{\gamma}_i:=(\{y_i\},\{e_i\})$ is a speckled graph. As in the proof of Lemma \ref{every-g} one can find a face $S_i$ such that \[ \hat{\varphi}^{S_i}_J\kappa^L_{e_i}=\delta^L{}_J \] for every $J,L=1,2,3$.
Let \[ \hat{F}_i:={\rm span}\{\ \hat{\varphi}^{V_i}_I, \hat{\varphi}^{S_i}_J \ |\ I,J=1,2,3 \ \}. \] Then $\hat{\varphi}_i\in\hat{F}_i$ and $(\hat{F}_i,K_{\dot{\gamma}_i})\in\Lambda$. Suppose now that $\varphi_i$ is of the sort \eqref{phi-S}. Then in a similar way one can construct an element $(\hat{F}_i,K_{\dot{\gamma}_i})$ of $\Lambda$ such that $\hat{\varphi}_i\in\hat{F}_i$. Since $\Lambda$ is a directed set there exists $(\hat{F},K_{\dot{\gamma}})\in \Lambda$ such that $(\hat{F},K_{\dot{\gamma}})\geq(\hat{F}_i,K_{\dot{\gamma}_i})$ for every $i=1,\ldots,N$. Taking into account Definition \ref{df-Lambda->} of the relation $\geq$ on $\Lambda$ we see that $\hat{F}$ contains all the operators $\{\hat{\varphi}_1,\ldots,\hat{\varphi}_N\}$. Thus Assumption \ref{f-Lambda} is satisfied. Assumption \ref{RN} is satisfied by virtue of Lemma \ref{lm-Kug-xi}. We already concluded (proving Lemma \ref{Lambda-dir}) that $\Lambda$ meets Assumption \ref{comp-f}. Equations \eqref{ze-xi-const} and \eqref{hphiS-ke} guarantee that Assumption \ref{const} is satisfied and Definition \ref{df-Lambda} of $\Lambda$ ensures that Assumption \ref{non-deg} holds. Consider now Assumption \ref{Q'=Q}. Let $(\hat{F},K_{\dot{\gamma}'}),(\hat{F},K_{\dot{\gamma}})$ be elements of $\Lambda$. Recall that by virtue of Lemma \ref{K-Kdg} there exists $K_{\dot{\gamma}''}$ such that each d.o.f. in $K_{\dot{\gamma}}\cup K_{\dot{\gamma}'}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}''}$. Suppose that $\Theta_{K_{\dot{\gamma}'}}=\Theta_{K_{\dot{\gamma}}}$. Then Lemma \ref{lm-Kug-xi} applied to $K_{\dot{\gamma}''}$ allows us to set $\bar{K}=K_{\dot{\gamma}''}$ and $K=K_{\dot{\gamma}}$, $K'=K_{\dot{\gamma}'}$ in the following proposition \cite{q-stat}: \begin{pro} Let $K,K'$ be sets of independent d.o.f. of $N$ and $N'$ elements respectively such that $\Theta_K=\Theta_{K'}$. Suppose that there exists a set $\bar{K}$ of independent d.o.f.
of $\bar{N}$ elements such that the image of $\tilde{\bar{K}}$ is $\mathbb{R}^{\bar{N}}$ and each d.o.f. in $K\cup K'$ is a linear combination of d.o.f. in $\bar{K}$. Then each d.o.f. in $K$ is a linear combination of d.o.f. in $K'$. \end{pro} \noindent Thus each d.o.f. in $K_{\dot{\gamma}}$ is a linear combination of d.o.f. in $K_{\dot{\gamma}'}$. Then, as stated by Lemma \ref{g'g-lin}, $\dot{\gamma}'\geq \dot{\gamma}$ and, taking into account Definition \ref{df-Lambda->}, Assumption \ref{Q'=Q} follows. Assumption \ref{lin-comb} holds by virtue of Definition \ref{df-Lambda->} of the relation $\geq$ on $\Lambda$ and Lemma \ref{g'g-lin}, while Assumption \ref{FF'} is satisfied due to the same definition. Thus the set $(\Lambda,\geq)$ satisfies all Assumptions. Consequently, it generates the space ${\cal D}$ of quantum states. \subsection{The space ${\cal D}$ of quantum states for TEGR \label{D}} Consider $\lambda=(\hat{F},K_{\dot{\gamma}})\in\Lambda$. The natural coordinates \eqref{lin-coor-0} define on the reduced configuration space $\Theta_{K_{\dot{\gamma}}}$ a measure \begin{equation} d\mu_\lambda:=dx_1\ldots dx_N. \label{dmu-la} \end{equation} The measure provides a Hilbert space \begin{equation} {\cal H}_\lambda:=L^2(\Theta_{K_{\dot{\gamma}}},d\mu_\lambda) \label{H-la} \end{equation} together with a set ${\cal D}_\lambda$ of all density operators (i.e. positive operators of trace equal to $1$) on ${\cal H}_\lambda$. It was shown in \cite{q-stat} that given two elements $\lambda',\lambda$ of $\Lambda$ such that $\lambda'\geq\lambda$ there exists a distinguished projection $\pi_{\lambda\la'}$ from ${\cal D}_{\lambda'}$ onto ${\cal D}_\lambda$. The projection is defined as follows. If $\lambda'=(\hat{F}',K')\geq\lambda=(\hat{F},K)$ then every $\kappa_\alpha\in K$ is a linear combination of d.o.f.
$\{\kappa'_1,\ldots,\kappa'_{N'}\}=K'$ (see Lemma \ref{g'g-lin}): \begin{equation} \kappa_\alpha=B^\beta{}_\alpha\kappa'_\beta, \label{k-Bk'} \end{equation} where $(B^\beta{}_\alpha)$ are real numbers. This relation defines a linear projection ${\rm pr}_{KK'}:\Theta_{K'}\mapsto\Theta_K$: \begin{equation} {\rm pr}_{KK'}:=\tilde{K}^{-1}\circ (B\tilde{K}'), \label{pr-KK} \end{equation} where $B\tilde{K}'$ means the action of the matrix $B=(B^\beta{}_\alpha)$ on the function $\tilde{K}'$ valued in the corresponding $\mathbb{R}^{N'}$. On the other hand, by virtue of Assumptions \ref{comp-f} and \ref{const} every $\hat{\varphi}\in\hat{\cal F}$ defines a constant vector field \begin{equation} \sum_\beta(\hat{\varphi}\kappa'_\beta)\partial_{x'_\beta} \label{v-const} \end{equation} on $\Theta_{K'}$, where $(x'_\beta)$ are the natural coordinates on $\Theta_{K'}$. Since there is a natural one-to-one linear correspondence between constant vector fields on $\Theta_{K'}$ and points of this space, every $\hat{\varphi}\in\hat{\cal F}$ distinguishes a point in $\Theta_{K'}$ which will be denoted by $[\hat{\varphi}]'$. The map $\hat{\varphi}\mapsto[\hat{\varphi}]'$ is linear and due to non-degeneracy of $(\hat{F}',K')$ its restriction to $\hat{F}'$ is invertible. Since $\hat{F}\subset \hat{F}'$ the image $[\hat{F}]'$ is a linear subspace of $\Theta_{K'}$ such that $\dim [\hat{F}]'=\dim\Theta_K$. It turns out that $\ker{\rm pr}_{KK'}\cap[\hat{F}]'=\{0\}$, hence \[ \Theta_{K'}=\ker{\rm pr}_{KK'}\oplus[\hat{F}]' \] and \[ \omega_{\lambda'\lambda}:=\big({\rm pr}_{KK'}\big|_{[\hat{F}]'}\big)^{-1} \] is a well defined linear isomorphism from $\Theta_K$ onto $[\hat{F}]'$. Using $\omega_{\lambda'\lambda}$ one pushes forward the measure $d\mu_\lambda$ obtaining a measure $d\mu_{\lambda'\lambda}$ on $[\hat{F}]'$ which allows one to define a Hilbert space ${\cal H}_{\lambda'\lambda}$ over $[\hat{F}]'$---this Hilbert space is naturally isomorphic to ${\cal H}_\lambda$.
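As a numerical aside, the finite-dimensional linear algebra behind the decomposition $\Theta_{K'}=\ker{\rm pr}_{KK'}\oplus[\hat{F}]'$ and the isomorphism $\omega_{\lambda'\lambda}$ can be illustrated by a small sketch (Python/NumPy; the matrix $B$ and the subspace spanned by the columns of $V$ are arbitrary toy choices, not data coming from the formalism):

```python
import numpy as np

# Toy dimensions: N' = 3 d.o.f. in K', N = 2 d.o.f. in K.
# kappa_alpha = B^beta_alpha kappa'_beta; in the natural coordinates the
# projection pr_{KK'} : Theta_{K'} -> Theta_K is then represented by B^T.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # illustrative choice
pr = B.T                              # 2 x 3 matrix

# ker pr_{KK'} is 1-dimensional here, spanned by k:
k = np.array([[1.0], [1.0], [-1.0]])
assert np.allclose(pr @ k, 0.0)

# A 2-dimensional subspace of Theta_{K'} playing the role of [F-hat]'
# (columns of V); non-degeneracy makes pr restricted to it invertible.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])           # illustrative choice
PV = pr @ V                           # matrix of pr restricted to the subspace
assert abs(np.linalg.det(PV)) > 1e-12

# The kernel and the subspace intersect only in 0 and together span R^3:
assert np.linalg.matrix_rank(np.hstack([k, V])) == 3

# omega = (pr restricted to the subspace)^{-1} : Theta_K -> [F-hat]'
omega = V @ np.linalg.inv(PV)
x = np.array([0.7, -1.3])
assert np.allclose(pr @ (omega @ x), x)   # omega is a right inverse of pr
```

The direct-sum decomposition verified here is exactly what makes the pushed-forward measure and the factorization of the Hilbert space well defined.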
There exists a unique measure $d\tilde{\mu}_{\lambda'\lambda}$ on $\ker{\rm pr}_{KK'}$ such that $d\mu_{\lambda'}=d\tilde{\mu}_{\lambda'\lambda}\times d\mu_{\lambda'\lambda}$---this measure provides a Hilbert space $\tilde{{\cal H}}_{\lambda'\lambda}$ such that \[ {\cal H}_{\lambda'}=\tilde{{\cal H}}_{\lambda'\lambda}\otimes{\cal H}_{\lambda'\lambda}. \] Acting on $\rho'\in{\cal D}_{\lambda'}$ by the partial trace with respect to the Hilbert space $\tilde{{\cal H}}_{\lambda'\lambda}$ one gets a density operator on ${\cal H}_{\lambda'\lambda}$ which can be naturally mapped to an element $\rho\in{\cal D}_\lambda$---by definition \[ \pi_{\lambda\lambda'}\rho':=\rho. \] An important observation is that the projection $\pi_{\lambda\la'}$ is fully determined by the projection ${\rm pr}_{KK'}$ and the subspace $[\hat{F}]'$. It turns out that for every triplet $\lambda'',\lambda',\lambda\in\Lambda$ such that $\lambda''\geq\lambda'\geq\lambda$ the corresponding projections satisfy the following consistency condition \begin{equation} \pi_{\lambda\la''}=\pi_{\lambda\la'}\circ\pi_{\lambda'\lambda''}, \label{pipipi} \end{equation} which means that $\{{\cal D}_\lambda,\pi_{\lambda\la'}\}_{\lambda\in\Lambda}$ is a {\em projective family}. The space ${\cal D}$ of quantum states for a theory of the phase space $P\times\Theta$ is the {\em projective limit} of the family: \[ {\cal D}:=\underleftarrow{\lim} \,{\cal D}_\lambda. \] \subsection{$C^*$-algebra of quantum observables} Let us recall briefly a construction of a $C^*$-algebra of quantum observables \cite{kpt} associated with the space ${\cal D}$. Denote by ${\cal B}_\lambda$ the $C^*$-algebra of bounded linear operators on the Hilbert space ${\cal H}_\lambda$ given by \eqref{H-la}. Each density operator $\rho\in {\cal D}_\lambda$ defines an algebraic state (that is, linear $\mathbb{C}$-valued positive normed functional) on the algebra ${\cal B}_\lambda$ via a trace: \[ {\cal B}_\lambda\ni a \mapsto \tr(a\rho)\in\mathbb{C}. 
\] This fact guarantees that for every pair $\lambda'\geq\lambda$ of elements of $\Lambda$ there exists a unique injective $*$-homomorphism $\pi^*_{\lambda'\lambda}:{\cal B}_\lambda\to{\cal B}_{\lambda'}$ dual to the projection $\pi_{\lambda\lambda'}:{\cal D}_{\lambda'}\to{\cal D}_\lambda$ in the following sense: for every $a\in{\cal B}_\lambda$ and every $\rho'\in {\cal D}_{\lambda'}$ \[ \tr(\pi^*_{\lambda'\lambda}(a)\rho')=\tr(a\,\pi_{\lambda\la'}(\rho')). \] By virtue of \eqref{pipipi} for every triplet $\lambda''\geq\lambda'\geq\lambda$ \[ \pi^*_{\lambda''\lambda}=\pi^*_{\lambda''\lambda'}\circ\pi^*_{\lambda'\lambda}, \] which means that $\{{\cal B}_\lambda,\pi^*_{\lambda'\lambda}\}$ is an inductive family of $C^*$-algebras associated with the projective family $\{{\cal D}_\lambda,\pi_{\lambda\lambda'}\}$. Its inductive limit \[ {\cal B}:= \underrightarrow{\lim}\,{\cal B}_\lambda \] is naturally a unital $C^*$-algebra which can be interpreted as an algebra of quantum observables. It can be shown that each element $\rho$ of the space ${\cal D}$ defines an algebraic state on $\cal B$. \section{Action of spatial diffeomorphisms on ${\cal D}$ \label{diff-D}} Since we would like to quantize TEGR in a background independent manner, it is natural to follow LQG methods (see e.g. \cite{cq-diff,rev,rev-1}) to define an action of diffeomorphisms of the manifold $\Sigma$ on the space ${\cal D}$. Since $\Sigma$ represents a spatial slice of the original space-time, the diffeomorphisms of $\Sigma$ can be regarded as spatial diffeomorphisms. \subsection{Action of diffeomorphisms on elementary d.o.f} Let ${\rm Diff}$ be the group of all analytic diffeomorphisms of $\Sigma$ which preserve the orientation of the manifold. Consider an element $\tau$ of ${\rm Diff}$. Since the fields $(\zeta_I,r_J,\xi^K,\theta^L)$ are differential forms on $\Sigma$ the diffeomorphism acts on them naturally as the pull-back $\tau^*$.
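Let us note, merely for orientation, the standard change-of-variables identity for the pull-back which is the mechanism behind all the formulas below: for every $k$-form $\omega$ on $\Sigma$ and every oriented $k$-dimensional submanifold $S\subset\Sigma$,
\[
\int_{S}\tau^*\omega=\int_{\tau(S)}\omega.
\]
Applied to the elementary d.o.f., which are built from integrals of the canonical fields over submanifolds of the appropriate dimensions, this identity yields equations \eqref{tau-kf} below.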
Thus the pull-back defines the action of the diffeomorphism on the phase space $P\times\Theta$. Since elementary d.o.f. in $\cal K$ and $\cal F$ are functions on the phase space it is natural to define an action of $\tau$ on $\cal K$ and $\cal F$ as follows. Given $\kappa\in{\cal K}$, the result $\tau \kappa$ of the action of $\tau$ on $\kappa$ is a function on $\Theta$ such that \begin{equation} (\tau \kappa)(\theta)=\kappa(\tau^*\theta). \label{tau-kappa} \end{equation} Similarly, given $\varphi\in{\cal F}$, the result $\tau \varphi$ of the action of $\tau$ on $\varphi$ is a function on $P$ such that \[ (\tau \varphi)(p)=\varphi(\tau^*p). \] Obviously, \begin{equation} \begin{aligned} \tau \kappa^I_y&=\kappa^I_{\tau(y)},&\tau \kappa^J_e&=\kappa^J_{\tau(e)},\\ \tau \varphi^V_I&=\varphi^{\tau(V)}_I,&\tau \varphi^S_J&=\varphi^{\tau(S)}_J, \end{aligned} \label{tau-kf} \end{equation} which mean that both sets $\cal K$ and $\cal F$ are preserved by the action of $\tau$. \subsection{Action of diffeomorphisms on reduced configuration spaces} In the next step let us define an action of diffeomorphisms on reduced configuration spaces. Let us fix a finite set $K=\{\kappa_1,\ldots,\kappa_N\}$ of independent d.o.f. and a diffeomorphism $\tau$. Denote by $\tau K$ the set $\{\tau \kappa_1,\ldots,\tau \kappa_N\}$. Moreover, let $\sim$ and $\sim_\tau$ be the equivalence relations on $\Theta$ defined by, respectively, $K$ and $\tau K$ (see Section \ref{fin-sets}) and let $[\theta]$ and $[\theta]_{\tau}$ denote equivalence classes of $\theta\in\Theta$ given by the corresponding relations. By definition $\theta\sim\theta'$ if and only if $\kappa_\alpha(\theta)=\kappa_\alpha(\theta')$ for every $\kappa_\alpha\in K$. By virtue of \eqref{tau-kappa} the latter condition is satisfied if and only if for every $\tau \kappa_\alpha\in\tau K$ \[ \tau \kappa_\alpha(\tau^{-1*}\theta)=\tau \kappa_\alpha(\tau^{-1*}\theta').
\] This means that $\theta\sim\theta'$ if and only if $(\tau^{-1*}\theta)\sim_\tau(\tau^{-1*}\theta')$. Consequently, the following map \[ \Theta_K\ni [\theta]\mapsto T_\tau([\theta]):=[\tau^{-1*}\theta]_\tau\in\Theta_{\tau K} \] is well defined and is a bijection. Consider now the projections ${\rm pr}_K$ and ${\rm pr}_{\tau K}$ defined by \eqref{pr-K} and the maps $\tilde{K}$ and $\widetilde{\tau K}$ defined by \eqref{k-inj}. We have \[ {\rm pr}_{\tau K}(\tau^{-1*}\theta)=[\tau^{-1*}\theta]_\tau=T_\tau([\theta])=T_\tau({\rm pr}_K(\theta)), \] hence \begin{equation} T^{-1}_\tau\circ{\rm pr}_{\tau K}={\rm pr}_K\circ\tau^{*}. \label{prt-Tpr} \end{equation} On the other hand, \begin{multline*} \tilde{K}([\theta])=(\kappa_1(\theta),\ldots,\kappa_N(\theta))=(\tau \kappa_1(\tau^{-1*}\theta),\ldots,\tau \kappa_N(\tau^{-1*}\theta))=\widetilde{\tau K}([\tau^{-1*}\theta]_\tau)=\\=\widetilde{\tau K}(T_\tau([\theta])), \end{multline*} hence \begin{equation} \tilde{K}=\widetilde{\tau K}\circ T_\tau. \label{K-tKT} \end{equation} It follows from this result that $\widetilde{\tau K}$ is a bijection onto $\mathbb{R}^N$ which means that $\tau K$ is a set of independent d.o.f. and $\Theta_{\tau K}$ is a reduced configuration space. Let $(x_\alpha)$ be the natural coordinate frame \eqref{lin-coor-0} on $\Theta_K$ and let $(\bar{x}_\alpha)$ be the natural coordinate frame on $\Theta_{\tau K}$. Denote by $\{\partial_{x_\alpha}\}$ and $\{\partial_{\bar{x}_\alpha}\}$ vector fields defined by the coordinate frames on, respectively, $\Theta_K$ and $\Theta_{\tau K}$. Equations \eqref{x-tilK} and \eqref{K-tKT} imply that the map $T_\tau$ when expressed in the coordinate frames is an identity map. Hence $T_{\tau *}\partial_{x_\alpha}=\partial_{\bar{x}_\alpha}$ and consequently \begin{equation} T^{-1*}_\tau(\partial_{x_\alpha}\psi)=\partial_{\bar{x}_\alpha}(T^{-1 *}_\tau\psi), \label{Tx-xT} \end{equation} where $\psi$ is a function on $\Theta_K$.
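Let us spell out, in one line, the coordinate computation behind the last statements. Assuming, as \eqref{x-tilK} states, that the natural coordinates are just the components of the maps $\tilde{K}$ and $\widetilde{\tau K}$ respectively, \eqref{K-tKT} gives
\[
\bar{x}_\alpha\big(T_\tau([\theta])\big)=\Big(\widetilde{\tau K}\big(T_\tau([\theta])\big)\Big)_\alpha=\big(\tilde{K}([\theta])\big)_\alpha=x_\alpha([\theta]),
\]
that is, $T_\tau$ is indeed the identity when expressed in the natural coordinate frames.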
\subsection{Action of diffeomorphisms on cylindrical functions and momentum operators } Now we will extend the action $\tau $ from $\cal K$ onto ${\rm Cyl}$: given $\Psi\in{\rm Cyl}$ we define \[ (\tau \Psi)(\theta):=\Psi(\tau^*\theta) \] ---this definition guarantees that $\tau $ acts linearly on ${\rm Cyl}$. Assume that $\Psi\in{\rm Cyl}$ is compatible with a set $K$ of independent d.o.f., that is, $\Psi={\rm pr}^*_{K}\psi$. Then by virtue of \eqref{prt-Tpr} \[ (\tau \Psi)(\theta)=\psi({\rm pr}_K(\tau^*\theta))=\psi(T^{-1}_\tau({\rm pr}_{\tau K}(\theta)))=[{\rm pr}^*_{\tau K}(T^{-1*}_\tau\psi)](\theta). \] \begin{cor} If $\Psi={\rm pr}_K^*\psi$ then $\tau \Psi={\rm pr}^*_{\tau K}(T^{-1*}_\tau\psi)$. If $\Psi$ is compatible with $K$ then $\tau \Psi$ is compatible with $\tau K$. \label{cor-tau-cyl} \end{cor} \noindent The corollary means that the action of $\tau$ preserves the space ${\rm Cyl}$---recall that every element of ${\rm Cyl}$ is a finite linear combination of functions such that each function is compatible with a set of independent d.o.f. Thus $\Psi\mapsto\tau \Psi$ is a linear automorphism on ${\rm Cyl}$. In the next step we define an action of diffeomorphisms on the linear space $\hat{\cal F}$ of the momentum operators: given $\hat{\varphi}\in\hat{\cal F}$, \[ (\tau \hat{\varphi})\Psi:=\tau \big(\hat{\varphi}(\tau^{-1} \Psi)\big). \] It follows immediately from the definition that $\hat{\varphi}\mapsto\tau \hat{\varphi}$ is a linear map. Let us now calculate $\hat{\varphi}^V_I(\tau\Psi)$. We know already that for every $\Psi\in{\rm Cyl}$ there exists a finite set $K\equiv K_{u,\gamma}$ and a complex function $\psi$ on $\Theta_K$ such that $\Psi={\rm pr}^*_{K}\psi$. Obviously, \[ \tau K\equiv\tau K_{u,\gamma}=K_{\tau(u),\tau(\gamma)}. \] Let $u=\{y_1,\ldots,y_N\}$ and let $(\bar{z}^{I}_i,\bar{x}^{J}_j)$ be the natural coordinates \eqref{lin-coor} on $\Theta_{\tau K}$.
Then by virtue of \eqref{hat-zeta} and Corollary \ref{cor-tau-cyl} \[ \hat{\varphi}^V_I(\tau \Psi)=\sum_{i=1}^N \varepsilon(V,\tau(y_{i}))\,{\rm pr}^*_{\tau K}(\partial_{\bar{z}^{I}_{i}}(T^{-1*}_\tau\psi)). \] Using in turn \eqref{Tx-xT} and \eqref{prt-Tpr} we obtain \[ \hat{\varphi}^V_I(\tau \Psi)=\tau \Big(\sum_{i=1}^N \varepsilon(V,\tau(y_{i}))\,{\rm pr}^*_{K}(\partial_{z^{I}_{i}}\psi)\Big), \] where $(z^I_i)$ are coordinates being a part of the natural coordinate frame on $\Theta_K$. Note now that by virtue of \eqref{Vy} \[ \varepsilon(V,\tau(y_i))=\varepsilon(\tau^{-1}(V),y_i), \] hence \[ \hat{\varphi}^V_I(\tau \Psi)=\tau \big(\hat{\varphi}^{\tau^{-1}(V)}_I\Psi\big). \] We conclude that\footnote{Taking into account \eqref{tau-kf} we could define the action $\tau $ on $\hat{\cal F}$ requiring that $\tau \hat{\varphi}^V_I:=\hat{\varphi}^{\tau(V)}_I$ and $\tau \hat{\varphi}^S_J:=\hat{\varphi}^{\tau(S)}_J$, but then we could run into trouble with proving linearity of the action.} \begin{align*} \tau \hat{\varphi}^V_I&=\hat{\varphi}^{\tau(V)}_I,&\tau \hat{\varphi}^S_J&=\hat{\varphi}^{\tau(S)}_J \end{align*} ---the latter equation can be proven similarly. The result means that the action of $\tau $ preserves the space $\hat{\cal F}$. Thus $\hat{\varphi}\mapsto\tau\hat{\varphi}$ is a linear automorphism on $\hat{\cal F}$.
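As a side remark (a simple consistency check, not needed in what follows), the maps defined above constitute a left action of the group ${\rm Diff}$. Indeed, since $(\tau_1\circ\tau_2)^*=\tau_2^*\circ\tau_1^*$, for every $\Psi\in{\rm Cyl}$
\[
\big(\tau_1(\tau_2\Psi)\big)(\theta)=(\tau_2\Psi)(\tau_1^*\theta)=\Psi(\tau_2^*\tau_1^*\theta)=\big((\tau_1\circ\tau_2)\Psi\big)(\theta),
\]
and consequently, for every $\hat{\varphi}\in\hat{\cal F}$,
\[
\big(\tau_1(\tau_2\hat{\varphi})\big)\Psi=\tau_1\Big((\tau_2\hat{\varphi})(\tau_1^{-1}\Psi)\Big)=\tau_1\tau_2\Big(\hat{\varphi}\big(\tau_2^{-1}\tau_1^{-1}\Psi\big)\Big)=\big((\tau_1\circ\tau_2)\hat{\varphi}\big)\Psi.
\]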
It is obvious that if $\dot{\gamma}=(u,\gamma)$ is a speckled graph then $\tau(\dot{\gamma})=(\tau(u),\tau(\gamma))$ is also a speckled graph. By virtue of \eqref{tau-kf} $\tau K_{\dot{\gamma}}=K_{\tau(\dot{\gamma})}$. On the other hand, since the action $\tau$ on $\hat{\cal F}$ is linear and invertible, $\tau \hat{F}$ is a linear subspace of $\hat{\cal F}$ of the dimension equal to $\dim\hat{F}$. Therefore $\dim\tau \hat{F}$ is equal to the number of elements of $\tau K_{\dot{\gamma}}$. Let $(\hat{\varphi}_1,\ldots,\hat{\varphi}_N)$ be a basis of $\hat{F}$ and $K_{\dot{\gamma}}=\{\kappa_1,\ldots,\kappa_N\}$. Then $(\tau \hat{\varphi}_1,\ldots,\tau \hat{\varphi}_N)$ is a basis of $\tau \hat{F}$ and $\tau K_{\dot{\gamma}}=\{\tau \kappa_1,\ldots,\tau \kappa_N\}$. We have \[ \tilde{G}_{\beta\alpha}:= (\tau \hat{\varphi}_\beta)(\tau \kappa_\alpha)=\tau (\hat{\varphi}_\beta\kappa_\alpha)=\hat{\varphi}_\beta\kappa_\alpha=G_{\beta\alpha}, \] ---here we used the fact that $\hat{\varphi}_\beta\kappa_\alpha$ is a constant cylindrical function. Thus non-degeneracy of $(\tau \hat{F},\tau K_{\dot{\gamma}})$ follows from non-degeneracy of $(\hat{F},K_{\dot{\gamma}})$. Consequently, $\tau \lambda\in\Lambda$. Consider now a pair $\lambda'=(\hat{F}',K_{\dot{\gamma}'})$ and $\lambda=(\hat{F},K_{\dot{\gamma}})$ such that $\lambda'\geq\lambda$. Using Definition \ref{df-Lambda->} and properties of the action of $\tau$ on $\hat{\cal F}$ we obtain \begin{align*} &\tau \hat{F}'\supset \tau \hat{F},&&\tau(\dot{\gamma}')\geq\tau(\dot{\gamma}), \end{align*} which means that \[ \tau \lambda'=(\tau \hat{F}',\tau K_{\dot{\gamma}'})\geq \tau \lambda=(\tau \hat{F},\tau K_{\dot{\gamma}}). \] Consequently, the relation $\geq$ is preserved by the action $\tau$ on $\Lambda$. We conclude that the directed set $(\Lambda,\geq)$ is preserved by the action of diffeomorphisms. \subsection{Action of diffeomorphisms on ${\cal D}$} Consider $\lambda=(\hat{F},K_{\dot{\gamma}})\in\Lambda$.
Recall that the map $T_\tau:\Theta_{K_{\dot{\gamma}}} \to\Theta_{\tau K_{\dot{\gamma}}}$ when expressed in the natural coordinate frames $(x_\alpha)$ on $\Theta_{K_{\dot{\gamma}}}$ and $(\bar{x}_\alpha)$ on $\Theta_{\tau K_{\dot{\gamma}}}$ is an identity map. This means that $T_\tau$ maps the measure $d\mu_\lambda$ on $\Theta_{K_{\dot{\gamma}}}$ defined by \eqref{dmu-la} to the measure $d\mu_{\tau \lambda}$ on $\Theta_{\tau K_{\dot{\gamma}}}$ defined analogously. Therefore the map \begin{equation} {\cal H}_\lambda\ni\psi\mapsto U_\tau\psi:=T^{-1*}_\tau\psi \label{tau-Hl} \end{equation} is a {\em unitary} map onto ${\cal H}_{\tau \lambda}$. Consequently, \begin{equation} {\cal D}_\lambda\ni\rho_\lambda\mapsto u_\tau\rho_\lambda:=U_\tau\rho_\lambda U^{-1}_\tau \label{tau-Dl} \end{equation} is a map onto ${\cal D}_{\tau \lambda}$. Consider now an element $\rho$ of ${\cal D}$---by virtue of the definition of a projective limit $\rho$ is a family $\{\rho_\lambda\}_{\lambda\in\Lambda}$ such that $\rho_\lambda\in{\cal D}_\lambda$ for every $\lambda$ and $\pi_{\lambda\lambda'}\rho_{\lambda'}=\rho_\lambda$ for every pair $\lambda'\geq\lambda$. It is natural to define an action of diffeomorphisms on $\rho$ as follows \begin{equation} \tau \rho:=\{u_\tau\rho_\lambda\}_{\lambda\in\Lambda}, \label{tau-rho} \end{equation} but is $\tau\rho$ an element of ${\cal D}$? Clearly, $\tau \rho\in{\cal D}$ if for every $\lambda'\geq\lambda$ \[ \pi_{\bar{\lambda}\bar{\lambda}'}(u_\tau\rho_{\lambda'})=u_\tau\rho_\lambda, \] where we denoted $\bar{\lambda}\equiv\tau\lambda$ and $\bar{\lambda}'\equiv\tau\lambda'$ to keep the notation compact. Thus to prove that the action of diffeomorphisms on ${\cal D}$ preserves the space we should show that \begin{equation} u^{-1}_\tau\circ\pi_{\bar{\lambda}\bar{\lambda}'}\circ u_\tau=\pi_{\lambda\lambda'}. \label{upiu-pi} \end{equation} Assume that $\lambda'=(\hat{F}',K')\geq\lambda=(\hat{F},K)$. 
Recall that the projection $\pi_{\lambda\lambda'}$ is determined by the projection ${\rm pr}_{KK'}$ and the subspace $[\hat{F}]'$ of $\Theta_{K'}$ (see Section \ref{D}). Similarly, the projection $\pi_{\bar{\lambda}\bar{\lambda'}}$ is constructed from ${\rm pr}_{\bar{K}\bar{K}'}$ and the subspace $\overline{[\tau\hat{F}]}{}'$ of $\Theta_{\tau K'}$, where $\bar{K}\equiv\tau K$, $\bar{K}'\equiv \tau K'$ and $\hat{\varphi}\mapsto \overline{[\hat{\varphi}]}{}'$ is the linear map from $\hat{\cal F}$ onto $\Theta_{\tau K'}$ defined in Section \ref{D}. Taking into account that the map $u_\tau$ appearing in \eqref{upiu-pi} is defined by $T_\tau$ (see \eqref{tau-Dl} and \eqref{tau-Hl}) we conclude that to prove \eqref{upiu-pi} it is enough to show that \begin{align} T_\tau^{-1}\circ{\rm pr}_{\bar{K}\bar{K}'}\circ T_\tau&={\rm pr}_{KK'},& T_\tau([\hat{F}]')&=\overline{[\tau\hat{F}]}{}'. \label{II} \end{align} Let us denote elements of $K$ and $K'$ as it was done in Section \ref{D}. It follows from \eqref{k-Bk'} that \[ \tau\kappa_\alpha=B^\beta{}_\alpha\tau\kappa'_\beta, \] hence by virtue of \eqref{pr-KK} \[ {\rm pr}_{\bar{K}\bar{K}'}=\widetilde{\tau K}^{-1}\circ(B\widetilde{\tau K'}). \] Using \eqref{K-tKT} we obtain the first equation in \eqref{II}: \[ T_\tau^{-1}\circ{\rm pr}_{\bar{K}\bar{K}'}\circ T_\tau=T_\tau^{-1}\circ\widetilde{\tau K}^{-1}\circ (B\widetilde{\tau K'})\circ T_\tau=\tilde{K}^{-1}\circ(B\tilde{K}')={\rm pr}_{KK'}. \] An operator $\hat{\varphi}\in\hat{F}$ defines on $\Theta_{K'}$ the constant vector field \eqref{v-const} which means that in the natural coordinate frame $(x'_\beta)$ the point $[\hat{\varphi}]'\in\Theta_{K'}$ is represented by $(\hat{\varphi}\kappa'_\beta)$. On the other hand, the operator $\tau\hat{\varphi}$ defines on $\Theta_{\tau K'}$ a constant vector field \[ \sum_\beta\big((\tau\hat{\varphi})(\tau\kappa'_\beta)\big)\partial_{\bar{x}'_\beta}, \] where $(\bar{x}'_\beta)$ are the natural coordinates on $\Theta_{\tau K'}$.
Thus the point $\overline{[\tau\hat{\varphi}]}{}'\in\Theta_{\tau K'}$ is represented by \[ \big((\tau\hat{\varphi})(\tau\kappa'_\beta)\big)=\big(\tau(\hat{\varphi}\kappa'_\beta)\big)=(\hat{\varphi}\kappa'_\beta) \] in the frame $(\bar{x}'_\beta)$. Since the map $T_\tau$ expressed in the coordinates $(x'_\beta)$ and $(\bar{x}'_\beta)$ is an identity we conclude that \[ T_\tau([\hat{\varphi}]')=\overline{[\tau\hat{\varphi}]}{}' \] which means that the second equation in \eqref{II} is true. In this way we showed that $\tau\rho\in{\cal D}$, that is, that the action \eqref{tau-rho} of diffeomorphisms preserves the space ${\cal D}$. \section{Other spaces of quantum states for a theory of the phase space $P\times \Theta$ \label{other}} \subsection{Spaces built from other variables on the phase space} A space $\bar{{\cal D}}$ of quantum states similar to ${\cal D}$ can be constructed by applying the natural description of the phase space \cite{q-suit}, that is, the description in terms of fields $(\theta^A,p_B)$ (see Section \ref{phsp}). In this case elementary d.o.f. are given by integrals of one-forms $(\theta^A)$ over edges and by integrals of two-forms $(p_B)$ over faces. To define a directed set $(\bar{\Lambda},\geq)$ which underlies the construction of $\bar{{\cal D}}$ it is enough to use the directed set of all usual (non-speckled) graphs. In other words, this construction is fully analogous to the construction of quantum states for DPG presented in \cite{q-stat}---the only difference between these two constructions is that in the case of $\bar{{\cal D}}$ the canonical variables are four one-forms $(\theta^A)$ and four two-forms $(p_B)$ while in \cite{q-stat} the canonical variables are one one-form and one two-form. Thus the construction of $\bar{{\cal D}}$ is simpler than that of ${\cal D}$. 
Unfortunately, the space $\bar{{\cal D}}$ possesses an undesirable property: as shown in \cite{q-suit} quantum states in $\bar{{\cal D}}$ correspond not only to elements of $\Theta$ but also to all quadruplets $(\theta^A)$ which define via \eqref{q} non-Riemannian metrics on $\Sigma$. Since we do not see any workable method which could distinguish in $\bar{{\cal D}}$ states corresponding only to elements of $\Theta$ we prefer to base the quantization of TEGR on the space ${\cal D}$. Let us emphasize that in constructing the space ${\cal D}$ we never applied the fact that the variables $(\zeta_I,r_J,\xi^K,\theta^L)$ are defined by $\iota=\sgn$---this fact was used in the discussion in Section \ref{speckl}, but the only goal of this discussion was to show that the impossibility of approximating $\sgn(\theta^I)$ by means of functions on $\Theta_{K_{\dot{\gamma}}}$ is not an obstacle for defining quantum geometry operators and quantum constraints as counterparts of classical constraints of TEGR and YMTM. In other words, the discussion concerned not the very construction of ${\cal D}$ but rather further applications of ${\cal D}$. This means that a space ${\cal D}_\iota$ of quantum states for TEGR can be built in the same way starting from any variables $(\zeta_{\iota I},r_J,\xi_\iota^K,\theta^L)$; however, as shown in \cite{ham-nv}, the constraints of TEGR and YMTM derived in \cite{oko-tegr} and \cite{os} cannot be imposed on ${\cal D}_\iota$ unless $\iota=\sgn$ or $\iota=-\sgn$. In particular, the variables $(\zeta_{-s I},r_J,\xi_{-s}^K,\theta^L)$ given by $\iota=-\sgn$ can be used to construct a space ${\cal D}_{-s}$. By virtue of \eqref{new-old} \begin{align} \zeta_{sI}&=-\zeta_{-sI}, & \xi^K_s&=-\xi^K_{-s}, \label{s=--s} \end{align} where we used the original notation for the variables $(\zeta_{I},r_J,\xi^K,\theta^L)$ (see \eqref{simp-n}).
These simple relations imply that the spaces ${\cal D}_{-s}$ and ${\cal D}$ are the same: ${\cal D}\equiv {\cal D}_{-s}$---a proof of this statement can be found in Appendix \ref{DD-s}. \subsection{Hilbert spaces built from some almost periodic functions} It was shown in \cite{q-stat} that for every theory to which it is possible to apply the general method presented in that paper to obtain a convex set of quantum states, there exists another space of quantum states. This space is a Hilbert space built from almost periodic functions defined on those reduced configuration spaces which are isomorphic to $\mathbb{R}^N$. Thus in the case of TEGR there exist Hilbert spaces $\{{\cal H}_\iota\}$ and $\bar{{\cal H}}$: the former ones associated with the spaces $\{{\cal D}_\iota\}$ and the latter one with $\bar{{\cal D}}$. However, in order to proceed with the second step of the Dirac strategy we would have to define on such a Hilbert space operators corresponding to the constraints, and we expect this to be quite difficult. The source of the difficulty is the fact that on a Hilbert space of almost periodic functions on $\mathbb{R}^N$ the standard quantum operator of position is ill defined because an almost periodic function multiplied by a Cartesian coordinate on $\mathbb{R}^N$ is no longer an element of this Hilbert space. Since configurational elementary d.o.f. define Cartesian coordinates on configuration spaces we see that we would not be able to represent the configurational d.o.f. on ${\cal H}_\iota$ and $\bar{{\cal H}}$ by usual multiplication. To define an operator on ${\cal H}_\iota$ or $\bar{{\cal H}}$ corresponding to such a d.o.f. we would have to multiply the d.o.f. by a purely imaginary number and exponentiate the product. But taking into account the form of the constraints of TEGR \cite{oko-tegr,ham-nv} it is hard to expect that such ``exponentiated position operators'' can be used to represent the constraints.
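A simple one-dimensional example, added here for illustration, makes the difficulty explicit. The functions $x\mapsto e^{i\mu x}$, $\mu\in\mathbb{R}$, span the almost periodic functions on $\mathbb{R}$; multiplication by the coordinate,
\[
e^{i\mu x}\ \longmapsto\ x\,e^{i\mu x},
\]
produces an unbounded function which is no longer almost periodic, whereas the exponentiated (Weyl-type) operator acts within the space:
\[
e^{i\nu x}\,e^{i\mu x}=e^{i(\mu+\nu)x}.
\]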
Thus the spaces $\{{\cal H}_\iota\}$ and $\bar{{\cal H}}$ do not seem to be very promising for canonical quantization of TEGR. \section{Discussion} \subsection{General remarks} The main results of this paper are the space ${\cal D}$ of quantum states and the related $C^*$-algebra $\cal B$ of quantum observables. The space ${\cal D}$ is not a Hilbert space but a convex set and each element of it naturally defines an algebraic state on the algebra, hence a Hilbert space can be obtained \cite{kpt} from any state in ${\cal D}$ and the algebra via the GNS construction. Although for every $\lambda\in\Lambda$ the space ${\cal D}_\lambda$ is the set of all density operators on the Hilbert space ${\cal H}_\lambda$ we do not expect that there exists a Hilbert space such that ${\cal D}$ is a set of density operators on it. The construction of ${\cal D}$ and $\cal B$ is based on the phase space $P\times \Theta$ described in Section \ref{phsp}. The elementary d.o.f. \eqref{k-y}, \eqref{k-e}, \eqref{phi-V} and \eqref{phi-S} used in the construction are defined as natural integrals of the canonical variables $(\zeta_I,r_J,\xi^K,\theta^L)$ being differential forms on the manifold $\Sigma$. Recall that the natural variables $(\theta^A,p_B)$ on the phase space are functions \eqref{old-new} of $(\zeta_I,r_J,\xi^K,\theta^L)$ involving the factor $\sgn(\theta^I)$ defined by \eqref{sgn-th}. Since the factor cannot be expressed or even approximated by the elementary d.o.f. (see Lemma \ref{theta-x3}) the spaces ${\cal D}$ and $\cal B$ may be useful only for a class of theories: the Hamiltonian (and possible constraints) of a theory belonging to this class, when expressed in terms of the variables $(\zeta_I,r_J,\xi^K,\theta^L)$, must not depend on the factor. As shown in \cite{ham-nv}, both TEGR and YMTM belong to this class.
\subsection{Diffeomorphism invariant states} Since ${\cal D}$ is a space of kinematic quantum states, to proceed further with the canonical quantization of TEGR we have to find a procedure by means of which we could single out physical quantum states for TEGR---an outline of such a procedure was presented in \cite{q-stat}. Because TEGR is a diffeomorphism invariant theory it is reasonable to require that each physical state is invariant with respect to the natural action of the spatial diffeomorphisms on the space ${\cal D}$ defined in Section \ref{diff-D}, as is required in the case of LQG \cite{cq-diff,rev,rev-1}. Existence of such states in ${\cal D}$ and possible uniqueness are open questions---at this moment it is difficult to predict whether a theorem of existence and uniqueness of such a state analogous to those presented in \cite{lost,fl} can be proven; let us only note that in the case of a space of quantum states for DPG constructed in \cite{oko-ncomp} there are plenty of diffeomorphism invariant states; however, that construction does not follow the general pattern described in \cite{q-stat} and differs significantly from both the present construction of ${\cal D}$ and the construction of the space of quantum states for DPG described in \cite{q-stat}. \subsection{The space ${\cal D}$ versus the kinematic Hilbert space of LQG} The space ${\cal D}$ is a space of kinematic quantum states meant to serve as an element of a background independent canonical quantization of general relativity (GR) in the teleparallel formulation. Let us compare the space with its counterpart in LQG, since LQG is a result of a background independent canonical quantization of another formulation of GR. The counterpart is the Hilbert space ${\cal H}_{\rm LQG}$ defined as a space of some wave functions.
These wave functions are defined on a space $\overline{\cal A}$ of so-called generalized $SU(2)$-connections \cite{proj} over a three-dimensional manifold $\Sigma$ and the scalar product on ${\cal H}_{\rm LQG}$ is defined by an integral with respect to the Ashtekar-Lewandowski (AL) measure $d\mu_{\rm AL}$ \cite{al-hoop} on $\overline{\cal A}$: \[ {\cal H}_{\rm LQG}:=L^2(\overline{\cal A},d\mu_{\rm AL}). \] Alternatively, the space ${\cal H}_{\rm LQG}$ can be seen as the inductive limit of an inductive family of Hilbert spaces $\{{\cal H}_\gamma,p_{\gamma'\gamma}\}$ labeled by the directed set of (usual) graphs in $\Sigma$ (for some details of this alternative description see \cite{oko-ncomp}). Each Hilbert space ${\cal H}_\gamma$ is defined as follows: given a graph $\gamma$, one reduces the Hamiltonian configuration space ${\cal A}$ of LQG being the space of all $SU(2)$-connections over $\Sigma$ obtaining a reduced configuration space ${\cal A}_\gamma$ isomorphic to $SU(2)^N$, where $N$ is the number of edges of $\gamma$. Next, one defines \[ {\cal H}_\gamma:=L^2({\cal A}_\gamma,d\mu_{\gamma}), \] where $d\mu_{\gamma}$ is a measure on ${\cal A}_\gamma$ given uniquely by the normed Haar measure on $SU(2)^N$. It is easy to find some close similarities between elements of the construction of ${\cal H}_{\rm LQG}$ and those of the construction of ${\cal D}$: ${\cal A}$ corresponds to the Hamiltonian configuration space $\Theta$, the spaces $\{{\cal A}_\gamma\}$ are counterparts of the reduced configuration spaces $\{\Theta_{K_{\dot{\gamma}}}\}$, likewise the Hilbert spaces $\{{\cal H}_\gamma\}$ are counterparts of the spaces $\{{\cal H}_\lambda\}$. Note also that the measure $d\mu_\lambda$ given by \eqref{dmu-la} which defines ${\cal H}_\lambda$ via \eqref{H-la} is in fact a Haar measure on $\Theta_{K_{\dot{\gamma}}}$ (the latter space being a real linear space is naturally a Lie group).
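To make the sense of the last remark explicit: on $\Theta_{K_{\dot{\gamma}}}\cong\mathbb{R}^N$, regarded as the additive Lie group $(\mathbb{R}^N,+)$, a Haar measure is by definition a translation-invariant measure,
\[
\int_{\mathbb{R}^N}\psi(x+a)\,d^Nx=\int_{\mathbb{R}^N}\psi(x)\,d^Nx \qquad \text{for every } a\in\mathbb{R}^N,
\]
unique up to a positive multiplicative constant, and this translation invariance is precisely the property enjoyed by the measure $d\mu_\lambda$ of \eqref{dmu-la}.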
Moreover, as shown in \cite{q-stat} for the space $\Theta$ there exists a space $\bar{\Theta}$ related to $\Theta$ in the same way as $\overline{\cal A}$ is related to ${\cal A}$. One may now ask why we did not define a Hilbert space for TEGR in the same way as the space ${\cal H}_{\rm LQG}$ is defined. The answer is very simple: each space ${\cal A}_\gamma$ is {\em compact} and this fact enables one to define the AL measure on $\overline{\cal A}$ and, alternatively, it enables one to define the embeddings $\{p_{\gamma'\gamma}:{\cal H}_\gamma\to{\cal H}_{\gamma'}\}$ which allow one to ``glue'' the Hilbert spaces $\{{\cal H}_\gamma\}$ into ${\cal H}_{\rm LQG}$ via the inductive limit. On the other hand, every space $\Theta_{K_{\dot{\gamma}}}$ is {\em non-compact} and this fact turns out to be an obstacle for defining a measure on $\bar{\Theta}$ as a counterpart of the AL measure and, alternatively, it turns out to be an obstacle for defining embeddings $p_{\lambda'\lambda}:{\cal H}_\lambda\to{\cal H}_{\lambda'}$ which would allow us to ``glue'' the spaces $\{{\cal H}_\lambda\}$ into a larger one by means of an inductive limit. In other words, the non-compactness of the spaces $\{\Theta_{K_{\dot{\gamma}}}\}$ precludes the use of the inductive techniques but, on the other hand, the linearity of the spaces allows us to apply the projective techniques according to the original idea by Kijowski \cite{kpt}. Note however that the compactness of the spaces $\{{\cal A}_\gamma\}$ is in fact obtained by means of a reduction of the natural Lorentz symmetry of GR done at the level of the classical theory---this symmetry is reduced to its ``sub-symmetry'' described by the group of three-dimensional rotations. Technically it is achieved by a passage from the complex Ashtekar-Sen connections \cite{a-var-1,a-var-2} of the non-compact structure group $SL(2,\mathbb{C})$ to the real Ashtekar-Barbero connections \cite{barb} of the compact structure group $SU(2)$.
Let us emphasize that the construction of ${\cal D}$ does not require any reduction of the Lorentz symmetry of the classical theory; however, it is still too early to claim that there are no obstacles for defining local Lorentz transformations on ${\cal D}$---this issue needs to be analyzed carefully. Let us finally mention an important difference between the spaces ${\cal D}$ and ${\cal H}_{\rm LQG}$ (for a similar discussion see \cite{oko-ncomp}). Both spaces ${\cal D}$ and ${\cal H}_{\rm LQG}$ are built from some spaces associated with (speckled or usual) graphs in $\Sigma$: in the former case these spaces are $\{{\cal D}_\lambda\}$ ($\lambda=(\hat{F},K_{\dot{\gamma}})$, but ${\cal D}_\lambda$ does not actually depend on the space $\hat{F}$), in the latter one these spaces are $\{{\cal H}_\gamma\}$. Since ${\cal D}$ is the projective limit of $\{{\cal D}_\lambda\}$ each state $\rho\in{\cal D}$ is a collection $\{\rho_\lambda\}$ of states such that $\rho_\lambda\in{\cal D}_\lambda$. This means that, given $\lambda$, the state $\rho_\lambda$ contains only partial information about $\rho$ and therefore it can be treated merely as {\em an approximation} of $\rho$ \cite{kpt}. On the other hand, in the case of ${\cal H}_{\rm LQG}$ defined as the inductive limit of $\{{\cal H}_\gamma\}$ for every graph $\gamma$ there exists a canonical embedding $p_\gamma:{\cal H}_\gamma\to{\cal H}_{\rm LQG}$ and consequently each element of ${\cal H}_\gamma$ can be treated as a genuine element of ${\cal H}_{\rm LQG}$. \paragraph{Acknowledgments} This work was partially supported by the grant N N202 104838 of Polish Ministerstwo Nauki i Szkolnictwa Wy\.zszego.
\section{Introduction}\label{sI} Large-scale magnetic ($B$) fields appear to be quite common in our universe (e.g.~see~\cite{CT}-\cite{V}), with a verified presence in stars, galaxies, galaxy clusters, high-redshift protogalaxies and possibly even in intergalactic voids. Nevertheless, the origin, the evolution and the role of these large-scale cosmic magnetic fields remain essentially unknown, although their widespread presence might suggest that they are of primordial nature. The case for cosmological magnetic fields got stronger when recent reports claimed the existence of coherent magnetic fields in the low density intergalactic space (where no dynamo amplification is likely to operate) with strengths around $10^{-15}~G$~\cite{AK}-\cite{CBF}. Additional support comes from the fact that galaxies (like our Milky Way), galaxy clusters and remote protogalaxies have $B$-fields of similar ($\mu G$-order) strengths, which could be a sign of a common origin for all these fields. It has long been known that large-scale cosmological magnetic fields, if present, would have affected the evolution of (baryonic) density perturbations, during both the linear and the non-linear regime of structure formation. More specifically, the presence of the $B$-field is believed to slow down the standard growth-rate of linear density gradients by an amount proportional to the square of the Alfv\'en speed. Nevertheless, the available Newtonian and relativistic studies (see~\cite{RR}-\cite{TS} and~\cite{TB1}-\cite{BMT} respectively) account only for the contribution of the magnetic pressure, namely of the field's positive pressure. The effects of the magnetic tension, that is of the negative pressure exerted along the field lines themselves, have never been accounted for. The only exception has been a recent Newtonian study, where the role of the magnetic tension on the linear evolution of density inhomogeneities in the post-recombination universe was investigated~\cite{VT}.
That work indicated that the aforementioned two magnetic agents may have opposing action, but the results were not conclusive. Here, we provide the first (to the best of our knowledge) fully relativistic study of magnetised density perturbations that incorporates the effects of the field's tension, in addition to those of its (positive) pressure. Our starting point is a perturbed, nearly flat, Friedmann-Robertson-Walker (FRW) universe permeated by a weak large-scale magnetic field. The latter could be primordial in origin, or a later addition to the phenomenology of our universe (e.g.~see~\cite{KKT,Wetal} for recent reviews). Confining ourselves to the post-recombination epoch, where structure formation starts in earnest, we set the matter pressure to zero and focus on the role and the implications of the $B$-field. The latter affects the linear evolution of density inhomogeneities through the Lorentz force, which splits into a pressure and a tension part. Not surprisingly, since we are dealing with dust, the magnetic pressure becomes the sole source of support against the gravitational pull of the matter. The linear contribution of the magnetic tension, on the other hand, is two-fold. There are pure-tension stresses, similar but not identical to those identified in the Newtonian study of~\cite{VT}, and a purely relativistic magneto-curvature stress triggered by the non-Euclidean geometry of the host space. Both of these tension stresses reflect the elasticity of the magnetic forcelines and their generic tendency to react against any agent (physical or geometrical) that distorts them from equilibrium~\cite{P}-\cite{T}. We analyse the role of the $B$-field in a step-by-step approach, accounting for the effects of the magnetic pressure first, before gradually incorporating those of the field's tension. In the first instance, our results recover those of the earlier studies.
We confirm that, when dealing with dust, there is a purely magnetic Jeans length below which density perturbations cannot grow. Instead, the density gradients oscillate with an amplitude that decays to zero. Well outside the aforementioned Jeans scale, on the other hand, the perturbations grow essentially unimpeded by the field's presence. Incorporating the effects of the magnetic tension does not seem to affect the large-scale evolution of the density gradients, since they continue to grow as if there was no $B$-field present. On wavelengths near and below the Jeans threshold, however, the standard picture changes when the tension stresses are accounted for. Although the density perturbations still oscillate with decreasing amplitude on scales well inside the Jeans length, they now decay to a finite (rather than to zero) amplitude. Moreover, close to the Jeans length, where the magnetic pressure balances out the gravitational pull of the matter and the field's tension becomes the main player, the perturbations experience a slow (logarithmic) growth. Despite the weakness of the effect, this result clearly demonstrates the opposing action of the aforementioned two magnetic agents and reveals the, as yet unknown, role of the magnetic tension. In our final step we also incorporate the magneto-curvature (tension) stresses into the linear equations. However, our assumption of a spatially flat FRW background means that the associated effects are (by default) too weak to make a ``visible'' difference. The magneto-curvature stresses identified here could (in principle) play the dominant role during a curvature-dominated era, which could occur in the very early or in the very late evolution of the universe. In such a case the type of spatial curvature (i.e.~whether it is positive or negative) is of paramount importance, because it determines the nature of the magneto-geometrical effect. 
Nevertheless, to look into the possible implications of such stresses, one needs to consider cosmological backgrounds with nonzero spatial curvature, which goes beyond the scope of the present work. \section{Relativistic magnetohydrodynamics}\label{sRMHD} We will study the role of cosmological magnetic fields on density perturbations by applying the 1+3 covariant formalism to relativistic cosmic magnetohydrodynamics (MHD), an approach that has been proven a powerful tool in the past (e.g.~see~\cite{TB1}-\cite{BMT}). \subsection{The 1+3 spacetime splitting}\label{ss1+3SS} Introducing a family of timelike observers allows for the 1+3 threading of the spacetime into time and 3-dimensional space. The temporal direction is defined by the observers' 4-velocity field ($u_a$ -- normalised so that $u^{a}u_{a}=-1$), while their rest-space is associated with the projector $h_{ab}=g_{ab}+ u_{a}u_{b}$ (where $g_{ab}$ is the spacetime metric). The latter is a symmetric tensor that projects orthogonally to the 4-velocity vector (i.e.~$h_{ab}u^b=0$) and acts as the metric of the 3-space when there is no rotation. Using both $u_a$ and $h_{ab}$, we define the temporal and spatial derivatives of a general tensor field $S_{ab...}{}^{cd...}$ as \begin{equation} \dot{S}_{ab\cdots}{}^{cd\cdots}= u^{e}\nabla_{e}S_{ab\cdots}{}^{cd\cdots} \hspace{10mm} {\rm and} \hspace{10mm} {\rm D}_{e}S_{ab\cdots}{}^{cd\cdots}= h_e{}^{s}h_a{}^{f}h_b{}^{p}h_q{}^{c}h_r{}^{d}\cdots \nabla_{s}S_{fp\cdots}{}^{qr\cdots}\,, \label{dD} \end{equation} respectively (with $\nabla_a$ representing the 4-dimensional covariant derivative operator). \subsection{Matter fields and kinematics}\label{ssMFKs} The 1+3 formalism utilises a systematic decomposition of all physical variables and operators into their irreducible temporal and spatial components. 
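The algebraic properties of the projector that underpin this splitting, namely $h_{ab}u^b=0$, $h_a{}^bh_b{}^c=h_a{}^c$ and $h_a{}^a=3$, can be verified numerically. The following is a minimal Python sketch (not part of the analysis itself) for a boosted observer in Minkowski space; the boost velocity $v$ is arbitrary:

```python
import math

# 4-velocity of an observer boosted along x in Minkowski space (signature -,+,+,+)
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v * v)
u_up = [gamma, gamma * v, 0.0, 0.0]            # u^a
eta = [-1.0, 1.0, 1.0, 1.0]                    # metric diagonal
u_dn = [eta[a] * u_up[a] for a in range(4)]    # u_a = g_ab u^b

norm = sum(u_up[a] * u_dn[a] for a in range(4))            # normalisation: -1

# mixed projector h^a_b = delta^a_b + u^a u_b
h = [[(1.0 if a == b else 0.0) + u_up[a] * u_dn[b] for b in range(4)]
     for a in range(4)]

proj_u = [sum(h[a][b] * u_up[b] for b in range(4)) for a in range(4)]   # h u = 0
h2 = [[sum(h[a][c] * h[c][b] for c in range(4)) for b in range(4)]
      for a in range(4)]                                                # idempotent
trace = sum(h[a][a] for a in range(4))                                  # = 3
```

The same checks go through for any timelike $u_a$, since they only use $u^au_a=-1$.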
For instance, relative to the observers introduced earlier, which in our case are always comoving with the matter, the energy-momentum tensor of a general imperfect fluid splits as \begin{equation}\label{Tab} T_{ab}= \rho u_au_b+ ph_{ab}+ 2q_{(a}u_{b)}+ \pi_{ab}\,, \end{equation} with $\rho=T_{ab}u^au^b$, $p=T_{ab}h^{ab}/3$, $q_a=-h_a{}^bT_{bc}u^c$ and $\pi_{ab}=h_{\langle a}{}^ch_{b\rangle}{}^dT_{cd}$ representing the energy density, the isotropic pressure, the energy flux and the viscosity of the matter respectively.\footnote{Square brackets indicate antisymmetrisation, round ones symmetrisation and angled brackets denote the symmetric and traceless part of second rank spacelike tensors. For instance, $\pi_{ab}=h_{(a}{}^ch_{b)}{}^dT_{cd}-ph_{ab}$. Also, we use geometrised and Heaviside-Lorentz units throughout this manuscript.} Note that the quantities on the right-hand side of the above correspond to the total matter, which may also include electromagnetic fields. In an analogous way, the covariant derivative of the observers' 4-velocity decomposes into the irreducible kinematic variables according to \begin{equation}\label{velocity gradient} \nabla_{b}u_{a}= \frac{1}{3}\,\Theta h_{ab}+ \sigma_{ab}+ \omega_{ab}- A_{a}u_{b}\,, \end{equation} where $\Theta=\nabla^a u_a={\rm D}^au_a$ is the volume expansion/contraction scalar, $\sigma_{ab}={\rm D}_{\langle b}u_{a\rangle}$ is the shear tensor, $\omega_{ab}={\rm D}_{[b}u_{a]}$ is the vorticity tensor and $A_a=\dot{u}_a$ is the 4-acceleration vector. By construction, $\sigma_{ab}u^a=0= \omega_{ab}u^a=A_{a}u^a$, which ensures that all three of them are spacelike. The volume scalar monitors the mean separation between neighbouring observers and it is also used to introduce a characteristic length-scale along their worldlines. This is the cosmological scale-factor ($a$) defined by means of $\dot{a}/a=\Theta/3$. 
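Restricted to its spatial part, the decomposition (\ref{velocity gradient}) can be illustrated numerically: any $3\times3$ matrix standing in for ${\rm D}_bu_a$ splits into an expansion, a shear and a vorticity piece that reassemble it exactly. A short Python sketch with arbitrary (illustrative) entries:

```python
# arbitrary stand-in for the spatial gradient D_b u_a (rows: a, columns: b)
G = [[0.4, -1.1, 2.0],
     [0.7, 1.3, -0.2],
     [-0.5, 0.9, 2.2]]

delta = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

Theta = sum(G[i][i] for i in range(3))                 # expansion scalar (trace)
sigma = [[0.5 * (G[i][j] + G[j][i]) - (Theta / 3.0) * delta[i][j]
          for j in range(3)] for i in range(3)]        # shear: symmetric, trace-free
omega = [[0.5 * (G[i][j] - G[j][i]) for j in range(3)]
         for i in range(3)]                            # vorticity: antisymmetric

# reconstruction: G = (Theta/3) h + sigma + omega
G_rec = [[(Theta / 3.0) * delta[i][j] + sigma[i][j] + omega[i][j]
          for j in range(3)] for i in range(3)]
trace_sigma = sum(sigma[i][i] for i in range(3))       # should vanish
```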
The shear and the vorticity describe kinematic anisotropies and rotation respectively, while the 4-acceleration implies the presence of non-gravitational forces. Note that the antisymmetry of the vorticity tensor means that it can be replaced by the vector $\omega_a=\varepsilon_{abc}\omega^{bc}/2$, where $\varepsilon_{abc}$ is the Levi-Civita tensor of the 3-space. The evolution of the kinematic variables defined above follows after applying the Ricci identities to the 4-velocity vector, namely from the expression $2\nabla_{[a}\nabla_{b]}u_c=R_{abcd}u^d$, where $R_{abcd}$ is the Riemann tensor of the spacetime. The Ricci identities decompose into a set of three timelike formulae, monitoring the evolution of the $u_a$-field, and into an equal number of spacelike relations that act as constraints. Referring the reader to~\cite{TCM} for further discussion and details, we will only provide the propagation equation of the volume scalar, namely \begin{equation}\label{Raychaudhuri} \dot{\Theta}=-\frac{1}{3}\,\Theta^{2}- \frac{1}{2}\,(\rho+3p)- 2\left(\sigma^2-\omega^2\right)+ {\rm D}^{a}A_{a}+ A^{a}A_{a}+ \Lambda\,, \end{equation} with $\Lambda$ representing the cosmological constant, $\sigma^2= \sigma_{ab}\sigma^{ab}/2$ and $\omega^2= \omega_{ab}\omega^{ab}/2$ by definition. The above expression, which is commonly known as the Raychaudhuri equation, applies to a general spacetime filled with an imperfect fluid of arbitrary electrical conductivity. \subsection{Magnetohydrodynamics and conservation laws}\label{ssMHDCLs} The post-inflationary universe is treated as a very good electrical conductor, at least on subhorizon scales where causal microphysical processes readily apply (see also footnote~2 below). In such an environment, the ideal-MHD limit is believed to provide an excellent physical approximation. Mathematically speaking this means setting $\varsigma\rightarrow\infty$, where $\varsigma$ is the electrical conductivity of the cosmic medium. 
In a frame comoving with the matter, the covariant form of Ohm's law reads $\mathcal{J}_a=\varsigma E_a$, with $\mathcal{J}_a$ and $E_a$ representing the spatial currents and the electric field respectively (e.g.~see~\cite{J}). Then, in the presence of finite 3-currents, $E_a\rightarrow0$ as $\varsigma\rightarrow\infty$. All these ensure that at the ideal-MHD limit, Maxwell's equations reduce to one propagation formula \begin{equation}\label{Faraday} \dot{B}_{\langle a\rangle}= -{2\over3}\,\Theta B_a+ \left(\sigma_{ab}+\varepsilon_{abc}\omega^c\right)B^b \end{equation} and three constraints \begin{equation}\label{Gauss,Coulomb,Ampere} {\rm curl}B_a= \mathcal{J}_a- \varepsilon_{abc}A^bB^c\,, \hspace{10mm} 2\omega_aB^a= \mu \hspace{10mm} {\rm and} \hspace{10mm} {\rm D}^aB_a= 0\,, \end{equation} where $\dot{B}_{\langle a\rangle}=h_a{}^b\dot{B}_b$, ${\rm curl}B_a=\varepsilon_{abc}{\rm D}^bB^c$ and $\mu$ is the electric charge density of the matter~\cite{TB1}-\cite{BMT}. The former of the above, namely Eq.~(\ref{Faraday}), guarantees that the magnetic forcelines always connect the same particles at all times~\cite{E1}. In other words, at the ideal MHD approximation, the $B$-field is frozen into the highly conductive matter. In the absence of electric fields, the total energy-momentum tensor of the magnetised matter, assuming that the latter is a perfect fluid of arbitrarily high electrical conductivity, reads \begin{equation}\label{Ttot} T_{ab}= \left(\rho+\frac{1}{2}\,B^2\right)u_au_b+ \left(p+\frac{1}{6}\,B^2\right)h_{ab}+ \Pi_{ab}\,, \end{equation} where $B^2=B_aB^a$ and $\Pi_{ab}=\Pi_{\langle ab\rangle}= (B^2/3)h_{ab}-B_aB_b$. The former provides a measure of the magnetic energy density and isotropic pressure, while the latter defines the anisotropic pressure of the $B$-field~\cite{TB1}-\cite{BMT}.
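A quick numerical look at $\Pi_{ab}=(B^2/3)h_{ab}-B_aB_b$ anticipates its eigenstructure, discussed next. A Python sketch in Euclidean 3-space with arbitrary field components (an illustration only): directions along the field carry a negative eigenvalue and orthogonal directions a positive one, while the trace vanishes.

```python
import math

B = [1.0, 2.0, 2.0]                  # arbitrary field components, B^2 = 9
B2 = sum(b * b for b in B)
k = [b / math.sqrt(B2) for b in B]   # unit vector along the field lines
n = [2.0, -1.0, 0.0]                 # any vector orthogonal to B

# anisotropic pressure tensor Pi_ab = (B^2/3) delta_ab - B_a B_b
Pi = [[(B2 / 3.0) * (1.0 if i == j else 0.0) - B[i] * B[j]
       for j in range(3)] for i in range(3)]

Pi_k = [sum(Pi[i][j] * k[j] for j in range(3)) for i in range(3)]  # = -(2B^2/3) k
Pi_n = [sum(Pi[i][j] * n[j] for j in range(3)) for i in range(3)]  # = +(B^2/3) n
trace_Pi = sum(Pi[i][i] for i in range(3))                         # = 0
```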
Following (\ref{Tab}) and (\ref{Ttot}), we deduce that our magnetised medium behaves as an imperfect fluid with effective energy density $\rho+B^2/2$, effective isotropic pressure $p+B^2/6$ and effective viscosity $\Pi_{ab}$. The latter is a symmetric and trace-free spacelike tensor, which unveils the generically anisotropic nature of the $B$-field. Note that $\Pi_{ab}$ has positive eigenvalues orthogonal to the magnetic forcelines and a negative one parallel to them. More specifically, it is straightforward to show that $\Pi_{ab}n^b=(B^2/3)n_a$ and that $\Pi_{ab}k^b=-(2B^2/3)k_a$, where $n_a$ and $k_a$ are the unit vectors normal and along $B_a$ respectively. The positive eigenvalues are associated with the ordinary magnetic pressure and reflect the tendency of the field lines to push each other apart. The negative eigenvalue, on the other hand, manifests the tension properties of the magnetic forcelines, their elasticity and their intrinsic ``preference'' to remain as straight as possible~(e.g.~see~\cite{P,M}). The conservation laws for the energy and the momentum densities of a highly conductive magnetised fluid follow from the (twice contracted) Bianchi identities and from Maxwell's equations. In particular, assuming a perfect medium, the timelike and the spacelike parts of the aforementioned Bianchi identities lead to the energy density \begin{equation}\label{rho} \dot{\rho}= -\Theta(\rho+p) \end{equation} and to the momentum-density \begin{equation}\label{A} (\rho+p)A_a= -{\rm D}_ap- \varepsilon_{abc}B^b\mathcal{J}^c\,, \end{equation} conservation laws, namely to the continuity equation and to the Euler equation respectively.\footnote{Following (\ref{A}), the magnetic effects on the fluid propagate via the Lorentz force and require the presence of coherent electric currents. These are generated after inflation, which means that their size cannot exceed that of the causal horizon.
Therefore, the magnetic effects discussed in this work apply primarily to subhorizon scales.} At the same time, the induction equation (see relation (\ref{Faraday}) above) leads to the conservation law of the magnetic energy density, namely to~\cite{TB1}-\cite{BMT} \begin{equation}\label{dotB2} \left(B^2\right)^{\cdot}= -{4\over3}\,\Theta B^2- 2\sigma_{ab}\Pi^{ab}\,. \end{equation} Expressions (\ref{rho}) and (\ref{dotB2}) reveal that, at the ideal-MHD limit, the energy density of the magnetised matter and that of the $B$-field itself are separately conserved. \section{Magnetised density inhomogeneities}\label{ssMDIs} Inhomogeneities in the density distribution of the matter are affected by pressure gradients. As mentioned above, the magnetic field is an additional source of pressure, both positive and negative. In what follows we will study the implications of these two different types of pressure for the linear evolution of magnetised density perturbations in the post-recombination universe. \subsection{The key variables}\label{ssKVs} Following the earlier relativistic treatments of~\cite{TB1,TM} (see also~\cite{BMT} for a review), we monitor inhomogeneities in the density distribution of matter by means of the dimensionless gradient \begin{equation}\label{Drel} \Delta_a= \frac{a}{\rho}\,{\rm D}_a\rho\,. \end{equation} The above variable, which depicts spatial variations in the matter density as measured by a pair of neighbouring observers, is supplemented by the auxiliary quantities \begin{equation}\label{ZBrel} \mathcal{Z}_a= a{\rm D}_a\Theta \hspace{10mm} {\rm and} \hspace{10mm} \mathcal{B}_a= \frac{a}{B^2}\,{\rm D}_aB^2\,. \end{equation} These, in turn, monitor local inhomogeneities in the volume expansion and in the magnetic energy density respectively. Note that all of the above vanish identically in an FRW background (see \S~\ref{ssBM} next) and for this reason they are gauge-invariant linear perturbations~\cite{SW}. 
\subsection{The background model}\label{ssBM} Our aim is to study the magnetic implications for the evolution of density perturbations in a perturbed almost-FRW universe. We therefore select as our background model a spatially flat Friedmann model with zero cosmological constant. Also, to enhance the linear magnetic effects, we will allow for the presence of a completely random and sufficiently weak background magnetic field. The randomness implies that $\langle B_a\rangle=0$, which preserves the isotropy of the FRW host, while $\langle B^2\rangle\neq0$. The weakness ensures that, although the $B$-field contributes to the background energy density, its input is small (i.e.~$\langle B^2\rangle\ll\rho$), leaving the standard FRW dynamics unaffected. The symmetries of the Friedmannian spacetimes imply that the only surviving background variables are time-dependent scalars. All the rest vanish identically and they will be therefore treated as first-order (gauge-invariant) perturbations. These include, among others, the inhomogeneity variables introduced in \S~\ref{ssKVs} earlier. Then, using overbars to denote the zero-order quantities, while setting $\bar{\Theta}=3H$ (where $H=\dot{a}/a$ is the unperturbed Hubble parameter) and $\bar{B}^2=\langle B^2\rangle$, with $\bar{B}^2=\bar{B}^2(t)$, the background evolution is monitored by the set \begin{equation}\label{bgrFr} 3H^2= \bar{\rho}\,, \hspace{10mm} \dot{H}= -H^2- \frac{1}{6}\left(\bar{\rho}+3\bar{p}\right)\,, \hspace{10mm} \dot{\bar{\rho}}= -3H\left(\bar{\rho}+\bar{p}\right) \end{equation} and \begin{equation}\label{bgrB2} (\bar{B}^2)^\cdot= -4H\bar{B}^2\,. \end{equation} Note that we have ignored the magnetic contribution to the zero-order Friedmann and Raychaudhuri equations (see expressions (\ref{bgrFr}a) and (\ref{bgrFr}b) above), given that $\bar{B}^2\ll\bar{\rho}$ in the background.
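As an independent numerical check (not part of the analysis itself), the dust-era background solution $a\propto t^{2/3}$, $H=2/3t$, $\bar{\rho}=4/3t^2$ and $\bar{B}^2\propto a^{-4}\propto t^{-8/3}$ satisfies the set (\ref{bgrFr})--(\ref{bgrB2}) term by term. A Python sketch with arbitrary units and normalisations:

```python
t = 2.5                                # any time in the dust era (arbitrary units)

H = 2.0 / (3.0 * t)                    # a ∝ t^{2/3}
rho = 4.0 / (3.0 * t * t)
H_dot = -2.0 / (3.0 * t * t)
rho_dot = -8.0 / (3.0 * t ** 3)

friedmann = 3.0 * H * H - rho                    # (bgrFr a): should vanish
raychaudhuri = H_dot - (-H * H - rho / 6.0)      # (bgrFr b) with p = 0
continuity = rho_dot - (-3.0 * H * rho)          # (bgrFr c) with p = 0

B2 = t ** (-8.0 / 3.0)                 # B^2 ∝ a^{-4}, unit normalisation
B2_dot = -(8.0 / 3.0) * t ** (-11.0 / 3.0)
induction = B2_dot - (-4.0 * H * B2)             # (bgrB2): should vanish
```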
We also remind the reader that, at the ideal MHD limit, the energy density of the matter and that of the $B$-field are separately conserved (see Eqs.~(\ref{bgrFr}c) and (\ref{bgrB2})). The latter relation also unveils the radiation-like evolution of the zero-order magnetic field, namely that $\bar{B}^2\propto a^{-4}$, which also guarantees magnetic-flux conservation. \subsection{Linear evolution of the inhomogeneities}\label{ssLEIs} The nonlinear formulae describing the general evolution of magnetised density inhomogeneities can be found in~\cite{TB1}-\cite{BMT}, where we refer the reader for further discussion and technical details. Here, we will linearise these relations around a spatially flat FRW background (with zero cosmological constant) permeated by a sufficiently random and weak magnetic field (see \S~\ref{ssBM} above). In doing so, we will treat the magnetic energy-density and pressure gradients as first-order perturbations, which makes the perturbed $B$-field (and its spatial gradients) half-order perturbations.\footnote{The magnetic contribution to the linear equations comes always through terms of order $B^2$ (see expressions (\ref{linD'rel})-(\ref{linB'rel})), which ensures the perturbative consistency of the adopted linearisation scheme.} On these grounds, the linear evolution of the inhomogeneities is monitored by the propagation formulae~\cite{BMT} \begin{equation}\label{linD'rel} \dot{\Delta}_a= 3wH\Delta_a- (1+w)\mathcal{Z}_a+ \frac{3aH}{\bar{\rho}}\,\varepsilon_{abc}B^b{\rm curl}B^c+ 2aH(1+w)c_{\rm a}^2A_a\,, \end{equation} \begin{equation}\label{linZ'rel} \dot{\mathcal{Z}}_a= -2H\mathcal{Z}_a- \frac{1}{2}\,\bar{\rho}\Delta_a- \frac{1}{2}\,\bar{\rho}(1+w)c_{\rm a}^2\mathcal{B}_a+ \frac{3a}{2}\,\varepsilon_{abc}B^b{\rm curl}B^c+ a{\rm D}_a{\rm D}^bA_b \end{equation} and \begin{equation}\label{linB'rel} \dot{\mathcal{B}}_a= \frac{4}{3(1+w)}\,\dot{\Delta}_a- \frac{4wH}{1+w}\,\Delta_a- \frac{4aH}{\bar{\rho}(1+w)}\,\varepsilon_{abc}B^b{\rm curl}B^c-
4aHA_a\,. \end{equation} According to (\ref{linD'rel}) and (\ref{linZ'rel}), the magnetic field also sources inhomogeneities, both in the density of the matter and in the volume expansion of the universe. In the above $w=\bar{p}/\bar{\rho}$ is the background barotropic index of the matter and $c_{\rm a}^2=\bar{B}^2/\bar{\rho}(1+w)$ defines the zero-order Alfv\'{e}n speed. By construction, the latter satisfies the constraint $c_{\rm a}^2\ll1$ due to the overall weakness of the $B$-field. Finally, to linear order, the 4-acceleration vector seen on the right-hand side of the above is given by the momentum conservation law (see Eq.~(\ref{A}) in \S~\ref{ssMHDCLs}), which now reads \begin{equation} \rho(1+w)A_a= -{\rm D}_ap- \varepsilon_{abc}B^b{\rm curl}B^c= -{\rm D}_ap- {1\over2}\,{\rm D}_aB^2+ B^b{\rm D}_bB_a\,, \label{MHDAa} \end{equation} since $\mathcal{J}_a={\rm curl}B_a=\varepsilon_{abc}{\rm D}^bB^c$ to linear order (see Eq.~(\ref{Gauss,Coulomb,Ampere}a)). Note that in the second equality of the above the Lorentz force splits into its pressure and tension stresses, given by ${\rm D}_aB^2/2$ and $B^b{\rm D}_bB_a$ respectively. In what follows, we will investigate the implications of these two magnetic agents for the evolution of linear perturbations in the density distribution of the matter. \subsection{Types of inhomogeneities}\label{ssTIs} The variables defined in \S~\ref{ssKVs}, namely $\Delta_a$, $\mathcal{Z}_a$ and $\mathcal{B}_a$, contain collective information about three types of inhomogeneities: scalar, vector and tensor. The first monitors overdensities or underdensities in the matter distribution, which we usually refer to as density perturbations. Vector inhomogeneities, on the other hand, describe rotational (vortex-like) distortions in the matter. Finally, tensor inhomogeneities describe changes in the shape of the density profile under constant volume.
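At the purely algebraic level, this scalar--vector--tensor split corresponds to decomposing a generic $3\times3$ matrix into its trace (scalar), its antisymmetric part (vector, via the Levi-Civita dual) and its symmetric trace-free part (tensor). A minimal Python sketch with arbitrary entries, showing that the three pieces reassemble the original exactly:

```python
D = [[1.0, 2.0, -0.5],
     [0.3, -1.2, 4.0],
     [2.5, 0.8, 0.7]]                 # arbitrary stand-in matrix (rows: a, columns: b)

delta = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

Delta = sum(D[i][i] for i in range(3))                       # scalar part (trace)
Sigma = [[0.5 * (D[i][j] + D[j][i]) - (Delta / 3.0) * delta[i][j]
          for j in range(3)] for i in range(3)]              # tensor part
A = [[0.5 * (D[i][j] - D[j][i]) for j in range(3)] for i in range(3)]
W = [A[1][2], A[2][0], A[0][1]]                              # vector part: A_ab = eps_abc W^c

def eps(a, b, c):
    # Levi-Civita symbol in 3 dimensions
    return ((a - b) * (b - c) * (c - a)) / 2.0

D_rec = [[(Delta / 3.0) * delta[a][b] + Sigma[a][b]
          + sum(eps(a, b, c) * W[c] for c in range(3))
          for b in range(3)] for a in range(3)]              # reconstruction
```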
We may decode all this information by taking the comoving spatial gradient of $\Delta_a$ and then implementing the irreducible decomposition (e.g.~see~\cite{TCM}) \begin{equation}\label{Delab} \Delta_{ab}= a{\rm D}_b\Delta_a= {1\over3}\,\Delta h_{ab}+ \Sigma_{ab}+ \varepsilon_{abc}W^c\,. \end{equation} Here $\Delta=a{\rm D}^a\Delta_a$ is the scalar describing overdensities/underdensities in the matter, $W_a=-a{\rm curl} \Delta_a/2$ is the vector monitoring density vortices and $\Sigma_{ab}=a{\rm D}_{\langle b}\Delta_{a\rangle}$ is the symmetric and trace-free tensor following changes in the shape of the density profile. Clearly, similar decompositions also apply to the expansion and the magnetic energy-density gradients~\cite{TM,BMT}. The anisotropic nature of the $B$-field ensures its interaction with all of the aforementioned three types of inhomogeneities. Here, we will focus on the linear evolution of scalar density perturbations after recombination. This restriction means that we may set the matter pressure and the associated barotropic index to zero. \section{Magnetised density perturbations}\label{sMDPs} When dealing with dust, the magnetic field becomes the sole source of pressure support. However, this does not a priori guarantee that the growth of density perturbations will slow down in the magnetic presence, since the $B$-field is a source of negative pressure (tension) as well. \subsection{Linear evolution of density perturbations}\label{ssLEDPs} Scalar perturbations in the matter density, in the volume expansion of the universe and in the magnetic energy density are monitored by \begin{equation}\label{scalarsD,Z,B} \Delta= a{\rm D}^a\Delta_a\,, \hspace{10mm} \mathcal{Z}= a{\rm D}^a\mathcal{Z}_a\,, \hspace{10mm} {\rm and} \hspace{10mm} \mathcal{B}= a{\rm D}^a\mathcal{B}_a\,, \end{equation} respectively (see \S~\ref{ssTIs} above). 
Then, setting $w=0$ and $c_{\rm a}^2=\bar{B}^2/\bar{\rho}\ll1$, the comoving 3-divergences of Eqs.~(\ref{linD'rel}), (\ref{linZ'rel}) and (\ref{linB'rel}) lead to the linear propagation formulae \begin{equation}\label{scalarD'rel} \dot{\Delta}= -\mathcal{Z}+ \frac{3}{2}\,Hc_{\rm a}^2\mathcal{B}- Hc_{\rm a}^2\mathcal{K}- \frac{6a^2H}{\bar{\rho}} \left(\sigma_B^2-\omega_B^2\right)\,, \end{equation} \begin{eqnarray}\label{scalarZ'rel} \nonumber \dot{\mathcal{Z}}&=& -2H\mathcal{Z}- \frac{1}{2}\,\bar{\rho}\Delta+ \frac{1}{4}\,\bar{\rho}c_{\rm a}^2\mathcal{B}- \frac{1}{2}\,c_{\rm a}^2{\rm D}^2\mathcal{B}- \frac{1}{2}\,\bar{\rho}c_{\rm a}^2\mathcal{K}- 3a^2\left(\sigma_B^2-\omega_B^2\right) \\&& +\frac{2a^2}{\bar{\rho}}\,{\rm D}^2\left(\sigma_B^2-\omega_B^2\right) \end{eqnarray} and \begin{equation}\label{scalarB'rel} \dot{\mathcal{B}}= \frac{4}{3}\,\dot{\Delta}\,, \end{equation} respectively.\footnote{In deriving Eq.~(\ref{scalarZ'rel}) we have also used the linear auxiliary relation \begin{equation}\label{scalar A} a^2A= -\frac{1}{2}\,c_{\rm a}^2\mathcal{B}+ \frac{1}{3}\,c_{\rm a}^2\mathcal{K}+ \frac{2a^2}{\bar{\rho}}\left(\sigma_B^2-\omega_B^2\right)\,, \end{equation} where $A={\rm D}_aA^a$ is the 3-divergence of the 4-acceleration. Note that the last two terms on the right-hand side of the above, together with the last two terms of (\ref{scalarD'rel}) and the last three terms of (\ref{scalarZ'rel}), represent tension stresses, the effects of which were not included in the relativistic solutions of~\cite{TB1}-\cite{BMT}. 
The Newtonian analogues of the magnetic shear and vorticity, on the other hand, were accounted for in~\cite{VT} (see Eqs.~(23) and (27) there).} Note that the scalars $\sigma_B^2={\rm D}_{\langle b}B_{a\rangle}{\rm D}^{\langle b}B^{a\rangle}/2$ and $\omega_B^2={\rm D}_{[b}B_{a]}{\rm D}^{[b}B^{a]}/2$ are respectively related to shape and rotational distortions in a field-line congruence, which makes the tensors $\sigma_{ab}^B= {\rm D}_{\langle b}B_{a\rangle}$ and $\omega_{ab}^B= {\rm D}_{[b}B_{a]}$ the magnetic analogues of the kinematic shear and vorticity (see \S~\ref{ssMFKs}). Also, $\mathcal{K}=a^2\mathcal{R}$ by definition, with $\mathcal{R}$ representing the perturbed 3-Ricci scalar, which means that the fifth term on the right-hand side of Eq.~(\ref{scalarZ'rel}) carries the combined effects of magnetism and spatial curvature. Note that this particular stress reflects the vector nature of the $B$-field and derives from a purely geometrical coupling between magnetism and spacetime curvature~\cite{TM,T}. The latter comes into play via the Ricci identities and adds to the standard interaction between matter and geometry that the Einstein field equations introduce. Following~\cite{BMT}, the rescaled 3-Ricci scalar evolves as \begin{equation} \dot{\mathcal{K}}= -{4\over3}\,Hc_{\rm a}^2\mathcal{K}+ 2Hc_{\rm a}^2\mathcal{B}\,, \label{lcK} \end{equation} to linear order. Finally, we should point out that the second term on the right-hand side of Eq.~(\ref{scalarD'rel}) and the third and fourth terms on the right-hand of (\ref{scalarZ'rel}) are due to the (positive) magnetic pressure. On the other hand, the last two terms of (\ref{scalarD'rel}) and the last three terms of (\ref{scalarZ'rel}) carry the effects of the field's tension. 
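The pressure--tension split of the Lorentz force used throughout (see the second equality of Eq.~(\ref{MHDAa})) rests on the vector identity $\varepsilon_{abc}B^b\,{\rm curl}B^c={\rm D}_aB^2/2-B^b{\rm D}_bB_a$. As an aside, this can be verified numerically with central differences for an arbitrary smooth test field; the Python sketch below works in flat 3-space and the chosen field is purely illustrative:

```python
import math

def B(x, y, z):
    # an arbitrary smooth test field; the identity holds for any such field
    return (math.sin(y), math.sin(z), math.sin(x))

p = (0.3, 0.7, 1.1)    # evaluation point (arbitrary)
h = 1.0e-5             # finite-difference step

def dB(i, j):
    # central-difference estimate of dB_i/dx_j at the point p
    q1, q2 = list(p), list(p)
    q1[j] += h
    q2[j] -= h
    return (B(*q1)[i] - B(*q2)[i]) / (2.0 * h)

Bp = B(*p)
J = [[dB(i, j) for j in range(3)] for i in range(3)]
curlB = [J[2][1] - J[1][2], J[0][2] - J[2][0], J[1][0] - J[0][1]]

# left-hand side: (B x curl B)_a
lhs = [Bp[1] * curlB[2] - Bp[2] * curlB[1],
       Bp[2] * curlB[0] - Bp[0] * curlB[2],
       Bp[0] * curlB[1] - Bp[1] * curlB[0]]

# right-hand side: (1/2) grad(B^2) - (B . grad) B
gradB2 = [sum(2.0 * Bp[i] * J[i][j] for i in range(3)) for j in range(3)]
rhs = [0.5 * gradB2[a] - sum(Bp[j] * J[a][j] for j in range(3)) for a in range(3)]
```

Since both sides are built from the same field values and the same gradient estimates, they agree to machine precision, independently of the finite-difference error.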
\subsection{The wave-like equation}\label{ssW-LE} Taking the time derivative of (\ref{scalarD'rel}), using the rest of the propagation formulae and keeping terms up to linear order, we obtain the following wave-like equation for the density perturbations \begin{equation}\label{D''} \ddot{\Delta}= -2H\dot{\Delta}+ \frac{1}{2}\,\bar{\rho}\Delta+ \frac{2}{3}\,c_{\rm a}^{2}{\rm D}^{2}\Delta+ \frac{2}{3}\,c_{\rm a}^{2}\bar{\rho}\mathcal{K}+ 4a^{2}\left(\sigma_B^2-\omega_B^2\right)- \frac{2a^2}{\bar{\rho}}\,{\rm D}^2\left(\sigma_B^2-\omega_B^2\right)\,, \end{equation} with additional terms due to the universal expansion, the presence of matter (including the $B$-field) and spacetime curvature. In deriving the above we have also used the linear propagation formulae $(\sigma_B^2)^{\cdot}=-6H\sigma_B^2$ and $(\omega_B^2)^{\cdot}= -6H\omega_B^2$, which in turn follow from the linear auxiliary relation $({\rm D}_bB_a)^{\cdot}=-3H{\rm D}_bB_a$. The latter is obtained after combining the linear commutation law (A.2.2) of~\cite{BMT} with the linearised magnetic induction equation (i.e.~with $\dot{B}_a=-2HB_a$ -- see expression (\ref{Faraday}) in \S~\ref{ssMHDCLs}). Note that, in the absence of matter pressure, the Alfv\'en speed has become the wave velocity as well. Also note that the magneto-curvature effects reverse when the (rescaled) 3-curvature scalar ($\mathcal{K}$) changes from positive to negative and vice versa. Our next step is to harmonically decompose Eq.~(\ref{D''}), by introducing the standard scalar harmonic functions $\mathcal{Q}^{(n)}$, with $\dot{\mathcal{Q}}^{(n)}=0$ and ${\rm D}^2\mathcal{Q}^{(n)}=-(n/a)^2\mathcal{Q}^{(n)}$.
Then, setting $\Delta=\sum_n\Delta_{(n)}\mathcal{Q}^{(n)}$, $\mathcal{K}=\sum_n\mathcal{K}_{(n)}\mathcal{Q}^{(n)}$ and $(\sigma_B^2-\omega_B^2)= \sum_n(\sigma_B^2-\omega_B^2)_{(n)}\mathcal{Q}^{(n)}$, with ${\rm D}_a\Delta_{(n)}=0={\rm D}_a\mathcal{K}_{(n)}={\rm D}_a(\sigma_B^2-\omega_B^2)_{(n)}$, we arrive at \begin{eqnarray}\label{D harmonic} \nonumber \ddot{\Delta}_{(n)}&=& -2H\dot{\Delta}_{(n)}+\frac{1}{2}\,\bar{\rho}\left[1-\frac{4}{9}\,c_{\rm a}^2 \left(\frac{\lambda_H}{\lambda_n}\right)^2\right]\Delta_{(n)}+ \frac{2}{3}\,\bar{\rho}c_{\rm a}^2\mathcal{K}_{(n)} \\&& +4\left[1+\frac{1}{6}\left(\frac{\lambda_H}{\lambda_n}\right)^2\right] \left(\Sigma_B^2-\Omega_B^2\right)_{(n)}\,, \end{eqnarray} where $\lambda_H=1/H$ is the Hubble radius, $\lambda_n=a/n$ is the physical scale of the perturbation (with $n$ being the comoving wavenumber), while $\Sigma_B^2=a^2\sigma_B^2$ and $\Omega_B^2=a^2\omega_B^2$ define the rescaled magnetic shear and vorticity respectively. As expected, in the absence of these stresses, expressions (\ref{D''}) and (\ref{D harmonic}) reduce to the wavelike formula obtained in~\cite{BMT} (see Eq.~(7.4.5) there). Also, for a direct comparison between (\ref{D''}) and its Newtonian analogue, we refer the reader to Eq.~(27) in~\cite{VT}. The second term on the right-hand side of (\ref{D harmonic}) conveys the opposing action of gravity and (magnetic) pressure. These effects cancel each other out (and the aforementioned term goes to zero) at a specific wavelength, which is given by \begin{equation}\label{Jeans} \lambda= \lambda_J= \frac{2}{3}\,c_{\rm a}\lambda_H \end{equation} and marks the (purely magnetic) Jeans length~\cite{TM,BMT}. On scales much larger than $\lambda_J$, gravity prevails and the perturbations grow. When $\lambda_n\ll\lambda_J$, however, the (positive) pressure of the $B$-field dominates and prevents the perturbations from growing (see \S~\ref{sSs} next).
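The harmonic coefficients in (\ref{D harmonic}) can be cross-checked against (\ref{D''}) directly: replacing ${\rm D}^2$ by $-(n/a)^2$ and using $\bar{\rho}=3H^2$, the pressure and tension coefficients must coincide, and the coefficient of $\Delta_{(n)}$ must vanish at $\lambda_n=\lambda_J=2c_{\rm a}\lambda_H/3$. A short Python sketch with purely illustrative parameter values:

```python
import math

# arbitrary illustrative values
H, ca2, a, n = 0.8, 0.01, 1.0, 5.0
rho = 3.0 * H * H                      # background Friedmann equation
lam_H = 1.0 / H                        # Hubble radius
lam_n = a / n                          # perturbation scale

# coefficient of Delta_(n): direct form from (D'') with D^2 -> -(n/a)^2
coeff_direct = 0.5 * rho - (2.0 / 3.0) * ca2 * (n / a) ** 2
coeff_harmonic = 0.5 * rho * (1.0 - (4.0 / 9.0) * ca2 * (lam_H / lam_n) ** 2)

# coefficient of (Sigma_B^2 - Omega_B^2)_(n), using Sigma_B^2 = a^2 sigma_B^2
tens_direct = 4.0 + (2.0 / rho) * (n / a) ** 2
tens_harmonic = 4.0 * (1.0 + (1.0 / 6.0) * (lam_H / lam_n) ** 2)

# the magnetic pressure cancels gravity exactly at the Jeans length
lam_J = (2.0 / 3.0) * math.sqrt(ca2) * lam_H
coeff_at_jeans = 0.5 * rho * (1.0 - (4.0 / 9.0) * ca2 * (lam_H / lam_J) ** 2)
```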
\section{Linear solutions}\label{sSs} We will examine the magnetic effects on the evolution of density perturbations in three steps. First, we will allow the magnetic pressure to act alone. Then, we will consider the simultaneous action of magnetic pressure and tension, leaving the role of the magneto-curvature coupling last. \subsection{Magnetic pressure effects} Without the magnetic tension terms, which include the magneto-curvature stresses as well, Eq.~(\ref{D harmonic}) reduces to \begin{equation}\label{dif1.1} \ddot{\Delta}_{(n)}=-2H\dot{\Delta}_{(n)}+\frac{1}{2}\,\bar{\rho} \left[1-\left(\frac{\lambda_J}{\lambda_n}\right)^2\right] \Delta_{(n)}\,, \end{equation} having substituted for the (magnetic) Jeans length from definition (\ref{Jeans}). During the dust era $a\propto t^{2/3}$, $H=2/3t$ and $\bar{\rho}=4/3t^2$. At the same time, the scale-ratio $\alpha=\lambda_J/\lambda_n$, which carries the magnetic effects, remains constant (recall that $c_{\rm a}\propto t^{-1/3}$ and $\lambda_n\propto t^{2/3}$ after equipartition, while $\lambda_H\propto t$ always and $\lambda_J=2c_{\rm a}\lambda_H/3$). Then, the above differential equation assumes the form \begin{equation}\label{dif1.2} \frac{{\rm d}^2\Delta_{(n)}}{{\rm d}t^2}= -\frac{4}{3t}\,\frac{{\rm d}\Delta_{(n)}}{{\rm d}t}+ \frac{2}{3t^2}\left(1-\alpha^2\right)\Delta_{(n)} \end{equation} and admits the power-law solution \begin{equation}\label{sol1} \Delta_{(n)}= C_1\,t^{s_1}+ C_2\,t^{s_2}\,, \end{equation} with $s_{1,2}=-[1\mp\sqrt{25-24\alpha^2}]/6$. Therefore, in the absence of the $B$-field (i.e.~when $\alpha=0$), we recover the standard non-magnetised solution for linear density perturbations in the dust era (i.e.~$s_{1,2}=2/3, -1$ -- e.g.~see~\cite{TCM}).
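Solution (\ref{sol1}) can be confirmed by inserting $\Delta_{(n)}\propto t^s$ into (\ref{dif1.2}), which yields the indicial equation $s(s-1)+4s/3-2(1-\alpha^2)/3=0$. A short Python sketch (for illustration only) verifying the quoted roots, including the magnetic-free limit:

```python
import math

def residual(s, alpha):
    # indicial equation obtained from (dif1.2) with Delta ∝ t^s
    return s * (s - 1.0) + (4.0 / 3.0) * s - (2.0 / 3.0) * (1.0 - alpha ** 2)

def roots(alpha):
    # the two exponents s_{1,2} = -[1 ∓ sqrt(25 - 24 alpha^2)]/6
    disc = math.sqrt(25.0 - 24.0 * alpha ** 2)
    return (-(1.0 - disc) / 6.0, -(1.0 + disc) / 6.0)

checks = []
for alpha in (0.0, 0.5, 0.9, 1.0):     # sample scale-ratios (illustrative)
    s1, s2 = roots(alpha)
    checks.append(max(abs(residual(s1, alpha)), abs(residual(s2, alpha))))

s1_free, s2_free = roots(0.0)          # no magnetic field: 2/3 and -1
```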
On the other hand, recalling that $\alpha^2=(4c_{\rm a}^2/9) (\lambda_H/\lambda_n)^2$, we deduce that the magnetic pressure inhibits the growth of these distortions by an amount proportional to the Alfv\'en-speed squared.\footnote{Analogous magnetic effects on the linear evolution of density perturbations were also observed during the radiation epoch, in solutions where only the pressure of the $B$-field was accounted for (see~\cite{TB2,BMT} for details).} Moreover, the impact of the aforementioned effect is scale-dependent. We will therefore consider the following three characteristic cases: \begin{itemize} \item $\lambda_n\gg\lambda_J$: On scales much larger than the magnetic Jeans length, we have $\alpha= \lambda_J/\lambda_n\ll1$ and therefore solution (\ref{sol1}) reduces to \begin{equation}\label{1a} \Delta_{(n)}= C_1\,t^{2/3}+ C_2\,t^{-1}\,. \end{equation} We have thus recovered the standard non-magnetised solution, which implies that the magnetic pressure has no effect on large scales. \item $\lambda_n\ll\lambda_J$: Here, $\alpha= \lambda_J/\lambda_n\gg1$, in which case solution (\ref{sol1}) takes the oscillatory form \begin{equation}\label{1b} \Delta_{(n)}= t^{-1/6}\left(C_1\,t^{\imath\alpha\sqrt{2/3}}+ C_2\,t^{-\imath\alpha\sqrt{2/3}}\right)\,. \end{equation} Consequently, on small scales, the magnetic pressure dominates forcing the perturbations to oscillate (with amplitude that decreases as $t^{-1/6}$). \item $\lambda_n=\lambda_J$: At the $\alpha= \lambda_J/\lambda_n=1$ threshold the last term on the right-hand side of Eq.~(\ref{dif1.2}) vanishes and solution (\ref{sol1}) reads \begin{equation}\label{1c} \Delta_{(n)}= C_1+ C_2\,t^{-1/3}\,. \end{equation} In other words, on wavelengths equal to the Jeans length, the magnetic pressure balances the gravitational pull of the matter and the perturbations maintain constant amplitude. \end{itemize} Overall, the effects of the field's pressure are only felt on scales close and below the magnetic Jeans length.
On larger wavelengths, the perturbations grow as if there were no $B$-field present. These results are identical to those obtained in the Newtonian study of~\cite{VT} and very close (both qualitatively and quantitatively) to those of the earlier relativistic treatments~\cite{TB1,BMT}. \subsection{Combined pressure and tension effects}\label{ssCPTEs} The opposite signs of the magnetic-pressure and tension contributions (positive vs negative) indicate that these two agents may act against each other. Here, we will attempt to clarify the matter by considering the combined effect of pressure and tension on the evolution of linear density perturbations. It should be noted, however, that the magneto-curvature stresses (which are also due to the magnetic tension) will remain switched off. Then, the density gradients are monitored by the linear system \begin{equation}\label{D'' p+t} \ddot{\Delta}_{(n)}= -2H\dot{\Delta}_{(n)}+ \frac{1}{2}\,\bar{\rho}\left[1 -\left(\frac{\lambda_J}{\lambda_n}\right)^2\right]\Delta_{(n)}+ 4\left[1+{1\over6}\left(\frac{\lambda_H}{\lambda_n}\right)^2\right] \left(\Sigma_B{}^2-\Omega_B{}^2\right)_{(n)} \end{equation} and \begin{equation}\label{S} \left(\Sigma_B{}^2-\Omega_B{}^2\right)^{\cdot}= -4H\left(\Sigma_B{}^2-\Omega_B{}^2\right)\,. \end{equation} The latter implies that $\Sigma_B$, $\Omega_B\propto a^{-2}$ on all scales and follows from the fact that $\Sigma_B=a\sigma_B$ and $\Omega_B=a\omega_B$, with $\sigma_B$, $\omega_B\propto a^{-3}$ (see Eqs.~(\ref{D''}) and (\ref{D harmonic}) in \S~\ref{ssW-LE}).
Given that $H=2/3t$ and $\bar{\rho}=4/3t^2$ after equilibrium, the above can be recast as \begin{equation}\label{d2Delta} \frac{{\rm d}^2\Delta_{(n)}}{{\rm d}t^2}= -\frac{4}{3t}\frac{{\rm d}\Delta_{(n)}}{{\rm d}t}+ \frac{2}{3t^2}\left(1-\alpha^2\right)\Delta_{(n)}+ 4\left[1+{1\over6}\,\beta^2\left({t\over t_0}\right)^{2/3}\right] \left(\Sigma_B{}^2-\Omega_B{}^2\right)_{(n)} \end{equation} and \begin{equation}\label{dSigma} \frac{{\rm d}}{{\rm d}t}\left(\Sigma_B{}^2-\Omega_B{}^2\right)= -\frac{8}{3t}\left(\Sigma_B{}^2-\Omega_B{}^2\right)\,, \end{equation} respectively. As before, $\alpha=\lambda_J/\lambda_n=$~constant after equipartition, while $\beta=(\lambda_H/\lambda_n)_0=$~constant determines the physical scale of the perturbations at the start of the dust era. When $\alpha\neq1$ and $\alpha\neq\sqrt{2/3}$, the system of (\ref{d2Delta}) and (\ref{dSigma}) can be solved analytically, giving \begin{equation}\label{gensol} \Delta_{(n)}= C_1\,t^{s_1}+ C_2\,t^{s_2}+ C_3\left[\frac{\beta^2}{6(\alpha^2-1)} +\frac{1}{\alpha^2-{2/3}}\left({t_0\over t}\right)^{2/3}\right]\,, \end{equation} where $s_{1,2}=-[1\mp\sqrt{25-24\alpha^2}]/6$, exactly as before (see solution (\ref{sol1})). Consequently, the introduction of the tension stresses has added two extra modes (one constant and one decaying) to the linear evolution of magnetised density perturbations. Then, depending on the scale of the perturbation, we may consider the following cases: \begin{itemize} \item $\lambda_n\gg\lambda_J$: In this case $\alpha\ll1$ and the above solution reduces to\footnote{Although we use the same symbols for the integration constants in all our solutions, these generally differ.} \begin{equation}\label{s2.1} \Delta= C_1\,t^{2/3}+ C_2\,t^{-1}+ C_3+ C_4\,t^{-2/3}\,. \end{equation} Hence, on scales much larger than the magnetic Jeans length, the incorporation of the field's tension has not changed the standard picture.
The density perturbations keep growing as $\Delta\propto t^{2/3}$, like their magnetic-free counterparts (compare to solution (\ref{1a})). \item $\lambda_n\ll\lambda_J$: Here $\alpha\gg1$ and expression (\ref{gensol}) becomes \begin{equation}\label{s2.2} \Delta_{(n)}= t^{-1/6}\left(C_1\,t^{\imath\alpha\sqrt{2/3}}+ C_2\,t^{-\imath\alpha\sqrt{2/3}}\right)+ C_3+ C_4\,t^{-2/3}\,. \end{equation} As in solution (\ref{1b}) before, on small scales the magnetic pressure still forces the perturbations to oscillate with an amplitude that drops as $t^{-{1/6}}$. This time, however, the oscillations do not decay to zero but to a finite constant value that depends on the initial conditions. \item $\lambda_n=\lambda_J$: This special case corresponds to $\alpha=1$, when we can no longer use solution (\ref{gensol}). Instead, substituting $\alpha=1$ into Eq.~(\ref{d2Delta}), the system of (\ref{d2Delta}) and (\ref{dSigma}) gives \begin{equation}\label{s2.3} \Delta= C_1\ln t+ C_2+ C_3\,t^{-1/3}+ C_4\,t^{-2/3}\,. \end{equation} Therefore, at the magnetic Jeans length, where the field's pressure cancels out the gravitational pull of the matter, the magnetic tension becomes the sole player and leads to a weak (logarithmic) growth of the perturbations. Recall that, in the absence of tension stresses, perturbations with wavelength equal to the Jeans length remain constant (see solution (\ref{1c}) before). The growth seen in solution (\ref{s2.3}) demonstrates the opposing action between the field's pressure and tension on the linear evolution of density perturbations, which lies at the core of this investigation. \item $\lambda_n=\sqrt{3/2}\,\lambda_J$: This is our second special case, corresponding to $\alpha=\sqrt{2/3}$ and to $\beta^2(t/t_0)^{2/3}\gg1$, in which case the system of (\ref{d2Delta}) and (\ref{dSigma}) admits the solution \begin{equation}\label{s2.4} \Delta= C_1\,t^{1/3}+ C_2+ C_3\,t^{-2/3}\,. \end{equation} Consequently, on scales that are only slightly larger than the magnetic Jeans length, the perturbations grow as $t^{1/3}$, instead of following the $\Delta\propto t^{2/3}$-law associated with much larger wavelengths (see solutions (\ref{1a}) and (\ref{s2.1})). This implies that the growth-rate of density perturbations increases gradually as we move on to scales progressively larger than $\lambda_J$, where the overall magnetic effect weakens (as expected).\footnote{A closer look into the study of~\cite{VT} reveals that solutions (\ref{s2.3}) and (\ref{s2.4}) reside in the Newtonian equations as well, although not as distinct special cases, which is probably the reason they were not identified there.} \end{itemize} \subsection{Including the magneto-curvature effects}\label{ssIM-CEs} In order to incorporate the magneto-curvature stresses seen in Eqs.~(\ref{D''}) and (\ref{D harmonic}) into our solutions, we need to involve the evolution formula of the rescaled 3-Ricci scalar (see expression (\ref{lcK})). Then, taking the time derivative of (\ref{D''}) and using (\ref{lcK}) and (\ref{S}), we arrive at the differential equation \begin{equation}\label{D'''} \dddot{\Delta}= - 6H\ddot{\Delta}- {7\over6}\,\bar{\rho}\dot{\Delta}+ {1\over2}\,H\bar{\rho}\Delta+ {2\over3}\,Hc_{\rm a}^2{\rm D}^2\Delta+ {2\over3}\,c_{\rm a}^2{\rm D}^2\dot{\Delta}- {2H\over\bar{\rho}}\,{\rm D}^2 \left(\Sigma_B^2-\Omega_B^2\right)\,. \end{equation} Harmonically decomposed, the above reads \begin{eqnarray}\label{D''' hd} \dddot{\Delta}_{(n)}&=& -6H\ddot{\Delta}_{(n)}- {7\over6}\,\bar{\rho} \left(1+\frac{3}{7}\,\alpha^2\right)\dot{\Delta}_{(n)}+ \frac{1}{2}\,H\bar{\rho}\left(1-\alpha^2\right)\Delta_{(n)} \nonumber\\ &&+{2\over3}\,H\beta^2\left({t\over t_0}\right)^{2/3} \left(\Sigma_B^2-\Omega_B^2\right)_{(n)}\,, \end{eqnarray} with $\alpha=\lambda_J/\lambda_n=$~constant and $\beta=(\lambda_H/\lambda_n)_0$.
After equipartition, the above differential equation takes the form \begin{eqnarray}\label{D''' eq} \nonumber \frac{{\rm d}^3\Delta_{(n)}}{{\rm d}t^3}&=& -\frac{4}{t}\,\frac{{\rm d}^2\Delta_{(n)}}{{\rm d}t^2}- \frac{14}{9t^2}\,\left(1+\frac{3}{7}\,\alpha^2\right)\frac{{\rm d}\Delta_{(n)}}{{\rm d}t}+ \frac{4}{9t^3}\left(1-\alpha^2\right)\Delta_{(n)} \\&& +\frac{4}{9t}\,\beta^2\left({t\over t_0}\right)^{2/3} \left(\Sigma_B^2-\Omega_B^2\right)_{(n)}\,. \end{eqnarray} Finally, when $\alpha\neq1$, the system of (\ref{dSigma}) and (\ref{D''' eq}) can be solved to give \begin{equation}\label{gen sol3} \Delta_{(n)}= C_1\,t^{s_1}+ C_2\,t^{s_2}+ {1\over\alpha^2-1}\left(C_3+C_4\,t^{-2/3}\right)\,, \end{equation} with $s_{1,2}=-[1\mp\sqrt{25-24\alpha^2}]/6$. When $\alpha=1$, on the other hand, we obtain \begin{equation} \Delta= C_1\,\ln t+ C_2- C_3\,t^{-1/3}+ C_4\,t^{-2/3}\,. \end{equation} For all practical purposes, the above results are identical to solutions (\ref{gensol}) and (\ref{s2.3}), implying that the inclusion of the magneto-curvature effects does not alter the linear evolution of the density perturbations. This is not surprising, since the spatial flatness of the FRW background ensures that the magneto-curvature stresses are too weak to make a noticeable difference. \section{Discussion}\label{sD} With the exception of the Cosmic Microwave Background (CMB), magnetic fields have been observed nearly everywhere in the cosmos. The idea of primordial magnetism has also been gaining ground because it could in principle explain all the large-scale $B$-fields seen in the universe today. If present, cosmological magnetic fields could have played a role during structure formation, since they can in principle generate and affect the evolution of all types of perturbations, namely scalar, vector and tensor distortions (see \S~\ref{ssTIs}).
When it comes to scalar (density) perturbations, however, half of the magnetic effects are excluded, since all the available cosmological studies (with the exception of~\cite{VT} -- to the best of our knowledge) account only for the field's pressure and bypass the magnetic tension.\footnote{In astrophysics the implications of the magnetic tension have been investigated in a number of studies looking at the physics of star formation, accretion discs and compact stars (e.g.~see~\cite{NCVR}-\cite{MM} and references therein).} Moreover, technically speaking, it is more straightforward to obtain analytic solutions before rather than after equipartition. The main difficulty comes from the Alfv\'en speed, which is constant throughout the radiation era but acquires a time-dependence after equilibrium. As a result, the available dust-epoch solutions were obtained after imposing certain simplifying assumptions~\cite{TB1}-\cite{BMT}. In the present work we re-examine the magnetic implications for the evolution of baryonic density perturbations and try to address both of the aforementioned issues. Our study uses full general relativity, incorporates the effects of the field's tension and focuses on the post-recombination universe. The aim was to refine and extend previous relativistic studies, as well as to provide a direct comparison with the existing Newtonian treatments of the issue. Above all, however, we wanted to investigate and reveal the as yet unknown role of the magnetic tension. At the centre of our analysis is the wave-like equation monitoring the linear evolution of magnetised density perturbations. In contrast to previous approaches, this formula carries the effects of the magnetic tension, in addition to those of the field's pressure. After equipartition, the latter is the sole source of support against the gravitational pull of the matter.
This leads to a purely magnetic Jeans length, which means that the magnetic pressure could in principle determine the first gravitationally bound formations. The tension stresses, on the other hand, are triggered by the elasticity of the field lines and by their natural tendency to react against any agent that distorts them from equilibrium. Among these are the magneto-curvature stresses, which result from the purely geometrical coupling between the $B$-field and the spatial geometry of the host spacetime. We have incorporated all the aforementioned effects into our analytic solutions in three successive steps of increasing inclusiveness. At first, we only considered the effects of the field's pressure, in which case our results were in full agreement with those of the previous Newtonian study. We then also accounted for the role of the magnetic tension and finally, to complete the picture, we incorporated the magneto-curvature stresses as well. Our results showed that the field's pressure and tension act against each other. The magnetic pressure, in particular, inhibits the growth of the perturbations, while the tension tends to enhance it. These effects were also found to be scale-dependent, with the pressure dominating well inside the (purely magnetic) Jeans length and with the tension taking over near the Jeans threshold. On much larger wavelengths, on the other hand, neither of these agents had a measurable effect and the perturbations evolved unaffected by the field's presence. More specifically, well inside the magnetic Jeans length and in the absence of any tension input, we found that the field's pressure forces the perturbations to oscillate with an amplitude that decreases as $\Delta\propto t^{-1/6}$ and decays (asymptotically) to zero. When the magnetic tension was included, the oscillations still decayed (at the same rate), though now to a finite value instead of zero.
Near the Jeans length the support of the field's pressure and the gravitational pull of the matter cancel each other out, thus leaving the magnetic tension as the sole player. This resulted in a slow logarithmic growth of the density perturbations, which revealed the (as yet unknown) opposing action of the aforementioned two magnetic agents on the linear evolution of density gradients. We expect an analogous effect near the Jeans length during the radiation era as well. Qualitatively speaking, the role of the magnetic tension demonstrated how versatile and unconventional the $B$-fields can be. Quantitatively, the tension effects were relatively weak because their contribution decays quickly (faster than that of the field's pressure) with the universal expansion. Nevertheless, it is conceivable that there may be physical situations where the field's tension could play a more prominent role. This would most likely happen in the nonlinear phase of structure formation, on scales considerably smaller than the magnetic Jeans length, and in particular during the (typically) anisotropic collapse of a magnetised protogalactic cloud. Given the complexity of the nonlinear regime, however, one would have to employ numerical methods to complement the analytical work. Beyond the Jeans length, the overall magnetic effect was found to gradually fade away and the standard (non-magnetised) linear growth-rate of density perturbations was eventually re-established. Finally, the magneto-curvature stresses (which also result from the field's tension) were found to be too weak to leave a measurable imprint.
This was largely expected, however, given the (assumed) spatial flatness of our FRW background.\footnote{In order to study the coupling between magnetism and spacetime geometry in detail and to investigate its potential implications in depth, one needs to allow for FRW backgrounds with nonzero spatial curvature.} What is particularly interesting about these stresses is that their effect reverses depending on the sign of the spatial curvature (i.e.~on whether it is positive or negative -- see Eq.~(\ref{D''}) in \S~\ref{ssW-LE}). Therefore, if these magneto-geometrical effects were to be detected, they should also provide information about the universe's spatial geometry.\\ \textbf{Acknowledgments} JDB was supported by the Science and Technology Facilities Council (STFC) of the United Kingdom.
\section{Introduction} Graph clustering \citep{Schaeffer:COSREV2007} is probably one of the main exploratory tools used in network analysis, as it provides data analysts with a high-level, summarized view of complex networks. One of the main paradigms for graph clustering is community search \citep{FortunatoSurveyGraphs2010}: a \emph{community} is a subset of nodes in a graph that are densely connected and have relatively few connections to nodes outside of the community. While this paradigm is very successful in many applications, it suffers from a major limitation: it cannot be used to detect other important structures that arise in graphs, such as bipartite structures, hubs, authorities, and other patterns. The alternative solution favoured in this paper is provided by block models \citep{White1971,White1976}: in such a model, a cluster consists of nodes that share the same connectivity patterns to other clusters, regardless of the pattern itself (community, hub, bipartite, etc.). A popular probabilistic view on block models is provided by the stochastic block model \citep[SBM,][]{Hollands83,wang1987}. The main idea is to assume that a hidden random variable is attached to each node. This variable contains the cluster membership information, while connection probabilities between clusters are handled by the parameters of the model. The reader is referred to \cite{Goldenberg09} for a survey of probabilistic models for graphs and to \cite{wasserman1994social}, Ch.~16, for an overview of stochastic block models. This paper focuses on dynamic graphs in the following sense: we assume that the nodes of the graph are fixed and that interactions between them are directed and take place at a specific instant. In other words, we consider a directed multi-graph (two nodes can be connected by more than one edge) in which each directed edge is labelled with an occurrence time. We are interested in extending the SBM to this type of graph.
More precisely, the proposed model is based on a counting process point of view of the interactions between nodes: we assume that the number of interactions between two nodes follows a non-homogeneous Poisson counting process (NHPP). As in a standard SBM, nodes are assumed to belong to clusters that do not change over time; the temporal aspect is thus handled only via the non-homogeneity of the counting processes. The block model hypothesis then takes the following form: the intensity of the NHPP that counts interactions between two nodes depends only on the clusters of the nodes. In order to obtain a tractable inference, a segmentation of the time interval under study is introduced and the interactions are aggregated over the sub-intervals of the partition. Following \cite{Come_Latouche15}, the model is adjusted to the data via the maximization of the integrated classification likelihood \citep[ICL,][]{biernacki2000assessing} in an exact form. As in \cite{Come_Latouche15} (and \cite{wyse2014inferring} for latent block models), the maximization is done via a greedy search. This allows us to choose the number of clusters in the block model automatically. When the number of sub-intervals is large, the model can suffer from a form of overfitting, as the ICL only penalizes a large number of clusters. Therefore, we introduce a variant, based on the model developed in \cite{corneli_asonam}, in which sub-intervals are clustered into classes of homogeneous intensities. Those clusters are accounted for in a new version of the ICL, which prevents overfitting. This paper is structured as follows: in Section \ref{sec:RW} we mention works related to the approach we propose, Section \ref{sec:TheMod} presents the proposed temporal extension of the SBM, and Section \ref{sec:Est} derives the exact ICL for this model and presents the greedy search algorithm used to maximize the ICL. Section \ref{sec:Exp} gathers experimental results on simulated data and on real world data.
Section \ref{sec:Conc} concludes the paper. \section{Related Works} \label{sec:RW} Numerous extensions of the original SBM have already been proposed to deal with dynamic graphs. In this context, both the cluster memberships of nodes and the interactions between nodes can be seen as stochastic processes. In \cite{yang2011detecting}, for instance, the authors introduce a Markov chain to obtain the cluster of a node at time $t$ given its cluster at time $t-1$. \cite{xu2013dynamic} as well as \cite{xing2010} used a state space model to describe temporal changes at the level of the connectivity pattern. In the latter, the authors developed a method to retrieve overlapping clusters through time. In general, the proposed temporal variations of the SBM share a similar approach: the data set consists of a sequence of graphs rather than the more general structure we assume. Some papers remove this assumption by considering continuous time models in which edges occur at specific instants (for instance when someone sends an email). This is the case of e.g. \cite{proceedingsdubois2013} and of \cite{Rossi12,GuigouresEtAl2015}. A temporal stochastic block model, related to the one presented in this paper, was independently developed by \cite{Matias_Poisson}. They assume that nodes in a network belong to clusters whose composition does not change over time and that interactions are counted by a non-homogeneous Poisson process whose intensity only depends on the node clusters. In order to estimate (non-parametrically) the instantaneous intensity functions of the Poisson processes, they develop a variational EM algorithm to maximize an approximation of the likelihood. \section{The model}\label{sec:TheMod} We consider a fixed set of $N$ nodes, $\{1,\ldots,N\}$, that can interact as frequently as wanted during the time interval $[0,T]$.
Interactions are directed from one node to another and are assumed to be instantaneous\footnote{In practice, for an interaction with a duration, its starting time will be considered.}. A natural mathematical model for this type of interaction is provided by counting processes on $[0,T]$. Indeed, a counting process is a stochastic process whose values are non-negative integers increasing through time: the value at time $t$ can be seen as the number of interactions that took place from $0$ to $t$. Then the classical adjacency matrix $(X_{ij})_{1\leq i, j\leq N}$ of static graphs is replaced by a $N\times N$ collection of counting processes, $(X_{ij}(t))_{1\leq i, j\leq N}$, where $X_{ij}(t)$ is the counting process that gives the number of interactions from node $i$ to node $j$. We still call $\mathbf{X}=(X_{ij}(t))_{1\leq i, j\leq N}$ the adjacency matrix of this dynamical graph. We introduce in this section a generative model for adjacency matrices of dynamical graphs that is inspired by the classical stochastic block model (SBM). \subsection{Non-homogeneous Poisson counting process} We first choose a simple form for $X_{ij}(t)$: we assume that this process is a non-homogeneous Poisson counting process (NHPP) with instantaneous intensity given by the function $\lambda_{ij}$ from $[0,T]$ to $\mathbb{R}_+$. For $s \leq t \leq T$, it then holds that \begin{equation} p(X_{ij}(t) - X_{ij}(s)|\lambda_{ij})=\frac{(\int_s^t \lambda_{ij}(u)du)^{X_{ij}(t) - X_{ij}(s)}}{(X_{ij}(t) - X_{ij}(s))!}\exp\left(-\int_s^t \lambda_{ij}(u)du\right), \end{equation} where $X_{ij}(t) - X_{ij}(s)$ is the (non-negative) number of interactions from $i$ to $j$ that took place during $[s,t]$. (We assume that $X_{ij}(0)=0$.) \subsection{Block modelling} The main idea of the SBM \citep{Hollands83, wang1987} is to assume that nodes have some (hidden) characteristics that solely explain their interactions, in a stochastic sense.
In our context this means that, rather than having pairwise intensity functions $\lambda_{ij}$, those functions are shared by nodes that have the same characteristics. In more technical terms, we assume that the nodes are grouped in $K$ clusters ($\mathcal{A}_1,\dots, \mathcal{A}_K$) and introduce a hidden cluster membership random vector $\mathbf{z}\in \{1,\ldots K\}^N$ such that \begin{equation*} z_i=k \qquad\text{iff}\qquad i \in \mathcal{A}_k, \qquad k \leq K. \end{equation*} Each component $z_i$ is assumed to follow a multinomial distribution with parameter vector $\omega$ such that \begin{equation*} \mathbb{P}\{ z_i=k \}=\omega_k \qquad\text{with}\qquad \sum_{k \leq K} \omega_k=1. \end{equation*} In addition, the $(z_i)_{1\leq i\leq N}$ are assumed to be independent (given $\boldsymbol{\omega}$) and thus \begin{equation}\label{eq:z:omega} p(\mathbf{z} | \boldsymbol{\omega},K)=\prod_{k \leq K}\omega_k^{|\mathcal{A}_k|}, \end{equation} where $|\mathcal{A}_k|$ denotes the cardinality of $\mathcal{A}_k$. Notice that this part of the model is exactly identical to the classical SBM. In a second step, we assume that, given $\mathbf{z}$, the counting processes $X_{ij}(t)$ are independent and, in addition, that the intensity function $\lambda_{ij}$ depends only on $z_i$ and $z_j$. In order to keep notations tight, we denote by $\lambda_{z_iz_j}$ the common intensity function and we will not use the pairwise intensity functions $\lambda_{ij}$ directly. We denote by $\boldsymbol{\lambda}$ the matrix-valued intensity function $\boldsymbol{\lambda}=(\lambda_{kg}(t))_{1\leq k,g \leq K}$. Combining all the assumptions, we have for $s \leq t \leq T$ \begin{equation} \label{eq:fullmodel} p(\mathbf{X}(t)-\mathbf{X}(s)|\mathbf{z},\boldsymbol{\lambda})=\prod_{i\neq j} \frac{(\int_s^t \lambda_{z_iz_j}(u)du)^{X_{ij}(t)-X_{ij}(s)}}{(X_{ij}(t) - X_{ij}(s))!}\exp\left(-\int_s^t \lambda_{z_iz_j}(u)du\right). \end{equation} \subsection{Discrete time version} In order to make inference tractable, we move from the continuous time model to a discrete time one. This is done via a partition of the interval $[0,T]$ based on a set of $U+1$ instants \begin{equation*} 0=t_0 \leq t_1 \leq \dots \leq t_{U-1} \leq t_U=T, \end{equation*} that defines $U$ intervals $I_u:=[t_{u-1}, t_u[$ (with arbitrary lengths $\Delta_u$). The purpose of the partition is to aggregate the interactions. Let us denote \begin{equation} Y_{ij}^{I_u}:=X_{ij}(t_u) - X_{ij}(t_{u-1}), \qquad u \in \{1, \dots, U \}. \end{equation} In words, $Y_{ij}^{I_u}$ measures the increment, over the time interval $I_u$, of the Poisson process counting interactions from $i$ to $j$. We denote by $Y_{ij}$ the random vector \begin{equation*} Y_{ij}:=(Y_{ij}^{I_1}, \dots , Y_{ij}^{I_U})^T. \end{equation*} Thanks to the independence of the increments of a Poisson process, we get the following joint density: \begin{equation} \label{eq:L1} p(Y_{ij} | \lambda_{ij})= \prod_{u=1}^{U} \left(\frac{(\int_{I_u} \lambda_{ij}(s)ds)^{Y_{ij}^{I_u}}}{Y_{ij}^{I_u}!}\exp{\left(-\int_{I_u} \lambda_{ij}(s)ds\right)}\right). \end{equation} The distribution of $Y_{ij}$ depends on $\lambda_{ij}$ only through its integrals over the intervals $I_u$; the variations of $\lambda_{ij}$ inside each interval have no further effect. This allows us to use the integrated intensity function $\Lambda$ defined on $[0,T]$ by \begin{equation*} \Lambda_{ij}(t) :=\int_{0}^t \lambda_{ij}(s)ds. \end{equation*} In addition, we denote by $\pi_{ij}^{I_u}$ the increment of the integrated intensity function over $I_u$ \begin{equation*} \pi_{ij}^{I_u}:=\Lambda_{ij}(t_u)- \Lambda_{ij}(t_{u-1}), \qquad\forall u \in \{1,\dots, U\}. \end{equation*} Then equation \eqref{eq:L1} becomes \begin{equation} \label{eq:L2} p(Y_{ij} | \pi_{ij})= \prod_{u=1}^{U} \left(\frac{(\pi_{ij}^{I_u})^{Y_{ij}^{I_u}}}{Y_{ij}^{I_u}!}\exp{\left( -\pi_{ij}^{I_u}\right)}\right), \end{equation} with $\pi_{ij}:=(\pi_{ij}^{I_1}, \dots, \pi_{ij}^{I_U})^T$.
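The discretisation above is easy to reproduce numerically. Below is a minimal sketch (our own illustration, not code from the paper) that simulates one counting process $X_{ij}(t)$ by Lewis--Shedler thinning, for an assumed bounded intensity $\lambda_{ij}(t)=2+\sin t$, and then forms the aggregated increments $Y_{ij}^{I_u}$ over a regular partition:

```python
import numpy as np

rng = np.random.default_rng(0)

T, U = 10.0, 5                       # observation window [0, T], number of sub-intervals
edges = np.linspace(0.0, T, U + 1)   # t_0 = 0 < t_1 < ... < t_U = T (regular here)

def intensity(t):
    """A hypothetical intensity lambda_ij(t); any bounded choice works."""
    return 2.0 + np.sin(t)

lam_max = 3.0  # upper bound of intensity() on [0, T], required by thinning

# Lewis-Shedler thinning: draw candidate points from a homogeneous Poisson
# process of rate lam_max, then keep each with probability intensity(t)/lam_max.
candidates = rng.uniform(0.0, T, rng.poisson(lam_max * T))
keep = rng.uniform(0.0, 1.0, candidates.size) < intensity(candidates) / lam_max
events = np.sort(candidates[keep])   # event times of X_ij(t)

# Aggregated increments Y_ij^{I_u} = X_ij(t_u) - X_ij(t_{u-1})
Y_ij, _ = np.histogram(events, bins=edges)
print(Y_ij)
```

By construction, the entries of \texttt{Y\_ij} are independent Poisson variables with means $\pi_{ij}^{I_u}=\Lambda_{ij}(t_u)-\Lambda_{ij}(t_{u-1})$, in agreement with equation \eqref{eq:L2}.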
Using the block model assumptions, we have in addition \begin{equation} \label{eq:Y:bm} p(Y_{ij} | \pi_{z_iz_j},z_i,z_j)= \prod_{u=1}^{U} \left(\frac{(\pi_{z_iz_j}^{I_u})^{Y_{ij}^{I_u}}}{Y_{ij}^{I_u}!}\exp{\left( -\pi_{z_iz_j}^{I_u}\right)}\right), \end{equation} where we have used the fact that $\lambda_{ij}=\lambda_{z_iz_j}$ (which leads to $\Lambda_{ij}=\Lambda_{z_iz_j}$, etc.). Considering the network as a whole, we can introduce two tensors of order 3: $Y$ is a $N\times N\times U$ random tensor whose element $(i,j,u)$ is the random variable $Y_{ij}^{I_u}$, and $\pi$ is the $K \times K \times U$ tensor whose element $(k,g,u)$ is $\pi_{kg}^{I_u}$. $Y$ can be seen as an aggregated (or discrete time) version of the adjacency process $\mathbf{X}$, while $\pi$ can be seen as a summary of the matrix-valued intensity function $\boldsymbol{\lambda}$. The conditional independence assumption of the block model leads to \begin{equation} \label{eq:L4} p(Y|\pi,\mathbf{z})=\prod_{i,j}p(Y_{ij}|\pi_{z_iz_j},z_i,z_j). \end{equation} To simplify the rest of the paper, we will use the following notations \begin{align*} \prod_{i,j} \prod_{k,g}\prod_{u}&:=\prod_{i=1}^N \prod_{j=1}^N \prod_{k=1}^K \prod_{g=1}^K \prod_{u=1}^U\\ \prod_{z_i=k} \left(\prod_{z_j=g}\right)&:=\prod_{\substack{i: \\ z_i=k}} \left(\prod_{\substack{j: \\ z_j=g}}\right).
\end{align*} The joint distribution of $Y$, given $\textbf{z}$ and $\pi$, is \begin{align} \label{eq:L5} p(Y|\mathbf{z}, \pi) &= \prod_{i,j} \prod_u \left(\frac{(\pi_{z_i z_j}^{I_u})^{Y_{ij}^{I_u}}} {Y_{ij}^{I_u}!}\exp{\left( -\pi_{z_i z_j}^{I_u}\right)}\right) \nonumber \\ &= \prod_{k,g} \prod_{u}\left(\frac{(\pi_{kg}^{I_u})^{S_{kgu}}} {P_{kgu}}\exp{\left( -|\mathcal{A}_k||\mathcal{A}_g|\pi_{kg}^{I_u}\right)}\right), \end{align} where \begin{equation*} S_{kgu}=\sum_{\substack{ z_i=k}}\sum_{\substack{ z_j=g}} Y_{ij}^{I_u} \end{equation*} is the total number of interactions from cluster $k$ to cluster $g$ (possibly equal to $k$) and with \begin{equation*} P_{kgu}=\prod_{\substack{z_i=k}}\prod_{z_j=g} Y_{ij}^{I_u}!. \end{equation*} \subsection{A constrained version} As will be shown in Section \ref{par:GS}, the model presented thus far is prone to overfitting when the number of sub-intervals $U$ is large compared to $N$. Additional constraints on the intensity functions $\{\Lambda_{kg}(t)\}_{k,g \leq K}$ are needed in this situation. Let us consider a fixed pair of clusters $(k,g)$. So far, the increments $\{\pi_{kg}^{I_u}\}_{u \leq U}$ are allowed to differ on each $I_u$ over the considered partition. A constraint can be introduced by assigning the time intervals $(I_1,\dots I_U)$ to different time clusters and assuming that increments are identical for all the intervals belonging to the same time cluster. Formally, we introduce $D$ clusters ($\mathcal{C}_1,\dots, \mathcal{C}_D$) and a hidden random vector $\mathbf{y}\in \{1,\dots,D\}^U$ of cluster memberships, \begin{equation*} y_u=d \qquad\text{iff}\qquad I_u \in \mathcal{C}_d.
\end{equation*} Each $y_u$ is assumed to follow a multinomial distribution with parameter vector $\boldsymbol{\rho}$ \begin{equation*} \mathbb{P}\{ y_u=d \}=\rho_d \qquad\text{with}\qquad \sum_{d \leq D} \rho_d=1, \end{equation*} and in addition the $y_u$ are assumed to be independent, leading to \begin{equation}\label{eq:y:rho} p(\mathbf{y} | \boldsymbol{\rho},D)=\prod_{d \leq D}\rho_d^{|\mathcal{C}_d|}. \end{equation} The random variable $Y_{ij}^{I_u}$ is now assumed to follow the conditional distribution \begin{equation} p(Y_{ij}^{I_u}| \mathbf{z}, \mathbf{y})=\frac{(\pi_{z_i z_j}^{y_u})^{Y_{ij}^{I_u}}}{Y_{ij}^{I_u}!}\exp{(-\pi_{z_i z_j}^{y_u})}. \end{equation} Notice that the new Poisson parameter $\pi_{z_i z_j}^{y_u}$ replaces the $\pi_{z_i z_j}^{I_u}$ of the unconstrained version. The joint distribution of $Y$, given $\mathbf{z}$ and $\mathbf{y}$, can easily be obtained: \begin{equation} \label{eq:L_unconstrained} p(Y|\mathbf{z}, \mathbf{y}, \pi)=\prod_{k,g} \prod_{d}\left(\frac{(\pi_{kg}^d)^{S_{kgd}}}{P_{kgd}}\exp{\left( -|\mathcal{A}_k||\mathcal{A}_g||\mathcal{C}_d|\pi_{kg}^{d}\right)}\right), \end{equation} where \begin{equation*} S_{kgd}=\sum_{\substack{z_i=k}}\sum_{\substack{z_j=g}}\sum_{y_u=d} Y_{ij}^{I_u}, \qquad P_{kgd}=\prod_{\substack{z_i=k}}\prod_{z_j=g}\prod_{y_u=d} Y_{ij}^{I_u}!. \end{equation*} \begin{Remark} The introduction of the hidden vector $\mathbf{y}$ is not the only way to impose regularity constraints on the integrated function $\Lambda_{kg}(t)$. For example, a segmentation constraint could be imposed by forcing each temporal cluster to contain only adjacent time intervals. \end{Remark} \subsubsection{Summary} We have defined two generative models: \begin{description} \item[Model A] The model has two meta parameters: $K$, the number of clusters, and $\boldsymbol{\omega}$, the parameters of a multinomial distribution on $\{1,\ldots,K\}$.
The hidden variable $\mathbf{z}$ is generated by the multivariate multinomial distribution of equation \eqref{eq:z:omega}. Then the model has a $K\times K\times U$ tensor of parameters $\pi$. Given $\mathbf{z}$ and $\pi$, the model generates a tensor of interaction counts $Y$ using equation \eqref{eq:L5}. \item[Model B] is a constrained version of model \textbf{A}. In addition to the meta parameters $K$ and $\boldsymbol{\omega}$ of model \textbf{A}, it has two meta parameters: $D$, the number of clusters of time sub-intervals, and $\boldsymbol{\rho}$, the parameters of a multinomial distribution on $\{1,\ldots,D\}$. The hidden variable $\mathbf{y}$ is generated by the multivariate multinomial distribution of equation \eqref{eq:y:rho}. Model \textbf{B} has a $K\times K\times D$ tensor of parameters $\pi$. Given $\mathbf{z}$, $\mathbf{y}$ and $\pi$, the model generates a tensor of interaction counts $Y$ using equation \eqref{eq:L_unconstrained}. \end{description} Unless specified otherwise, ``the model'' refers to model \textbf{A}. \section{Estimation}\label{sec:Est} \subsection{Non parametric estimation of integrated intensities} In this section we assume that $\mathbf{z}$ is known. No hypothesis has been formulated about the shape of the functions $\{\Lambda_{kg}(t)\}_{\{k,g \leq K, t \leq T\}}$, and the increments of these functions over the introduced partition can be estimated by maximum likelihood (ML), thanks to equation \eqref{eq:L5}: \begin{equation*} \log\mathcal{L}(\pi| Y, \mathbf{z})=\sum_{k,g}\sum_{u} \left[S_{kgu}\log(\pi_{kg}^{I_u}) - |\mathcal{A}_k||\mathcal{A}_g|\pi_{kg}^{I_u} + c \right], \end{equation*} where $c$ denotes the terms not depending on $\pi$. It immediately follows that \begin{equation} \label{eq:MLE_pi} \hat{\pi}_{kg}^{I_u}=\frac{S_{kgu}}{|\mathcal{A}_k||\mathcal{A}_g|}, \qquad\forall (k,g), \end{equation} where $\hat{\pi}_{kg}^{I_u}$ denotes the ML estimator of $\pi_{kg}^{I_u}$.
In words, $\Lambda_{kg}(t_u) - \Lambda_{kg}(t_{u-1})$ can be estimated by ML as the total number of interactions on the sub-graph corresponding to the connections from cluster $\mathcal{A}_k$ to cluster $\mathcal{A}_g$, over the time interval $I_u$, divided by the number of possible connections from $\mathcal{A}_k$ to $\mathcal{A}_g$. Once the tensor $\pi$ has been estimated, we have a point-wise, non parametric estimator of $\Lambda_{kg}(t_u)$, for every $u \leq U$, defined by \begin{equation} \label{eq:MLE_Lambda} \hat{\Lambda}_{kg}(t_u)= \sum_{l=1}^u \hat{\pi}_{kg}^{I_l}, \qquad\forall (k,g). \end{equation} Thanks to the properties of the ML estimator, together with the linearity of \eqref{eq:MLE_Lambda}, we know that $\hat{\Lambda}_{kg}(t_u)$ is an unbiased and consistent estimator of $\Lambda_{kg}(t_u)$. \begin{Remark} Estimator \eqref{eq:MLE_Lambda} at times $\{t_u\}_{u\leq U}$ can be viewed as an extension to random graphs and mixture models of the non parametric estimator proposed in \cite{Leemis91}. In that article, $N$ trajectories of independent NHPPs, sharing the same intensity function, are observed and the proposed estimator is essentially obtained via the method of moments. \end{Remark} In all the experiments, we consider the following step-wise linear estimator of $\Lambda_{kg}(t)$ \begin{equation} \label{eq:linear_estimator} \hat{\Lambda}_{kg}(t)=\sum_{u=1}^{U} \left[\hat{\Lambda}_{kg}(t_{u-1}) + \frac{\hat{\Lambda}_{kg}(t_u) - \hat{\Lambda}_{kg}(t_{u-1})}{t_u - t_{u-1}}(t - t_{u-1})\right]\mathbf{1}_{[t_{u-1}, t_u[}(t), \end{equation} which linearly interpolates the estimators in equation \eqref{eq:MLE_Lambda} on the interval $[0,T]$. This estimator is consistent and unbiased at the times $\{t_u\}_{u \leq U}$ only.
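For illustration, the estimators in equations \eqref{eq:MLE_pi}, \eqref{eq:MLE_Lambda} and \eqref{eq:linear_estimator} can be sketched in a few lines of Python. This is a hypothetical sketch, not the implementation used in the paper; the tensor `Y`, the label vector `z` and the grid `t_grid` of interval endpoints are assumed inputs.

```python
import numpy as np

def ml_intensities(Y, z, K):
    """ML estimates pi_hat[k, g, u] = S_kgu / (|A_k| |A_g|), equation (MLE_pi)."""
    sizes = np.array([(z == k).sum() for k in range(K)])
    pi_hat = np.zeros((K, K, Y.shape[2]))
    for k in range(K):
        for g in range(K):
            # total interactions from cluster A_k to cluster A_g on each interval I_u
            S = Y[np.ix_(z == k, z == g)].sum(axis=(0, 1))
            pi_hat[k, g] = S / (sizes[k] * sizes[g])
    return pi_hat

def iif_estimate(pi_hat, t_grid, t):
    """Piecewise-linear estimate of Lambda_kg(t), equation (linear_estimator).

    t_grid = [t_0 = 0, t_1, ..., t_U = T]; the knots are the cumulative sums
    of pi_hat along u (equation (MLE_Lambda)), joined by straight lines.
    """
    K = pi_hat.shape[0]
    knots = np.concatenate([np.zeros((K, K, 1)), np.cumsum(pi_hat, axis=2)], axis=2)
    return np.array([[np.interp(t, t_grid, knots[k, g]) for g in range(K)]
                     for k in range(K)])
```

The cumulative-sum/interpolation structure mirrors the two-stage construction in the text: first the increments, then the integrated intensity.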
When considering model \textbf{B}, equations \eqref{eq:MLE_pi} and \eqref{eq:MLE_Lambda} are replaced by \begin{align} \label{eq:MLE_modelB} &\hat{\pi}_{kg}^{d}=\frac{S_{kgd}}{|\mathcal{A}_k||\mathcal{A}_g||\mathcal{C}_d|} \\ &\hat{\Lambda}_{kg}(t_u)= \sum_{l=1}^u \hat{\pi}_{kg}^{y_l}. \end{align} Equation \eqref{eq:linear_estimator} remains unchanged, but an important difference between the constrained model and the unconstrained one should be understood: in the unconstrained model, each interval $I_u$ corresponds to a different slope of the function $\hat{\Lambda}_{kg}(t)$, whereas in the constrained one there are only $D$ different slopes, one for each time cluster. \subsection{ICL} Since the vector $\mathbf{z}$, as well as the number of clusters $K$, is unknown, estimator \eqref{eq:MLE_pi} cannot be used directly. Hence we propose a two-step procedure consisting of \begin{enumerate} \item providing estimates of $\mathbf{z}$ and $K$, \item using these estimates to implement \eqref{eq:MLE_pi} and \eqref{eq:MLE_Lambda}. \end{enumerate} To accomplish the first task, the same approach as in \cite{Come_Latouche15} is adopted: we directly maximize the joint integrated log-likelihood of the complete data (ICL), relying on a greedy search over the labels and the number of clusters. To perform such a maximization, we need the ICL to have an explicit form. This can be achieved by introducing conjugate prior distributions on the model parameters. The ICL can be written as \begin{equation} \label{eq:ICL} \mathcal{ICL}(\mathbf{z}, K):=\log(p(Y,\mathbf{z}|K))=\log(p(Y|\mathbf{z}, K)) + \log(p(\mathbf{z}|K)). \end{equation} This \emph{exact} quantity is approximated by the well known ICL \emph{criterion} \citep{biernacki2000assessing}. This criterion, obtained through Laplace and Stirling approximations of the joint density on the left hand side of equation \eqref{eq:ICL}, is used as a model selection tool, since it penalizes models with a high number of parameters.
In the following, we refer to the joint log-density in equation \eqref{eq:ICL} as the \emph{exact ICL}, to differentiate it from the ICL criterion. We now study in detail the two quantities on the r.h.s. of the above equation. The first probability density is obtained by integrating out the parameter $\pi$ \begin{equation*} p(Y|\mathbf{z}, K)= \int p(Y,\pi|\mathbf{z},K)d\pi. \end{equation*} In order to have an explicit formula for this term, we impose the following conjugate Gamma prior density on the tensor $\pi$: \begin{equation*} p(\pi|a,b)=\prod_{k,g,u}\frac{b^a}{\Gamma(a)}\pi_{kgu}^{a-1}e^{-b \pi_{kgu}}, \end{equation*} where the hyper-parameters of the Gamma prior distribution have been set constant to $a$ and $b$ for simplicity.\footnote{The model can easily be extended to the more general framework: \begin{equation*} p(\pi_{kgu}|a_{kgu}, b_{kgu})=\text{Gamma}(\pi_{kgu}|a_{kgu}, b_{kgu}). \end{equation*} } By the product rule \begin{equation*} p(Y,\pi|\mathbf{z})=p(Y|\pi,\mathbf{z})p(\pi|a,b), \end{equation*} we get \begin{align*} \begin{split} p(Y, \pi|\mathbf{z})= &\prod_{k,g,u} \frac{b^a}{\Gamma(a)P_{kgu}}\pi_{kgu}^{S_{kgu}+a-1}\\ &\times\exp\left(-\pi_{kgu}\left[|\mathcal{A}_k||\mathcal{A}_g|+ b\right]\right), \end{split} \end{align*} which can be integrated with respect to $\pi$ to obtain \begin{align} \label{eq:T1} \begin{split} p(Y|\mathbf{z},K)=&\prod_{k,g,u} \left[\frac{b^a}{\Gamma(a)P_{kgu}} \frac{\Gamma[S_{kgu}+a]}{\left[|\mathcal{A}_k||\mathcal{A}_g| + b\right]^{(S_{kgu}+a)}}\right]. \end{split} \end{align} We now focus on the second density on the right hand side \begin{equation*} p(\mathbf{z}|K)=\int p(\mathbf{z},\boldsymbol{\omega}|K)d\boldsymbol{\omega}. \end{equation*} A Dirichlet \emph{prior} distribution can be attached to $\boldsymbol{\omega}$ in order to get an explicit formula, in a similar fashion to what we did with $\pi$: \begin{align*} \nu(\boldsymbol{\omega}|K) =& \text{Dir}_K(\boldsymbol{\omega}; \alpha,\dots,\alpha).
\end{align*} The integrated density $p(\mathbf{z}|K)$ can then be shown to reduce to \begin{align} \label{eq:p_z} p(\mathbf{z}| K)=\frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K}\frac{\prod_{k\leq K}\Gamma(|\mathcal{A}_k| + \alpha)}{ \Gamma(N + \alpha K)}. \end{align} \subsection{Model B} When considering the constrained framework described at the end of the previous section, the ICL is defined as \begin{align*} \mathcal{ICL}(\mathbf{z}, \mathbf{y}, K,D):=&\log(p(Y,\mathbf{z}, \mathbf{y}|K,D)) \\ =&\log(p(Y|\mathbf{z},\mathbf{y})) + \log(p(\mathbf{z}|K)) + \log(p(\mathbf{y}|D)) \end{align*} and it is maximized to provide estimates of $\mathbf{z}, \mathbf{y}, K$ and $D$. The first density on the right hand side is obtained by integrating out the parameter $\pi$. This integration can be done explicitly by attaching to $\pi$ the following prior density function \begin{equation*} \nu(\pi|a,b)=\prod_{k,g}\prod_d \frac{b^a}{\Gamma(a)}\pi_{kgd}^{a-1}e^{-b \pi_{kgd}}. \end{equation*} The second integrated density on the right hand side is given in \eqref{eq:p_z} and the third is obtained by integrating out the parameter $\boldsymbol{\rho}$, whose prior density function is assumed to be \begin{equation*} \nu(\boldsymbol{\rho}|D)=\text{Dir}_D(\boldsymbol{\rho};\beta, \dots, \beta). \end{equation*} The exact ICL is finally obtained by taking the logarithm of \begin{align} \label{eq:modelB_ICL} p(Y,\mathbf{z}, \mathbf{y}|K,D)&=\prod_{k,g,d} \frac{b^a}{\Gamma(a)P_{kgd}} \frac{\Gamma[S_{kgd}+a]}{\left[|\mathcal{A}_k||\mathcal{A}_g||\mathcal{C}_d| + b\right]^{(S_{kgd}+a)}} \nonumber \\ &\times \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K}\frac{\prod_{k\leq K}\Gamma(|\mathcal{A}_k| + \alpha)}{ \Gamma(N + \alpha K)}\nonumber \\ &\times \frac{\Gamma(\beta D)}{\Gamma(\beta)^D}\frac{\prod_{d\leq D}\Gamma(|\mathcal{C}_d| + \beta)}{ \Gamma(U + \beta D)}.
\end{align} \subsection{Greedy search} \label{par:GS} By setting conjugate prior distributions over the model parameters, we obtained an ICL (equation \eqref{eq:ICL}) in explicit form. Nonetheless, explicit formulas to maximize it with respect to $\mathbf{z}$ and $K$ do not exist. We therefore rely on a greedy search algorithm, which has been used to maximize the exact ICL, in the context of a standard SBM, by \cite{Come_Latouche15}. This algorithm basically works as follows: \begin{enumerate} \item An initial configuration for both $\mathbf{z}$ and $K$ is set (standard clustering algorithms like \emph{k-means} or hierarchical clustering can be used). \item Label switches leading to the highest increase in the exact ICL are repeatedly made. A label switch consists of merging two clusters or moving a node from one cluster to another. \end{enumerate} \begin{Remark} The greedy algorithm described in this section makes the best choice \emph{locally}. Convergence toward the global optimum is not guaranteed, and often this optimum can only be approximated by a local optimum reached by the algorithm. \end{Remark} \begin{Remark} The \emph{exact ICL} (as well as the \emph{ICL criterion}) penalizes the number of parameters. Since the tensor $\pi$ has dimension $K \times K \times U$, when $U$, which is fixed, is very high, the ICL takes its maximum at $K=1$. In other words, the only way for the ICL to make the model more parsimonious is to reduce $K$ down to one. By doing so, no community (or other) structure can be detected. This over-fitting problem has nothing to do with the possible limitations of the greedy search algorithm, and it can be solved by switching to model \textbf{B}. \end{Remark} Once $K_{max}$ has been fixed, together with an initial value of $\mathbf{z}$, a shuffled sequence of all the nodes in the graph is created. Each node in the sequence is moved to the cluster leading to the highest increase in the ICL, if any.
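To make the search concrete, the following hypothetical Python sketch evaluates the exact ICL of equations \eqref{eq:T1} and \eqref{eq:p_z} on the log scale and performs one greedy-exchange pass over the nodes. For clarity the ICL is recomputed from scratch at every candidate move; an actual implementation would instead assess the increases directly, as detailed below.

```python
import numpy as np
from scipy.special import gammaln

def log_icl(Y, z, K, a=1.0, b=1.0, alpha=1.0):
    """Exact ICL on the log scale: log p(Y|z,K) + log p(z|K), eqs. (T1) and (p_z)."""
    N = Y.shape[0]
    sizes = np.array([(z == k).sum() for k in range(K)])
    ll = 0.0
    for k in range(K):
        for g in range(K):
            block = Y[np.ix_(z == k, z == g)]            # counts from A_k to A_g
            S = block.sum(axis=(0, 1))                   # S_kgu for u = 1..U
            log_P = gammaln(block + 1).sum(axis=(0, 1))  # log P_kgu
            ll += np.sum(a * np.log(b) - gammaln(a) - log_P
                         + gammaln(S + a)
                         - (S + a) * np.log(sizes[k] * sizes[g] + b))
    # log p(z | K), equation (p_z)
    ll += (gammaln(alpha * K) - K * gammaln(alpha)
           + gammaln(sizes + alpha).sum() - gammaln(N + alpha * K))
    return ll

def greedy_exchange(Y, z, K, seed=0):
    """GE sweeps: each node, in shuffled order, moves to its best cluster."""
    rng = np.random.default_rng(seed)
    z = z.copy()
    improved = True
    while improved:
        improved = False
        for i in rng.permutation(len(z)):
            current = log_icl(Y, z, K)
            best_k, best_gain = z[i], 0.0
            for k in range(K):
                if k == z[i]:
                    continue
                old, z[i] = z[i], k          # tentatively move node i
                gain = log_icl(Y, z, K) - current
                z[i] = old                   # undo the move
                if gain > best_gain:
                    best_k, best_gain = k, gain
            if best_gain > 0:                # accept only strict improvements
                z[i] = best_k
                improved = True
    return z
```

Because moves are accepted only when they strictly increase the ICL, each sweep is monotone and the procedure terminates.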
This procedure is repeated until no further increase in the ICL is possible. Henceforth, we refer to this step as \emph{Greedy-Exchange} (\textbf{GE}). When maximizing the modularity score to detect communities, the \textbf{GE} is usually a final refinement step, adopted after repeatedly merging clusters of nodes. In that context, moreover, the number of clusters is initialized to $N$ and each node starts alone in its own cluster. See for example \cite{NoakRotta}. Here, we follow a different approach, proposed by \cite{Come_Latouche15} and \cite{Blondel08fastunfolding}: after running the \textbf{GE}, we try to \emph{merge} the remaining clusters of nodes in an attempt to increase the ICL. In this final step (henceforth \textbf{GM}), all the possible merges are tested and the best one is retained. The ICL does not have to be computed before and after each swap/merge: possible increases can be assessed directly. When switching one node (say $i$) from cluster $\mathcal{A}_{k'}$ to $\mathcal{A}_l$, with $k' \neq l$, the change in the ICL is given by\footnote{Hereafter, the ``*'' notation refers to the statistics \emph{after} switching/merging.} \begin{equation*} \Delta_{k' \rightarrow l} = ICL(\mathbf{z^*},K)- ICL(\mathbf{z},K).
\end{equation*} The only statistics not simplifying are those involving $k'$ and $l$; hence the equation above reads as follows \small \begin{align} \label{eq:switch} \begin{split} \Delta_{k'\rightarrow l} :=& \log\left(\frac{\Gamma(|\mathcal{A}_{k'}| - 1 + \alpha)\Gamma(|\mathcal{A}_l|+1 +\alpha)}{\Gamma(|\mathcal{A}_{k'}| + \alpha)\Gamma(|\mathcal{A}_l|+\alpha)}\right) \\ + &\sum_{g \leq K}\sum_{u \leq U}\log(L^*_{k'gu}) + \sum_{g \leq K}\sum_{u \leq U} \log(L^*_{lgu}) \\ + &\sum_{k \leq K}\sum_{u \leq U}\log(L^*_{kk'u})+\sum_{k \leq K}\sum_{u \leq U}\log(L^*_{klu})\\ - &\sum_u (\log(L^*_{k'k'u}) + \log(L^*_{k'lu}) +\log(L^*_{lk'u}) + \log(L^*_{llu})) \\ - &\sum_{g \leq K}\sum_{u \leq U}\log(L_{k'gu}) - \sum_{g \leq K}\sum_{u \leq U} \log(L_{lgu}) \\ - &\sum_{k \leq K}\sum_{u \leq U}\log(L_{kk'u})- \sum_{k \leq K}\sum_{u \leq U}\log(L_{klu})\\ + &\sum_u (\log(L_{k'k'u}) + \log(L_{k'lu}) +\log(L_{lk'u}) + \log(L_{llu})), \end{split} \end{align} \normalsize where $L_{kgu}$ is the term inside the product on the right hand side of equation \eqref{eq:T1}, and $\mathbf{z^*}$ and $L_{kgu}^*$ refer to the new configuration where node $i$ is in $\mathcal{A}_l$. When merging clusters $\mathcal{A}_{k'}$ and $\mathcal{A}_l$ into the cluster $\mathcal{A}_l$, the change in the ICL can be expressed as follows: \small \begin{align} \label{eq:merge} \begin{split} \Delta_{k'\rightarrow l} :=& ICL(\mathbf{z}^*, K-1)- ICL(\mathbf{z},K) \\ =&\log\left(\frac{p(\mathbf{z^*}| K-1)}{p(\mathbf{z}| K)} \right)+ \\ + & \sum_{g \leq K}\sum_{u \leq U}(\log(L^*_{lgu}) + \log(L^*_{klu})) - \sum_u \log(L^*_{llu})\\ -&\sum_{g \leq K}\sum_{u \leq U}\log(L_{k'gu}) - \sum_{g \leq K}\sum_{u \leq U} \log(L_{lgu}) \\ - &\sum_{k \leq K}\sum_{u \leq U}\log(L_{kk'u})- \sum_{k \leq K}\sum_{u \leq U}\log(L_{klu})\\ +&\sum_u (\log(L_{k'k'u}) + \log(L_{k'lu}) +\log(L_{lk'u}) + \log(L_{llu})).
\end{split} \end{align} \normalsize When working with model \textbf{B}, we need to initialize $D_{max}$ and $\mathbf{y}$. Then a shuffled sequence of the time intervals $I_1, \dots, I_U$ is considered and each interval is moved to the time cluster leading to the highest increase in the ICL (\textbf{GE} for time intervals). When no further increase in the ICL is possible, we look for possible merges between time clusters in an attempt to increase the ICL (\textbf{GM} for time intervals). Formulas to directly assess the increase in the ICL can be obtained, similar to those for node swaps and merges. In the case of model \textbf{B}, different strategies are possible to optimize the ICL: \begin{enumerate} \item \textbf{GE} + \textbf{GM} for nodes first and then for time intervals (we call this strategy \textbf{TN} henceforth). \item \textbf{GE} + \textbf{GM} for time intervals first and then for nodes (\textbf{NT} strategy). \item A hybrid strategy, involving alternate switching of nodes and time intervals (\textbf{M} strategy). \end{enumerate} We provide details about the chosen strategy case by case in the following. \section{Experiments}\label{sec:Exp} In this section, experiments on both synthetic and real data are provided. All running times were measured on a twelve-core Intel Xeon server with 92 GB of main memory running a GNU Linux operating system, the greedy algorithm described in Section \ref{par:GS} being implemented in C++. A Euclidean hierarchical clustering algorithm was used to initialize the labels and $K_{max}$ was set to $N/2$. In the following, we call TSBM the temporal SBM we propose and we refer to the optimization algorithm described in the previous section as greedy ICL. \subsection{Simulated Data} \subsubsection{First Scenario} We start by investigating how the proposed approach can be used to efficiently estimate the vector $\mathbf{z}$ of labels in situations where the standard SBM fails.
Thus, we simulate interactions between 50 $(N)$ nodes, grouped in two hidden clusters $\mathcal{A}_1$ and $\mathcal{A}_2$, over 100 $(U)$ time intervals of unitary length. The generative model considered for the simulations depends on two time clusters $\mathcal{C}_1$ and $\mathcal{C}_2$ containing a certain number of the time intervals $I_1,\dots, I_U$. If $I_u$ is in $\mathcal{C}_1$ then $Y_{ij}^{I_u}$ is drawn from a Poisson distribution $\mathcal{P}(P_{z_i z_j})$. Otherwise, $Y_{ij}^{I_u}$ is drawn from a Poisson distribution $\mathcal{P}(Q_{z_i z_j})$. The matrices $P$ and $Q$ are given by \begin{equation*} P= \begin{pmatrix} \psi & 1 \\ 1 & \psi \\ \end{pmatrix} \qquad\text{and}\qquad Q= \begin{pmatrix} 1 & \psi \\ \psi & 1 \\ \end{pmatrix}, \end{equation*} where $\psi$ is a free parameter in $[1, \infty)$. When this parameter is equal to 1, we are in a degenerate case and there is no structure to detect: all the nodes are placed in the same, unique cluster. The higher $\psi$, the stronger the \emph{contrast} between the interaction patterns inside and outside the clusters. In this paragraph, $\psi$ is set equal to 2 and the proportions of the clusters are set equal ($\boldsymbol{\omega}=(1/2, 1/2)$). The number of time intervals assigned to each time cluster is equal to $U/2$. In the following, we consider \begin{align*} \mathcal{C}_1:=&\{I_1, \dots, I_{25}\}\cup \{I_{51},\dots,I_{75}\}, \\ \mathcal{C}_2:=&\{I_{26}, \dots, I_{50}\}\cup \{I_{76},\dots,I_{100}\}. \end{align*} This generative model defines two integrated intensity functions (IIFs), say $\Lambda_1(t)$ and $\Lambda_2(t)$. The former is the IIF of the Poisson processes counting interactions between nodes sharing the same cluster, the latter is the IIF of the Poisson processes counting interactions between vertices in different clusters. These IIFs can be observed in Figure \ref{fig:IIFs}.
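This generative model is straightforward to simulate; the following hypothetical Python snippet reproduces the sampling scheme described above (function and variable names are illustrative).

```python
import numpy as np

def simulate_first_scenario(N=50, U=100, psi=2.0, seed=0):
    """Sample the tensor Y of the first scenario (two node and two time clusters)."""
    rng = np.random.default_rng(seed)
    z = np.repeat([0, 1], N // 2)                 # two balanced node clusters
    P = np.array([[psi, 1.0], [1.0, psi]])        # rates on the C_1 intervals
    Q = np.array([[1.0, psi], [psi, 1.0]])        # rates on the C_2 intervals
    in_C1 = np.zeros(U, dtype=bool)               # C_1 = I_1..I_25 and I_51..I_75
    in_C1[:U // 4] = in_C1[U // 2:3 * U // 4] = True
    rate = np.where(in_C1[None, None, :],
                    P[z][:, z][:, :, None],       # rate[i, j] = P[z_i, z_j]
                    Q[z][:, z][:, :, None])
    return rng.poisson(rate), z, in_C1
```

With $\psi=2$, the within-cluster counts on $\mathcal{C}_1$ are Poisson with mean 2 against mean 1 between clusters, and the pattern is reversed on $\mathcal{C}_2$, so the two regimes cancel out once interactions are aggregated over time.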
\begin{figure*}[ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{IIFs_ex1_6x8L} \captionsetup{format=hang} \subcaption{ } \label{fig:IIFs} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{IIFs_ex1_estimates_6x8L} \captionsetup{format=hang} \subcaption{} \label{fig:IIFs_estim} \end{subfigure} \caption{Real \ref{fig:IIFs} and estimated \ref{fig:IIFs_estim} integrated intensity functions (IIFs) according to the considered generative model ($\psi=2$). In blue, $\Lambda_1(t)$; in red, $\Lambda_2(t)$.} \end{figure*} A tensor $Y$, with dimensions $N \times N \times U$, is drawn. Its $(i,j,u)$ component is the sampled number of interactions from node $i$ to node $j$ over the time interval $I_u$. Moreover, sampled interactions are aggregated over the whole time horizon to obtain an adjacency matrix; in other words, each tensor is integrated over its third dimension. We compared the greedy ICL algorithm with the Gibbs sampling approach introduced by \cite{nouedoui2013}. The former was run on the tensor $Y$ (providing estimates in 11.86 \emph{seconds} on average), the latter on the corresponding adjacency matrix. This experiment was repeated 50 times and estimates of the random vector $\mathbf{z}$ were provided at each iteration. Each estimate $\hat{\mathbf{z}}$ is compared with the true $\mathbf{z}$ and an adjusted Rand index \citep[ARI;][]{rand1971objective} is computed. This index is bounded above by one, which corresponds to a perfect clustering (up to label switching). \begin{Remark} The true structure is always recovered by the TSBM: 50 unitary values of the ARI are obtained. Conversely, the standard SBM never succeeds in recovering any hidden structure present in the data (50 null ARIs are obtained). This can easily be explained since the time clusters have opposite interaction patterns, making them hard to uncover when aggregating over time.
\end{Remark} Relying on an accurate estimate of $\mathbf{z}$, the two integrated intensity functions can be estimated through the estimator in equation \eqref{eq:linear_estimator}. The results can be observed in Figure \ref{fig:IIFs_estim}, where the estimated functions (coloured dots) overlap the real functions of Figure \ref{fig:IIFs}. \paragraph{Over-fitting} We now illustrate how the model discussed so far fails to recover the true vector $\mathbf{z}$ when the number of time intervals (and hence of free parameters) grows. We consider the same generative model as in the previous paragraph, with a lower $\psi$: \begin{equation*} P= \begin{pmatrix} 1.4 & 1 \\ 1 & 1.4 \\ \end{pmatrix} \qquad\text{and}\qquad Q= \begin{pmatrix} 1 & 1.4 \\ 1.4 & 1 \\ \end{pmatrix}. \end{equation*} Despite the lower contrast (from $2$ to $1.4$ in $P$ and $Q$), with $U=100$ and time sub-intervals of unitary length, the TSBM still always recovers the true vector $\mathbf{z}$. Now we consider a finer partition of $[0,100]$, setting $U=1000$ and $\Delta_u=0.1$, and scale the intensity matrices as follows \begin{equation*} \tilde{P}:= \begin{pmatrix} 0.14 & 0.1 \\ 0.1 & 0.14 \\ \end{pmatrix} \qquad\text{and}\qquad \tilde{Q}= \begin{pmatrix} 0.1 & 0.14 \\ 0.14 & 0.1 \\ \end{pmatrix}. \end{equation*} Moreover, we set \begin{equation*} \mathcal{C}_1:=\{I_1, \dots, I_{250}\} \cup \{I_{501}, \dots, I_{750}\} \end{equation*} and $\mathcal{C}_2$ is the complement of $\mathcal{C}_1$, as previously. Finally, we sampled 50 dynamic graphs over the interval $[0,100]$ from the corresponding generative model; each graph is thus characterized by a sampled tensor $Y$. The model, however, is not robust to such changes. Indeed, when running the greedy ICL algorithm on each sampled tensor $Y$, the algorithm does not detect any community structure and all nodes are placed in the same cluster. This leads to a null ARI for each estimate.
As mentioned in Section \ref{par:GS}, the ICL penalizes the number of parameters, and since the tensor $\pi$ has dimension $K \times K \times U$, for a fixed $K$, when moving from the coarser decomposition ($U=100$) to the finer one ($U=1000$), the number of free parameters in the model is approximately\footnote{The dimension of the vector $\boldsymbol{\omega}$ does not change.} multiplied by 10. The increase we observe in the likelihood when increasing the number of node clusters from $K=1$ to $K=2$ is not sufficient to compensate for the penalty due to the high number of parameters, and hence the ICL decreases. Therefore, the maximum is attained at $K=1$ and a single cluster is detected. Model \textbf{B} allows us to tackle this issue. By allowing the integrated intensity functions $\Lambda_1(t)$ and $\Lambda_2(t)$ to grow at the same rate on each interval $I_u$ belonging to the same time cluster $\mathcal{C}_d$, we basically reduce the third dimension of the tensor $\pi$ from $U$ to $D$. The greedy ICL algorithm for model \textbf{B} was run on each sampled tensor $Y$, providing estimates of $\mathbf{z}$ and $\mathbf{y}$ in $2.38$ \emph{minutes}, on average. A hierarchical clustering algorithm was used to initialize the time labels $\mathbf{y}$, and the initial number of time clusters was set to $D_{max}=\sqrt{U}$. In an attempt to avoid convergence to local maxima, ten estimates were built for each tensor and the estimate leading to the best ICL was finally retained. The adjusted Rand index is used to evaluate the clustering, as previously, and the results are presented as box plots in Figure \ref{fig:MB1}. \begin{figure}[ht] \centering \includegraphics[width=.9\linewidth]{ModelB_aris_take2_6x8L} \caption{Box plots for both clusterings of nodes and time intervals: 50 dynamic graphs were sampled according to the considered generative model, and estimates of $\mathbf{z}$ and $\mathbf{y}$ were provided by the greedy ICL algorithm (model \textbf{B}).
} \label{fig:MB1} \end{figure} Note that the results were obtained through the optimization strategy \textbf{TN}. The other two strategies described in section \ref{par:GS}, namely the \textbf{NT} strategy and the \textbf{M} strategy, led to similar results in terms of final ICL and ARIs. \subsubsection{Second Scenario} Since the node clusters are fixed over time, the TSBM model can be seen as an alternative to a standard SBM to estimate the label vector $\mathbf{z}$. The previous scenario shows that the TSBM can recover the true vector $\mathbf{z}$ in situations where the SBM fails. In this paragraph we show how the TSBM and the SBM can sometimes have similar performances. We considered dynamic graphs with 50 $(N)$ nodes and 50 $(U)$ time intervals \begin{equation*} I_1, \dots, I_{50}. \end{equation*} These time intervals are grouped in two time clusters $\mathcal{C}_1$ and $\mathcal{C}_2$, the former containing the first 25 time intervals, the latter the last 25 time intervals. If $I_u$ is in $\mathcal{C}_1$ then $Y_{ij}^{I_u}$ is drawn from a Poisson distribution $\mathcal{P}(P_{z_i z_j})$. Otherwise, $Y_{ij}^{I_u}$ is drawn from a Poisson distribution $\mathcal{P}(2P_{z_i z_j})$. The $P$ matrix is given by \begin{equation*} P= \begin{pmatrix} \psi & 2 \\ 2 & \psi \end{pmatrix} \end{equation*} and $\psi$ is a free parameter in $[2, +\infty)$. Hence, we have two different integrated intensity functions, say $\Lambda_1(t)$ and $\Lambda_2(t)$ with the same roles as in the previous section. These two functions are plotted in Figure \ref{fig:IIFs_2}, for a value of $\psi=4$. 
\begin{figure*}[ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{IIFs_ex2_6x8L} \captionsetup{format=hang} \subcaption{ } \label{fig:IIFs_2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{IIFs_ex2_estimates_6x8L} \captionsetup{format=hang} \subcaption{} \label{fig:IIFs_2_estim} \end{subfigure} \caption{Real \ref{fig:IIFs_2} and estimated \ref{fig:IIFs_2_estim} integrated intensity functions (IIFs) according to the considered generative model. In blue we have $\Lambda_1(t)$, for $\psi=4$, in red $\Lambda_2(t)$.} \end{figure*} We investigated six values for the parameter $\psi$ \begin{equation*} \{2.1, 2.2, 2.3, 2.4,2.5,2.6\}. \end{equation*} For each value of $\psi$, we sampled 50 tensors $Y$, of dimension $(50 \times 50 \times 50)$, according to the generative model considered. Interactions are aggregated over the time interval $[0,50]$ to obtain adjacency matrices. We ran the greedy ICL algorithm on each tensor and the Gibbs sampling (SBM) algorithm on each adjacency matrix. For the greedy ICL algorithm, estimates of vector $\mathbf{z}$ were obtained in a mean running time of 5.52 \emph{seconds}. As previously, to avoid convergence to local maxima, ten different estimates are built for each tensor, the one leading to the highest ICL being retained. The results are presented as box plots in Figure \ref{fig:ari_s}. \begin{figure*}[ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{ARIs_TSBM_ex2_6x8L} \captionsetup{format=hang} \subcaption{\footnotesize ARIs obtained by greedy ICL.} \label{fig:Ari_TSBM} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{ARIs_SBM_ex2_6x8L} \captionsetup{format=hang} \subcaption{\footnotesize ARIs obtained with the Gibbs sampling procedure for SBM.} \label{fig:Ari_SBM} \end{subfigure} \caption{Box plots of ARIs for different levels of contrast ($\psi$). 
We compare the proposed model with a standard SBM.} \label{fig:ari_s} \end{figure*} Although the SBM leads to slightly better clustering results for small values of $\psi$ (2.2, 2.3) and the TSBM for higher values of $\psi$ (2.5, 2.6), we observe that the two models have quite similar performances (in terms of accuracy) in this scenario. To provide some intuition about the scalability (see next paragraph) of the proposed approach, we repeated the previous experiment with $K=3$ clusters, corresponding to the following connectivity matrix: \begin{equation*} P= \begin{pmatrix} \psi & 2 & 2\\ 2 & \psi & 2\\ 2 & 2 & \psi \end{pmatrix}. \end{equation*} The assignment of the time intervals to the time clusters is unchanged, as is the connectivity pattern on each time cluster. The contrast parameter $\psi$ takes values in the set $\{2, 2.5,2.10, \dots, 2.8 \}$ and 50 dynamic graphs were sampled, according to the described settings, for each value of $\psi$. We ran the TSBM on each dynamic graph, obtaining 50 estimates of the label vector $\mathbf{z}$ for each value of $\psi$, and box-and-whisker plots for each group of ARIs can be seen in Figure \ref{fig:repeat}. \begin{figure*}[ht] \centering \includegraphics[width=.9\linewidth]{experim_2_3cl_6x8L.pdf} \caption{Box plots of ARIs for different levels of contrast ($\psi$). Data have been sampled from non-homogeneous Poisson processes counting interactions in a dynamic graph whose nodes are grouped in three clusters and whose interaction patterns vary across two time clusters. } \label{fig:repeat} \end{figure*} By comparing this figure with Figure \ref{fig:Ari_TSBM}, we can see that the model needs a slightly higher contrast to fully recover the true structure.
Indeed, when increasing the number of clusters without increasing the number of nodes, the size of each cluster decreases (on average) and, since the estimator of $\mathbf{z}$ we are using is related to the ML estimator, we can expect slower convergence to the true value of $\mathbf{z}$. \subsubsection{Scalability} A full scalability analysis of the proposed algorithm, as well as the convergence properties of the proposed estimators, are outside the scope of this paper. Nonetheless, in the appendix we provide details about the computational complexity of the greedy ICL algorithm. Future work could certainly be devoted to improving both the efficiency and the scalability of the algorithm through the use of more sophisticated data structures. \subsection{Real data} The dataset used in this section was collected during the \textbf{ACM Hypertext} conference held in Turin, June 29th - July 1st 2009. We focus on the first conference day (24 hours) and consider a dynamic network with 113 $(N)$ nodes (conference attendees) and 96 $(U)$ time intervals (the consecutive quarter-hours in the period: 8am of June 29th - 7.59am of June 30th). The network edges are the face-to-face proximity interactions between the conference attendees. An interaction is recorded when two attendees are face to face, closer than 1.5 meters, for a period of at least 20 seconds\footnote{More information about the way the data were collected can be found in \cite{Isella:2011qo} or by visiting the website \url{http://www.sociopatterns.org/datasets/hypertext-2009-dynamic-contact-network/}.}.
The data set we considered consists of several lines similar to the following one \begin{center} \begin{tabular}{c|c|c|c} \hline \footnotesize\emph{ID$1$} & \footnotesize\emph{ID$2$} & \footnotesize\emph{Time Interval ($15m$)} & \footnotesize\emph{Number of interactions} \\ \hline 52 & 26 & 5 & 16 \\ \hline \end{tabular} \end{center} \medskip It means that conference attendees 52 and 26, between 9am and 9.15am, spoke for $16 \times 20s \approx 5m20s$. We set $K_{max}=20$ and the vector $\mathbf{z}$ was initialized randomly: each node was assigned to a cluster following a multinomial distribution. The greedy algorithm was run ten times on the considered dataset, each time with a different initialization, and estimates of $\mathbf{z}$ and $K$ were provided in 13.81 \emph{seconds}, on average. The final values of the ICL can be observed as box plots in Figure \ref{fig:final_icl_boxplot}. \begin{figure}[ht] \centering \includegraphics[width=.9\linewidth]{final_icl_boxplot_4x6L} \caption{Box plot of the ten final values of the ICL produced by the greedy ICL algorithm for different initializations.} \label{fig:final_icl_boxplot} \end{figure} The estimates associated with the highest ICL correspond to 5 node clusters. In Figure \ref{fig:MainFIgure}, we focus on the cluster $\mathcal{A}_4$, containing 48 nodes. In Figure \ref{fig:Agginter4} we plot the time-cumulated interactions inside the cluster. As can be seen, the connectivity pattern for this cluster is very representative of the entire graph: between 13.00 and 14.00 and between 18.00 and 19.30 there are significant increases in the interaction intensity.
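Rows of this kind map directly onto the tensor $Y$ used throughout the paper. A hypothetical loader (assuming the attendee ids and interval indices have been remapped to $1,\dots,N$ and $1,\dots,U$) might read:

```python
import numpy as np

def build_tensor(rows, N=113, U=96):
    """rows: iterable of (id1, id2, interval, count) with 1-based indices."""
    Y = np.zeros((N, N, U), dtype=int)
    for i, j, u, c in rows:
        Y[i - 1, j - 1, u - 1] += c   # accumulate interaction counts
    return Y
```

The example row above would then contribute 16 interactions to the cell $(52, 26, 5)$ of $Y$.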
\begin{figure*}[ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{agg_int_cl4_cum_6x8L.pdf} \caption{\footnotesize Cumulated aggregated connections inside cluster $\mathcal{A}_4$.} \label{fig:Agginter4} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{IIF_est_cl4_6x8L.pdf} \caption{\footnotesize Estimated IIF for interactions inside cluster $\mathcal{A}_4$. } \label{fig:E_iif} \end{subfigure} \caption{\small In Figure \ref{fig:Agginter4}, cumulated aggregated connections for each time interval for cluster $\mathcal{A}_4$. In Figure \ref{fig:E_iif}, the estimated IIF for interactions inside cluster $\mathcal{A}_4$. Vertical red lines delimit the lunch break and the wine and cheese reception.} \label{fig:MainFIgure} \end{figure*} The estimated integrated intensity function (IIF) for interactions inside this cluster can be observed in Figure \ref{fig:E_iif}. The function has a higher slope on those time intervals where attendees in the cluster are more likely to interact. The vertical red lines delimit two important times of social gathering\footnote{More information at \url{http://www.ht2009.org/program.php}.}: \begin{itemize} \item 13.00-15.00 - lunch break. \item 18.00-19.00 - wine and cheese reception. \end{itemize} We conclude this section by illustrating how model \textbf{B} can be used to assign time intervals on which interactions have similar intensity to the same time cluster. We ran the greedy ICL algorithm for model \textbf{B} on the dataset using the optimization strategy \textbf{M} described at the end of Section \ref{par:GS} (other strategies lead to similar results in this case), with $D_{max}$ set equal to 20. The time clustering provided by the greedy ICL algorithm can be observed in Figure \ref{fig:MainFIgure2}. On the left hand side, the aggregated interactions for each quarter-hour during the first day are reported.
On the right-hand side, interactions taking place in those time intervals assigned to the same time cluster have the same form/color. Two important things should be noticed: \begin{itemize} \item[1.] The obtained clustering seems meaningful: the three time intervals with the highest interaction levels are placed in the same cluster (blue), apart from all the others. More generally, each cluster is associated with a certain intensity level, so time intervals in the same cluster, not necessarily adjacent, share the same global interactivity pattern. \item[2.] There are no constraints on the number of abrupt changes connected with these five time clusters. In other words, time clusters do not need to be adjacent, and this is the real difference between the approach considered in this paper (time clustering) and a pure segmentation one. \end{itemize} \begin{figure*}[ht] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{agg_inter_6x8L.pdf} \caption{\footnotesize Aggregated connections.} \label{fig:Agginter} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{agg_inter_clust_6x8L.pdf} \caption{\footnotesize Clustered time intervals.} \label{fig:Agginter_clust} \end{subfigure} \caption{\small In Figure \ref{fig:Agginter}, aggregated connections for each time interval for the whole network. In Figure \ref{fig:Agginter_clust}, interactions of the same form/color take place on time intervals assigned to the same cluster (model \textbf{B}).} \label{fig:MainFIgure2} \end{figure*} \section{Conclusion}\label{sec:Conc} We proposed a non-stationary extension of the stochastic block model (SBM), allowing us to cluster nodes of a network in situations where the classical SBM fails. The approach we chose consists in partitioning the time interval over which interactions are studied into sub-intervals of fixed length.
Those intervals provide aggregated interaction counts that are increments of non-homogeneous Poisson processes (NHPPs). In an SBM-inspired perspective, nodes are clustered in such a way that aggregated interaction counts are homogeneous over clusters. We derived an exact integrated classification likelihood (ICL) for such a model and proposed to maximize it through a greedy search strategy. Finally, a non-parametric maximum likelihood estimator was developed to estimate the integrated intensity functions of the NHPPs counting interactions between nodes. The experiments we carried out on artificial and real-world networks highlight the capacity of the model to capture non-stationary structures in dynamic graphs. \newpage
\section{Introduction} Heavy-tailed distributions, and power-law (or Pareto) distributions in particular, have been reported in a very broad range of areas, including earthquake intensities \cite{Gutenberg1944,Christensen2002,Newberry2019}, avalanche sizes \cite{Birkeland2002}, solar flares \cite{Lu1991}, degree distributions of various social and biological networks \cite{Pastor2001,Stumpf2005,Newman2005}, incomes \cite{Pareto1896,Yakovenko2009}, insurance claims \cite{Shpilberg1977,Rootzen1995}, number of citations of scientific publications \cite{Price1965,Redner1998,Golosovsky2017}, and many more. For financial institutions, the importance of heavy-tailed behavior comes from the fact that a simple Gaussian model severely underestimates the risks associated with different products or investment strategies, which in turn results in considerable losses. This is why the mathematical background of heavy-tailed distributions and their estimation have been most extensively studied in this context (\hspace{1sp}\cite{Mandelbrot1960, Mandelbrot1963,Rachev2003,Embrechts1999} and others). For physicists, in turn, the general fascination with power laws comes from the concept of universality classes in statistical physics, which describe the behavior of various systems close to their critical points, irrespective of the details of the mechanisms of the systems \cite{Wilson1983}. Inspired by this, and using the increasing availability of good-quality data sets, many in the field have ventured into interdisciplinary waters where power-law behavior has been reported, thereby creating entirely new disciplines like econophysics \cite{Kertesz1997,Schinckus2016}. In such investigations, the emphasis has been mostly on understanding the emergence of power-law behavior \cite{Bak1991, Albert2002, Farmer2004b}.
In contrast, the problem of practical estimation methods for the exponent associated with the power law (or indeed the verification of power-law behavior) has received relatively little attention from the community. Notable exceptions are \cite{Newman2005, Clauset2009}, in which the power-law nature of numerous phenomena has been questioned. Turning to non-linear optics \cite{Boyd2008, Grynberg2010}, the problem formulation is, however, somewhat different. Optical experiments producing light with heavy-tailed intensity distributions allow high repetition rates and large samples to study unstable non-linear phenomena and their sources \cite{Barthelemy2008, Mercadier2009}. Moreover, unlike in social or financial contexts, experiments can be repeated. For intense pumps, non-linear systems can produce bright signals with directly measurable intensity fluctuations. In statistical optics \cite{Goodman1985}, coherence theory and quantum optics \cite{Mandel1995}, the probability density function of intensities is a subject of investigation for intense light beams. Remarkably, it allows a direct observation of macroscopic quantum phenomena \cite{Iskhakov2009, Iskhakov2011, Iskhakov2012, Iskhakov2016a, Iskhakov2016b}. The heavy-tailed nature of intensity distributions in supercontinuum generation setups \cite{Solli2007,Ruban2010,Akhmediev2016} -- hence the term \emph{rogue waves} -- does have a theoretical basis. So our aim is not to decide whether the observed distribution is heavy-tailed or not. We rather propose ways to estimate the \emph{tail exponent} of the distribution in the presence of experimental imperfections, which can help in experimental control and design. Special attention should be paid to the issue of detector saturation, for example, since for intensity distributions with extremely heavy tails it cannot be solved by a simple re-calibration. There is always going to be some portion of the data that will be affected by saturation.
Low-intensity statistics, on the other hand, are mostly determined by background noise. Consequently, there is only a limited interval of intensities within which the observations can be used for estimation purposes, and this hugely affects the efficiency of any estimation procedure. If we are unlucky, the intervals affected by the background noise and detector saturation overlap, and the experiment cannot be salvaged. If, however, there is a portion that is useful, we propose ways to gain an initial estimate of the tail exponent (and by extension, higher quantiles) which helps design further measurement setups. The procedures presented in the current manuscript can be used to quantify the instabilities resulting in extreme events common in optical non-linear processes \cite{Manceau2019,Spasibko2017}, and potentially also in non-linear optomechanics \cite{Brawley2016,Siler2018}, four-wave mixing in atomic vapors \cite{McCormick2007, Guerrero2020} and wave-mixing processes in superconducting circuits \cite{Aban2004, Sivak2019, Mundhada2019}. Beyond fundamental interest, extreme events can be used to produce highly non-classical effects distillable to large squeezing and entanglement for quantum technology \cite{Heersink2006,Dong2008}. In parallel, they also inspire an investigation of the optical Maxwell demon principle \cite{Vidrighin2016} for heavy-tailed distributions. The article is structured as follows: In Section \ref{sec:background}, we give a brief overview of the mathematical background and clarify terminology. Section \ref{sec:tools} describes a generic estimation toolkit with pointers to more sophisticated methods. Section \ref{sec:intensity} proposes variations of the generic methods for evaluating numerical data in non-linear optics specifically. \section{Background and terminology}\label{sec:background} Let us first provide an overview of the terminology and concepts concerning \emph{heavy-tailed distributions}, used throughout this work.
In general, distributions that decay slower than exponential are referred to as heavy-tailed. To give a clear mathematical formulation of this concept, it is useful to introduce the \emph{tail function} (also referred to as survival function, or complementary distribution function), defined for an arbitrary real-valued random variable \(X\) as \[\overline F(x) \equiv \prob{X \geq x}.\] In other words, this function describes the probability that the variable reaches or exceeds a threshold \(x\). It is related to the more familiar distribution function (DF) \(F(x)\) through \(\overline F (x) = 1 - F(x)\), and to the probability density function (PDF) \(f(x)\) through \(\overline F(x) = \int_x^{\infty} f(u)\, \mathrm d u\). In what follows, we will only consider the right tail of distributions, that is, the behavior of the largest values, and for the sake of simplicity, suppose that we are dealing with positive-valued random variables like optical intensities. The whole treatment can be straightforwardly extended to the left tails of distributions. Using the tail function (TF), a \emph{heavy-tailed distribution} can be defined as a distribution for which \begin{equation}\label{heavy_def} C \equiv \mathop{\underline{\lim}}_{x\to\infty}\frac{-\ln \overline F(x)}{x} = 0. \end{equation} There are other equivalent definitions \cite{HeavyTextBook}; we prefer this formulation since it is the one most in line with the somewhat vague notion of an ``L-shaped'' distribution used in connection with rogue waves (\hspace{1sp}\cite{ShapingLight}, for example). Note that if the limit $C$ in \eqref{heavy_def} is a finite positive value, the distribution decays asymptotically at an exponential rate, while if it is infinity, the distribution decays faster than exponential.
Definition \eqref{heavy_def} even includes distributions whose moments of every order are finite, for example the log-normal distribution and the Weibull distribution with a shape parameter lower than one \cite{Bohm2010_IntroToStatistics}. \emph{Pareto-type distributions} (or regularly-varying distributions) form a subset of heavy-tailed distributions and are defined as \begin{equation} \label{eq:RV} \overline F(x) = x^{-\alpha}\cdot L(x), \end{equation} with \(L(x)\) being a slowly-varying function (for any \(t > 0\), \(\lim_{x\to\infty}L(tx)/L(x) = 1\); slowly-varying functions include for example the logarithm function and functions that have a finite limit at infinity) and $\alpha > 0$. For the Pareto distribution, $L(x) = \mathrm{const}$, corresponding to exact power-law behavior. The exponent \(\alpha\) is referred to as the \emph{tail exponent}. Note that this exponent describes the decay of the tail function; the exponent corresponding to the decay of the PDF is \(\alpha+1\). The moments \(\eval{X^a}\) are finite for all $a < \alpha$ and infinite for all $a > \alpha$. Whether $\eval{X^\alpha}$ itself is finite depends on the function \(L(x)\). What makes heavy-tailed distributions statistically special is the fact that many traditional procedures based on the mean and the variance are inapplicable if the first and second moments do not exist. It is possible, of course, to compute sample averages, but they are meaningless if the corresponding expected value is not finite for the underlying distribution. That is, as the number of observations is increased, such sample averages do not converge to a finite number. The lack of definite moments limits many evaluations in statistical optics \cite{Goodman1985}, coherence optics and quantum optics \cite{Mandel1995}. One should not, for example, calculate second-order quantities like correlation \cite{Boitier2011, Spasibko2017, Zhou2017, Zhang2019} for $\alpha < 2$.
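The non-convergence of sample averages is easy to witness numerically. The following lines (our illustration, assuming NumPy; not taken from the article) draw a Pareto sample with tail exponent $\alpha = 0.5$, for which the mean is infinite, and compute prefix averages that keep growing with the sample size instead of settling:

```python
import numpy as np

# Pareto sample with tail exponent alpha = 0.5: the mean is infinite,
# so sample averages drift upward as more observations are included.
rng = np.random.default_rng(3)
x = rng.pareto(0.5, size=10**6) + 1.0   # classical Pareto(0.5), support x >= 1

prefix_means = {n: x[:n].mean() for n in (10**2, 10**4, 10**6)}
```

With overwhelming probability the three averages differ by orders of magnitude, since each prefix mean is dominated by its single largest observation.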
Furthermore, the traditional central limit theorem, which lies at the heart of many applied models, is not applicable, either. The question of what can be said about the largest observations for such distributions arises quite naturally. There are limit laws concerning the maxima of samples and the distribution of threshold exceedances. Very simply stated, if a distribution has the form \eqref{eq:RV}, the distribution of sample maxima tends to an extreme value distribution described by the DF $\exp\left\{-(1+\gamma x)^{-1/\gamma}\right\}$, with $0 < \gamma = \alpha^{-1} < \infty$. The distribution of the exceedances of a sufficiently large threshold $l$ (that is, the random variable $X-l \mid X > l$) tends in turn to a generalized Pareto distribution with DF $1 - (1+\gamma x)^{-1/\gamma}$, $\gamma = \alpha^{-1}$. Both limit laws hold, of course, given proper normalizing constants; for details see \cite{Beirlant_StatisticsOfExtremes}. These limit laws can be straightforwardly used to model the behavior of the underlying distribution beyond the largest observed value.
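The exceedance limit law can be checked numerically in a few lines. The sketch below (ours, assuming NumPy and SciPy; the threshold choice and sample size are arbitrary) draws exact Pareto data with $\alpha = 2$, so the exceedances of a high threshold follow a generalized Pareto distribution with $\gamma = 1/2$, which a maximum likelihood fit recovers:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
alpha = 2.0
x = rng.pareto(alpha, size=10**6) + 1.0   # classical Pareto(2), gamma = 0.5

l = np.quantile(x, 0.99)                  # high threshold: top 1% of the data
exceedances = x[x > l] - l                # the random variable X - l | X > l
gamma_hat, _, scale_hat = genpareto.fit(exceedances, floc=0.0)
# gamma_hat should be close to 1/alpha = 0.5
```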
\begin{figure}[t] \centering \newcommand\shift{-0.03096769} { \begingroup% \makeatletter% \providecommand\color[2][]{% \errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}% \renewcommand\color[2][]{}% }% \providecommand\transparent[1]{% \errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}% \renewcommand\transparent[1]{}% }% \providecommand\rotatebox[2]{#2}% \newcommand*\fsize{\dimexpr\f@size pt\relax}% \newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}% \ifx\svgwidth\undefined% \setlength{\unitlength}{351.50464876bp}% \ifx\svgscale\undefined% \relax% \else% \setlength{\unitlength}{\unitlength * \real{\svgscale}}% \fi% \else% \setlength{\unitlength}{\svgwidth}% \fi% \global\let\svgwidth\undefined% \global\let\svgscale\undefined% \makeatother% \begin{picture}(1,0.67871066)% \lineheight{1}% \setlength\tabcolsep{0pt}% \put(0,0){\includegraphics[width=\unitlength]{distributions_venn_diagram_7.eps}}% \put(0.34749046,\fpeval{0.36512741+\shift}){\color[rgb]{0,0,0}\rotatebox{90}{\makebox(0,0)[t]{\lineheight{1.25}\smash{\begin{tabular}[t]{c}Exponential \cite{Manceau2019} \\\\Gamma \cite{Manceau2019}\end{tabular}}}}}% \put(0.9348413,\fpeval{0.25561001 + \shift}){\color[rgb]{0,0,0}\rotatebox{90}{\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Log-Pareto \cite{Cormann2009}\end{tabular}}}}}% % \put(0.63832751,\fpeval{0.11596769+\shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\alpha = \gamma^{-1} > 0$\end{tabular}}}}% \put(0.3072914,\fpeval{0.11596769 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\gamma = 0$\end{tabular}}}}% \put(0.02203698, \fpeval{0.11596769 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\gamma < 0$\end{tabular}}}}% \put(0.23445492, \fpeval{0.00048338 + 
\shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Max domain of attraction\end{tabular}}}}% \put(0.53854442, \fpeval{0.25299836 + \shift}){\color[rgb]{0,0,0}\rotatebox{90}{\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Log-normal \cite{Milonni2004}\end{tabular}}}}}% \put(0.88516223, \fpeval{0.11596769 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\alpha = 0$\end{tabular}}}}% \put(0.73903773, \fpeval{0.65277305 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[t]{\lineheight{1.25}\smash{\begin{tabular}[t]{c}Heavy-tailed: $C= 0$\end{tabular}}}}% \put(0.07644239, \fpeval{0.20518685 + \shift}){\color[rgb]{0,0,0}\rotatebox{90}{\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Uniform \cite{Jaynes1957}, Beta \cite{Dmitruk2009}\end{tabular}}}}}% \put(0.28636368, \fpeval{0.65277305 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$0 < C < \infty$\end{tabular}}}}% \put(0.14326637, \fpeval{0.65277305 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$C = \infty$\end{tabular}}}}% \put(0.21421823, \fpeval{0.26865091 + \shift}){\color[rgb]{0,0,0}\rotatebox{90}{\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Gaussian \cite{Jaynes1957} \end{tabular}}}}}% \put(0.63825478, \fpeval{0.47442825 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Pareto \cite{Pareto1896}\end{tabular}}}}% \put(0.63106856, \fpeval{0.39677595 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Cauchy \cite{Svelto2010}\end{tabular}}}}% \put(0.65162017,\fpeval{0.31306397 + \shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Levy \cite{Lepri2007}\end{tabular}}}}% \put(0.60042899, \fpeval{0.23599199 + 
\shift}){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}Log-gamma \cite{Ghitani2018}\end{tabular}}}}% \end{picture}% \endgroup% } \vspace*{5mm} \caption{Categorization of univariate continuous distributions according to the properties of their right tails: speed of decay compared to exponential ($C$), tail exponent ($\alpha$), and behavior of maxima ($\gamma$); including a few examples for each category.{\nocite{Manceau2019,Jaynes1957,Dmitruk2009,Milonni2004,Svelto2010,Lepri2007,Cormann2009, Ghitani2018}}}\label{fig:venn} \end{figure} There are heavy-tailed distributions whose tails are heavier than \eqref{eq:RV} (for example the log-Pareto, constructed by exponentiating a Pareto-distributed variable); these limit laws do not apply to them. There are also heavy-tailed distributions whose tail is lighter than \eqref{eq:RV}; for these, the DF of the maxima tends to $e^{-e^{-x}}$, whereas threshold exceedances tend to an exponential distribution; these distributions arise as the natural limit for $\gamma = 0$. Distributions with a light, but infinite tail also belong to the $\gamma=0$ domain of attraction. The limit distribution of the maxima of random variables with a finite upper bound is also $\exp\left\{-(1+\gamma x)^{-1/\gamma}\right\}$, but with $\gamma < 0$. Figure \ref{fig:venn} summarizes this categorization according to tail heaviness. In what follows, we will deal with the extreme value index \(\gamma\) instead of the tail exponent \(\alpha = \gamma^{-1}\). This has two practical reasons: firstly, \(\gamma\) is the quantity that is used in all mathematical publications, so the quite extensive mathematical theory is based on that quantity; and secondly, in the context of supercontinuum generation, \(\gamma\) is the quantity proportional to the mean intensity of the pump, so it is in a way more convenient than \(\alpha\).
\section{Basic tools for investigating power-law tails}\label{sec:tools} The purpose of this section is to show physicists some simple, visual tools for assessing power-law behavior and also give references on improving the behavior of the estimators. The reason why we have not picked a single favorite estimator is that in order to have a reliable assessment of power-law behavior, it is better to use more than one tool and see whether they produce consistent results. \emph{Examples:} For demonstration purposes we will use in the next sections computer-generated samples of different distributions; their properties are summarized in Table \ref{tab:dist1}. The size of the generated samples was $10^4$. We did not use solely regularly-varying distributions \eqref{eq:RV} because we would also like to show what the tools produce when used with distributions that do not have a power-law tail. The exponential distribution is a standard thin-tailed distribution, relative to which the heavy-tailed property itself is defined. The log-normal distribution is, according to the definition \eqref{heavy_def}, heavy-tailed, but it does not have a finite tail exponent. A log-gamma distributed variable can be created by exponentiating a gamma-distributed variable the same way one transforms a normally distributed variable into a log-normal one; it is an example of a regularly-varying distribution with \(L(x) \neq \mathrm{const}\). The Pareto distribution corresponds to pure power-law behavior, and is used for demonstrating the best-case scenario (it is also the \(a = 1\) special case of the log-gamma distribution). \begin{table}[tbh] \centering \renewcommand{\arraystretch}{2} \begin{tabular}{c|c|c|c} Name & PDF & Heavy-tailed?
& \(\gamma\) \\ \hline Exponential & \(\displaystyle \lambda e^{-\lambda x}\) & N & 0 \\ Log-normal & \(\displaystyle \frac 1{\sqrt{2\pi \sigma^2}} \frac 1 x \exp\left\{-\frac{(\ln x)^2}{2\sigma^2}\right\}\) & Y & 0 \\ Log-gamma & \(\displaystyle \frac{b^a}{\Gamma(a)} (\ln x)^{a-1} \cdot x^{-b -1} \) & Y & \(b^{-1}\)\\ Pareto & \(\displaystyle \alpha\cdot x ^{-\alpha-1}\) & Y & \(\alpha^{-1}\) \end{tabular} \caption{Distributions used for demonstration purposes.} \label{tab:dist1} \end{table} \subsection{Histogram} Preparing the histogram is probably the most widespread way to visualize random samples, and is definitely the go-to tool for many physicists. This is why we devote more attention to it in this paper than it deserves from the mathematical point of view. It involves defining a discrete set of bins over the number line and then counting how many observations of the random variable fall in each bin. Given a linear set of bins, the histogram provides an estimate of the PDF of the underlying distribution. If this distribution is, at least approximately, Pareto, then the histogram should be linear on a log-log plot with the absolute value of the slope equal to the tail exponent plus one. One can even perform a least-squares fit to obtain the slope of the line to estimate the exponent. There are, however, three major problems with this approach: \begin{enumerate} \item Due to the power-law nature, there will be only a few observations (if any) on the right end of a linear set of bins, meaning that exactly for large values, where the power-law property itself should be more pronounced, there will be considerable variance. \item As taking the logarithm and the expectation are not interchangeable, the expected value of the logarithm of the frequencies is not equal to the logarithm of the PDF. Furthermore, the variance of the frequencies depends on the location. So the basic prerequisites of an ordinary least squares fit do not hold.
\item The choice of bins has a marked effect on the outcome. \end{enumerate} To help with the first problem, one can use logarithmic bins instead of linear ones. However, it is important to be aware of the fact that using logarithmic bins, that is, bins of equal width on a logarithmic scale, is equivalent to preparing a linearly binned histogram of $Y=\ln X$. Consequently, this version of the histogram approximates the probability density function of $Y$, $g(y)$. This change of variables results in \(g(\ln x)=x \cdot f(x)\), meaning that the absolute slope of the linearly binned histogram is $\alpha+1$, while that of the logarithmically binned version is $\alpha$ instead, see figure \ref{fig:hist_est}(a). Concerning the second point, if one does not take the logarithm of the frequencies, there is no problem with expectations, and the variances can be calculated explicitly. A consistent version of the histogram approach is preparing the histogram of $Y = \ln X$, and performing a weighted least-squares fit of $A\cdot e^{-By}$. The weights are needed to make the data homoscedastic \cite{Aitken1934}, and can be calculated as $\left[\hat p_k(1 - \hat p_k)\right]^{-1}$, with $\hat p_k$ denoting the fraction of observations within the $k^\mathrm{th}$ bin. The choice of bins in this setting is especially problematic, since empty bins should be avoided (or, alternatively, discarded when performing the fit). \begin{figure}[ptb!] \centering \includegraphics[width=\textwidth]{Sec3_histogram} \caption{(a) Linearly (filled circles) and logarithmically binned (empty circles) histograms for a single sample of Pareto (\(\alpha = 3\)) data, sample size $10^5$, 100 bins. The dashed line shows the result of the naive estimator, the solid line shows the result of the improved version. For the linearly binned histogram, the analytic formula is \(3\cdot x^{-4}\), for the logarithmically binned \(3\cdot x^{-3}\).
(b) Relative root mean squared error (RMSE) of the naive histogram estimator of \(\alpha^{-1}\) and the improved version as a function of how many of the largest observations were taken into account. Values were calculated using $10^4$ Pareto-distributed samples of size \(10^4\). The dotted black line corresponds to the Cramér--Rao bound for the Pareto distribution, \(1/\sqrt{k}\); the colored dashed lines show the performance of the naive estimator; and the colored solid lines correspond to the improved estimator. The different colors indicate how many bins were used to construct the histogram (see the legend).}\label{fig:hist_est} \end{figure} The third issue cannot be completely eliminated; however, figure \ref{fig:hist_est}(b) shows the results of a small-scale simulation experiment based on purely power-law samples (corresponding to the best-case scenario). We have taken \(N_{\mathrm{samples}} = 10^4\) samples, each consisting of $N=10^4$ elements. For each sample, for \(k = 1, 2, \ldots, N\) we have calculated the values of the estimators based on the top \(k\) observations and compared them to the known value of the extreme value index \(\gamma\) to obtain \[\mathrm{Rel.\ RMSE}_m(k) \equiv \frac{1}{\gamma}\cdot \sqrt{\frac 1 {N_{\mathrm{samples}}} \sum_{i = 1}^{N_{\mathrm{samples}}} \left(\hat\gamma_{m,i}(k) - \gamma\right)^2}, \] with \(m\) denoting the method used. This means that for a given value of \(k\), we prepared 4 histograms (2, 3, 5, and 9 bins), and calculated the estimate using the naive (least-squares linear fit on log-log scale) and the improved version (weighted least-squares exponential fit on lin-log scale) of the estimator for each. First of all, the simulation showed how bad the performance of the naive histogram estimator really is.
Secondly, and probably a little counter-intuitively, it showed how few bins are required in order to minimize the error of either version of the estimator based on the histogram compared to how many bins one would use for visualization. Considering, however, that one only needs to estimate a single parameter, increasing the number of points does not necessarily help if these in turn become less accurate. To get an estimate of how many bins should be used for a given number of observations \(k\), one can consider the Rényi representation theorem \cite{Renyi1953}. Using it, it is straightforward to show that the expected log-spacing between the largest and smallest elements of a purely Pareto-distributed sample of size \(k\) is \(\alpha^{-1}\cdot[1 + 1/2 + \ldots + 1/(k-1)] \approx \alpha^{-1}[\gamma^* + \ln(k-1)]\), with \(\gamma^* \approx 0.5772\) the Euler--Mascheroni constant. Furthermore, supposing that there is an ideal (log) bin width that minimizes the RMSE of a histogram estimator independently of the sample size, it has to be proportional to \(\alpha^{-1}\) (since that is the scale parameter of the logarithmically transformed sample), so the total number of bins should be a linear function of \(\ln(k-1)\). Based on our simulations, for \(k=100\), 3 bins are ideal, and for \(k=10^4\), 5 bins produce the best results using the improved version of the histogram estimator (see figure \ref{fig:hist_est}(b)). Interestingly, for the naive version it seems to be always better to prepare just 2 bins. All in all, the histogram need not be completely discarded as a tool even when working with heavy-tailed distributions, provided that certain changes are applied to the naive estimator: use logarithmic binning, use only a few bins (if the histogram resembles the broomstick-like shape in figure \ref{fig:hist_est}(a), reduce the number of bins), and use weighted least-squares to fit an exponential to the logarithmically transformed data.
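The improved recipe just described (logarithmic binning, only a few bins, and a weighted least-squares exponential fit) can be sketched in Python. This is our illustration with synthetic Pareto data, assuming NumPy and SciPy; it is not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
alpha = 3.0
x = rng.pareto(alpha, size=10**5) + 1.0   # classical Pareto(3)

y = np.log(x)                             # Y = ln X is exponential with rate alpha
counts, edges = np.histogram(y, bins=5)   # few bins, equal width in log-space
centers = 0.5 * (edges[:-1] + edges[1:])
p_hat = counts / counts.sum()             # fraction of observations per bin

keep = counts > 0                         # discard empty bins before fitting
sigma = np.sqrt(p_hat[keep] * (1.0 - p_hat[keep]))  # weights [p(1-p)]^(-1)
popt, _ = curve_fit(lambda t, A, B: A * np.exp(-B * t),
                    centers[keep], p_hat[keep], p0=(1.0, 1.0), sigma=sigma)
alpha_hat = popt[1]                       # B estimates the tail exponent alpha
```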
Nevertheless, the histogram remains first and foremost a visual tool in the heavy-tailed context. \subsection{Empirical tail function and QQ estimator}\label{sec:TF} The empirical tail function (ETF) presents a simple alternative to histograms as a visual tool that does not depend on an arbitrary binning procedure. Given a sample of independent, identically distributed (iid) observations, \(\left\{x_1, x_2,\ldots, x_N\right\}\), sorted in descending order \(x_{(1)}\geq x_{(2)}\geq\ldots \geq x_{(N)}\), the empirical tail function is given as \begin{equation} \overline F^*\left(x_{(k)}\right) = \frac k N, \label{eq:emp.cdf} \end{equation} or equivalently, for an arbitrary threshold \(l\in \mathbb R\), \[\overline F^*(l) = \frac 1 N \sum_{i = 1}^N \mathds 1\left\{x_i \geq l\right\},\] with \(\mathds 1 \{\cdot\}\) denoting the indicator function. In other words, one has to check what proportion of the observations exceed the limit \(l\); \(l = x_{(k)}\) yields the first definition, equation \eqref{eq:emp.cdf}. If the sample is power-law distributed (at least for the largest observations), the ETF should be linear on a log-log plot (again, for the largest observations), with the slope equal to \(-\alpha\) (see figure \ref{fig:etfhist}). Since the TF is the integral of the PDF, the ETF is considerably smoother than the histogram, which makes it easier to detect deviations from power-law behavior visually. \begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{ETF_2} \caption{Empirical tail functions for 100 samples of length \(10^4\), for the distributions: (a) Exponential (\(\lambda = 0.023\)), (b) Log-normal (\(\sigma = 2.8\)), (c) Log-gamma (\(a = b = 0.5\)), and (d) Pareto \((\alpha = 0.5)\). One of the ETFs for each distribution is shown in black, the others in red. The dashed lines correspond to the tangent line to the analytic TF for the largest values. The slopes of the dashed lines are (a) -1.35, (b) -0.53, and (c) -0.5.
Note that distinguishing a log-normal sample from a regularly varying one is non-trivial since the tangent of the ETF based on a finite sample is never vertical.}\label{fig:etfhist} \end{figure} As for a numerical estimate of \(\alpha\), it is, of course, possible to perform a least-squares fit on \(\left\{\ln x_{(k)}, \ln\frac k N \right\}\) (and that also yields much better results than the naive histogram approach); however, it is better to do this with the roles reversed, that is, by treating \(\ln \frac k N\) as the independent variable. The latter procedure is referred to as the QQ estimator \cite{Kratz1996}, with QQ standing for quantile-quantile. The essential difference is that in the ETF version, one divides by the empirical variance \(\left\langle\ln^2 x_{(k)}\right\rangle - \left\langle\ln x_{(k)}\right\rangle^2\), and in the QQ version by the deterministic quantity \(\left\langle\ln^2 \frac{k}{N}\right\rangle - \left\langle\ln \frac{k}{N}\right\rangle^2\). The QQ version is therefore more stable. In general, a QQ plot is constructed by plotting the sorted observations against the matching quantiles of the theoretical distribution. If the underlying distribution matches the theoretical distribution (up to a linear transformation), then the QQ plot should be linear. In the specific case of a Pareto-distributed variable, one transforms it into an exponentially distributed variable by taking the logarithm and plots the sample as a function of the standard exponential quantiles, that is, the plot consists of the points $\left\{ -\ln \frac k N, \ln x_{(k)}\right\}$ and the slope of the line is \(\alpha^{-1}\). In essence, this corresponds to exchanging the two axes of the ETF plot on a log-log scale. The QQ estimator of \(\gamma = \alpha^{-1}\) is then calculated as the slope of the LS fit on the QQ plot. This is a very simple estimator, and it has been shown to be weakly consistent \cite{Resnick_Heavy-TailPhenomena}.
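A minimal Python sketch of the QQ estimator (our illustration, assuming NumPy): sort the sample, pair $\ln x_{(k)}$ with the deterministic exponential quantiles $-\ln(k/N)$, and take the least-squares slope as the estimate of $\gamma = \alpha^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, N = 0.5, 10**4
x_desc = np.sort(rng.pareto(alpha, size=N) + 1.0)[::-1]  # descending order

k = np.arange(1, N + 1)
q = -np.log(k / N)                  # standard exponential quantiles (deterministic)
gamma_qq = np.polyfit(q, np.log(x_desc), 1)[0]  # slope estimates gamma = 1/alpha
```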
However, the issues present in the naive histogram estimator are present here, too. Namely, the expected value of $\ln X_{(k)}$ is not $-\alpha^{-1}\cdot \ln \frac k N$, and the variance of $\ln X_{(k)}$ depends on $k$. What is more, $X_{(i)}$ and $X_{(j)}$ are not independent. The problem has been addressed in \cite{Aban2004} for purely Pareto distributed samples, taking into account both the issue of expectations and the covariance matrix of order statistics. The authors show that the solution of the generalized regression problem is equivalent to the Hill estimator discussed in section \ref{sec:hill}, which should not come as a surprise since it is the maximum likelihood estimator. The same issue has been addressed in \cite{Beirlant1999}, under more general assumptions on the underlying distribution, yielding an improved regression estimator. Nevertheless, even the basic QQ estimator is a viable, although usually inferior, alternative to the Hill estimator. \subsection{Hill plot}\label{sec:hill} The Hill estimator can be obtained as the conditional maximum likelihood estimator of the reciprocal of the tail exponent \(\alpha^{-1}\) for Pareto-distributed data \cite{hill1975}: \begin{equation} \widehat{\alpha^{-1}}(k) \equiv \hat\gamma^{\mathrm H}(k) = \frac 1 {k}\sum_{i = 1}^{k} \ln x_{(i)} - \ln x_{(k+1)}, \label{eq:hill} \end{equation} with \(x_{(i)}\), \(i = 1, 2, \ldots, N\) denoting the \(i^{\mathrm{th}}\) largest element of the sample. In a more general setting, given a sufficiently large number of observations from a regularly-varying distribution, it converges to the extreme value index of the distribution (the rate of convergence depends on the distribution) \cite{beirlant2005}. For pure Pareto\((\alpha)\) data, \(\eval{\hat \gamma^{\mathrm H}(k)} = \alpha^{-1}\), and \(\mathrm{RMSE}\left({\hat \gamma^{\mathrm H}(k)}\right) = \alpha^{-1}/\sqrt{k}\).
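Equation \eqref{eq:hill} is a one-liner in practice. The following sketch applies it to a synthetic Pareto sample (the inverse-transform test data is our own illustration), and the result can be compared against the stated RMSE of \(\alpha^{-1}/\sqrt{k}\).

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of gamma = 1/alpha from the k largest observations,
    eq. (hill): mean of the top-k log-observations minus ln x_(k+1)."""
    xs = sorted(sample, reverse=True)          # descending order statistics
    return sum(math.log(xs[i]) for i in range(k)) / k - math.log(xs[k])

rng = random.Random(1)
# Pareto(alpha = 0.5) via inverse transform: X = U**(-1/alpha) = U**(-2)
sample = [rng.random() ** (-2.0) for _ in range(10_000)]
print(hill_estimator(sample, 2000))  # close to gamma = 2; RMSE ~ 2/sqrt(2000)
```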
\begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{Hill_estimator_2} \caption{Hill plot for 100 computer-generated data sets of length $10^4$ for different distributions. The plot for a single realization is highlighted in black for each distribution. The parameters of the distributions were the following: (a) Exponential (\(\lambda = 0.023\)), (b) Log-normal \(\mu = 0, \sigma = 2.8\), (c) Log-gamma: \(a = b = 0.5\), and (d) Pareto: \(\alpha = 0.5\). The true values for the extreme value index \(\gamma\) are indicated by dashed lines.}\label{fig:hill} \end{figure} If a random variable has a power-law distribution, taking the logarithm of it transforms it into an exponentially distributed random variable, so another way of interpreting the Hill estimator is the following: it first transforms the power-law sample into an exponential one, and after that it estimates the parameter of this exponential by taking the sample average, using the fact that for \(X\sim\mathrm{EXP}(\lambda)\), \(\eval{X|X>l} = l + \lambda^{-1}\) for any \(l \in \mathbb R_+\), which is sometimes referred to as the ageless or memoryless property of the exponential distribution. A third interpretation was given in the context of generalized regression in \cite{Aban2004}. The term Hill plot stands for plotting \(\hat \gamma^{\mathrm H}\) as a function of the tail length \(k\). If we know that the data is exactly power-law, it is of course best to set \(k = N - 1\), so the Hill plot does not make much sense. For real-life data, however, it is usually only the largest observations for which power-law behavior is a reasonable approximation. Consequently, one would normally look for a plateau in the Hill plot where the estimate is relatively stable, and choose the number of points to take into account based on that. 
Drees et al.\ \cite{Drees_HillPlot} have shown that it is best practice to plot the Hill estimator as a function of \(\ln k\) instead of \(k\), for which they coined the phrase altHill plot (figure \ref{fig:hill}). Taking a closer look, for a general regularly-varying distribution \eqref{eq:RV} the presence of the function \(L(x)\) complicates the situation in two ways: \begin{itemize} \item It introduces a non-zero bias which becomes zero only asymptotically. How fast it becomes zero depends on the nature of corrections to the power-law behavior. \item The optimal tail length \(k_{\mathrm{opt}}\) which minimizes the mean squared error of the estimator becomes (often considerably) smaller than the sample size. \end{itemize} See for example figure \ref{fig:hill}(c), showing the Hill plots for log-gamma distributed samples, which have a logarithmic correction to the power-law. Having information about \(L(x)\) is therefore very useful: one can modify the Hill estimator to reduce its bias, and also estimate the optimal tail length \(k_{\mathrm{opt}}\). The statistics literature contains several approaches to bias reduction as well as the choice of tail length (see section \ref{sec:tail_length} or \cite{Beirlant_StatisticsOfExtremes} and references therein), but in general, the more information one has about the higher-order behavior of the distribution the better, as estimating higher-order parameters is of course even more problematic than estimating the extreme value index \(\gamma\) itself. Another issue with the Hill estimator is that, by construction, it always produces a positive number, regardless of the underlying distribution; see, for example, figure \ref{fig:hill}(b), where the true extreme value index \(\gamma\) is zero for the log-normal distribution. Generally, the closer the estimate \(\hat\gamma\) is to zero, the less confident one should be in the results, even if the histogram seems straight at the end.
As a rule of thumb, if one has an estimate \(\hat\gamma < 0.3\), it is a good idea to consider alternative models for the data (unless the theoretical background is clear and indicates a power-law behavior without reason for doubt). There are further basic tools, such as the mean excess plot \cite{Coles2001, Ghosh2010}, which is quite popular in the actuarial field but not as well suited for estimating the value of the tail exponent. \section{Parameter estimation for intensity measurements}\label{sec:intensity} The previous section showed that in the ideal case, either approach yields results consistent with the true tail exponent. Actual experiments, however, are never that simple: the distortions and noise introduced by the apparatus require further consideration. There are two major effects that cannot be escaped: detector imperfections (limited linear response) and noise added by different experimental elements. We will use a very simplistic model of the experiments done in \cite{Manceau2019} in order to show how these affect estimation. \subsection{Models used in simulations}\label{model} Manceau \emph{et al.}\ \cite{Manceau2019} showed that for supercontinuum generation setups, the distribution of measured intensities has very heavy tails. Table \ref{tab:experiments} summarizes, from a mathematical point of view, the types of experimental setups they used. \begin{table}[htb] \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{c|c|c|c|c} Source & Experiment & Idealized distribution & Heavy-tailed? & \(\gamma\) \\ \hline Thermal & Optical harmonic gen. & \(\left[\mathrm{EXP}(\lambda)\right]^n\) & Y & 0 \\ Thermal & Supercontinuum gen. & \(\sinh^2\left[\mathrm{EXP}(\lambda)\right]\) & Y & \(2/\lambda\) \\ BSV & Optical harmonic gen. & \(\left[\mathrm{GAMMA}(1/2, \beta)\right]^n\) & Y & 0 \\ BSV & Supercontinuum gen.
& \(\sinh^2\left[\mathrm{GAMMA}(1/2, \beta)\right]\) & Y & \(2/\beta\) \\ \end{tabular} \caption{Experimental setups in \cite{Manceau2019}; BSV stands for bright squeezed vacuum. In effect we have different non-linear transformations of an exponential random variable for thermal pumping, and a gamma distributed variable for BSV pumping.}\label{tab:experiments} \end{table} According to Equation \eqref{heavy_def}, each experimental setup produces heavy-tailed observables, even though the pumps are not heavy-tailed (the gamma distribution also decays asymptotically at an exponential rate). Note that for optical harmonic generation, even though the decay is slower than exponential, all moments exist, meaning that there is no theoretical issue with calculating second (or higher) order correlations like \(g^{(2)}\) \cite{Mandel1995}, so we are not going to further concern ourselves with that case. For the supercontinuum setup, however, the situation is quite different: depending on the value of \(\lambda\) (or \(\beta\)), which is inversely proportional to the pump's mean intensity, the order of the highest finite moment can be arbitrarily low. \begin{figure}[ptb] \centering \includegraphics[width=\textwidth]{Sec4_double} \caption{a) Model behavior with thermal pumping. The light blue dash-dotted line shows the TF of \(X_0\), the dark blue dash-dotted line shows the TF of \(X_1\). The red/light red lines show 100 simulated ETF-s with lower cutoff and detector noise (\(X_2\)). The solid black line shows the TF of the model when an upper cutoff is also included. The lower and upper cutoffs are marked by black dotted lines. Parameters: \(I \sim \mathrm{EXP}(\lambda = 1)\), \(K = 200\), \(\omega_1\sim \mathcal N (0, \sigma_1^2 = 1)\), \(l = 10^3\), \(\omega_2\sim \mathcal N (0, \sigma_2^2 = 10^6)\), \(L = 2\cdot 10^6\).
(b) Histogram and ETF for a single sample of size $10^4$ with the given parameters.}\label{fig:modelbehavior} \end{figure} If the models in Table \ref{tab:experiments} were exact rather than approximations of reality, one would only need to invert the non-linear transformation involved in order to obtain a plain exponential or gamma sample and simply estimate the single free parameter of those distributions. Experimental imperfections do, however, complicate the situation. For illustration, let us consider the following simple model of the supercontinuum-generation process: \begin{equation} X = D\left(K\cdot \sinh^2(I + \omega_1)\right), \end{equation} with \(I\) denoting the pump's intensity (whose distribution is either \(\mathrm{EXP}(\lambda)\) or \(\mathrm{GAMMA}(1/2, \beta)\)) and \(X\) the detector's output. The detector is modeled through \begin{equation} D(x) = \left\{ \begin{array}{ll} l + \omega_2 & \mathrm{if}\, x < l, \\ x + \omega_2 & \mathrm{if}\, x \in [l, L], \\ L + \omega_2 & \mathrm{if}\, x > L, \end{array} \right. \end{equation} with \(\omega_1, \omega_2\) denoting independent Gaussian noises on the pump's and the detector's side, respectively (these can be aggregates of different types of noises); \(K\) is a constant factor. The value of \(l\) corresponds to the detector's noise floor, the value of \(L\) to its saturation limit. The effects of the different elements of the model are shown in figure \ref{fig:modelbehavior}(a). The dash-dotted light blue line shows the ideal case, i.e., the TF of \(X_0 \equiv K\cdot \sinh^2 I\) which is distorted in the following steps: Adding a Gaussian noise prior to the non-linear transformation (\(X_1 \equiv K\cdot \sinh^2 \left[I + \omega_1 \right]\)) corresponds asymptotically only to a multiplication by a constant factor \(\exp\left\{\lambda \sigma_1^2\right\}\) (see dark blue dash-dotted line).
The lower cutoff \(l\) introduces a discontinuity in the TF, which is smeared by the second Gaussian noise \(\omega_2\) (\(X_2 \equiv \max\left\{K\cdot \sinh^2 \left[I + \omega_1 \right], l\right\} + \omega_2\)). The upper cutoff introduces a sharp fall to zero at \(L\), which could be smoothed by taking a more accurate model of the saturation curve. However, for the sake of simplicity, the final step is just \(X = \min\left\{\max\left\{K\cdot \sinh^2 \left[I + \omega_1 \right], l\right\}, L\right\} + \omega_2\). For the specific parameters used in figure \ref{fig:modelbehavior}, detector saturation essentially corresponds to discarding about the top 1\% of observations, shown in the figure in light red. Note that out of the parameters of the model, only the pump's mean intensity has an impact on the tail exponent (if there was a multiplicative distortion prior to the non-linear transformation, the situation would be different). Figure \ref{fig:modelbehavior}(b) illustrates how much easier it is to notice if the saturation limit of the detector was reached during the experiment if one uses the ETF instead of the histogram to visualize the data: while there is a very visible cut-off at the saturation limit for the former, the latter shows only a slight increase in the rightmost bin. Note that in this model, there is a non-zero probability of negative observations, which is why the ETF does not go to one as \(x\) goes to zero. In the following sections, we will examine how the different estimation tools perform for the different versions of our model. \subsection{Numerical results for supercontinuum generation}\label{sec:SCG} Figure \ref{fig:modelbehavior}(a) has given us an insight into what the individual parameters of our model do. We now examine the performance of the three basic estimators discussed in section \ref{sec:tools} and how they are affected by the model parameters.
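The model is cheap to simulate. The sketch below implements \(X = \min\{\max\{K\sinh^2(I+\omega_1), l\}, L\} + \omega_2\) for the thermal pump, with the default parameters taken from the caption of figure \ref{fig:modelbehavior}; the function name and interface are our own.

```python
import math
import random

def simulate_intensity(n, lam=1.0, K=200.0, sigma1=1.0, sigma2=1e3,
                       lower=1e3, upper=2e6, rng=None):
    """Toy thermal-pump supercontinuum model:
    X = clip(K * sinh(I + w1)**2, lower, upper) + w2, with I ~ EXP(lam),
    w1 ~ N(0, sigma1^2) (pump-side noise), w2 ~ N(0, sigma2^2) (detector
    noise); lower = noise floor l, upper = saturation limit L."""
    rng = rng or random.Random(0)
    out = []
    for _ in range(n):
        i = rng.expovariate(lam)                      # pump intensity
        x = K * math.sinh(i + rng.gauss(0.0, sigma1)) ** 2
        x = min(max(x, lower), upper)                 # noise floor / saturation
        out.append(x + rng.gauss(0.0, sigma2))        # additive detector noise
    return out

sample = simulate_intensity(10_000)
```

Plotting the ETF of such a sample reproduces the qualitative features of figure \ref{fig:modelbehavior}: the smeared noise floor at small values and the abrupt cut-off at the saturation limit.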
\begin{figure}[ptb] \centering \setlength{\unitlength}{0.5\textwidth} \begin{picture}(2,0.7) \put(0,0){\includegraphics[width=0.49\textwidth]{relerr_expPump_cmp}} \put(0.85,0.6){(a)} \put(1,0){\includegraphics[width=0.49\textwidth]{relbias_expPump_cmp}} \put(1.85,0.6){(b)} \end{picture} \caption{Comparison of estimators, SCG setup, thermal source, without detector saturation. (a) Relative RMSE. The black dashed line shows the relative RMSE for the exact Pareto case, \(1/\sqrt{k}\). (b) Relative bias. (Parameters: \(\lambda = \sigma_1 = 1\), \(l = \sigma_2 = 10^3\), \(L = \infty\), \(K = 200\).)}\label{fig:cmp_exppump} \end{figure} Figure \ref{fig:cmp_exppump} shows the root mean square error of the different estimators for the thermal case, calculated using \(10^3\) simulated samples of size \(10^4\). Clearly, in this case the convergence to power-law is quick, since the ideal RMSE (black dashed line) is reached easily by the Hill estimator, with the other two estimators performing only a little worse (the minimum is about 4\% instead of 3\%). This is easily understood when examining the asymptotic form of the TF (which has a closed analytic form for \(\sigma_2 = 0\)), \begin{equation} \label{eq:thermal_asymptotics} \overline F_{\mathrm{SCG, thermal}}(x) = \left[e^{-\lambda \sigma_1^2}\cdot \frac{4x}{K}\right]^{-\frac \lambda 2}\cdot \left(1 - \frac {\lambda K}{4x} + \mathcal O(x^{-2})\right), \end{equation} which means that the convergence to the asymptotic behavior is also power-law. The divergence in the bias is caused by the fact that observations for this model can get arbitrarily close to zero (can even be negative), so the term \(-\ln x_{(k+1)}\) in \eqref{eq:hill} will dominate in the Hill estimator as \(k\) is increased. This divergence also introduces meaningless minima in the RMSE curve at \(k\approx 8 \cdot 10^3\), which is where the bias curves cross zero before diverging. 
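The exponent \(-\lambda/2\) in \eqref{eq:thermal_asymptotics} is easy to verify by simulation (the prefactors are harder to check numerically). The following rough sketch, with \(\lambda = 1\) so that \(\gamma = 2/\lambda = 2\), applies the Hill estimator to noise-free-detector data \(X_1 = K\sinh^2(I+\omega_1)\):

```python
import math
import random

rng = random.Random(5)
lam, K, sigma1 = 1.0, 200.0, 1.0
# thermal model without detector effects: X1 = K * sinh(I + w1)^2, I ~ EXP(lam)
sample = [K * math.sinh(rng.expovariate(lam) + rng.gauss(0.0, sigma1)) ** 2
          for _ in range(100_000)]

# Hill estimate on the top k observations; tail exponent lam/2 => gamma = 2/lam
xs = sorted(sample, reverse=True)
k = 2000
gamma_hat = sum(math.log(xs[i]) for i in range(k)) / k - math.log(xs[k])
print(gamma_hat)   # should be close to 2/lam = 2
```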
Figure \ref{fig:exppump_var} shows that varying the mean intensity of the source (\(\lambda^{-1}\)) and the noise floor (\(l\)) changes the situation compared to figure \ref{fig:cmp_exppump}. The best achievable error is essentially determined by how many observations are unaffected by experimental imperfections. If the mean intensity is increased (figure \ref{fig:exppump_var}(a)), more observations are above the noise floor of the detector, which ultimately means that one has to discard fewer observations when estimating the tail index. Note that the blue line in figure \ref{fig:exppump_var}(a) corresponds to considerable pre-transformation noise: the typical scale of the signal (\(\lambda^{-1} = 1/2\)) is exceeded by the scale of the noise (\(\sigma_1 = 1\)), resulting in an oddly shaped profile. The same is true for the red line in figure \ref{fig:exppump_var}(b): even though the noise floor is low (\(l = 10^2\)), it is exceeded by the scale of the additive noise \(\omega_2\) (\(\sigma_2 = 10^3\)). \begin{figure}[t] \centering \setlength{\unitlength}{0.5\textwidth} \begin{picture}(2,0.7) \put(0,0){\includegraphics[width=0.49\textwidth]{relerr_expPump_Hill_varlam}} \put(0.85,0.6){(a)} \put(1,0){\includegraphics[width=0.49\textwidth]{relerr_expPump_Hill_varl}} \put(1.85,0.6){(b)} \end{picture} \caption{Relative RMSE of the Hill estimator as a function of the number of points taken into account: (a) for different values of the parameter \(\lambda\) of the thermal pump; (b) for different values of the noise floor parameter \(l\) of the detector.}\label{fig:exppump_var} \end{figure} If the process is pumped by bright squeezed vacuum, the problem becomes technically more involved. As figure \ref{fig:cmp_gammapump} shows, the best achievable RMSE is significantly worse (10--20\% instead of 3--4\%), owing to the considerable bias of the estimators for this case. This is because the correction to power-law behavior decays as a power of \(\ln x\), and not \(x\).
It is therefore helpful if, after taking the logarithm of the observations, one uses the conditional maximum likelihood estimator for \(\mathrm{GAMMA}(1/2, \beta)\) instead of the Hill estimator, which, as discussed before, is a conditional MLE for \(\mathrm{EXP}(\lambda)\). That is, one should maximize the function \begin{eqnarray} \nonumber \ln \mathcal L(\alpha, \beta) &=& \frac{\alpha - 1}{k}\sum_{i = 1}^k\ln\ln x_{(i)} - \frac \beta k \sum_{i = 1}^k \ln x_{(i)} \\ &&+ \alpha\ln \beta - \ln\Gamma(\alpha, \beta\ln x_{(k+1)}), \label{eq:gamma_mle} \end{eqnarray} with \(\Gamma(s,z)\) denoting the upper incomplete gamma function. It is straightforward to show that asymptotically, the value of \(\beta^{-1}\) that maximizes \eqref{eq:gamma_mle} is equal to the Hill estimator, and that the corrections are \(\mathcal O\left(1/\ln x_{(k+1)}\right)\). The maximization can be done numerically using \(\alpha = 1/2 = \mathrm{fixed}\), with \(\beta_0 = \left[\hat \gamma^{\mathrm H}\right]^{-1}\) as the starting point. As the dashed lines in figure \ref{fig:cmp_gammapump} show, the estimator defined in \eqref{eq:gamma_mle} is indeed more efficient than the simple Hill estimator. The improvement is not as significant as one might hope for since the actual transformation applied to the input signal was \(K\cdot\sinh^2(\cdot)\) and not \(\exp(\cdot)\). Thus, in theory, to get better results, one should change \(\ln x_{(i)}\) to \(\mathop{\mathrm{asinh}}\left(\sqrt{x_{(i)}/K'}\right)\) in \eqref{eq:gamma_mle}, with \(K' = K\cdot \exp\left\{\sigma_1^2\beta\right\}\), however, this is problematic since \(K'\) depends on the unknown values of \(\beta\) (\(\equiv 2/\gamma\)), \(\sigma_1\), and \(K\). 
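The maximization of \eqref{eq:gamma_mle} with \(\alpha = 1/2\) fixed requires no special libraries, because \(\Gamma(1/2, z) = \sqrt{\pi}\,\mathrm{erfc}(\sqrt z)\). The sketch below is our own illustration of this (function name, bracketing rule and golden-section search are implementation choices, not taken from the paper):

```python
import math
import random

def beta_cmle(sample, k):
    """Conditional MLE of beta for GAMMA(1/2, beta)-distributed
    log-observations: numerical maximization of eq. (gamma_mle) with
    alpha = 1/2 held fixed, using Gamma(1/2, z) = sqrt(pi)*erfc(sqrt(z)).
    Assumes the k+1 largest observations exceed 1, so ln ln x and a
    non-negative threshold exist.  (gamma = 2/beta in the BSV model.)"""
    xs = sorted(sample, reverse=True)
    y = [math.log(x) for x in xs[:k]]        # tail log-observations
    t = math.log(xs[k])                      # threshold ln x_(k+1)
    m_lly = sum(math.log(v) for v in y) / k  # <ln ln x>
    m_y = sum(y) / k                         # <ln x>

    def loglik(beta):
        upper = math.sqrt(math.pi) * math.erfc(math.sqrt(beta * t))
        return -0.5 * m_lly - beta * m_y + 0.5 * math.log(beta) - math.log(upper)

    # golden-section maximization bracketed around the Hill-based start
    beta0 = 1.0 / (m_y - t)                  # beta_0 = 1 / gamma_Hill
    lo, hi = beta0 / 10.0, beta0 * 10.0
    g = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(80):
        b1, b2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if loglik(b1) < loglik(b2):
            lo = b1
        else:
            hi = b2
    return 0.5 * (lo + hi)
```

Golden-section search is adequate here because the conditional log-likelihood is concave in \(\beta\) (exponential family), so the bracketed maximum is unique.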
\begin{figure}[t] \centering \setlength{\unitlength}{0.5\textwidth} \begin{picture}(2,0.7) \put(0,0){\includegraphics[width=0.49\textwidth]{relerr_gammaPump_cmp}} \put(0.85,0.6){(a)} \put(1,0){\includegraphics[width=0.49\textwidth]{relbias_gammaPump_cmp}} \put(1.85,0.6){(b)} \end{picture} \caption{Comparison of estimators, BSV source, without detector saturation. (a) Relative RMSE. (b) Relative bias. (Parameters: \(\beta = \sigma_1 = 1\), \(l = \sigma_2 = 10^3\), \(L = \infty\), \(K = 200\).)}\label{fig:cmp_gammapump} \end{figure} \subsection{Detector saturation}\label{sec:saturation} If the observations during an experiment are distorted by detector saturation, one can try and recalibrate the equipment so that all observations fall within the linear range of the detector. However, this is not necessarily trivial to do if the process is indeed heavy-tailed, but even if this is not a problem, it is still a good practice to try and evaluate the original data set instead of throwing it away. This, however, requires modifying the basic estimators introduced in section \ref{sec:tools}, which were constructed supposing that the largest observations are (close to being) Pareto distributed. Figure \ref{fig:modifiedestimators}(a) shows that, as expected, the basic approach fails if there is detector saturation. The three basic techniques presented in section \ref{sec:tools} are easily modified to work with discarding the largest observations which are affected by detector saturation. For the least squares approaches on either the histogram or the QQ plot, one quite straightforwardly has to omit observations above a certain threshold, but otherwise the optimization is exactly the same as without the upper cutoff. 
For generalizing the Hill estimator, one has to take advantage of the Rényi representation theorem according to which the scaled spacings of the order statistics of an exponential sample are themselves exponentially distributed: that is, if \(Z_i\), \(i = 1,\ldots, N\) is an i.i.d.\ \(\mathrm{EXP}(\lambda)\) sample, then \(S_{(i)} \equiv i\cdot(Z_{(i)} - Z_{(i+1)})\), \(i = 1,\ldots, N-1\) are also i.i.d.\ \(\mathrm{EXP}(\lambda)\), with \(Z_{(i)}\) denoting the \(i^{\mathrm{th}}\) largest observation in the sample. Knowing that if \(X\) is Pareto, then \(\ln X\) is exponential, discarding the \(j\) largest observations results in the following estimator: \begin{eqnarray} \nonumber \hat\gamma(k, j) &:=& \frac{1}{k - j} \sum_{i = j+1}^k i\left(\ln x_{(i)} - \ln x_{(i+1)}\right) \\ \label{eq:genhill}&=& \frac{j}{k-j}\ln \frac{x_{(j+1)}}{x_{(k+1)}} + \frac 1{k - j}\sum_{i = j+1}^k\ln \frac{x_{(i)}}{x_{(k+1)}}. \end{eqnarray} \begin{figure} \centering \setlength{\unitlength}{0.5\textwidth} \begin{picture}(2,0.7) \put(0,0){\includegraphics[width=0.49\textwidth]{relerr_expPump_Hill_varL}} \put(0.85,0.6){(a)} \put(1,0){\includegraphics[width=0.49\textwidth]{relerr_expPump_Hillg_varL}} \put(1.85,0.6){(b)} \end{picture} \caption{(a) Hill estimator performance for different values of detector saturation. Dashed line: \(1/\sqrt k\). (b) Generalized version \eqref{eq:genhill} to compensate for the saturation \(L = 10^6\) and varying values of \(j\), the number of discarded observations. Dashed line: \(1/\sqrt{k - 150}\). }\label{fig:modifiedestimators} \end{figure} Note that setting \(j = 0\) does yield the standard Hill estimator, as expected. As figure \ref{fig:modifiedestimators}(b) shows, our suggested generalization indeed significantly improves the performance. When choosing the value of \(j\), one has to, of course, discard the values affected by detector saturation. 
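Estimator \eqref{eq:genhill} is a direct average of the Rényi spacings. The sketch below (the clipped Pareto test data mimicking saturation is our own illustration) also checks the remark that \(j = 0\) recovers the standard Hill estimator:

```python
import math
import random

def hill_truncated(sample, k, j):
    """Generalized Hill estimator, eq. (genhill): discard the j largest
    observations (e.g. those clipped by detector saturation) and average
    the scaled Renyi spacings i*(ln x_(i) - ln x_(i+1)) for i = j+1..k."""
    xs = sorted(sample, reverse=True)
    s = sum((i + 1) * (math.log(xs[i]) - math.log(xs[i + 1]))
            for i in range(j, k))            # i+1 is the 1-based rank
    return s / (k - j)

rng = random.Random(3)
# Pareto(alpha = 0.5), gamma = 2, with the top clipped at L = 1e6
sample = [min(rng.random() ** (-2.0), 1e6) for _ in range(10_000)]
j = sum(1 for x in sample if x >= 1e6)       # number of saturated observations
print(hill_truncated(sample, 2000, j))       # close to gamma = 2
```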
However, discarding too many values increases the RMSE, which is in the best-case scenario proportional to \(1/\sqrt{k-j}\). If, in order to reduce the bias, one chooses to only keep the measurements within a shorter interval \(\left[x_{\mathrm{LO}}', x_{\mathrm{HI}}'\right]\) instead of \(\left[x_{\mathrm{LO}}, x_{\mathrm{HI}}\right]\) (\(x_{\mathrm{LO}} < x_{\mathrm{LO}}' < x_{\mathrm{HI}}' < x_{\mathrm{HI}}\)), the number of measurements has to be multiplied by a factor of \( \left[{\overline F(x_{\mathrm{LO}}) - \overline F(x_{\mathrm{HI}})}\right] / \left[{\overline F(x_{\mathrm{LO}}') - \overline F(x_{\mathrm{HI}}')}\right] \). This ensures that on average, the same number of observations will fall in \(\left[x_{\mathrm{LO}}', x_{\mathrm{HI}}'\right]\) during the second experiment as in \(\left[x_{\mathrm{LO}}, x_{\mathrm{HI}}\right]\) during the first one. The values of \(\overline F\left(\cdot\right)\) can be substituted by their empirical counterparts from the first experiment. This way one can check whether the new estimate \(\hat\gamma'\) gained from the second experiment is consistent with the old one. The figures for the QQ estimator are quite similar, whereas the histogram approach is not much improved by discarding the data affected by saturation. \subsection{Choosing tail length}\label{sec:tail_length} Figures \ref{fig:cmp_exppump} - \ref{fig:modifiedestimators} show the best attainable performance of the basic estimators. The problem in practice is how to decide how many points to take into account for calculating the estimators (\(k\)), especially if there is a large number of samples and evaluating each one is not feasible without automation. If the higher-order behavior is known, or can be estimated with reasonable accuracy, the optimal tail length (minimizing the RMSE) can be estimated as well. 
This is the approach followed by most of the literature, however, due to the difficulty of estimating higher-order behavior, the resulting algorithms can be quite involved (requiring the tuning of nuisance parameters) and often do not outperform simpler, heuristic approaches. \begin{table}[hptb] \centering \begin{footnotesize} \renewcommand{\arraystretch}{1.1} \begin{tabular}{|lccc|cc|cc|cc|} \cline{5-10} \multicolumn{4}{c|}{} & \multicolumn{2}{c|}{Pareto} & \multicolumn{2}{c|}{Log-gamma} & \multicolumn{2}{c|}{Model} \\ \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Based on} & \multirow{2}{*}{Tail choice} & \multirow{2}{*}{Cost} & \multicolumn{2}{c|}{\(\gamma\)} & \multicolumn{2}{c|}{\(\gamma\)} & \multicolumn{2}{c|}{\(\gamma\)} \\ \cline{5-10} & & & & 2 & 1 & 2 & 1 & 2 & 1 \\ \hline\hline 1. & Hill & path stability & mid & \cellcolor{green!25}0.04 & \cellcolor{green!50}0.02 & 0.46 & 0.22 & \cellcolor{green!75}0.08 & \cellcolor{green!75}0.15 \\ 2. & hist. & path stability & mid & 0.16 & 0.09 & 0.45 & \cellcolor{green!50}0.20 & 0.27 & \cellcolor{green!25}0.22 \\ 3. & QQ & path stability & mid & 0.30 & 0.16 & 0.52 & 0.23 & \cellcolor{green!25}0.24 & \cellcolor{green!25}0.22 \\ 4. \cite{Clauset2009} & Hill & KS distance & high & \cellcolor{green!50}0.03 & \cellcolor{green!75}0.01 & 0.46 & 0.23 & \cellcolor{green!75}0.08 & 0.44 \\ 5. \cite{Guillou2001} & Hill & aux.\ statistic & low & \cellcolor{green!50}0.03 & \cellcolor{green!50}0.02 & \cellcolor{green!75}0.39 & \cellcolor{green!75}0.18 & \cellcolor{green!50}0.13 & \cellcolor{green!50}0.17 \\ 6. \cite{Danielsson2001} & Hill & higher-order p. & high & \cellcolor{green!75} 0.02 & \cellcolor{green!75}0.01 & \cellcolor{green!50} 0.40 & \cellcolor{green!50}0.20 & 0.64 & 0.56\\ 7. \cite{Drees1998} & Hill & higher-order p. & mid & 0.15 & 0.09 & 0.46 & 0.23 & 0.25 & 0.61 \\ 8. \cite{Caeiro2015}/1 & Hill & higher-order p. & mid & 0.05 & \cellcolor{green!25}0.03 & 0.43 & \cellcolor{green!25}0.21 & 0.81 & 0.37\\ 9. 
\cite{Caeiro2015}/3 & Hill & higher-order p. & high & 0.07 & 0.04 & 0.46 & 0.23 & 0.89 & 0.63 \\ 10. \cite{Caeiro2015}/3 & reduced-bias Hill & higher-order p. & high & 0.21 & 0.11 & \cellcolor{green!25}0.41 & \cellcolor{green!50}0.20 & 0.90 & 0.72 \\ \hline \end{tabular} \end{footnotesize} \caption{Empirical RMSE of different estimators applied to 100 samples of size $10^4$, for different types of distributions, all of which had a positive finite extreme value index ($\gamma \equiv \alpha^{-1} \in \{2, 1\}$). The ``Cost'' column refers to CPU time and complexity. The column ``Model'' refers to the toy model with a thermal pump (\(\lambda=\gamma/2\), \(\sigma_1 = 0\), \(l = \sigma_2 = 10^3\), \(L = \infty\)). The best three values are indicated in different shades of green for each distribution.}\label{tab:rmse} \end{table} We have implemented the estimation procedures shown in Table \ref{tab:rmse}. We proposed procedures 1--3, in which a heuristic path stability approach was used to choose the tail length, mainly based on \cite{Neves2015}, and tweaked using \cite{Drees_HillPlot} and \cite{Clauset2009} (for details, see \ref{sec:PS}). This approach was included because it essentially emulates how one would choose a tail length from the Hill plot, and it also works with an arbitrary estimator. Table \ref{tab:rmse}, of course, is not an exhaustive study of these procedures, especially since procedures 6--10 involve more than one nuisance parameter, which could have been used to fine-tune them. We did not do that; we used the default parameter values suggested in the sources to see how they perform ``out of the box''. Based on our tests, the procedure introduced by Guillou et al.\ \cite{Guillou2001} provided the most reliable results and had the further advantage of simplicity and speed. Our suggested path stability approach, which mimics visual evaluation, also works reasonably well combined with the Hill estimator.
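The core of the path-stability idea — pick the \(k\) where the Hill trajectory is flattest — can be sketched as follows. The minimum-standard-deviation windowing rule below is our own crude simplification for illustration, not the exact procedure detailed in \ref{sec:PS}:

```python
import math
import random

def hill_path(sample):
    """Hill estimates for every tail length k = 1..N-1 in one pass."""
    z = sorted((math.log(x) for x in sample), reverse=True)
    csum, path = 0.0, []
    for k in range(1, len(z)):
        csum += z[k - 1]
        path.append(csum / k - z[k])       # path[k-1] = Hill estimate at k
    return path

def path_stable_k(path, window=100):
    """Index whose surrounding window of Hill estimates has the smallest
    standard deviation -- a stand-in for eyeballing a plateau on the
    Hill plot (simplified heuristic, not the procedure of the paper)."""
    best_k, best_sd = None, float("inf")
    for k in range(window, len(path) - window):
        w = path[k - window:k + window]
        m = sum(w) / len(w)
        sd = math.sqrt(sum((v - m) ** 2 for v in w) / len(w))
        if sd < best_sd:
            best_k, best_sd = k, sd
    return best_k

rng = random.Random(4)
sample = [rng.random() ** (-2.0) for _ in range(5_000)]  # Pareto, gamma = 2
path = hill_path(sample)
k_star = path_stable_k(path)
print(k_star, path[k_star])   # plateau estimate, close to gamma = 2
```

For a pure Pareto sample the flattest window sits at large \(k\), as it should; for distorted data such as our detector model, the chosen plateau moves to shorter tails.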
\section{Conclusion and Discussion} In this work, we first gave a brief overview of a basic toolkit that may be used to estimate the tail exponent associated with a heavy-tailed sample. We devoted more attention to histograms than is justified from a mathematical point of view, as they are the default tool for many physicists, and discussed how to use them more efficiently. Subsequently, we discussed two simple alternatives, the QQ estimator and the Hill estimator, through examples. We addressed the challenges specific to intensity measurements in supercontinuum generation experiments. In order to do that, we introduced a model for the observable intensity distribution. Firstly, if the source in the experiment is bright squeezed vacuum, the estimation becomes considerably less efficient than in the thermal case. We suggest using a modified version of the Hill estimator, defined in \eqref{eq:gamma_mle}, for BSV. If the pump is thermal, the plain Hill estimator \eqref{eq:hill} is sufficient. Next, one should check whether the observations were affected by detector saturation. This is easily done by preparing the empirical tail function of the sample. If the answer is yes, one has to discard the affected observations and apply the estimator introduced in \eqref{eq:genhill}. Finally, we also included a comparison of procedures which can choose how many observations to take into account in an automated fashion. These results can be directly extended to investigate heavy-tailed distributions arising from three-wave and four-wave mixing in non-linear optics \cite{Manceau2019}, atomic physics \cite{Boyer2008}, superconducting circuits \cite{Sivak2019}, non-linear optomechanics \cite{Brawley2016, Siler2018}, and electromechanics \cite{Seitner2017}. \section*{Acknowledgments} {\'E}.R. and L.R. acknowledge the support of project 19-22950Y of the Czech Science Foundation. R.F. acknowledges the projects {LTAUSA19099} and {CZ.02.1.01/0.0/0.0/16{\_}026/0008460} of MEYS CR.
\section{Introduction} \label{sec:Introduction} Co$_3$V$_2$O$_8$ belongs to the 3d transition metal ortho-oxo-vanadates known as kagome staircase structures and crystallizes in the orthorhombic space group Cmca.\cite{fue1970,sau1973} Its crystallographic structure is characterized by edge-sharing CoO$_6$ octahedra forming buckled layers of corner-sharing triangles, the kagome staircases, which are separated along the {\it b} axis by VO$_4$ tetrahedra (Fig.~\ref{fig:structure}). \begin{figure} \includegraphics[width=1.5in]{fig1.eps} \includegraphics[width=1.8in]{fig2.eps} \caption{\label{fig:structure} (Color online) Visualization of the kagome staircase structure of Co$_3$V$_2$O$_8$. (a) shows the edge sharing MO$_6$ octahedra [Co$_c$O$_6$ light blue (light gray), Co$_s$O$_6$ dark blue (dark gray)] isolated by non-magnetic VO$_4$ tetrahedra (gray). (b) depicts a single kagome staircase viewed along the {\it b} axis with only the magnetic ions on both crystallographic sites [Co$_c$ light blue (light gray), Co$_s$ dark blue (dark gray)].} \end{figure} The magnetic exchange is mediated by a 90$^\circ$ Co-O-Co intralayer pathway. Interestingly, this system exhibits a sequence of five magnetic phase transitions featuring temperature dependent incommensurate antiferromagnetic phases with commensurate lock-ins and a ferromagnetic ground state, where all magnetic structures are collinear along the {\it a} axis.\cite{che2006} The ferromagnetic structure reveals two strongly different magnetic moments of 1.54 $\mu_B$ and 2.73 $\mu_B$ for the cross-tie Co ions (4a site) and spine Co ions (8e site),\cite{che2006} respectively, although both Co$^{2+}$ ions apparently adopt high-spin configurations, as macroscopic measurements show saturation of the cross-tie moments.\cite{wil2007/2} In the kagome staircase structure several nearest and next-nearest neighbor exchange interaction pathways are possible,\cite{che2006} which is the motivation for this study.
Investigating the magnetization density, which is reflected by the respective magnetic form factors of the ions, may reveal preferred exchange pathways through the presence of magnetization on the O sites involved. Any induced magnetization on the empty d-shell of the V sites would allow interlayer coupling by super-superexchange. Therefore, magnetic Compton scattering and polarized neutron diffraction experiments have been carried out, yielding the spin density in momentum space and the magnetization density in real space, respectively. However, a detailed and precise analysis of these quantities is required in order to determine the exact contribution of each atomic species involved in the studied system. Quantum chemical modeling is therefore needed to gain insights, at a molecular level, into the electronic structure of the two cobalt-oxide octahedra. In this context, {\it ab initio} cluster calculations were performed for the Co$_c$O$_6$ and Co$_s$O$_6$ octahedra, yielding precise molecular orbitals (MO) and wave functions (wf). The latter were used to analyze the experimentally observed density distributions. Simultaneously refining the contribution of each MO/wf, at a quantum chemical level, to the real and momentum space densities is a powerful procedure that allows valuable features of the magnetic form factors to be determined. \section{Ab initio calculations} \label{sec:abinitio} The two different clusters Co$_c$O$_6$ and Co$_s$O$_6$ were modelled separately. The calculations were performed within the framework of the Kohn-Sham formulation of density functional theory using the PC GAMESS program.\cite{pcgamess} The B3LYP functional was employed to approximate the exchange-correlation interaction. 
B3LYP is a hybrid functional, well adapted to the study of transition metal compounds and magnetic interactions, in which a predefined amount of exact Hartree-Fock exchange is added to the well known pure density functionals.\cite{b3lyp1,b3lyp2,b3lyp3,b3lyp4} The atoms in the cluster were described using Ahlrichs' pVDZ atomic orbital (AO) basis set\cite{sch1992} Co(14s,8p,5d,1p)/[5s,2p,2d,1p],O(7s,4p,1d)/[3s,2p,1d]. The notations (klm) and [klm] indicate the number of Gaussian type orbitals and contracted Gaussian type orbitals, respectively. In order to mimic the Madelung potential, the two quantum mechanical clusters were surrounded by point charges (PC) according to the Effective Fragment Potential method.\cite{gor2001} As previously reported for other systems,\cite{rad2005,pas2006,bar1988,pas1993} the choice of the embedding method was shown to be crucial for the physical meaning of the {\it ab initio} calculations. To avoid the electron density leaking out of the cluster, a boundary region has been introduced, which is formed by effective core potentials (ECP) placed at the nearest cationic positions around the cluster. Thus, the first coordination shell of Co$^{2+}$ and V$^{5+}$ ions has been described by ECPs according to the SBKJC ECP basis set.\cite{ste1992} Since the native ECPs for Co and V do not treat the 3s and 3p electrons as core electrons, they would in fact be too compact owing to the larger number of valence electrons. To overcome this problem, the ECPs for Mg and Al have been used instead, as their ionic radii are closer to those of Co and V, respectively. Except for the 3d shells, the remaining electrons are replaced by an effective potential. In total, 1565 PC (3x3x3 unit cells with the respective Co ion in the center) have been built to mimic the Madelung potential on the cluster. 
The MO coefficients relevant to the Co3d were extracted from the simulations and used to model the Magnetic Compton Profiles (MCP) and the magnetic form factors. \section{Experimental} \label{sec:Experimental} \subsection{Unpolarized neutron diffraction} Co$_{3}$V$_2$O$_8$ single crystals have been grown from self-flux in a ZrO$_2$/Y crucible by the slow cooling method. Some of the crystals have been ground and investigated at the high-flux neutron powder diffractometer D20 (Institut Laue Langevin, Grenoble) confirming the absence of parasitic phases. The nuclear structure of a chosen single crystal has been studied at the four-circle diffractometer D9 (Institut Laue Langevin, Grenoble). A set of more than 500 independent reflections up to $\sin\theta/\lambda$=0.92 has been measured using two different wavelengths in order to determine the extinction effects ($\lambda_1$=0.835~\AA{}, $\lambda_2$=0.512~\AA{}). The data collection has been performed in the paramagnetic phase at $T$=13.5 K, which is just above the N\'eel temperature of 11.2 K. In order to reveal possible structural changes between the paramagnetic and the ferromagnetic phase due to structural phase transitions or magnetostriction an additional nuclear structure investigation has been carried out at the single crystal diffractometer D15 (Institut Laue Langevin, Grenoble) under the same experimental conditions as the polarized neutron diffraction experiment (Sec.~\ref{sec:pnd}), i.e. at $T$=3.5 K with an applied magnetic field of $H$=2 T along the easy axis {\it a} and using a wavelength of $\lambda$=0.854~\AA{}. \subsection{Polarized neutron diffraction (PND)} \label{sec:pnd} The real space magnetization density has been studied at the hot neutron spin polarized two-axes diffractometer 5C1 (Laboratoire L\'eon Brillouin, Saclay). Neutrons from the source are monochromated and polarized by the (111) reflection of a magnetized Heusler crystal Cu$_2$MnAl. 
The wavelength is 0.84~\AA{}, which corresponds to the maximum flux of the hot source and is ideal for studying large domains of reciprocal space. The polarization factor of the beam is $p=-0.88$. In order to fully magnetize the sample and avoid beam depolarization a magnetic field of $H$=2 T has been applied along the easy axis {\it a}. The flipping ratios $R$ (Eq.~\ref{eq:rfl}), the ratios between the spin-up and spin-down intensities, of over 500 independent $(hkl)$ reflections with $h$=0,1,2 have been measured in the ferromagnetic phase at $T$=3.5 K. \begin{equation} R=\frac{(F_N^2+q^2F_M^2)p^+_p+2F_Nq^2F_Mp^+_m+(1-q^2)q^2 F_M^2y_{pm}}{(F_N^2+q^2F_M^2)p^-_p+2F_Nq^2 F_Mp^-_m+(1-q^2)q^2F_M^2y_{pm}}, \label{eq:rfl} \end{equation} $F_M$ and $F_N$ denote the magnetic and nuclear structure factors. $q$=$\sin\alpha$ is a geometric factor with $\alpha$ being the angle between the scattering vector and the magnetization vector. The parameters $p^\pm_{p/m}$ and $y_{pm}$ are extinction correction factors of the respective cross-sections according to Ref.~\onlinecite{bon1976}. \subsection{X-ray magnetic Compton scattering (MCS)} The investigation of the spin density in momentum space has been carried out at the High Energy Inelastic Scattering beamline BL08W at SPring-8 in Hyogo, Japan. This beamline is designed for magnetic Compton scattering spectroscopy as it offers high energy elliptically polarized X-rays emitted from an Elliptic Multipole Wiggler. The incident photon beam with an energy range of 170-300 keV is monochromated and focused by an asymmetric Johann type monochromator using the Si (620) reflection. The sample magnetization is achieved with a superconducting magnet with a maximum field of 2.5 T and a minimum polarity-switching time of 5 seconds. The backscattered photon energy is analyzed by a 10-segmented Ge solid state detector positioned at a scattering angle of 178.4$^\circ$. 
The experiment has been carried out with an incident photon energy of 176.3 keV, which gives a good compromise between beam intensity and scattering cross section. The initial aim of applying this method to Co$_3$V$_2$O$_8$ was to map the momentum space spin density of the ferromagnetic phase as a projection onto the {\it b$^*$-c$^*$} plane, in order to gather information about the 3d electron spin states and to correlate the results with those obtained from the polarized neutron diffraction experiment. However, the experimental conditions, and especially the large magnetic anisotropy of the system, did not allow this. The minimal achievable sample temperature is approximately 5.6 K, i.e. just below the magnetic transition into the antiferromagnetic phase. It can be seen in the magnetic phase diagrams of Co$_3$V$_2$O$_8$\cite{qur2007,wil2007/2} that at this temperature even weak magnetic fields applied along the {\it b} or {\it c} axis induce a magnetic phase transition into the antiferromagnetic phase, while $H||{\it a}$ stabilizes the ferromagnetic one. Increasing the magnetic contribution to the scattering cross-section requires a magnetic field of considerable strength that magnetizes the sample along the incident beam, while reconstructing the two-dimensional momentum density requires turning the sample about a vertical axis; these conflicting requirements led to a change of strategy. To make sure not to induce magnetic phase transitions by rotating the sample with respect to the field direction, the measurements have been carried out within the antiferromagnetic phase at $T$=7.5 K, applying a magnetic field of $H$=2 T with the induced ferromagnetic component lying in the {\it b-c} plane. In addition to the trivial directions [010] and [001], four further directional MCPs $J_{mag}$ in the $p_y$-$p_z$ plane have been investigated. 
An additional profile has been measured along the [100] direction by decreasing the applied magnetic field to 0.25 T. A directional MCP yields the projection of the spin momentum space density onto the scattering vector, which is by definition $p_z$: \begin{equation} J_{mag}(p_z)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\left[|\chi_\uparrow(\mathbf p)|^2-|\chi_\downarrow(\mathbf p)|^2\right]dp_xdp_y \end{equation} with $\chi_{\uparrow(\downarrow)}(\mathbf p)$ denoting the momentum wf of an occupied majority (minority) spin state. The Compton profiles of the respective sample magnetization states have been recorded for 60 seconds, repeating the cycle $[+--+-++-]$ multiple times. \section{Results} \subsection{Nuclear structure and extinction} The nuclear structure investigation at 13.5 K on the four-circle diffractometer D9 confirmed the correct phase formation of the orthorhombic structure (space group Cmca) with $a$=6.015(3)~\AA{}, $b$=11.480(5)~\AA{} and $c$=8.289(4)~\AA{}. The observed integrated intensities have been corrected for absorption by applying the transmission factor integral $\exp[\mu(\bar{t}_{in} + \bar{t}_{out})]$ and analyzed by simultaneously fitting a structure model to both datasets with $\lambda_1$ and $\lambda_2$ using FullProf.\cite{fullprof} The refinement process included the atomic positions and isotropic temperature factors of Co and O plus three additional extinction parameters according to an anisotropic Shelx-like empirical extinction correction.\cite{lar1970} The atomic position and temperature factor of V have been fixed in all refinements, because of its low coherent neutron scattering cross section. 
The refined structural parameters ($R$=5.3) are listed in Tab.~\ref{tab:nuc}.\newline \begin{table}[htbp] \caption{\label{tab:nuc}Structural parameters of the investigated Co$_3$V$_2$O$_8$ single crystal.} \begin{ruledtabular} \begin{tabular}{ccccc} Atom&{\it x}&{\it y}&{\it z}&{\it B}(\AA{}$^2$)\\ \hline Co1 & 0 & 0 & 0 & 0.27(10)\\ Co2 & 0.25 & 0.1328(7) & 0.25 & 0.20(7) \\ V & 0 & 0.3773 & 0.1204 & 0.30 \\ O1 & 0 & 0.2489(3) & 0.2700(4) & 0.38(4) \\ O2 & 0 & 0.0008(4) & 0.2448(4) & 0.33(4) \\ O3 & 0.2702(3) & 0.1185(3) & 0.9990(2) & 0.33(4)\\ \\ \multicolumn{5}{c}{Extinction parameters}\\ \multicolumn{5}{c}{$x_{11}$=1.0(1) $x_{22}$=0.36(5) $x_{33}$=0.6(1) }\\ \end{tabular} \end{ruledtabular} \end{table} In order to analyze the nuclear structure under the same conditions as the flipping ratio measurement ($T$=3.5 K, $H$=2 T), only those reflections have been measured for which the ratio $\gamma$ between the magnetic and the nuclear structure factor has been derived from the observed flipping ratios. The ferromagnetic contribution to the integrated intensity $I$ has been accounted for according to \begin{equation} I\sim F_N^2+|\mathbf Q_M|^2=F_N^2+q^2F_M^2=F_N^2(1+q^2\gamma^2), \end{equation} where $\mathbf Q_M$ is the magnetic interaction vector. The subsequent refinement showed that no considerable change in the nuclear structure has taken place, i.e. the derivation of the $F_M$ from observed flipping ratios by using the observed $F_N$ at $T=13.5$ K is justified.\newline All low-angle nuclear reflections suffer considerably from extinction; therefore, special attention has been paid to the extinction of magnetic scattering. As the flipping ratio treatment uses the same extinction parameters for both nuclear and magnetic scattering, it is important to verify whether the extinction effects are indeed comparable. 
Therefore, three strong magnetic reflections have been measured as a function of applied magnetic field after the sample had been cooled in zero field to 3.5 K. Fig.~\ref{fig:magext} shows the integrated intensities of the three reflections after the nuclear contribution has been subtracted. The field dependence of the magnetic contribution reveals a surprising and interesting tendency: instead of increasing with increasing applied field, as one would expect if the cross-tie moments became saturated, the intensity of magnetic scattering drops significantly. \begin{figure} \includegraphics[width=3in]{fig3.eps} \caption{\label{fig:magext} (Color online) Intensity of three different magnetic reflections as a function of applied magnetic field, revealing primary extinction effects.} \end{figure} This observation can be explained by a field-dependent increase of primary extinction. At $H$=0 T the sample exhibits a multidomain state with presumably negligible extinction effects. With increasing field the magnetic domains grow until they reach approximately the size of the structural domains. On reaching saturation at $H$$\approx$0.25 T the primary extinction effects for magnetic scattering should be comparable to those of nuclear scattering. The mosaicity, which governs secondary extinction, should a priori not be affected. In order to verify these assumptions the extinction correction factor $y$ has been calculated for three magnetic reflections using the refined extinction parameters from the nuclear structure refinement according to the anisotropic FullProf model: \begin{equation} y=\left[1+\frac{0.001F_M^2\lambda^3(x_{11}h^2+x_{22}k^2+x_{33}l^2)}{4\sin(2\theta)(\sin\theta/\lambda)^2}\right]^{-\frac{1}{2}} \end{equation} The calculated values have been compared with the observed ones, which can easily be deduced from the intensity ratios at $H$=0 T and $H$=0.25 T. The results are listed in Tab.~\ref{tab:magext}. 
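For orientation, the extinction factor above can be evaluated directly from the quantities quoted in the text (a sketch in Python, not part of the FullProf refinement itself; the wavelength of 0.854~\AA{} and the refined parameters $x_{11}$=1.0, $x_{22}$=0.36, $x_{33}$=0.6 are taken from the nuclear refinement, the $\sin\theta/\lambda$ and $F_M$ values from the magnetic reflections, and the 0.001 prefactor and units follow the formula as printed):

```python
import math

# Refined anisotropic extinction parameters from the nuclear refinement
x11, x22, x33 = 1.0, 0.36, 0.6
lam = 0.854  # wavelength in Angstroem (D15 conditions)

def ext_factor(hkl, sintl, f_mag):
    """Anisotropic (FullProf-model) extinction factor y for a magnetic
    reflection hkl with structure factor f_mag (in 10^-12 cm)."""
    h, k, l = hkl
    theta = math.asin(lam * sintl)            # Bragg angle from sin(theta)/lambda
    aniso = x11 * h**2 + x22 * k**2 + x33 * l**2
    num = 0.001 * f_mag**2 * lam**3 * aniso
    den = 4.0 * math.sin(2.0 * theta) * sintl**2
    return (1.0 + num / den) ** -0.5

# The three low-angle magnetic reflections
for hkl, sintl, fm in [((0, 2, 1), 0.10595, 4.87),
                       ((0, 0, 2), 0.12064, 3.39),
                       ((0, 2, 3), 0.20083, 4.72)]:
    print(hkl, round(ext_factor(hkl, sintl, fm), 2))
```

The three values reproduce the calculated correction factors quoted for these reflections.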
It can be seen that the calculated extinction factors are broadly comparable with the observed ones. Nevertheless, the extinction of magnetic scattering seems to be underestimated. \begin{table}[htbp] \caption{\label{tab:magext}Observed and calculated extinction correction parameters for three low-angle magnetic reflections.} \begin{ruledtabular} \begin{tabular}{ccccc} (hkl) & $\sin\theta/\lambda$ (\AA$^{-1}$)& $F_{M,obs}$ (10$^{-12}$ cm) & $y_{obs}$ & $y_{cal}$\\ \hline (021) & 0.10595 & 4.87(1) & 0.39(7) & 0.46(3) \\ (002) & 0.12064 & 3.39(1) & 0.47(6) & 0.64(3) \\ (023) & 0.20083 & 4.72(1) & 0.47(6) & 0.60(3) \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Real space magnetization density} The nuclear structure factors, which have been deduced from the unpolarized neutron experiments, have been used to derive the magnetic structure factors from the observed flipping ratios by solving Eq.~\ref{eq:rfl} with respect to $F_M$. The individual observed and calculated flipping ratios are shown in Fig.~\ref{fig:rvssintl} as a function of $\sin \theta/\lambda$. \begin{figure} \includegraphics[width=3in]{fig4.eps} \caption{\label{fig:rvssintl} (Color online) Observed (squares) and calculated (dots) flipping ratios as a function of $\sin \theta/\lambda$.} \end{figure} As the crystal structure is centrosymmetric, the experimental magnetization density can be reconstructed directly by a Fourier synthesis. Fig.~\ref{fig:real}(b) shows the projection of the observed magnetization density onto the {\it b-c} plane, while Fig.~\ref{fig:real}(a) depicts the unit cell viewed along the {\it a} axis in order to assign the density peaks to the respective atoms. As anticipated in the previous section, the Co$_c$ moments do not become saturated; instead, significant magnetization density is present on the V and O sites. While the density is quite localized on the V, O1 and O2 sites, rather diffuse density can be observed around the O3 site. 
The split density peaks of O1 result from the fact that two O1 ions are actually visible in the projection. Similarly, the density around the Co$_s$ ions seems to be much higher than around the Co$_c$ ions, which is due to the fact that two Co$_s$ ions are contained in the projection, while only one Co$_c$ ion is projected. Besides the superexchange pathways Co$_s$-O2-Co$_c$ and Co$_s$-O3-Co$_c$, an interlayer exchange becomes evident from the non-zero magnetization density on V and O1. Further {\it ab initio} solid state computations are currently being performed to simulate the spin density map of Co$_3$V$_2$O$_8$ and to elucidate the composite mechanisms of the induced magnetic moments on the different O and V sites. This will be the subject of a forthcoming publication.\cite{zbi2008} \begin{figure} \hspace{0.35cm}\includegraphics[width=3.02in]{fig5.eps} \includegraphics[width=3.4in]{fig6.eps} \includegraphics[width=3.4in]{fig7.eps} \caption{\label{fig:real} (Color online) (a) Crystal structure viewed along the {\it a} axis. (b) Experimental and (c) calculated magnetization density as a projection onto the {\it b-c} plane. Contour lines defining positive values are drawn as solid lines in 0.05 $\mu_B$/\AA{}$^2$ intervals between 0 $\mu_B$/\AA{}$^2$ and 0.15 $\mu_B$/\AA{}$^2$ and in 0.4 $\mu_B$/\AA{}$^2$ intervals above. Negative isodensities are represented by broken lines in 0.1 $\mu_B$/\AA{}$^2$ steps.} \end{figure} \subsection{Momentum space density} The MCPs were extracted by taking the difference of the scattered intensities $I^+$ and $I^-$ of the respective charge Compton profiles. Due to the fact that the more intense charge Compton profiles still exhibit relatively large values at the outermost measured positions of $p_z=\pm 10$ a.u. (atomic units), the actual number of electrons between -10 a.u. and +10 a.u. 
is evaluated from the profiles interpolated using tabulated data for the elements resulting from Hartree-Fock calculations.\cite{big1975} Before summing up the magnetic intensity of each detector cell, the data have been corrected for the detector cell efficiency, sample absorption and scattering cross-section according to Ref.~\onlinecite{zuko}. Furthermore, the energy scale of each detector cell has been calibrated by measuring a radioactive sample with well known emission energies. \\ The experimental MCPs were folded at $p_z$=0 to increase statistical accuracy by taking the average of each branch. The area under each profile has been normalized to the number of magnetic electrons per formula unit. With the use of iron standards the induced ferromagnetic component can be deduced from the magnetic effect, which is the relative contribution of the MCP to the total Compton profile \begin{equation} M_0=\frac{I^+-I^-}{I^++I^-}\cdot 100\%. \end{equation} The magnetic effects of the respective directional MCPs are listed in Tab.~\ref{tab:mageff} with their corresponding ferromagnetic components induced parallel to the scattering vector. In the $p_y$-$p_z$ plane the spin moments monotonically decrease with increasing angle between the magnetic field direction and the $[001]$ direction, which directly reveals the magnetic anisotropy with {\it b} being the hard axis of this system.\cite{bal2004,wil2007/2} Effective beam path length dependent multiple scattering effects due to the rotation of the sample are estimated to be less than a few percent of the spin values given in Tab.~\ref{tab:mageff}. \begin{table}[htbp] \caption{\label{tab:mageff}Magnetic effects of the respective directional MCPs with a magnetic field $H$ applied along the scattering vector. 
$\vartheta$ denotes the angle between a MCP and the $[001]$ direction.} \begin{ruledtabular} \begin{tabular}{ccccc} MCP& $\vartheta$ ($^\circ$)&$H (T)$&$M_0$ (\%)& S ($\mu_B$)\\ \hline $[001]$ & 0 & 2 & 0.541 & 0.616 \\ $[023]$ & 17 & 2 & 0.488 & 0.556 \\ $[012]$ & 34.7 & 2 & 0.415 & 0.472 \\ $[011]$ & 54.1 & 2 & 0.274 & 0.312 \\ $[032]$ & 64.3 & 2 & 0.229 & 0.261 \\ $[010]$ & 90 & 2 & 0.095 & 0.108 \\ $[100]$ & 90 & 0.25 & 0.251 & 0.287 \\ \end{tabular} \end{ruledtabular} \end{table} Fig.~\ref{fig:mcp} shows the normalized observed MCPs, which reveal similar shapes for the seven investigated crystallographic directions. \begin{figure} \includegraphics[width=2.8in]{fig8.eps} \caption{\label{fig:mcp} (Color online) Observed (dots) and calculated (solid lines) normalized directional MCPs (shifted vertically in order to improve clarity, horizontal lines serve as a guide for the eye). The abscissa $p_z$ is taken to be parallel to the respective scattering vector. $\vartheta$ denotes the angle between a respective MCP and the [001] direction.} \end{figure} Using all profiles except the one along the [100] direction the two-dimensional momentum spin density in the $p_y$-$p_z$ plane has been reconstructed by the direct Fourier-transform method.\cite{suz1989,tan2001} The calculation has been performed on a grid with a distance of 0.1 a.u. between each point. The result is shown as a two-dimensional contour plot in Fig.~\ref{fig:mom}(a). \begin{figure} \includegraphics[width=3.2in]{fig9.eps} \includegraphics[width=3.2in]{fig10.eps} \caption{\label{fig:mom} (Color online) Reconstructed experimental (a) and calculated (b) spin momentum density in the $p_y$-$p_z$ plane. Contours are drawn in 0.025 $\mu_B$/(a.u.)$^3$ intervals. 
White solid lines depict the boundary of the first Brillouin zone.} \end{figure} Low spin density is found inside the first Brillouin zone (BZ); the density extends beyond the zone border along the $\langle$010$\rangle$ and $\langle$001$\rangle$ directions. In the vicinity of the first BZ border the density increases more rapidly with increasing momentum along $\langle$021$\rangle$. Peaks are present at $(p_y,p_z)$=$(0.35,1.85)$ and $(1.4,0.55)$. \subsection{Correlated refinement in both spaces} The idea behind correlating the density distributions in real and momentum space is that the population of each spin polarized orbital must be reflected consistently in the observed densities of both spaces. The fact that the MCS method samples only the spin part of the magnetic moment, while the PND method samples both the spin and the orbital part, has been handled in the following way. As the observed MCPs have been normalized, the area under each profile, which corresponds to the size of the spin moment, is not refined. The refined parameters were the populations of each spin polarized orbital (in correlation with the real space quantities); thus only the shape of each profile is refined. The refined magnetic moments stem solely from the PND data. In order to analyze the observed MCPs, theoretical ones have been calculated by projecting the square of the {\it ab initio} wf onto the respective scattering vector. Thereby the symmetry relations between the different cluster density distributions in the unit cell have to be taken into account, which yields two and four symmetrically inequivalent Co$_c$O$_6$ and Co$_s$O$_6$ clusters, respectively (Tab.~\ref{tab:clsym}). 
\begin{table}[htbp] \caption{\label{tab:clsym}Symmetry relations between the two and four inequivalent Co$_c$O$_6$ and Co$_s$O$_6$ clusters, respectively.} \begin{ruledtabular} \begin{tabular}{ccc} cluster& Co position & symmetry relation to c$_1$/s$_1$\\ \hline $c_1$& $(0,0,0)$ & $xyz$ \\ $c_2$ & $(0,\frac{1}{2},\frac{1}{2})$ & $xy\bar{z}$ \\ $s_1$ & $(\frac{1}{4},y,\frac{1}{4})$ & $xyz$ \\ $s_2$ & $(\frac{3}{4},y,\frac{1}{4})$ & $\bar{x}yz$ \\ $s_3$ & $(\frac{3}{4},\bar{y},\frac{3}{4})$ & $\bar{x}\bar{y}\bar{z}$ \\ $s_4$ & $(\frac{1}{4},\bar{y},\frac{3}{4})$ & $x\bar{y}\bar{z}$ \\ \end{tabular} \end{ruledtabular} \end{table} The point symmetries of the Co$_c$O$_6$ and Co$_s$O$_6$ clusters are $2/m..$ and $.2.$,\cite{tables} which correspond to $2/m..$ and $.2/m.$ in momentum space. Due to the special symmetry of the Co$_s$O$_6$ density, the projections of the different clusters in momentum space are invariant for the principal axes and for directions within the $p_y$-$p_z$ plane. In the case of the Co$_c$O$_6$ clusters, the projections onto non-principal axes in the $p_y$-$p_z$ plane yield different profiles, which need to be averaged. The projected orbitals have been convoluted with a Gaussian function having a full width at half maximum equal to the instrumental resolution. As reported previously,\cite{koi2001} the fact that the projection of each MO in momentum space has a characteristic shape makes it possible to refine its population $\beta_k$, i.e. its contribution to the observed MCP: \begin{equation} J_{mag}(p_z)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\sum_{k}\beta_k\chi^2_k(\mathbf p)dp_xdp_y \end{equation} Here, $\chi_k(\mathbf p)$ denotes a momentum space MO/wf. 
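Since the model profile is linear in the populations $\beta_k$, their refinement amounts to a linear least-squares problem once the single-orbital profiles $J_k(p_z)$ have been tabulated. A toy sketch of this step (the Gaussian stand-in profiles below are purely hypothetical, mimicking projected orbital densities of different momentum-space widths, not the actual {\it ab initio} wave functions):

```python
import numpy as np

rng = np.random.default_rng(0)
pz = np.linspace(-10, 10, 201)  # momentum grid in a.u.

# Hypothetical single-orbital profiles J_k(p_z): normalized Gaussians of
# different widths, standing in for the projected orbital densities.
widths = [1.0, 1.6, 2.4, 3.5]
basis = np.array([np.exp(-pz**2 / (2 * w**2)) / (w * np.sqrt(2 * np.pi))
                  for w in widths])

beta_true = np.array([0.27, 0.16, 0.30, 0.13])   # assumed populations
j_obs = beta_true @ basis                        # synthetic "observed" MCP
j_obs = j_obs + rng.normal(0.0, 1e-4, pz.size)   # counting noise

# Linear least-squares refinement of the populations
beta_fit, *_ = np.linalg.lstsq(basis.T, j_obs, rcond=None)
print(np.round(beta_fit, 3))
```

Because the single-orbital shapes differ, the populations are recovered despite their strong overlap; the actual refinement additionally couples these parameters to the real-space data, as described next.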
Similarly, the populations $\beta_k$ of the real space MOs can be used to deduce the magnetic form factors $f_X(\mathbf q)$ of the respective elements $X$ by calculating the Fourier transform of the atomic spin density: \begin{equation} f_X(\mathbf q)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\sum_{k}\beta_k\psi^2_{k,X}(\mathbf r)\exp(2\pi i\mathbf q \mathbf r)d\mathbf r, \end{equation} where $\psi_{k,X}$ defines the real space MO $k$ including only the atomic orbitals $\phi_{i,X}$ of element $X$=Co, O. With this procedure the observed flipping ratios can be refined based on a simple aspheric magnetic form factor model deduced from the {\it ab initio} wf. For the V ions the analytic approximation of the V$^{4+}$ form factor\cite{lis1971} has been used. Refining the population parameters for each MCP individually yields excellent agreement with the observed profiles. However, since the refinement process exhibits numerous local minima with significantly varying results, it has been considered more reasonable to include all MCPs in the refinement despite the magnetic anisotropy. The respective population parameters $\beta_k$ have been refined simultaneously in both spaces together with the magnetic moments of Co, V and O by minimizing the function \begin{align} \chi^2=&\frac{1}{2}\sum_{i}\frac{(R_{i,obs}-R_{i,cal})^2}{\sigma^2_{i,obs}}\notag \\ +&\frac{1}{2}\sum_{n}\sum_{j}\frac{[J_{n,obs}(p_{z,j})-J_{n,cal}(p_{z,j})]^2}{\sigma^2_{j,obs}} \end{align} with $i$ and $j$ defining discrete data points of the PND and MCS experiment, respectively, and $n$ referring to the respective MCPs. The refinement yields fairly good agreement, expressed by $R_{MCS}$=5.7 and $R_{PND}$=9.6 for the respective experiments. 
The refined total magnetic moments along the {\it a} axis are \begin{align} \mu(\text{Co}_c)&=1.54(4)~\mu_B\notag \\ \mu(\text{Co}_s)&=2.87(3)~\mu_B\notag \\ \mu(\text{V})&=0.41(4)~\mu_B\notag \\ \mu(\text{O}1)&=0.05(5)~\mu_B\notag \\ \mu(\text{O}2)&=0.35(5)~\mu_B\notag \\ \mu(\text{O}3)&=0.36(5)~\mu_B.\notag \end{align} Summing the magnetic moments of all ions in the unit cell weighted by their site multiplicity and dividing by the number of Co ions yields an averaged magnetization of 3.45 $\mu_B$/Co$^{2+}$. This value shows excellent agreement with the macroscopic magnetization for $H=2$ T along the {\it a} axis reported in Ref.~\onlinecite{wil2007/2}. The resulting relative orbital populations are listed in Tab.~\ref{tab:fitpars}. The refined parameters were used to calculate the MCPs, which are depicted as solid lines in Fig.~\ref{fig:mcp}. Fig.~\ref{fig:contr} shows that the line shapes of the two respective contributing parts (Co$_c$O$_6$ and Co$_s$O$_6$ MOs) are different concerning the ratio between the value at the peak and at $p_z$=0 and that they vary with the projection angle, which is important for a meaningful fit. \begin{figure} \includegraphics[width=3in]{fig11.eps} \caption{\label{fig:contr} (Color online) Observed (dots) and calculated (solid lines) MCPs along two different directions. The dotted and dashed lines show the contribution of the Co$_c$O$_6$ and the Co$_s$O$_6$ cluster, respectively.} \end{figure} \begin{table}[htbp] \caption{\label{tab:fitpars}Refined orbital occupation parameters of the Co$_c$O$_6$ and Co$_s$O$_6$ clusters.} \begin{ruledtabular} \begin{tabular}{ccc} orbital& Co$_c$ & Co$_s$\\ \hline $d_{xy}$& 0.27(2) & 0.12(2) \\ $d_{xz}$ & 0.27(2) & 0.12(2) \\ $d_{yz}$ & 0.16(2) & 0.26(2) \\ $d_{x^2-y^2}$ & 0.17(2) & 0.30(2) \\ $d_{3z^2-r^2}$ & 0.13(2)& 0.20(2) \\ \end{tabular} \end{ruledtabular} \end{table} From the calculated MCPs the momentum space spin density has been reconstructed [Fig.~\ref{fig:mom}(b)]. 
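As a consistency check, the site-weighted average of 3.45 $\mu_B$/Co$^{2+}$ quoted above can be reproduced from the refined moments (a sketch; the site multiplicities 4, 8, 8, 8, 8 and 16 per unit cell follow from the Wyckoff positions of space group Cmca used in the structure refinement):

```python
# Refined moments (mu_B) and site multiplicities per unit cell
moments = {
    "Co_c": (1.54, 4),   # cross-tie Co, 4a site
    "Co_s": (2.87, 8),   # spine Co, 8e site
    "V":    (0.41, 8),
    "O1":   (0.05, 8),
    "O2":   (0.35, 8),
    "O3":   (0.36, 16),
}

total = sum(mu * mult for mu, mult in moments.values())
n_co = moments["Co_c"][1] + moments["Co_s"][1]   # 12 Co ions per cell
avg = total / n_co
print(f"{avg:.2f} mu_B per Co ion")              # ~3.45, matching the text
```
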
The calculated real space magnetization density map [Fig.~\ref{fig:real}(c)] has been obtained by a Fourier synthesis of the calculated magnetic structure factors. The main features of the respective density maps coincide well, although some differences are evident: the dip in the momentum space density around $p_z$=0 is not well pronounced in the calculated map and has a shape that is rotated by 90$^\circ$ with respect to the observed map. This possibly results from strong hybridization effects between the Co3d and O2p orbitals. Furthermore, the disagreements between the experimental and calculated MCPs are most probably due to the limitations of the cluster calculations; considering a larger cluster or the full solid state could reduce the gap between experiment and fit. The real space spin density on the O1 and O3 sites is slightly underestimated. Furthermore, density peaks exist next to the spine site density along the {\it z} axis, which do not coincide with atomic positions. This, however, can be attributed to truncation effects in the Fourier series. \subsection{Discussion} The study presented here, combining several methods, reveals very interesting magnetic properties of the kagome staircase system Co$_3$V$_2$O$_8$. The previous assumption that the ferromagnetic structure at zero magnetic field is not fully ordered, because the Co$_c$ ions exhibit only 1.54 $\mu_B$, is disproved. Previous macroscopic magnetization measurements\cite{wil2007/2} indeed showed a saturated moment of approximately 3.4 $\mu_B$ per Co site at $H$=2 T along {\it a}, but the results of the polarized neutron single-crystal diffraction experiment with adequate extinction correction presented here reveal that the field-dependent increase of magnetization stems from the V, O2 and O3 sites. The V and O2 sites show quite localized magnetization density, while the O3 density seems to be smeared out due to truncation effects in the Fourier series. 
A periodic {\it ab initio} calculation confirms the existence of magnetization density on the V and O sites and will be presented elsewhere.\cite{zbi2008} The spin polarized density on O2 and O3, which are the oxygen ions of the Co$_c$O$_6$ clusters, may be a strong indication of a partially covalent character of the Co$_c$ ions and the reason for their relatively low magnetic moment compared to Co$_s$. The magnetization density distribution clearly exhibits the superexchange pathways between the two different Co sites, but it also indicates the interlayer coupling, which is mediated by the V-O1 bridge. Combining the methods of polarized neutron diffraction and magnetic Compton scattering allowed us to refine the occupations of the Co3d orbitals in a stable way. As previously reported, though with inverted values,\cite{fue1982} the two crystallographically different Co ions exhibit different spin polarized orbital occupations. While the unpaired electrons are equally distributed between the $t_{2g}$ and $e_g$ levels for the Co$_s$ ion, only 30\% of the magnetic signal stems from the $e_{g}$ orbitals for the Co$_c$ ion, as a consequence of the spin transfer from the surrounding O ions. Concerning the $e_g$ orbitals of both ions, the basal plane orbital $d_{x^2-y^2}$ is more populated than the apical $d_{3z^2-r^2}$ orbital. For the Co$_s$ ions this possibly indicates a stronger exchange interaction via an intermediate O2 ion; for the Co$_c$ ions it could be a hint that the magnetic exchange with the spine Co ions takes place preferentially via an O3 ion. \begin{acknowledgments} This research was supported by the {\it Deutsche Forschungsgemeinschaft} within the priority program 1178. The magnetic Compton scattering experiment was performed with the approval of the Japan Synchrotron Radiation Research Institute (JASRI) (Proposal No. 2007B2021). Helpful discussions with Dr. A. A. Granovsky and Prof. A. Koizumi are thankfully acknowledged. 
\end{acknowledgments}
\section{Introduction} \label{sect:intro} The stellar initial mass function (IMF), which characterizes the mass distribution of stars between $0.01~\ensuremath{M_{\odot}}$ and $>100~\ensuremath{M_{\odot}}$, has long been considered universal \citep[see, e.g., reviews by][]{bastian2010, kroupa2013}. The IMF, which is therefore qualified as canonical, is often represented by a lognormal function peaking at stellar masses around $0.2-0.3~\ensuremath{M_{\odot}}$, connected to a power-law tail, $\frac{{\rm d}N}{{\rm d}\log M} \propto M^{-1.35}$, that dominates for masses larger than $1~\ensuremath{M_{\odot}}$ \citep{chabrier2005}. Following the functional description of the IMF by \cite{salpeter1955} and \cite{scalo1986}, \cite{kroupa1993} proposed another representation based on a series of three broken power-laws. In this representation, which was later refined by \cite{kroupa2002}, the form of the IMF would follow $\frac{{\rm d}N}{{\rm d}\log M} \propto M^{0.7}$ in the range $0.01-0.08~\ensuremath{M_{\odot}}$, $\frac{{\rm d}N}{{\rm d}\log M} \propto M^{-0.3}$ in the range $0.08-0.5~\ensuremath{M_{\odot}}$, and $\frac{{\rm d}N}{{\rm d}\log M} \propto M^{-1.3}$ for $M>0.5~\ensuremath{M_{\odot}}$. The power-laws at the high-mass end of these two representations correspond, within the limits of observational uncertainties, to the description of \cite{salpeter1955}, $\frac{{\rm d}N}{{\rm d}\log M} \propto M^{-1.35}$, which becomes $N(>\log M)\propto M^{-1.35}$ in its complementary cumulative distribution form. The IMF universality, which has been postulated on the basis of studies of field stars and young stellar clusters in the solar vicinity (up to a few hundred parsecs), has recently been challenged in more extreme environments.
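The segmented representation above can be made concrete with a short numerical sketch. The slopes are those quoted in the text; the normalization and the upper mass cutoff of $150~\ensuremath{M_{\odot}}$ are illustrative assumptions, chosen only so that the segments join continuously.

```python
# Piecewise power-law IMF, dN/dlogM ∝ M^Gamma per segment, with the
# slopes quoted in the text. The normalization and the 150 Msun upper
# cutoff are illustrative assumptions, not fitted values.
SEGMENTS = [
    (0.01, 0.08, +0.7),   # substellar regime
    (0.08, 0.50, -0.3),   # low-mass stars
    (0.50, 150.0, -1.3),  # Salpeter-like high-mass tail
]

def dn_dlogm(m):
    """dN/dlogM at mass m (in solar masses), continuous across breaks."""
    amp = 1.0
    for lo, hi, gamma in SEGMENTS:
        if lo <= m <= hi:
            return amp * (m / lo) ** gamma
        # carry the amplitude across the break so segments connect
        amp *= (hi / lo) ** gamma
    raise ValueError("mass outside tabulated range")
```

Evaluated this way, the distribution rises through the substellar regime, flattens between $0.08$ and $0.5~\ensuremath{M_{\odot}}$, and declines with the Salpeter-like slope above $0.5~\ensuremath{M_{\odot}}$.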
Observations of young massive clusters in the Milky Way \citep{lu2013, maia2016, hosek2019}, in nearby galaxies \citep{schneider2018}, and of high-redshift galaxies \citep{smith2014, zhang2018} measured top-heavy IMFs with a large proportion of high-mass stars compared to low-mass stars (see review by \citealt{hopkins2018}). Conversely, bottom-heavy IMFs have been measured for metal-rich populations, indicating that the IMF may vary with metallicity \citep[e.g.,][]{marks2012, martin-navarro2015}. The physical processes at the origin of the IMF and the questions of whether and how the IMF is linked to its environment are still a matter of debate (see reviews by \citealt{offner2014, krumholz2015, ballesteros2020, lee2020}). Over the past two decades a plethora of studies of the core populations in nearby star-forming regions revealed that their mass distribution, called the core mass function (CMF), has a shape that resembles that of the IMF. This result has been consistently found through (sub)millimeter continuum observations with ground-based single-dish telescopes \citep[e.g.,][]{motte1998, motte2001, stanke2006, enoch2008} and interferometers \citep[e.g.,][]{testi-sargent1998}. It has been confirmed with deep, far-infrared to submillimeter images obtained by the \textit{Herschel} space observatory \citep[e.g.,][]{konyves2015, benedettini2018, massi2019, ladjelate2020} and a handful of near-infrared extinction maps and molecular line integrated images \citep{alves2007, onishi2001, takemura2021}. The astonishing similarity between the IMF and the observed CMFs, all of which are consistent with each other, suggests that the IMF may inherit its shape from the CMF \citep[e.g.,][]{motte1998, andre2014}. The IMF would arise from a global shift of the CMF by introducing, for individual cores, a conversion efficiency of core mass into star mass, also called star formation efficiency ($\epsilon_{\rm core}$). 
CMF studies in low-mass star-forming regions suggest a broad range of mass conversion efficiencies, from $\epsilon_{\rm core}\sim 15\%$ \citep{onishi2001} to $\epsilon_{\rm core}\sim 30-40\%$ \citep{alves2007, konyves2015, pezzuto2021} or even $\epsilon_{\rm core}\sim 100\%$ \citep{motte1998, benedettini2018}. These differences could simply be related to the spatial resolution of the observations, which defines cores as peaked cloud structures with full width at half maximum (FWHM) sizes $1-3$ times the resolution element \citep{reid2010, louvet2021simu, tatematsu2021}. Cores identified in low-mass star-forming regions generally have sizes of $1\,000-20\,000$~au ($0.005-0.1$~pc) and masses of $0.01-10~\ensuremath{M_{\odot}}$. We here adapt the terminology of \cite{motte2018a} to gas structures in massive protoclusters and assume that clumps have sizes of $\sim$0.1~pc (or 20\,000~au), cores of $\sim$0.01~pc (or 2\,000~au), and fragments of $\sim$500~au. In contrast with the vast majority of published CMF studies, \cite{motte2018b} and \cite{kong2019} revealed that the CMF of two high-mass star-forming clouds, W43-MM1 and G28.37+0.07, presented an excess of high-mass cores, challenging the classical interpretation of the IMF origin. Combined CMFs, each built from a dozen to several dozen massive clumps, are also top-heavy \citep{csengeri2017b, liu2018, sanhueza2019, lu2020, sadaghiani2020, oneill2021}. However, these CMF measurements are most likely biased by mass segregation because clumps, which were observed with single pointings \citep[except for][]{sanhueza2019}, are overpopulated with massive cores that cluster at their centers \citep{kirk2016, plunkett2018, dibHenning2019, nony2021}. Systematic studies of massive protoclusters imaged at submillimeter wavelengths over their full extent, possibly a few square parsecs, are necessary to determine whether they generally display a canonical or top-heavy CMF. 
Although it is obvious that the star mass originates from the gas mass in molecular clouds, the gas reservoir used to form a star is difficult to define from observations. Most CMF studies are based on the concept of cores in the framework of the core-collapse model \citep{shu1987, andre2014}. Cores would be the quasi-static mass reservoirs for the self-similar collapse of protostars that will form a single star or, at most, a small stellar system originating from disk fragmentation. From recent studies \citep[e.g.,][]{csengeri2011, olguin2021, sanhueza2021}, it has become obvious, however, that cores are dynamical entities that are not isolated from their surroundings. In the framework of competitive accretion, hierarchical global collapse, or coalescence-collapse scenarios, cores generally acquire most of their mass during the protostellar collapse \citep[e.g.,][]{bonnell2006, leeHennebelle2018a, vazquez2019, pelkonen2021}. Despite the ill-defined concept of a core, constraining the CMF shape is crucial to show its universality or lack thereof. In particular, the CMFs of high-mass star-forming regions need to be constrained to investigate whether they follow the shape found in nearby, low-mass star-forming clouds \citep[e.g.,][]{konyves2015, ladjelate2020, pezzuto2021} or whether they are, at least in some cases, top-heavy. We here take the CMF as a metric useful for comparing the distribution of small-scale structures, the cores, across different clouds, and we discuss the potential consequences of its shape on that of the IMF. Predicting the IMF from an observed CMF requires, among other things, a precise knowledge of the turbulent core subfragmentation, also called core multiplicity. The fragmentation of cores of size $\sim$2\,000~au into fragments of a few hundred astronomical units, however, remains a very young area of research. This is even more the case for the disk fragmentation process, which is expected to take over at scales smaller than $\sim$100~au.
As a consequence, only a handful of studies investigated the effect of core multiplicity on the IMF, and they were only based on stellar multiplicity prescriptions \citep{swift2008, hatchellFuller2008, alcockParker2019, clarkWhitworth2021}. The authors used a wide range of core mass distributions between subfragments, also called mass partitions, varying from equipartition to a strong imbalance. The history of star formation can also significantly complicate the potentially direct relationship between the CMF and the IMF. The CMF represents a $\sim$10$^5$~yr snapshot, only valid for the cores involved in one star formation event, which lasts for one to two clump free-fall times \citep{motte2018a}. In contrast, the IMF results from the sum, over $\sim$10$^6$~yr in young star clusters to $10^9-10^{11}$~yr in galaxies \citep{heiderman2010, krumholz2015}, of the stars formed by many, $10-10^6$, star formation events. The ALMA-IMF\footnote{ ALMA project \#2017.1.01355.L, see \url{http://www.almaimf.com}.} Large Program (PIs: Motte, Ginsburg, Louvet, Sanhueza) is a survey of 15 nearby Galactic protoclusters that aims to obtain statistically meaningful results on the origin of the IMF \citep[see companion papers, Paper~I and Paper~II,][]{motte2021, ginsburg2021}. The W43-MM2 cloud is the second most massive young protocluster of ALMA-IMF \citep[$\sim$$1.2\times10^4~\ensuremath{M_{\odot}}$ over 6~pc$^2$,][]{motte2021}. With its less massive neighbor, W43-MM3, also imaged by ALMA-IMF, W43-MM2 constitutes the W43-MM2\&MM3 ridge, which has a total mass of $\sim$3.5$\,\times10^4~\ensuremath{M_{\odot}}$ \citep{nguyen2013} over a $\sim$14~pc$^2$ area. Located at $5.5$~kpc from the Sun \citep{zhangB2014}, the W43-MM2\&MM3 ridge is part of the exceptional W43 molecular cloud, which is at the junction of the Scutum-Centaurus spiral arm and the Galactic bar \citep{nguyen2011a, motte2014}. 
As expected from the high-density filamentary parsec-size structures that we call ridges \citep[see][]{hill2011, hennemann2012, motte2018a}, W43-MM2\&MM3 hosts a rich protocluster efficiently forming high-mass stars, thus qualifying as a mini-starburst \citep{nguyen2011b,motte2021}. In the W43-MM1 ridge, which is located 10~pc north of W43-MM2\&MM3, a mini-starburst protocluster has also been observed \citep{louvet2014, motte2018b, nony2020}. The W43-MM1 and W43-MM2\&MM3 clouds could therefore be the equivalent progenitors of the Wolf-Rayet and OB-star cluster \citep{blum1999,bik2005} located between these two ridges and powering a giant H\mbox{\sc ~ii} region. Despite the presence of gas heated by this giant H\mbox{\sc ~ii} region, the W43-MM2\&MM3 ridge mainly consists of cold gas (21-28~K, see Fig.~2 of \citealt{nguyen2013}). In Paper~I \citep{motte2021} W43-MM1 and W43-MM2 are qualified as young protoclusters, while the W43-MM3 cloud represents a more evolved stage, referred to as intermediate. From the ALMA observations presented in Sect.~\ref{sect:obs and DR}, we set up a new extraction strategy that results in a census of 205 cores in the W43-MM2\&MM3 ridge (see Sect.~\ref{sect:extraction of compact sources}). The thermal dust emission of cores is carefully assessed and their masses are estimated (see Sect.~\ref{sect:core nature mass estim}). In Sect.~\ref{sect:cmf results}, we present the top-heavy CMF found for the W43-MM2\&MM3 protocluster and discuss its robustness. In Sect.~\ref{sect:discussion on the origin of stellar masses}, we then predict the core fragmentation mass function and IMF resulting from various mass conversion efficiencies and core fragmentation scenarios. We summarize the paper and present our conclusions in Sect.~\ref{sect:conclusions}.
{\renewcommand{\arraystretch}{1.5}% \begin{table*}[ht] \centering \begin{threeparttable}[c] \caption{Observational data summary for the W43-MM2 and W43-MM3 12~m array images and their combination.} \label{tab:observation table} \begin{tabular}{cccccccc} \hline\hline \multirow{2}{*}{ALMA band} & \multirow{2}{*}{Field} & \multirow{2}{*}{Mosaic size} & \multirow{2}{*}{$\Theta_{\rm maj}\times\Theta_{\rm min}$} & \multirow{2}{*}{BPA} & Continuum & Original & Denoised \\ & & & & & bandwidth & RMS & RMS \\ & & [$\arcsec\times\arcsec$] & [$\arcsec\times\arcsec$] & [$\degree$] & [GHz] & [mJy$\,$beam$^{-1}$] &[mJy$\,$beam$^{-1}$] \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline & \multirow{2}{*}{W43-MM2} & \multirow{2}{*}{$92\times97$} & \multirow{2}{*}{$0.52\times0.41$} & \multirow{2}{*}{106} & 1.655 (\texttt{cleanest}\xspace) & 0.175 & -- \\ & & & & & 3.448 (\texttt{bsens}\xspace) & 0.132 & -- \\ \cline{2-8} $1.3~$mm & \multirow{2}{*}{W43-MM3} & \multirow{2}{*}{$92\times97$} & \multirow{2}{*}{$0.51\times0.43$} & \multirow{2}{*}{ 89} & 3.172 (\texttt{cleanest}\xspace) & 0.101 & -- \\ $228.4~$GHz & & & & & 3.448 (\texttt{bsens}\xspace) & 0.093 & -- \\ \cline{2-8} & \multirow{2}{*}{W43-MM2\&MM3} & \multirow{2}{*}{$158\times120$} & \multirow{2}{*}{$0.51\times0.42$} & \multirow{2}{*}{98} & $-$ (\texttt{cleanest}\xspace) & $\sim$0.15 & -- \\ & & & & & 3.448 (\texttt{bsens}\xspace) & $\sim$0.11 & $\sim$ 0.08 \\ \hline\hline & \multirow{2}{*}{W43-MM2} & \multirow{2}{*}{$202\times180$} & \multirow{2}{*}{$0.30\times0.24$} & \multirow{2}{*}{107} & 1.569 (\texttt{cleanest}\xspace) & 0.041 & -- \\ & & & & & 2.906 (\texttt{bsens}\xspace) & 0.026 & -- \\ \cline{2-8} $3.0~$mm & \multirow{2}{*}{W43-MM3} & \multirow{2}{*}{$202\times180$} & \multirow{2}{*}{$0.42\times0.28$} & \multirow{2}{*}{94} & 2.528 (\texttt{cleanest}\xspace) & 0.045 & -- \\ $99.66~$GHz & & & & & 2.906 (\texttt{bsens}\xspace) & 0.031 & -- \\ \cline{2-8} & \multirow{2}{*}{W43-MM2\&MM3} & 
\multirow{2}{*}{$275\times202$} & \multirow{2}{*}{$0.46\times0.46$} & \multirow{2}{*}{101} & $-$ (\texttt{cleanest}\xspace) & $\sim$0.048 & -- \\ & & & & & 2.906 (\texttt{bsens}\xspace) & $\sim$ 0.028 & $\sim$ 0.021 \\ \hline \end{tabular} \begin{tablenotes}[flushleft] \item (4) Major and minor sizes of the beam at half maximum. $\Theta_{\rm beam}$ is the geometrical average of these two quantities. \item (5) Position angle of the beam, measured counterclockwise from north to east. \item (6) Spectral bandwidth used to estimate the continuum emission level, with the name of the associated image in parentheses (see their definition in Sect.~\ref{sect:obs and DR}). \item (7) Noise level measured in the original map units and thus with different beam sizes (see Col.~4). \item (8) Noise level measured in the \textsl{MnGSeg} \texttt{denoised}\xspace images (see Sect.~\ref{sect:extraction of compact sources} and \citealt{robitaille2019}). \end{tablenotes} \end{threeparttable} \end{table*}} \section{Observations and data reduction} \label{sect:obs and DR} Observations were carried out between December 2017 and December 2018 as part of the ALMA Large Program named ALMA-IMF (project \#2017.1.01355.L, see \citealt{motte2021}). The 12~m and 7~m ALMA arrays were used at both 1.3~mm and 3~mm (central frequencies $\nu_{\rm c} \simeq 228.4$~GHz in band~6 and $\simeq 99.66$~GHz in band~3, see \cref{tab:observation table}). The W43-MM2 and W43-MM3 fields have the same extent and were imaged by the ALMA 12~m and 7~m arrays with mosaics composed of 27 (respectively 11) pointings at 1.3~mm and 11 (respectively 3) pointings at 3~mm. For the 12~m array images, the maximum recoverable scales are $\sim$5.6~$\arcsec$ at 1.3~mm and $\sim$8.1~$\arcsec$ at 3~mm \citep{motte2021}, corresponding to $0.15-0.2$~pc at 5.5~kpc. At 1.3~mm and 3~mm, eight (respectively four) spectral windows were selected for the ALMA-IMF setup; they sum up to bandwidths of 3.7~GHz and 2.9~GHz, respectively.
\cref{tab:observation table} summarizes the basic information of 12~m array observations for each field and each continuum waveband. A more complete description of the W43-MM2 and W43-MM3 data sets can be found in Paper~I \citep{motte2021} and Paper~II \citep{ginsburg2021}. The present W43-MM2 and W43-MM3 data sets were downloaded from the ALMA archive before they were corrected for system temperature and spectral data normalisation\footnote{ ALMA ticket: \url{https://help.almascience.org/kb/articles/607}, \url{https://almascience.nao.ac.jp/news/amplitude-calibration-issue-affecting-some-alma-data}}. This, however, has no significant impact on the continuum data, as shown in Section~2 of Paper~II \citep{ginsburg2021}. The data were first calibrated using the default calibration pipelines of the CASA\footnote{ ALMA Pipeline Team, 2017, ALMA Science Pipeline User’s Guide, ALMA Doc 6.13. See \url{https://almascience.nrao.edu/processing/science-pipeline}.} software. We then used an automatic CASA~5.4 pipeline script\footnote{ \url{https://github.com/ALMA-IMF/reduction}} developed by the ALMA-IMF consortium and fully described in Paper~II \citep{ginsburg2021} to produce self-calibrated images. In short, this pipeline performs several iterations of phase self-calibration, using custom masks to better define the self-calibration model and clean more deeply with the TCLEAN task and refined parameters after each pass. This process measurably reduces interferometric artifacts and lowers the noise level by 12-20\% at 1.3~mm and 8-12\% at 3~mm for the 12~m array images of W43-MM2 and W43-MM3, respectively. The data we used for this analysis differ from those presented in Paper~I and Paper~II \citep{motte2021,ginsburg2021}, which come from an updated version of the pipeline using, among other things, CASA~5.7 instead of CASA~5.4 and an updated version of the ALMA data products.
We compared the images presented here to those in Paper~I and Paper~II \citep{motte2021,ginsburg2021} and found that the flux differed by $<$5\% for all continuum peaks. The difference is largely accounted for by small differences ($<$5\%) in beam area, which arise from changes in the baseline weighting during the processing that corrected for system temperature and spectral data normalisation. Greater differences were observed in the extended emission, but this has no impact on our analysis since, as described in Sect.~\ref{sect:extraction of compact sources}, the extended emission is filtered out when source identification is performed. We used the \texttt{multiscale} option of the TCLEAN task to minimize interferometric artifacts associated with missing short spacings. With the \texttt{multiscale} parameters of 0, 3, 9, 27 pixels (up to 81 at 3~mm) and with $4-5$ pixels per beam, it independently cleaned structures with characteristic sizes from the geometrical average of the beam size, $\Theta_{\rm beam}\simeq$0.46$\arcsec$, to $6$ and $17$ times this value, which means $\sim$2.7$\arcsec$ at 1.3~mm and up to $\sim$8$\arcsec$ at 3~mm, respectively. The combined 12~m$\,+\,$7~m images have a noise level higher by a factor of $\sim$3.4\footnote{ The higher noise level of the combined ALMA 12~m $+$ ACA 7~m images is due to a) the higher noise level of the 7~m data, b) the structural noise resulting from larger-scale emission, and c) the lower efficiency of the self-calibration process when applied to 7~m data.} and will thus not be used in this work. \begin{figure*}[hbtp!] \centering \vskip -0.5cm \begin{minipage}{1.\textwidth} \centering \includegraphics[width=.94\textwidth]{MM2MM3+cores.png} \end{minipage} \begin{minipage}{1.\textwidth} \hspace{14pt} \includegraphics[width=.85\textwidth]{W43-MM2_3.jpeg} \end{minipage}% \caption{W43-MM2\&MM3 protocluster cloud. 
Panel \textsl{(a)}: 1.3~mm image obtained by the ALMA 12~m array (best-sensitivity image, prior to primary-beam correction). W43-MM2 is to the west and W43-MM3 is to the east. White ellipses outline the FWHM size of compact cores extracted by \textsl{getsf}. Panel \textsl{(b)}: Three-color ALMA image. Red and green display the \texttt{bsens}\xspace continuum images at 1.3~mm and 3~mm, respectively, scaled by the theoretical ratio of thermal dust emission (see Eq.~\ref{eq:theo thermal ratio}). Blue corresponds to the free-free continuum emission image at the frequency of the H41$\alpha$ recombination line (Galv\'an-Madrid et al. in prep.). Filaments and cores appear in orange (red $+$ green), tracing thermal dust emission; the UCH\mbox{\sc ~ii} region appears in blue or cyan (blue $+$ green), indicating free-free emission. Ellipses in the lower left corners represent the angular resolution of the \texttt{bsens}\xspace 1.3~mm image and scale bars indicate the size in physical units.} \label{fig:1.3mm and trichrone} \end{figure*} The ALMA-IMF pipeline produces two different estimates of the continuum images \citep[see][]{ginsburg2021}. The first, called the \texttt{cleanest}\xspace image, was produced using the \texttt{findContinuum} routine of CASA, which excludes, before the TCLEAN task, the channels associated with lines to estimate the continuum level. The \texttt{cleanest}\xspace image is thus a continuum image free of line contamination. In the case of the ALMA-IMF data of W43-MM2 and W43-MM3, the bandwidths of the \texttt{cleanest}\xspace images are, respectively, a fraction of $\sim$50\% and $\sim$90\% of the total bandwidths at 1.3~mm and 3~mm (see \cref{tab:observation table} and Fig.~3 of \citealt{ginsburg2021}). The second continuum image produced by the ALMA-IMF pipeline uses all channels of all the spectral bands to estimate the continuum at 1.3~mm and 3~mm.
With a $\sim$30\% decrease in the rms noise level, it corresponds to the best-sensitivity image and is thus called the \texttt{bsens}\xspace image (see \cref{tab:observation table}). The W43-MM2 and W43-MM3 ALMA fields share a common area in both bands: $\sim$10$\arcsec \times 90\arcsec$ at 1.3~mm and $\sim$100$\arcsec \times 180\arcsec$ at 3~mm within their respective primary-beam responses down to 15\%. We combined the individually cleaned images in the image plane because CASA~5.4 cannot clean two fields with two different phase centers using the \texttt{multiscale} option. Although we requested the same angular resolution for both 1.3~mm and 3~mm mosaics, the latter were observed at a much higher resolution (see \cref{tab:observation table}). We thus smoothed the W43-MM2 and W43-MM3 \texttt{cleanest}\xspace and \texttt{bsens}\xspace images at 3~mm to the angular resolution of the 1.3~mm images, $\sim$0.46$\arcsec$, or 2\,500~au at the 5.5~kpc distance of W43. Because the beam orientations are similar (see \cref{tab:observation table}), we assumed that the medians of the W43-MM2 and W43-MM3 parallactic angles are good approximations for the beams of the combined images. We then used the primary-beam shape of each individual mosaic to weight\footnote{ The combined primary-beam corrected image, $I_{\rm MM2+MM3}^{\rm PBcor}$, is the sum of individual primary-beam corrected images, $I_{\rm MM2}^{\rm PBcor}$ and $I_{\rm MM3}^{\rm PBcor}$, weighted by their combined primary-beam maps, PB$_{\rm MM2}$ and PB$_{\rm MM3}$, following the equation \begin{equation*} I_{\rm MM2+MM3}^{\rm PBcor} = \frac{ I_{\rm MM2}^{\rm PBcor} \times ({\rm PB_{\rm MM2}})^2 + I_{\rm MM3}^{\rm PBcor} \times ({\rm PB_{\rm MM3}})^2 } {({\rm PB_{\rm MM2}})^2 + ({\rm PB_{\rm MM3}})^2}. \end{equation*}} the flux of pixels in the common area and define the combined primary-beam corrected image.
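The pixel weighting in the footnote equation can be sketched in a few lines of numpy; this is a minimal illustration, where the array names and the convention that a mosaic's primary-beam map is zero outside its coverage are assumptions:

```python
import numpy as np

def combine_pbcor(img_mm2, img_mm3, pb_mm2, pb_mm3):
    """Combine two primary-beam-corrected mosaics in the image plane,
    weighting each pixel by the square of its primary-beam response,
    as in the footnote equation. Inputs are 2D arrays on a common grid;
    a mosaic's PB map is assumed to be zero outside its coverage."""
    w2 = pb_mm2 ** 2
    w3 = pb_mm3 ** 2
    wsum = w2 + w3
    out = np.full_like(img_mm2, np.nan, dtype=float)  # NaN where no coverage
    ok = wsum > 0
    out[ok] = (img_mm2[ok] * w2[ok] + img_mm3[ok] * w3[ok]) / wsum[ok]
    return out
```

With equal primary-beam responses the combination reduces to a plain average, while pixels covered by only one mosaic keep that mosaic's flux unchanged.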
This approach is valid because the noise level, when measured in the common area of maps with the same beam and uncorrected by the primary beam, is similar to within 20\% between maps, which is smaller than the 35\% difference measured on the whole map (see \cref{tab:observation table}). Figures~\ref{fig:1.3mm and trichrone}a and \ref{appendixfig:3mm image with cores} present the W43-MM2\&MM3 ridge, covered by the combined image of the W43-MM2 and W43-MM3 protoclusters observed by ALMA-IMF. They display the 12~m array \texttt{bsens}\xspace image at 1.3~mm and 3~mm, respectively. Figure~\ref{fig:1.3mm and trichrone}b presents a three-color image, which separates the thermal dust emission of star-forming filaments from the free-free emission associated with H\mbox{\sc ~ii} regions, as done in Paper~I \citep{motte2021}. It uses ALMA-IMF images of the 1.3~mm and 3~mm continuum and of the H41$\alpha$ recombination line, tracing the free-free continuum emission of ionized gas (see Sect.~\ref{sect:obs and DR} and \citealt{motte2021}). Several filaments cross the image and the W43-MM2 cloud displays a centrally concentrated structure reminiscent of hubs \citep[e.g.,][]{myers2009, peretto2013, didelon2015}. In single-dish studies, W43-MM2 has a $2.4\times 10^4~L_\odot$ bolometric luminosity, integrated over 0.23~pc, and coincides with a 6.67~GHz methanol maser \citep{walsh1998, motte2003}. The W43-MM3 clump, itself characterized by \cite{elia2021}, has a 0.24~pc size and $5.7\times 10^4~L_\odot$ bolometric luminosity. In \cref{fig:1.3mm and trichrone}b, it harbors an ultra-compact H\mbox{\sc ~ii} (UCH{\sc ~ii}) region, whose bubble forms a ring-like structure. Its $\sim$0.12~pc diameter, or $\sim$4.8$\arcsec$ at 5.5~kpc, is in good agreement with its size estimated from single-dish millimeter continuum \citep{motte2003}. 
Many compact sources are found along the dust emission of filaments of the W43-MM2\&MM3 ridge, suggesting that they could be dense cloud fragments such as cores. \section{Extraction of compact sources} \label{sect:extraction of compact sources} Since our goal is to extract cores from their surrounding cloud, we need to use software packages that identify and characterize cores as emission peaks, whose size is limited by their structured background and neighboring cores. Many source extraction algorithms have been used in star formation studies \citep[see][]{joncour2020,men2021getsf}. Here we use two completely independent methods, \textsl{getsf} and \textsl{GExt2D}. The \textsl{getsf}\footnote{ \url{https://irfu.cea.fr/Pisp/alexander.menshchikov/}} method \citep{men2021getsf} employs a spatial decomposition of the observed images to better isolate various spatial scales and separate the structural components of relatively round sources and elongated filaments from each other and from the background. The new method has many common features with its predecessors \textsl{getsources}, \textsl{getfilaments}, and \textsl{getimages} \citep{men2012multi, men2013getfilament, men2017getimages}. It has a single free parameter, the maximum size of the sources to be extracted. The detection provides a first-order estimate of the source footprints, sizes, and fluxes. As a second step, robust measurements of the sizes and fluxes of sources are done on background-subtracted images computed at each wavelength and, possibly, on other auxiliary images. The resulting catalog contains the size and fluxes of each source for each image. \textsl{GExt2D} (Bontemps et al. in prep.), like the \textsl{CuTeX} algorithm \citep{molinari2017}, uses second derivatives to identify the local maxima of the spatial curvature, which are then interpreted as the central positions of compact sources. 
The outskirts of each source are then determined, at each wavelength independently, from the inflexion points that are observed as the emission decreases away from the source peak. For each wavelength, the background under each source is evaluated by interpolating the emission along the source outskirts. Then, for all identified compact sources, their sizes and fluxes are measured by fitting Gaussians to their positions in the emission maps from which the associated background has been subtracted. Both algorithms allow multiple input images and separate the source detection step (see Sect.~\ref{sect:source detection}) from the step that characterizes the sources in terms of size and flux measurements (see Sect.~\ref{sect:source characterization}). \subsection{Source detection}\label{sect:source detection} With the objective of building the most complete and most robust core catalog in the W43-MM2\&MM3 protocluster cloud, the core positions and footprints should be defined in the detection image that provides the optimum image sensitivity. This corresponds to the \texttt{bsens}\xspace image at 1.3~mm (see Sect.~\ref{sect:obs and DR}). To further improve the sensitivity of the image chosen to detect cores, we removed the noise associated with cloud structures, which are incoherent from one scale to another. To do this we used the Multi-resolution non-Gaussian Segmentation software (\textsl{MnGSeg}), which separates the incoherent structures, referred to as Gaussian, of a cloud from the coherent structures associated with star formation \citep[][see also \cref{appendixsect:mngseg}]{robitaille2019}. The removed Gaussian component corresponds to structural noise associated with the small-scale structures of cirrus that lie along the line of sight to the W43-MM2 and W43-MM3 protoclusters.
In detail, the \texttt{denoised}\xspace image chosen for source extraction no longer contains incoherent components at scales larger than the beam size; it therefore consists of the sum of all the coherent cloud structures associated with star formation plus the white instrumental noise, which is a flux component needed to quantify the signal-to-noise ratio of extracted cores. We hereafter call \texttt{denoised}\xspace \& \texttt{bsens}\xspace and \texttt{denoised}\xspace \& \texttt{cleanest}\xspace the images passed through \textsl{MnGSeg}, since their noise level is reduced. As shown in \cref{appendixsect:mngseg}, images denoised by \textsl{MnGSeg} are indeed more sensitive and do not introduce spurious sources, meaning sources that are not part of the synthetic core population. In the case of the combined ALMA images of W43-MM2 and W43-MM3, the noise level decreased by about $\sim$30\% at both 1.3~mm and 3~mm wavelengths (see \cref{tab:observation table}), thus allowing the $5\,\sigma$ detection of point-like cores with masses of $\sim$0.20~$\ensuremath{M_{\odot}}$ (see Eq.~\ref{eq:optically thin mass} and adopted assumptions).
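The conversion from flux density to mass behind this sensitivity estimate is the standard optically thin dust emission relation, $M = S_\nu\,d^2 / (\kappa_\nu\,B_\nu(T_{\rm dust}))$. A minimal Python sketch follows; the opacity $\kappa_{\rm 1.3mm}=0.01$~cm$^2\,$g$^{-1}$ (per gram of gas) and $T_{\rm dust}=20$~K used below are illustrative assumptions, not necessarily the values adopted with Eq.~\ref{eq:optically thin mass}.

```python
import math

# Physical constants (CGS units)
H = 6.626e-27    # Planck constant [erg s]
C = 2.998e10     # speed of light [cm s^-1]
KB = 1.381e-16   # Boltzmann constant [erg K^-1]
MSUN = 1.989e33  # solar mass [g]
PC = 3.086e18    # parsec [cm]

def planck(nu_hz, t_k):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (KB * t_k)
    return 2.0 * H * nu_hz ** 3 / C ** 2 / math.expm1(x)

def dust_mass_msun(s_jy, d_pc, nu_hz, t_k, kappa_cm2_g):
    """Optically thin gas+dust mass, M = S_nu d^2 / (kappa_nu B_nu(T)).
    kappa is per gram of gas (gas-to-dust ratio folded in); the adopted
    kappa and T are assumptions for illustration."""
    s_cgs = s_jy * 1e-23  # Jy -> erg s^-1 cm^-2 Hz^-1
    d_cm = d_pc * PC
    return s_cgs * d_cm ** 2 / (kappa_cm2_g * planck(nu_hz, t_k)) / MSUN
```

With these assumptions, a $5\,\sigma$ point-source flux of $\sim$0.4~mJy at 228.4~GHz and $d=5.5$~kpc indeed yields a mass of order $0.2~\ensuremath{M_{\odot}}$, consistent with the sensitivity quoted above.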
{\renewcommand{\arraystretch}{1.5}% \begin{table*}[ht] \centering \begin{threeparttable}[c] \caption{Number of sources extracted by \textsl{getsf} in the W43-MM2\&MM3 protocluster, using different detection images (all 12~m array 1.3~mm uncorrected by the primary beam) and various measurement images (all 12~m array 1.3~mm and 3~mm primary-beam-corrected).} \label{tab:sensivity stat} \begin{tabular}{l|c|cc|cc} \hline\hline Detection image & \texttt{cleanest}\xspace & \multicolumn{2}{c|}{\texttt{bsens}\xspace} & \multicolumn{2}{c}{\texttt{denoised}\xspace \& \texttt{bsens}\xspace}\\ Measurement images & \texttt{cleanest}\xspace & \texttt{cleanest}\xspace & \texttt{bsens}\xspace & \texttt{denoised}\xspace \& \texttt{cleanest}\xspace & \texttt{denoised}\xspace \& \texttt{bsens}\xspace\\ \hline Number of sources, & & & & & \\ with robust 1.3~mm measurements\tnote{*} & 75 & 100 & 120 & 158 & 208 \\ with measurable 3~mm fluxes\tnote{$\dagger$} & 46 & 63 & 93 & 86 & 121 \\ \hline \end{tabular} \begin{tablenotes} \item[*] They are 1.3~mm sources that pass the recommended filtering of \textsl{getsf}: monochromatic goodness and significance above 1 in the detection image, small ellipticity, $a_{\rm 1.3mm}/b_{\rm 1.3mm}\leq 2$, and robust flux measurements at 1.3~mm, $S^{\rm peak}_{\rm 1.3mm} \geq 2 \sigma^{\rm peak}_{\rm 1.3mm}$, and $S^{\rm int}_{\rm 1.3mm} \geq 2 \sigma^{\rm int}_{\rm 1.3mm}$ in the measurement image. We also imposed a small average diameter, $\sqrt{a_{\rm 1.3mm} \times b_{\rm 1.3mm}}\leq 4\times \Theta_{\rm beam}$. \item[$\dagger$] The 3~mm fluxes of sources robustly detected at 1.3~mm are considered measurable when they correspond to small and low-ellipticity sources, $\sqrt{a_{\rm 3mm} \times b_{\rm 3mm}}\leq 4\times \Theta_{\rm beam}$ and $a_{\rm 3mm}/b_{\rm 3mm}\leq 2$, detected above $1\,\sigma_{\rm 3mm}$, $S^{\rm peak}_{\rm 3mm} > \sigma^{\rm peak}_{\rm 3mm}$, and $S^{\rm int}_{\rm 3mm} > \sigma^{\rm int}_{\rm 3mm}$. 
\end{tablenotes} \end{threeparttable} \end{table*}} Hereafter the master source catalogs will be those from the extraction performed with \textsl{getsf} (v210414), using the listed input images for the following: \begin{itemize} \item detection: 1.3~mm \texttt{denoised}\xspace \& \texttt{bsens}\xspace 12~m array image, not corrected by the primary beam; \item 1.3~mm measurements: \texttt{denoised}\xspace \& \texttt{bsens}\xspace and \texttt{denoised}\xspace \& \texttt{cleanest}\xspace 12~m array images, corrected by the primary beam; \item 3~mm measurements: \texttt{denoised}\xspace \& \texttt{bsens}\xspace and \texttt{denoised}\xspace \& \texttt{cleanest}\xspace 12~m array images, corrected by the primary beam; \end{itemize} To facilitate core extraction, the noise level of the detection image is flattened, using images that are uncorrected by the primary beam. \cref{appendixtab:core detection table} lists the sources detected by \textsl{getsf} at 1.3~mm and identified by their peak coordinates, RA and Dec, along with their characteristics measured at 1.3~mm and at 3~mm in the \texttt{denoised}\xspace \& \texttt{bsens}\xspace images: non-deconvolved major and minor diameters at half maximum, $a_{\rm 1.3mm} \times b_{\rm 1.3mm}$ and $a_{\rm 3mm} \times b_{\rm 3mm}$; position angles, PA$_{\rm 1.3mm}$ and PA$_{\rm 3mm}$; peak and integrated fluxes, $S^{\rm peak}_{\rm 1.3mm}$, $S^{\rm peak}_{\rm 3mm}$, $S^{\rm int}_{\rm 1.3mm}$ and $S^{\rm int}_{\rm 3mm}$; two tags to identify cores also extracted by \textsl{GExt2D} and cores identified as suffering from line contamination (see Sect.~\ref{sect:line contamination}). The \textsl{getsf} package extracted 208 cores that passed the basic recommended filtering\footnote{ The monochromatic goodness and significance of \textsl{getsf} sources, defined in \cite{men2021getsf}, should be larger than 1. 
For robust flux measurements, \cite{men2021getsf} recommends $S^{\rm peak}\geq 2 \sigma^{\rm peak}$ and $S^{\rm int} \geq 2 \sigma^{\rm int}$. Lastly, sources with high ellipticity, $a/b\geq2$, are filtered out. These internal parameters of \textsl{getsf} are used to assess the quality of the detection of a source and the measurements of its size and fluxes.} \citep{men2021getsf}. \cref{tab:sensivity stat} gives the number of sources extracted by \textsl{getsf} when using different detection and measurement images, from the \texttt{cleanest}\xspace to the \texttt{bsens}\xspace and finally \texttt{denoised}\xspace \& \texttt{bsens}\xspace images, at 1.3~mm and 3~mm. The 208 sources of \cref{appendixtab:core detection table} are $\sim$1.6 times more numerous than the sources detected in the \texttt{original}\xspace \& \texttt{bsens}\xspace image and $\sim$2.8 times more numerous than those detected in the \texttt{original}\xspace \& \texttt{cleanest}\xspace image. In order to check the robustness of the \textsl{getsf} catalog of \cref{appendixtab:core detection table}, \textsl{GExt2D} (v210208) is used. Applied to the \texttt{bsens}\xspace 12~m array 1.3~mm image, not corrected by the primary beam, and after the recommended post-filtering\footnote{ To guarantee a reliable catalog, it is recommended to only keep \textsl{GExt2D} sources whose signal-to-noise ratio, measured in an annulus around each source, is greater than 4 (see Bontemps et al. in prep.). The flux quality, which quantifies the ratio of the isotropic part of the second derivative to its elliptical part, should also be higher than 1.85. It is used to exclude small flux variations along filaments. Lastly, sources with high ellipticity, $a/b\geq1.5$, are filtered out.} (Bontemps et al. in prep.), \textsl{GExt2D} provides a catalog of 152 cores.
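For readers reproducing the catalog selection, the recommended \textsl{getsf} post-filtering criteria above can be summarized as a single predicate. The sketch below is in Python; the catalog field names are illustrative, not actual \textsl{getsf} output column names:

```python
def passes_getsf_filter(src, theta_beam=0.46):
    """Recommended getsf post-filtering at 1.3 mm: monochromatic goodness
    and significance above 1, low ellipticity (a/b <= 2), robust fluxes
    (>= 2 sigma), and a small average diameter (<= 4 beams).
    `src` is a dict of catalog quantities; keys are illustrative."""
    return (src["goodness"] > 1 and src["significance"] > 1
            and src["a"] / src["b"] <= 2
            and src["S_peak"] >= 2 * src["sig_peak"]
            and src["S_int"] >= 2 * src["sig_int"]
            and (src["a"] * src["b"]) ** 0.5 <= 4 * theta_beam)
```

A source failing any single criterion is rejected, matching the notes of \cref{tab:sensivity stat}.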
\subsection{Source characterization}\label{sect:source characterization} The \textsl{getsf} and \textsl{GExt2D} measurements of source characteristics, that is to say their sizes and fluxes, were made in the 12~m array 1.3~mm and 3~mm images, which are primary-beam corrected. Given the good performance of the \textsl{MnGSeg} denoising procedure in simulations of \textsl{getsf} extractions (see \cref{appendixsect:mngseg}), we kept the \textsl{getsf} measurements made in the \texttt{denoised}\xspace images. Since we need to estimate, and later on correct, the line contamination of fluxes of the sources extracted in the \texttt{bsens}\xspace image (see Sect.~\ref{sect:line contamination}), extraction was performed in the \texttt{denoised}\xspace \& \texttt{cleanest}\xspace images in addition to that performed in the \texttt{denoised}\xspace \& \texttt{bsens}\xspace images. Using the maximum size free parameter of \textsl{getsf}, we excluded five sources with FWHM larger than four times the beam, $\sqrt{a_{\rm 1.3mm}\times b_{\rm 1.3mm}} > 4\times\Theta_{\rm beam}$. Such sizes correspond to $\sim$10\,000~au at $d=5.5$~kpc, much larger than the typical core size, expected to be a few 1\,000~au in the dense W43 protoclusters \citep[e.g.,][]{bontemps2010, palau2013, motte2018b}. They have low 1.3~mm fluxes, with a median mass of $\sim$2~$\ensuremath{M_{\odot}}$ (see Eq.~\ref{eq:optically thin mass}), and are located at the outskirts of the protocluster cloud. In summary, the \textsl{getsf} catalog of \cref{appendixtab:core detection table} contains 208 sources, which are detected at 1.3~mm with robust flux measurements. Given the lower sensitivity of our 3~mm continuum images, 121 have 3~mm fluxes that are qualified as ``measurable'' because they are above $1\,\sigma$ (see \cref{tab:sensivity stat}).
Of the 208 \textsl{getsf} sources, 100 are qualified as ``robust'' because they are also identified by \textsl{GExt2D} and $\sim$90\% of these common sources have no significant differences in their integrated fluxes, that is, their fluxes agree to within a factor of two. The sources that have 1.3~mm fluxes consistent to within 30\% are considered even more robust, as indicated in \cref{appendixtab:core detection table}. \begin{figure}[ht] \centering \includegraphics[width=1.\linewidth]{fwhm-distribution.png} \caption{Distribution of the FWHM and FWHM$^{\rm dec}$ of the \textsl{getsf} sources as measured at 1.3~mm. A minimum size of $1\,300$~au is assumed for FWHM$^{\rm dec}$. The median value of the core deconvolved sizes is about $0.75\arcsec\simeq 1.6\times\Theta_{\rm beam}$ with $\Theta_{\rm beam}=0.46\arcsec$, corresponding to $\sim$3\,400~au.} \label{fig:fwhm distribution} \end{figure} Figure~\ref{fig:fwhm distribution} displays, for the 208 sources extracted by \textsl{getsf}, histograms of their 1.3~mm physical sizes before and after beam deconvolution\footnote{ We set a minimum deconvolved size of half the beam, $0.23\arcsec$ or 1\,300~au, to limit deconvolution effects that may give excessively small, and thus unrealistic, sizes.}, FWHM$=\sqrt{a_{\rm 1.3mm}\times b_{\rm 1.3mm}}\times d$ and FWHM$^{\rm dec}=\sqrt{a_{\rm 1.3mm}\times b_{\rm 1.3mm}-\Theta_{\rm beam}^2} \times d$, projected at the $d=5.5$~kpc distance of W43. The W43-MM2\&MM3 compact sources have deconvolved sizes ranging from $\sim$1\,300~au to $\sim$10\,000~au with a median value of $\sim$3\,400~au. Given their small physical sizes, these cloud fragments could represent the mass reservoirs, or at least the inner part of those reservoirs, that will undergo gravitational collapse to form a star or a small multiple system.
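The size deconvolution used above, including the adopted minimum deconvolved size of half the beam, can be sketched in a few lines of Python (the example source diameters are hypothetical):

```python
import math

D_PC = 5.5e3          # distance to W43 [pc]; 1 arcsec corresponds to 5500 au
THETA_BEAM = 0.46     # beam FWHM [arcsec]
MIN_SIZE_AU = 1300.0  # floor of half a beam, to limit deconvolution effects

def deconvolved_fwhm_au(a, b):
    """FWHM_dec = sqrt(a*b - theta_beam^2) * d in au, with the
    non-deconvolved diameters a, b in arcsec, clipped at the adopted
    minimum deconvolved size."""
    ab = a * b - THETA_BEAM**2
    size_au = math.sqrt(ab) * D_PC if ab > 0 else 0.0
    return max(size_au, MIN_SIZE_AU)

# A hypothetical 0.85 x 0.70 arcsec source deconvolves to ~3400 au,
# close to the median core size quoted above.
print(f"{deconvolved_fwhm_au(0.85, 0.70):.0f} au")
```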
Following the classical terminology \citep[e.g.,][]{motte2018a} and if they are real cloud fragments (see Sect.~\ref{sect:nature of compact sources}), we hereafter call them cores. \section{Core nature and core mass estimates} \label{sect:core nature mass estim} Sources in the W43-MM2\&MM3 protocluster are generally characterized from their measurements in the 1.3~mm \texttt{denoised}\xspace \& \texttt{bsens}\xspace images obtained with the ALMA 12~m array (\cref{appendixtab:core detection table}). Some of them, however, may not correspond to real cores or may have 1.3~mm \texttt{denoised}\xspace \& \texttt{bsens}\xspace fluxes contaminated by line emission; their nature is investigated in Sect.~\ref{sect:nature of compact sources}. When the W43-MM2\&MM3 core sample is cleaned and the 1.3~mm fluxes are corrected, core masses are estimated (see Sect.~\ref{sect:mass estimation}). \subsection{Core sample of the W43-MM2\&MM3 ridge} \label{sect:nature of compact sources} To ensure that the millimeter sources of \cref{appendixtab:core detection table} are indeed dense cloud fragments and to correctly measure their mass, we investigated the contamination of their 1.3~mm and 3~mm continuum fluxes by free-free (see Sect.~\ref{sect:freefree contamination}) and line emission (see Sect.~\ref{sect:line contamination}). From the 208 sources of \cref{appendixtab:core detection table}, we removed three sources which correspond to structures dominated by free-free emission and corrected the 1.3~mm measurements of 14 cores contaminated by line emission. \subsubsection{Correction for free-free contamination} \label{sect:freefree contamination} \begin{figure*}[htbp!] 
\centering \begin{minipage}{0.39\textwidth} \centering \includegraphics[width=1.\textwidth]{free-free.png} \end{minipage}% \hskip 0.0198\textwidth \begin{minipage}{0.59\textwidth} \centering \includegraphics[width=1.\textwidth]{MM2_3-free-free-scatter-int-pow1-snr.png} \end{minipage} \caption{Investigating free-free contaminated sources. Panel \textsl{(a)}: UCH\mbox{\sc ~ii} region of W43-MM3 and its surrounding cloud imaged by ALMA at 1.3~mm. The red hatched region outlines the H41$\alpha$ recombination line emission of the H\mbox{\sc ~ii} region. White ellipses outline source boundaries (at half maximum) as defined by \textsl{getsf}. Panel \textsl{(b)}: Thermal dust emission cores separated from free-free emission sources, using their 1.3~mm to 3~mm flux ratios, $\gamma_1$, and shown as a function of the S/N in the 1.3~mm image. Blue points indicate cores with 3~mm thermal dust emission whose flux is rescaled to the source size measured at 1.3~mm (see Eq.~\ref{eq:re-scale}), while orange points locate cores undetected at 3~mm, thus taking the ratio of the 1.3~mm peak flux to the $1\,\sigma$ peak error at 3~mm, corresponding to a lower limit. Red symbols are sources located within the H41$\alpha$ recombination line region displayed in panel \textsl{(a)}. The gray curve indicates the median value of the core ratios, computed over bins of 20 adjacent cores as ranked by their S/N. The shaded gray area indicates the corresponding $3\,\sigma$ dispersion in flux ratio values. The magenta horizontal dashed line represents the theoretical flux ratio of thermal dust emission of 15.4, computed in Eq.~\ref{eq:theo thermal ratio}. 
The red hatched area locates the theoretical flux ratios of UCH\mbox{\sc ~ii} or HCH\mbox{\sc ~ii} regions, whose free-free emission is either optically thin (lower limit) or partly to totally optically thick (upper limit).} \label{fig:freefree} \end{figure*} Figure~\ref{fig:1.3mm and trichrone}b shows that there is only one localized area associated with free-free emission in the 1.3~mm ALMA-IMF images of W43-MM2 and W43-MM3. This is the W43-MM3 UCH\mbox{\sc ~ii} region, which is particularly bright at 3~mm. Figure~\ref{fig:freefree}a displays the boundary of this H\mbox{\sc ~ii} region, as defined by the H41$\alpha$ recombination line emission observed as part of the ALMA-IMF Large Program (Galv\'an-Madrid et al. in prep.). In this area the large-scale continuum emission mainly consists of free-free emission, and the thermal dust emission of cores could only represent a minor part of the total flux at small scales. This calls into question the nature of the five compact sources detected over the extent of the H\mbox{\sc ~ii} bubble, which may thus not be genuine dust cores: \#24, \#27, \#82, \#91, and \#172 (see \cref{fig:freefree}a). We investigated the free-free contamination of the cores of \cref{appendixtab:core detection table} by measuring the ratio of their 1.3~mm to 3~mm integrated fluxes, $S^{\rm int}_{\rm 1.3mm}$ and $S^{\rm int}_{\rm 3mm}$. To allow a direct comparison of these fluxes, not always integrated over the same area and thus not defining the same parcel of the cloud, we rescaled the 3~mm integrated flux of cores to their deconvolved 1.3~mm sizes, FWHM$^{\rm dec}_{\rm 1.3mm}$.
We assumed a linear relation between the integrated flux and the angular scale, $S^{\rm int}(\Theta)\propto \Theta$, corresponding to the optically thin emission of an isothermal, $T(r)\simeq \rm constant$, protostellar envelope with a $\rho(r)\propto r^{-2}$ density distribution \citep[][]{motte2001, beuther2002}. This flux rescaling was applied in \textit{Herschel} studies that aimed to fit meaningful spectral energy distributions (\citealt{motte2010,nguyen2011a,tige2017}). As discussed in \cite{tige2017}, this correction factor would be larger for starless fragments that have a flatter density distribution, thus leading to a $S^{\rm int}(\Theta)\propto \Theta^m$ relation with $m>1$. In the case of hyper-compact H\mbox{\sc ~ii} regions (HCH\mbox{\sc ~ii}), potentially optically thick at their center, a larger correction factor would also be necessary. The rescaled 3~mm fluxes are computed via the following equation: \begin{equation} (S^{\rm int}_{\rm 3mm})^{\rm rescaled}_{m} = S^{\rm int}_{\rm 3mm}\times \left(\frac{{\rm FWHM}^{\rm dec}_{\rm 1.3mm}}{{\rm FWHM}^{\rm dec}_{\rm 3mm}}\right)^{m}. \label{eq:re-scale} \end{equation} Figure~\ref{fig:freefree}b displays, for the complete catalog of \cref{appendixtab:core detection table}, the ratios of the 1.3~mm to 3~mm fluxes with a rescaling using $m=1$. On average, 3~mm fluxes are corrected by $25\%$, with a maximum of $75\%$, for the cores that have measurable fluxes both at 1.3~mm and 3~mm. For the many cores that remain undetected or that have barely measured fluxes at 3~mm, $S^{\rm int}_{\rm 3mm}\leq \sigma$, we used the $1\,\sigma$ rms noise level to give a lower limit of their 1.3~mm to 3~mm flux ratio, $\frac{S^{\rm int}_{\rm 1.3mm}}{S^{\rm int}_{\rm 3mm}} \geq \frac{S^{\rm peak}_{\rm 1.3mm}}{\sigma_{\rm 3mm}^{\rm peak}}$. In addition, Figs.~\ref{appendixfig:freefree different rescaling}a--b display the same figure without rescaling ($m=0$) and for a rescaling better suited for starless cores ($m=2$).
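The rescaling of Eq.~\ref{eq:re-scale} is straightforward to implement. The sketch below (Python, with hypothetical sizes) illustrates how a 3~mm flux measured over a larger source footprint is reduced when referred to the 1.3~mm size:

```python
def rescale_3mm_flux(S_int_3mm, fwhm_dec_13, fwhm_dec_3, m=1):
    """Rescale the 3 mm integrated flux to the 1.3 mm deconvolved size:
    m=1 for an isothermal rho ~ r^-2 protostellar envelope,
    m=2 for the flatter density profiles of starless fragments."""
    return S_int_3mm * (fwhm_dec_13 / fwhm_dec_3) ** m

# Hypothetical core measured larger at 3 mm (4500 au) than at 1.3 mm (3400 au):
# the 3 mm flux is reduced by ~24%, comparable to the 25% average correction.
print(f"{rescale_3mm_flux(1.0, 3400.0, 4500.0):.2f}")
```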
Figures~\ref{fig:freefree}b and \ref{appendixfig:freefree different rescaling}a--b allow a simple separation of sources dominated by thermal dust emission from those dominated by free-free emission. Under the optically thin assumption and arising from the same source area, the 1.3~mm to 3~mm theoretical flux ratio of thermal dust emission is given by \begin{align} \gamma &= \frac{S_{\rm 1.3mm}^{\rm int}}{S^{\rm int}_{\rm 3mm}} \label{eq:gamma}\\ &= \frac{\kappa_{\rm 1.3mm}}{\kappa_{\rm 3mm}} \frac{B_{\rm 1.3mm}(T_{\rm dust})}{B_{\rm 3mm}(T_{\rm dust})} = \frac{\kappa_{\rm 1.3mm}}{\kappa_{\rm 3mm}} \frac{\nu_{\rm 1.3mm}^3}{\nu_{\rm 3mm}^3} \frac{e^{h\,\nu_{\rm 3mm}/k_{\rm B}\,T_{\rm dust}}-1}{e^{h\,\nu_{\rm 1.3mm}/k_{\rm B}\,T_{\rm dust}}-1} \simeq 15.4, \label{eq:theo thermal ratio} \end{align} where $k_{\rm B}$ and $h$ are the Boltzmann and Planck constants, and $B_{\rm 1.3mm}(T_{\rm dust})$ and $B_{\rm 3mm}(T_{\rm dust})$ are the Planck function for the mean dust temperature of cores, $T_{\rm dust}=23$~K, evaluated at $\nu_{\rm 1.3mm}=228.9$~GHz and $\nu_{\rm 3mm}=100.7$~GHz. These frequency values are taken from Paper~II \citep{ginsburg2021} assuming a spectral index of $\alpha (\nu)=3.5$, which corresponds to a dust opacity spectral index of $\beta=1.5$, suitable for optically thin dense gas at the core scale \citep[see][]{AWB1993, juvela2015}. Because the W43-MM2\&MM3 ridge is a dense cloud \citep{nguyen2013}, we adopted a dust opacity per unit (gas $+$ dust) mass adapted for cold cloud structures: $\kappa_{\rm 1.3mm}=0.01\,\rm cm^2\,g^{-1}$ \citep{OssenkopfHenning1994}. The dust mass opacity at 3~mm, $\kappa_{\rm 3mm}$, is computed assuming \begin{equation}\label{eq:kappa} \kappa_{\lambda} = 0.01 \times \left( \frac{\lambda}{\rm 1.3\,mm} \right)^{-\beta} = 0.01 \times \left( \frac{\nu}{\rm 228.9\,GHz} \right)^{\beta}~\rm cm^2\,g^{-1} \end{equation} with $\beta = 1.5$.
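Equation~\ref{eq:theo thermal ratio} can be verified numerically; a minimal Python sketch with SI constants, using the frequencies, temperature, and $\beta$ adopted above, recovers the quoted value of 15.4:

```python
import math

h, k_B, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / (c**2 * (math.exp(h * nu / (k_B * T)) - 1))

nu_13, nu_3 = 228.9e9, 100.7e9  # band frequencies [Hz]
T_dust, beta = 23.0, 1.5        # mean core temperature [K], opacity index

# gamma = (kappa_1.3mm / kappa_3mm) * B_1.3mm(T) / B_3mm(T), Eq. (gamma),
# with kappa ~ nu^beta, Eq. (kappa)
gamma = (nu_13 / nu_3) ** beta * planck(nu_13, T_dust) / planck(nu_3, T_dust)
print(f"gamma = {gamma:.1f}")  # ~15.4
```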
For the cores that remain after post-filtering at both wavelengths, we computed their ratio of 1.3~mm flux to 3~mm flux, which is rescaled to the 1.3~mm size with an index of either $m=1$ or $m=2$ (see Eqs.~\ref{eq:re-scale}--\ref{eq:gamma}): $\gamma_1=\gamma^{\rm rescaled}_{m=1}$ and $\gamma_2=\gamma^{\rm rescaled}_{m=2}$, respectively. They have a median 1.3~mm to 3~mm flux ratio and associated standard deviation of $\widetilde{\gamma_1}\simeq 11.3\pm 1.8$ (see \cref{fig:freefree}b), which is close to the expected value of 15.4 (see Eq.~\ref{eq:theo thermal ratio}). Figure~\ref{fig:freefree}b shows that the 1.3~mm to 3~mm flux ratio tends to increase as the signal-to-noise ratio (S/N) increases, that is, as the core flux increases. Rescaling the fluxes with an index of $m=2$, rather than $m=1$, removes this unexpected correlation and leads to a median flux ratio of $\widetilde{\gamma_2} \simeq 15.3\pm 2.0$ (see \cref{appendixfig:freefree different rescaling}a), which is closer to the theoretical value (see Eq.~\ref{eq:theo thermal ratio}). If confirmed, this result would argue in favor of the pre-stellar rather than protostellar nature of most of the cores extracted in the W43-MM2\&MM3 protoclusters. A companion paper by Nony et al. (in prep.) consistently shows that the protostellar to pre-stellar ratio of the W43-MM2\&MM3 core sample is $\sim$25\%. In contrast, the 1.3~mm to 3~mm flux ratio of free-free emission is expected to be much lower than the ratio of thermal dust continuum emission estimated in Eq.~\ref{eq:theo thermal ratio}. With a spectral index of optically thin and optically thick free-free emission of $\alpha (\nu)=-0.1$ and $\alpha (\nu) \simeq2$ \citep[e.g.,][]{keto2008}, respectively, the theoretical 1.3~mm to 3~mm flux ratios for H\mbox{\sc ~ii} regions lie within the $\simeq$0.9--5.2 range.
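The bounds of this free-free range follow directly from the spectral indices; a short Python sketch:

```python
NU_13, NU_3 = 228.9, 100.7  # band frequencies [GHz]

def flux_ratio(alpha):
    """1.3 mm to 3 mm flux ratio for a power-law spectrum S ~ nu^alpha."""
    return (NU_13 / NU_3) ** alpha

print(f"{flux_ratio(-0.1):.1f}")  # optically thin free-free, ~0.9
print(f"{flux_ratio(2.0):.1f}")   # optically thick free-free, ~5.2
# For comparison, alpha = 3.5 dust emission gives ~17.7 in the Rayleigh-Jeans
# limit, slightly above the full-Planck value of 15.4 (Eq. theo thermal ratio).
print(f"{flux_ratio(3.5):.1f}")
```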
As shown in \cref{fig:freefree}a, we found that \begin{itemize} \item three sources have low ratios ($\gamma_1 \simeq 0.9$) and are located along the H\mbox{\sc ~ii} ring within the free-free continuum bubble of W43-MM3. Sources \#27, \#82, and \#91 most likely correspond to free-free emission fluctuations in the UCH\mbox{\sc ~ii} region. % \item source \#24, which is located over the UCH\mbox{\sc ~ii} region extent, has a high 1.3~mm to 3~mm flux ratio, $\gamma_1 \simeq 18$, and can thus be considered a true core that is dominated by dust emission and lies on the same line of sight as the UCH\mbox{\sc ~ii} region (see Figs.~\ref{fig:freefree}a--b). % \item we find 13 sources in \cref{fig:freefree}b that have an intermediate flux ratio, $\gamma_1 \simeq 1.2-5$, which may indicate that they consist of partially optically thick free-free emission. However, only one source (source \#172) lies within the W43-MM3 H\mbox{\sc ~ii} bubble, and it has a lower-limit ratio of $\gamma_1 \geq 5$. Moreover, none of the sources with $\gamma_1 = 1.2-5$ ratios is associated with strong H41$\alpha$ recombination line emission, as expected for most HCH\mbox{\sc ~ii} regions. We therefore considered them to be real cores. \end{itemize} To confirm this, we developed a methodology that better takes into account the uncertainties of our source extraction and flux measurement process. For the 121 dust cores detected at 1.3~mm that have measurable 3~mm fluxes, Figs.~\ref{fig:freefree}b and \ref{appendixfig:freefree different rescaling} locate the $3\,\sigma$ dispersion zone of the logarithm of their flux ratios. None of these sources with $\gamma_1= 1.2-5$ ratios lie outside this $3\,\sigma$ zone, suggesting that their flux measurements are too uncertain to securely qualify these sources as being free-free emission peaks.
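The $3\,\sigma$ dispersion zone used in this test can be computed with a simple running statistic. Below is a sketch (Python, standard library only, not the published implementation) of the binned median and dispersion of the logarithm of the flux ratios, over bins of 20 adjacent cores ranked by S/N, as in \cref{fig:freefree}b:

```python
import math

def running_median_3sigma(snr, ratio, bin_size=20):
    """Median and 3-sigma dispersion of log10(ratio), computed in bins of
    `bin_size` adjacent sources ranked by S/N. Returns, per bin, a tuple
    (median S/N, median ratio, lower 3-sigma bound, upper 3-sigma bound)
    in linear units."""
    order = sorted(range(len(snr)), key=lambda i: snr[i])
    zones = []
    for start in range(0, len(order) - bin_size + 1, bin_size):
        idx = order[start:start + bin_size]
        logs = sorted(math.log10(ratio[i]) for i in idx)
        med = logs[len(logs) // 2]
        sig = math.sqrt(sum((l - med) ** 2 for l in logs) / len(logs))
        mid_snr = sorted(snr[i] for i in idx)[bin_size // 2]
        zones.append((mid_snr, 10**med, 10**(med - 3*sig), 10**(med + 3*sig)))
    return zones
```

A source is then flagged as an outlier only when its ratio falls outside the [lower, upper] bounds of its S/N bin.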
In summary, \cref{fig:freefree}b, Figs.~\ref{appendixfig:freefree different rescaling}a--b, and the same figures done for peak fluxes, identified only three sources that likely correspond to free-free emission peaks: \#27, \#82, and \#91. \cref{appendixtab:core detection table} pinpoints these three sources; they are removed from the core sample of \cref{appendixtab:derived core table} and will not be considered further. \subsubsection{Correction for line contamination} \label{sect:line contamination} In order to correctly measure the mass of cores, it is necessary to correct their continuum flux for line contamination. The 1.3~mm and 3~mm \texttt{denoised}\xspace \& \texttt{bsens}\xspace images used to identify sources in Sect.~\ref{sect:extraction of compact sources} indeed provide estimates of their continuum emission, based on all channels of all spectral bands. Some of these bands, however, contain bright emission lines associated with dense gas (see, e.g., Table~3 of \citealt{motte2021}). In addition, line forests of complex organic molecules \citep[COMs; e.g.,][]{garrod2006} are expected in all spectral windows when observing hot cores and shocked regions \citep[e.g.,][]{molet2019, bonfand2019}. Investigating the contamination by lines of the \texttt{bsens}\xspace continuum can be done by comparing \texttt{bsens}\xspace fluxes to fluxes measured in the \texttt{cleanest}\xspace images \cite[see][]{motte2018b}. Figure~\ref{fig:hotcore} presents, for the 155 sources at 1.3~mm with robust \texttt{denoised}\xspace \& \texttt{bsens}\xspace and \texttt{cleanest}\xspace fluxes, the ratios of their \texttt{denoised}\xspace \& \texttt{bsens}\xspace to their \texttt{cleanest}\xspace 1.3~mm peak fluxes. We use the peak rather than the integrated flux because the vast majority of hot cores are expected to be unresolved, and therefore have a higher ratio of \texttt{denoised}\xspace \& \texttt{bsens}\xspace to \texttt{cleanest}\xspace peak fluxes. 
Most sources have ratios that remain close to $1$, with a decrease in the point dispersion as the S/N of the \texttt{denoised}\xspace \& \texttt{cleanest}\xspace fluxes increases (see \cref{fig:hotcore}). As in \cref{fig:freefree}b, we computed the $3\,\sigma$ dispersion zone of the plotted flux ratios and found that \begin{itemize} \item four of the brightest sources (S/N $>20$) that lie above this $3\,\sigma$ zone have been identified as candidates to host a hot core by Herpin et al. (in prep.), namely cores \#1, \#3, \#7, and \#10. The line contamination of their 1.3~mm \texttt{denoised}\xspace \& \texttt{bsens}\xspace peak flux is estimated to range from 20\% to 45\% (see \cref{fig:hotcore}). % \item ten other sources lie well above the $3\,\sigma$ dispersion zone with high flux ratios, $\frac{ (S_{\rm 1.3mm}^{\rm peak})_{\rm bsens}} { (S_{\rm 1.3mm}^{\rm peak})_{\rm cleanest}} = 2-12$, seven of which (\#46, \#47, \#85, \#114, \#152, \#224, and \#245) correspond to sources contaminated by the $^{12}$CO(2-1) line, which present an excess of flux in the continuum emission of the \texttt{denoised}\xspace \& \texttt{bsens}\xspace image. The three remaining sources (\#183, \#248, and \#275) are most probably contaminated by other lines, undetermined at this stage. \end{itemize} As indicated in \cref{appendixtab:derived core table}, the properties of these 14 cores are derived from their measurements in the \texttt{denoised}\xspace \& \texttt{cleanest}\xspace image. Given that we could only investigate the line contamination of 155 out of 205 sources, we expect to have, in our core catalog of \cref{appendixtab:derived core table}, a maximum of four sources with overestimated core masses in the $0.1-0.5$~$\ensuremath{M_{\odot}}$ mass range. In summary, from the 208 sources of \cref{appendixtab:core detection table}, we removed three sources that correspond to structures dominated by free-free emission (see contamination tag).
For the 14 cores contaminated by line emission (see \cref{appendixtab:core detection table}), we corrected their 1.3~mm measurements, including size and fluxes, by taking their \texttt{denoised}\xspace \& \texttt{cleanest}\xspace measurements. \begin{figure}[ht] \centering \includegraphics[width=1.\linewidth]{MM2_3-hot-core-scatter-peak-pow0-snr.png} \caption{Line contamination of the 1.3~mm continuum fluxes of \textsl{getsf} sources, as estimated from the ratio of \texttt{denoised}\xspace \& \texttt{bsens}\xspace to \texttt{cleanest}\xspace peak fluxes, and shown as a function of the S/N in the cleanest image. The gray curve indicates the median value of the core ratios, computed over bins of 20 adjacent cores as ranked by their S/N. The shaded gray area indicates the corresponding $3\,\sigma$ dispersion in flux ratio values. The red, orange, and green points locate cores with hot-core signatures (Herpin et al. in prep.), cores contaminated by the CO(2-1) line, and cores contaminated by other spectral lines, respectively. The horizontal lines indicate the contamination levels of 0\% (magenta dashed line) and 20\% (green dotted line). By taking only the blue points, the \texttt{denoised}\xspace \& \texttt{bsens}\xspace over \texttt{cleanest}\xspace ratios of \cref{fig:hotcore} have a median value of $\simeq 1.1\pm 0.3$. } \label{fig:hotcore} \end{figure} \subsection{Mass estimates} \label{sect:mass estimation} We estimate the masses of cores, which are extracted by \textsl{getsf} in Sect.~\ref{sect:extraction of compact sources} and listed in Table~\ref{appendixtab:derived core table}. Because the thermal dust emission of cores is mostly optically thin at 1.3~mm, the classical optically thin equation is generally used to compute their masses.
We give it here and provide a numerical application whose dependence on each physical variable is given, for simplicity, in the Rayleigh-Jeans approximation: \begin{equation} \begin{split} M_{\rm \tau\ll 1} \: & = \frac{S^{\rm int}_{\rm 1.3\,mm}\; d^2}{ \kappa_{\rm 1.3\,mm}\; B_{\rm 1.3\,mm}(T_{\rm dust})} \\ &\simeq \: 5\,\ensuremath{M_{\odot}} \times \left(\frac {S^{\rm int}_{\rm 1.3\,mm}}{\mbox{10~mJy}}\right) \left(\frac {T_{\rm dust}}{\rm 23~K}\right)^{-1} \\ & ~~~~ \times \left(\frac {d}{\mbox{5.5~kpc}} \right)^2 \left(\frac {\kappa_{\rm 1.3\,mm}}{\rm 0.01\,cm^2\,g^{-1}}\right)^{-1}. \end{split} \label{eq:optically thin mass} \end{equation} We estimated the volume-averaged core temperatures, $T_{\rm dust}$, from a map that combines a moderate angular resolution dust temperature image with the central heating and self-shielding of protostellar and pre-stellar cores, respectively (see \cref{appendixfig:dust temperature map} and Motte et al. in prep.). The dust temperature image is produced by the Bayesian fit of spectral energy distributions, performed by the \textsl{PPMAP} procedure \citep{marsh2015}. Using the five \textit{Herschel} $70-500~\mu$m images, two APEX 350 and 870~$\mu$m images, and the present ALMA 1.3~mm image, which have a large range of angular resolutions ($0.46\arcsec-36\arcsec$), provides a $2.5\arcsec$-resolution dust temperature image that needs to be extrapolated to the $0.46\arcsec$ resolution of our 1.3~mm ALMA-IMF image. The dust temperature of the immediate background of cores listed in \cref{appendixtab:derived core table} has a mean value of $\overline{T_{\rm dust}}^{\rm core\,bkg}= 24\pm 2$~K. 
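The numerical application of Eq.~\ref{eq:optically thin mass} can be checked with a few lines of Python (a sketch with SI constants, independent of the published analysis):

```python
import math

h, k_B, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants
M_SUN, PC = 1.989e30, 3.0857e16                         # [kg], [m]

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / (c**2 * (math.exp(h * nu / (k_B * T)) - 1))

# Fiducial values of Eq. (optically thin mass)
S_int = 10e-3 * 1e-26       # 10 mJy in W m^-2 Hz^-1
d = 5.5e3 * PC              # 5.5 kpc in m
kappa = 0.01 * 1e-4 / 1e-3  # 0.01 cm^2 g^-1 in m^2 kg^-1
T_dust, nu = 23.0, 228.9e9

M = S_int * d**2 / (kappa * planck(nu, T_dust)) / M_SUN
print(f"M = {M:.1f} Msun")  # ~5 Msun, as quoted in the equation
```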
Following \cite{motte2018b}, the dust temperature of massive protostellar cores averaged in $0.46\arcsec$-resolution elements is estimated from the total luminosity of the W43-MM2 cloud \citep[$\sim$$2\times 10^4~L_\odot$,][]{motte2003} divided between cores, in proportion to their associated line contamination in the 1.3~mm band (see Motte et al. in prep.). This leads to volume-averaged temperatures, $T_{\rm dust}$, between 20~K and 65~K. In addition, the mean core temperature of lower-mass cores driving outflows (see Nony et al. in prep.) is increased by $4\pm4$~K compared to the core background temperature. The temperature of candidate pre-stellar cores is itself decreased by $2\pm2$~K compared to their background temperature. The resulting estimates of the mass-averaged temperature of cores range from 19~K to 65~K, with uncertainties ranging from $\pm 2$~K to $\pm10$~K (see \cref{appendixtab:derived core table}). For the cores that reach sufficiently high densities ($\gtrsim5\times 10^7$~cm$^{-3}$, see Eq.~\ref{eq:density}), in other words the most massive ones, we expect them to be optically thick \citep[e.g.,][]{cyganowski2017, motte2018a}. To partly correct for this opacity, \cite{motte2018a} proposed an equation, which is given below and fully explained in \cref{appendixsect:detailed approach for the mass calculation}: \begin{equation} \label{eq:optically thick mass} M_{\rm \tau\gtrsim 1} \: = -\, \frac{\Omega_{\rm beam} \;d^2} {\kappa_{1.3{\rm mm}}}\, \frac{S^{\rm int}_{1.3{\rm mm}}} {S^{\rm peak}_{1.3{\rm mm}}} \, \ln\left(1\,-\,\frac{S^{\rm peak}_{1.3{\rm mm}}}{\Omega_{\rm beam}\;B_{1.3{\rm mm}}(T_{\rm dust})}\right). \end{equation} Here $\Omega_{\rm beam}$ is the solid angle of the beam. This correction is significant for two cores (cores \#1 and \#2), whose masses estimated with the optically thin assumption would have been underestimated by $\sim$15\%. 
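The size of the opacity correction in Eq.~\ref{eq:optically thick mass} depends only on how close the peak intensity comes to the blackbody intensity. The sketch below (Python) evaluates the correction factor $M_{\rm \tau\gtrsim 1}/M_{\rm \tau\ll 1} = -\ln(1-x)/x$, with $x = S^{\rm peak}_{\rm 1.3mm}/(\Omega_{\rm beam}\,B_{\rm 1.3mm}(T_{\rm dust}))$:

```python
import math

def opacity_correction(x):
    """Mass correction factor M_tau>~1 / M_tau<<1 = -ln(1 - x) / x,
    where x is the peak intensity in units of the blackbody intensity
    (0 < x < 1); x -> 0 recovers the optically thin mass."""
    return -math.log(1.0 - x) / x

for x in (0.05, 0.25, 0.5):
    print(f"x = {x:.2f}: correction = {opacity_correction(x):.2f}")
```

A peak reaching $\sim$25\% of the blackbody intensity yields the $\sim$15\% mass correction quoted above for cores \#1 and \#2.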
With this correction of optical thickness and the temperatures estimated in \cref{appendixfig:dust temperature map}, the core mass range is $0.1-70~\ensuremath{M_{\odot}}$ (see \cref{tab:cmf and cores}). To start estimating which of these cores are gravitationally bound, we compared the measured masses with virial masses. The core virial masses were calculated from their FWHM sizes measured at 1.3~mm and their estimated temperatures, $T_{\rm dust}$, given in \cref{appendixtab:derived core table}. All the W43-MM2\&MM3 cores could be gravitationally bound because their virial parameter, $\alpha_{\rm vir}=M_{\rm vir}/M_{\rm \tau\gtrsim 1}$, is always smaller than the factor 2 chosen by \cite{bertoldi1992} to define self-gravitating objects. Their dynamical state, however, requires further study of the non-thermal motions of the cores, which will be measured in part by future ALMA-IMF studies of spectral lines. We estimated the absolute values of the core masses to be uncertain by a factor of a few, and the relative values between cores to be uncertain by $\sim$50\%. Dust opacity should indeed evolve as the core grows and the protostar heats up \citep{OssenkopfHenning1994} and may also have a radial dependence from the core surroundings to its center. We therefore assumed a $1\,\sigma$ uncertainty for the dust opacity that should cover its variations with gas density and temperature: when multiplied or divided by a factor of 1.5, it becomes $\kappa_{\rm 1.3mm}=0.01^{+0.005}_{-0.0033}\,\rm cm^2\,g^{-1}$. \cref{appendixtab:derived core table} lists the physical properties of the 205 cores derived from their 1.3~mm \texttt{denoised}\xspace \& \texttt{bsens}\xspace measurements and the analysis made in Sect.~\ref{sect:core nature mass estim}: deconvolved size, FWHM$^{\rm dec}$; mass corrected for optical depth, $M_{\rm \tau\gtrsim 1}$; dust temperature, $T_{\rm dust}$; volume density, $n_{\rm H_2}$.
Volume densities are computed assuming a spherical core: \begin{equation} \begin{split} n_{\rm H_2} &= \frac{M_{\rm \tau\gtrsim 1}}{\frac{4}{3}\pi\,\mu\,m_{\rm H}\,\left({\rm FWHM}^{\rm dec}_{\rm 1.3mm}\right)^3} \\ &\simeq 7.8\times 10^7\,{\rm cm}^{-3} \times \left( \frac{M_{\rm \tau\gtrsim 1}}{\mbox{70~\ensuremath{M_{\odot}}}}\right) \left( \frac{{\rm FWHM}^{\rm dec}_{\rm 1.3mm}}{\mbox{3\,000~au}}\right)^{-3}. \end{split} \label{eq:density} \end{equation} \begin{figure*}[htbp!] \centering \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=1.\textwidth]{CMF_getsf.png} \end{minipage}% \hskip 0.0199\textwidth \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=1.\textwidth]{CMF_GExt2D.png} \end{minipage} \caption{Top-heavy CMF of the W43-MM2\&MM3 ridge, with cores extracted by the \textsl{getsf} (panel \textsl{a}) and \textsl{GExt2D} (panel \textsl{b}) software packages, in the \texttt{denoised}\xspace \& \texttt{bsens}\xspace and \texttt{original}\xspace \& \texttt{bsens}\xspace images, respectively. The cumulative forms of CMFs (blue histograms) are fitted above their 90\% completeness levels (black vertical lines) by single power-laws of the form $N(>\log M)\propto M^{\alpha}$, with $\alpha = -0.95 \pm 0.04$ (\textsl{a}) and $\alpha = -1.02 \pm 0.05$ (\textsl{b}) (red lines and $1\,\sigma$ global uncertainties). The global $3\,\sigma$ uncertainties are computed from 2\,000 CMFs that are uniformly randomly generated (light gray histograms) and from the fit uncertainty (see Sect.~\ref{sect:top heavy cmf}). The W43-MM2\&MM3 CMF slope is clearly shallower than the high-mass end of the canonical IMF, which has a power-law index of $\alpha = -1.35$ (\citealt{salpeter1955}, dashed magenta lines).} \label{fig:cmfs software} \end{figure*} \begin{figure*}[htbp!] 
\centering \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{CMF_cleanest.png} \end{minipage}% \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{CMF_constantT.png} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{CMF_evolutiveK.png} \end{minipage}% \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{CMF_LowInt.png} \end{minipage} \caption{\textsl{getsf} CMFs of the W43-MM2\&MM3 ridge built for a different core catalog (\textsl{a}), under different assumptions of dust temperature and emissivity (\textsl{b} and \textsl{c}), and fit over a different mass range (\textsl{d}). The cumulative CMFs, their completeness levels, power-law fits, global $3\,\sigma$ uncertainties (explained in Sect.~\ref{sect:top heavy cmf}), and the Salpeter slope of the canonical IMF are represented as in \cref{fig:cmfs software}. Panel \textsl{(a)}: CMF derived from the core catalog of Paper~V (Louvet et al. in prep.), itself obtained by \textsl{getsf} extraction in the \texttt{original}\xspace \& \texttt{cleanest}\xspace images of W43-MM2 and W43-MM3, showing a similar but slightly shallower slope of $\alpha = -0.86\pm0.04$. Panel \textsl{(b)}: CMF obtained with a mean $T_{\rm dust}=23$~K dust temperature for all cores, instead of T$_{\rm dust}$ in \cref{appendixfig:dust temperature map}, displaying a similar but slightly shallower slope of $\alpha=-0.83\pm0.03$. Panel \textsl{(c)}: CMF derived assuming a linear relation for the dust opacity with core mass (see Sect.~\ref{sect:robustness against our assumptions}) showing a steeper slope of $\alpha=-1.02\pm0.03$. Panel \textsl{(d)}: Fitting the CMF of \cref{fig:cmfs software}a in the low- to intermediate-mass range, $0.8-16~\ensuremath{M_{\odot}}$.
This leads to a similar but slightly shallower slope of $\alpha=-0.89\pm0.04$.}
\label{fig:cmf tests}
\end{figure*}
{\renewcommand{\arraystretch}{1.5}%
\begin{table}[ht]
\centering
\tiny
\begin{threeparttable}[c]
\caption{W43-MM2\&MM3 core populations and CMF parameters, as derived by two core extraction algorithms.}
\label{tab:cmf and cores}
\begin{tabular}{ccccc}
\hline\hline
Extraction & Number & \multirow{2}{*}{$\sum M_{\rm \tau\lesssim 1}$} & \multirow{2}{*}{Mass range} & \multirow{2}{*}{$\alpha$} \\
packages & of cores & & & \\
 & & [\ensuremath{M_{\odot}}] & [\ensuremath{M_{\odot}}] & \\
(1) & (2) & (3) & (4) & (5) \\
\hline
 & & & $0.8-69.9$ & $-0.95\pm0.04$ \\
\textsl{getsf} & 205 & $541\pm29$ & $0.8-16$ & $-0.89\pm0.04$ \\
 & & & $2.0-69.9$ & $-1.05\pm0.06$ \\
\hline
 & & & $1.1-83.1$ & $-1.02\pm0.05$ \\
\textsl{GExt2D} & 152 & $468\pm35$ & $1.1-16$ & $-0.98\pm0.06$ \\
 & & & $2.0-83.1$ & $-1.07\pm0.07$ \\
\hline
\end{tabular}
\begin{tablenotes}[flushleft]
\item (3) Cumulative mass of cores, listed in \cref{appendixtab:derived core table}. Uncertainties arise from those associated with individual core mass estimates.
\item (4) Mass range used to fit a power-law to the cumulative form of the CMFs. The lower limit of this mass range is the 90\% completeness limit (see \cref{appendixsect:completeness simulation} and Sect.~\ref{sect:top heavy cmf}) or $2~\ensuremath{M_{\odot}}$; its upper limit corresponds to the maximum core mass detected or $16~\ensuremath{M_{\odot}}$.
\item (5) Power-law index of the CMFs in their cumulative form, $N(>\log M)\propto M^{\alpha}$. Uncertainties are estimated by varying dust temperature and emissivity and by taking into account the fit uncertainty, notably associated with a completeness limit uncertainty of $\pm0.2~\ensuremath{M_{\odot}}$ (see Sect.~\ref{sect:robustness against our assumptions}).
\end{tablenotes}
\end{threeparttable}
\end{table}}
{\renewcommand{\arraystretch}{1.5}%
\begin{table*}[ht]
\centering
\begin{threeparttable}[c]
\caption{CMFs and predicted IMFs of the W43-MM2\&MM3 protocluster: Uncertainty evaluation and predicted evolution.}
\label{tab:tests cmf}
\begin{tabular}{ll|ccc}
\hline\hline
 & & Mass range & $\alpha$ & Associated figure \\
 & & [\ensuremath{M_{\odot}}] & & \\
\hline
\multicolumn{2}{l|}{\textbf{Reference CMF} (using \textsl{getsf} cores from the \texttt{denoised}\xspace \& \texttt{bsens}\xspace image)} & $0.8-69.9$ & $-0.95\pm0.04$ & \cref{fig:cmfs software}a \\
%
CMF for & cores extracted in the \texttt{original}\xspace \& \texttt{cleanest}\xspace image & $1.2-75.6$ & $-0.86\pm0.04$ & \cref{fig:cmf tests}a \\
 & masses computed with a constant $T_{\rm dust}$ & $0.8-492$ & $-0.83\pm0.03$ & \cref{fig:cmf tests}b \\
 & masses computed with $\kappa_{\rm 1.3mm}$ scaling linearly with the mass & $0.8-46.6$ & $-1.02\pm0.03$ & \cref{fig:cmf tests}c \\
\hline
IMF for & a constant mass conversion efficiency, $\epsilon_{\rm core} = 50$\% & $0.4-35.0$ & $-0.95\pm0.04$ & \cref{fig:fragmentation}a \\
 & an efficiency scaling linearly with the mass, $\epsilon_{\rm core} \propto M$ & $0.44-69.9$ & $-0.59\pm0.04$ & \cref{fig:fragmentation}a \\
 & a dependence on core density of $\epsilon_{\rm core} \propto (n_{\rm H_2})^{0.9}$ & $0.44-44.6$ & $-0.67\pm0.06$ & \cref{fig:fragmentation}a \\
%
IMF for & thermal Jeans fragmentation with $\epsilon_{\rm core} = 50$\% & $0.4-1.6$ & $-3.46\pm0.55$ & \cref{fig:fragmentation}b \\
 & an analytical function of $N_{\rm frag} \propto M^{0.4}$ with $\epsilon_{\rm core} = 50$\% & $0.3-3.9$ & $-1.42\pm0.10$ & \cref{fig:fragmentation}b \\
 & a fractal hierarchical cascade with $\epsilon_{\rm core} = 50$\% & $0.27-23.3$ & $-1.00\pm0.04$ & \cref{fig:fragmentation}c \\
 & a fractal hierarchical cascade with $\epsilon_{\rm core} \propto M$ & $0.004-46.6$ & $-0.49\pm0.06$ & \cref{fig:fragmentation}c \\
\hline
\end{tabular}
\begin{tablenotes}[para,flushleft] Notes: Cumulative CMFs and predicted IMFs are fitted by power-laws of the form $N(>\log M) \propto M^{\alpha}$. Mass ranges of the CMF and IMF fits are limited by the estimated completeness level. \end{tablenotes} \end{threeparttable} \end{table*}} \section{CMF results} \label{sect:cmf results} We use the core masses estimated in Sect.~\ref{sect:core nature mass estim} to build the CMF of the W43-MM2\&MM3 ridge in Sect.~\ref{sect:top heavy cmf} and discuss its robustness in Sect.~\ref{sect:robustness against our assumptions}. Tables~\ref{tab:cmf and cores} and \ref{tab:tests cmf} list the parameters of the W43-MM2\&MM3 CMFs derived from different catalogs and under different assumptions. \subsection{Top-heavy CMF in the W43-MM2\&MM3 ridge} \label{sect:top heavy cmf} Figure~\ref{fig:cmfs software} displays the W43-MM2\&MM3 CMFs as derived from the \textsl{getsf} and \textsl{GExt2D} samples of 205 and 152 cores, respectively. The 90\% completeness limits for \textsl{getsf} and \textsl{GExt2D} are estimated to be $0.8\pm0.2~\ensuremath{M_{\odot}}$ and $1.1\pm 0.2~\ensuremath{M_{\odot}}$, respectively (see \cref{appendixsect:completeness simulation}). Following the recommendations of \cite{maiz2005} and \cite{reid2006} for improving the measurement statistics, we chose to analyze the complementary cumulative distribution form (hereafter called cumulative form) rather than the differential form of these CMFs. The \textsl{getsf} and \textsl{GExt2D} CMFs are least-squares fitted above their completeness limits by single power-laws of the form $N(>\log M)\propto M^{\rm \alpha}$ with $\alpha=-0.95 \pm 0.04$ for \textsl{getsf} and $\alpha = -1.02\pm 0.05$ for \textsl{GExt2D} (see Figs.~\ref{fig:cmfs software}a--b). 
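For reproducibility, the least-squares fit of the cumulative CMF described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic masses, not the pipeline actually used here; the function name and the synthetic sample are ours.

```python
import numpy as np

def fit_cumulative_slope(masses, m_complete):
    """Least-squares fit of the cumulative CMF, N(> log M) ~ M^alpha,
    above a completeness limit m_complete. Returns the index alpha."""
    m = np.sort(np.asarray(masses, dtype=float))
    m = m[m >= m_complete]
    n_above = np.arange(m.size, 0, -1)  # N(> M) at each sorted mass
    alpha, _ = np.polyfit(np.log10(m), np.log10(n_above), 1)
    return alpha

# Synthetic check: masses drawn from a classical Pareto law with
# N(> M) ~ M^-1 should yield a fitted slope close to -1.
rng = np.random.default_rng(0)
masses = 0.8 * (rng.pareto(1.0, size=5000) + 1.0)
alpha = fit_cumulative_slope(masses, 0.8)
```

The fitted index recovers the input slope up to the Poisson scatter of the sparsely populated high-mass tail, which is why the text quotes a fit uncertainty in addition to the mass-driven one.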
A slope uncertainty driven by uncertainties on the core masses, referred to below as mass-driven uncertainty, is computed from two thousand randomly generated CMFs, taking for each core a uniformly random mass in the range $[M_{\rm min}, M_{\rm max}]$. For each core, $M_{\rm max}$ and $M_{\rm min}$ are the maximum and minimum masses, respectively, computed from its measured flux, estimated temperature, and dust opacity, plus or minus the associated $1\,\sigma$ uncertainties (see Tables~\ref{appendixtab:core detection table}--\ref{appendixtab:derived core table}, and Sect.~\ref{sect:mass estimation}). The mass-driven uncertainties of the power-law indices range from $\sigma\simeq 0.03$ to $0.06$. In addition, we estimated a slope uncertainty due to the power-law fit, referred to as the fit uncertainty, from the $\chi^2$ uncertainty and by varying the initial point of the slope fit using the 90\% completeness level and its uncertainty (see \cref{tab:cmf and cores} and \cref{appendixfig:completeness}). The fit uncertainty of the power-law indices is about $\sigma\simeq 0.03$. The global uncertainties of the power-law indices are finally taken to be the quadratic sum of the mass-driven uncertainties and the fit uncertainties (see Tables~\ref{tab:cmf and cores} and \ref{tab:tests cmf}). When taking into account these global uncertainties, the CMF slopes measured in \cref{fig:cmfs software} are much shallower than the high-mass end, $>$1~$\ensuremath{M_{\odot}}$, of the canonical IMF, which is often represented by a power-law function close to $N(>\log M)\propto M^{\rm -1.35}$ \citep{salpeter1955, kroupa2001, chabrier2005}. Using the shape of the IMF as a reference, these CMFs are qualified as top-heavy. They are overpopulated in high-mass cores relative to intermediate-mass cores, and in intermediate-mass cores relative to low-mass cores (see \cref{fig:cmfs software}).
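The mass-driven uncertainty procedure can be sketched as follows. The $\pm$20\% per-core mass bounds used below are an illustrative assumption standing in for the measured $[M_{\rm min}, M_{\rm max}]$ intervals, and the synthetic sample is ours.

```python
import numpy as np

def slope_of(masses, m_complete=0.8):
    """Cumulative power-law slope, N(> log M) ~ M^alpha, above m_complete."""
    m = np.sort(masses[masses >= m_complete])
    n_above = np.arange(m.size, 0, -1)
    return np.polyfit(np.log10(m), np.log10(n_above), 1)[0]

def mass_driven_uncertainty(m_min, m_max, n_trials=2000, seed=1):
    """Each trial draws one uniformly random mass per core in
    [m_min, m_max] and refits the slope; the standard deviation of
    the n_trials fitted slopes is the 1-sigma mass-driven uncertainty."""
    rng = np.random.default_rng(seed)
    slopes = [slope_of(rng.uniform(m_min, m_max)) for _ in range(n_trials)]
    return np.std(slopes)

# Illustrative per-core bounds: +/-20% around 205 synthetic power-law masses.
rng = np.random.default_rng(0)
m = 0.8 * (rng.pareto(1.0, size=205) + 1.0)
sigma = mass_driven_uncertainty(0.8 * m, 1.2 * m)
```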
\begin{figure}[ht]
\centering
\includegraphics[width=1.\linewidth]{bootstrap_KSmetric.png}
\caption{Bootstrapping probability density histogram of the $N=10^4$ slopes fitted using the \cite{alstott2014} KS metric, measured for data sets generated using the metric parameters obtained for our sample of 205 cores. The black and red vertical lines indicate the resulting slope coefficient of $\alpha=-0.98\pm0.10$ and the \textsl{getsf} fitted slope of $\alpha=-0.95\pm0.04$ (orange area corresponding to 1$\sigma$, see Sect.~\ref{sect:top heavy cmf}). The 1, 2, and 3$\sigma$ dispersions are estimated from the bootstrapping (shaded gray areas). The Salpeter slope (dashed vertical magenta line) is rejected with a probability of 99.98\%.}
\label{fig:bootstrap powerlaw}
\end{figure}
We use statistical tests to compare the \textsl{getsf} CMF with either the \textsl{GExt2D} CMF or the Salpeter IMF. A two-sample Kolmogorov-Smirnov (KS) test is used to assess the likelihood that two distributions are drawn from the same parent sample (null hypothesis). In the case of the \textsl{getsf} and \textsl{GExt2D} CMFs, above the \textsl{GExt2D} completeness level we found no significant evidence that the core samples are drawn from different populations (with a KS statistic of $0.09$ and a p-value of $0.91$). We also used a statistical library based on the KS metric and dedicated to probability laws fitted by power-laws \citep{clauset2009, alstott2014} to estimate the robustness of our linear regression fit. Run on the \textsl{getsf} CMF of W43-MM2\&MM3 shown in \cref{fig:cmfs software}a, this toolbox indicates that, if fitted by a power-law, the best-fit parameters would be a slope coefficient of $\alpha=-0.95\pm0.08$ above a minimum mass of $0.61~\ensuremath{M_{\odot}}$. This result is in good agreement with the regression fit performed on the \textsl{getsf} sample of cores, above our completeness level of $0.8~\ensuremath{M_{\odot}}$.
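The two-sample KS statistic underlying this comparison can be computed with numpy alone; the samples below are synthetic stand-ins for the \textsl{getsf} (205 cores) and \textsl{GExt2D} (152 cores) catalogs, drawn from the same power-law parent.

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical cumulative distributions of the two samples."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / x.size
    cdf_y = np.searchsorted(y, grid, side="right") / y.size
    return np.max(np.abs(cdf_x - cdf_y))

# Two samples from the same Pareto parent: the statistic stays small,
# consistent with the null hypothesis of a common parent population.
rng = np.random.default_rng(2)
stat = ks_two_sample(1.1 * (rng.pareto(1.0, size=205) + 1.0),
                     1.1 * (rng.pareto(1.0, size=152) + 1.0))
```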
Figure~\ref{fig:bootstrap powerlaw} presents the bootstrapping probability density histogram of the $N=10^4$ slopes fitted using the KS metric of \cite{alstott2014}, measured for data sets generated using the metric parameters obtained for our sample of 205 cores. The resulting slope coefficient is slightly steeper, $\alpha=-0.98\pm0.10$, but still consistent with the value found by the KS metric alone and with the fitted value of \cref{fig:cmfs software}a. Moreover, the sigma value obtained with this bootstrapping allows the Salpeter slope to be rejected with a probability of 99.98\%, that is, beyond the $3.5\,\sigma$ level. \subsection{Robustness against our assumptions} \label{sect:robustness against our assumptions} Figure~\ref{fig:cmf tests} shows various W43-MM2\&MM3 ridge CMFs built for a different core catalog, under different assumptions of dust temperature and emissivity, and fitted over a different mass range. For each CMF, we generated random CMFs by varying core fluxes, dust temperatures, and opacities and computed the associated $3\,\sigma$ global uncertainty of their fit. We discuss below the robustness of the observed CMF slope against the chosen extraction strategy and assumptions behind the measurements of core masses. Comparing Figs.~\ref{fig:cmfs software}a--b shows that the CMF of the W43-MM2\&MM3 ridge is top-heavy regardless of the source extraction technique, either \textsl{getsf} or \textsl{GExt2D} (see \cref{tab:cmf and cores}). Of the 100 cores detected by both software packages (see \cref{appendixtab:core detection table}), 90\% have no significant differences in their integrated fluxes. Above the \textsl{GExt2D} completeness limit, they constitute the $1.1-69.9~\ensuremath{M_{\odot}}$ range of the CMF. This striking similarity argues for the robustness of core fluxes measured with different extraction methods, as long as they have a similar core definition.
Furthermore, when comparing \cref{fig:cmf tests}a and \cref{fig:cmfs software}a, our extraction strategy, which is based on \texttt{bsens}\xspace images denoised by \textsl{MnGSeg}, does not seem to quantitatively impact the W43-MM2\&MM3 CMF. Figure~\ref{fig:cmf tests}a indeed presents the CMF of cores extracted in a companion paper (Paper~V; Louvet et al. in prep.) from the \texttt{original}\xspace \& \texttt{cleanest}\xspace images of the W43-MM2 and W43-MM3 protoclusters\footnote{ For consistency, we applied our filtering and analysis methods to the \texttt{original}\xspace \& \texttt{cleanest}\xspace core catalog (see Sects.~\ref{sect:extraction of compact sources} and \ref{sect:nature of compact sources}) and made the same assumptions for the mass estimates (see Sect.~\ref{sect:mass estimation}). The resulting catalog of $\sim$75 cores is thus slightly different from that obtained in Paper~V (Louvet et al. in prep.).}. In agreement with the noise level of the \texttt{cleanest}\xspace images at 1.3~mm (see \cref{tab:sensivity stat}), the 90\% completeness limit of the \texttt{cleanest}\xspace core catalog is two times larger than that of the \texttt{denoised}\xspace \& \texttt{bsens}\xspace CMF. The power-law index of the high-mass end of the \texttt{original}\xspace \& \texttt{cleanest}\xspace CMF is close to, and even slightly shallower than, that of the \texttt{denoised}\xspace \& \texttt{bsens}\xspace CMF (see \cref{tab:tests cmf}). The $\sim$75 cores detected in the \texttt{cleanest}\xspace image are in fact among the most massive cores listed in \cref{appendixtab:derived core table}. Moreover, the consistency between the two CMFs reflects the fact that the \texttt{original}\xspace \& \texttt{cleanest}\xspace cores have fluxes within 15\% of their corresponding fluxes in \cref{appendixtab:core detection table} on average, and within 50\% at worst.
Beyond the uncertainty of flux measurements used to compute the core masses, the main uncertainties of CMFs arise from the mass-averaged dust temperature and dust opacity used to convert fluxes into masses (see Eq.~\ref{appendixeq:core mass}, \cref{appendixfig:dust temperature map}, and \cref{appendixtab:core detection table}). If the central heating by protostars and the self-shielding of pre-stellar cores were not taken into account, the core temperatures would homogeneously be $\overline{T_{\rm dust}}\simeq23\pm 2$~K. The CMF of \textsl{getsf}-extracted cores with a constant temperature (\cref{fig:cmf tests}b) has a slightly shallower slope than when the individual dust temperature estimates are used (\cref{fig:cmfs software}a, see \cref{tab:tests cmf}). We also determined that the CMF flattening is robust against dust opacity variations. As the dust opacity is expected to increase with core density \citep[e.g.,][]{OssenkopfHenning1994}, we made a test assuming a linear relation with mass, starting at $\kappa_{\rm 1.3mm}= 0.007$~cm$^2$\,g$^{-1}$ for the lowest-density core ($0.12~\ensuremath{M_{\odot}}$) and ending at $\kappa_{\rm 1.3mm}= 0.015$~cm$^2$\,g$^{-1}$ for the highest-density core ($69.9~\ensuremath{M_{\odot}}$). The resulting CMF has a power-law index steeper than that of \cref{fig:cmfs software}a, but still shallower than the Salpeter slope (see \cref{fig:cmf tests}c and \cref{tab:tests cmf}). With all tests summarized in Tables~\ref{tab:cmf and cores} and \ref{tab:tests cmf}, we can state that the W43-MM2\&MM3 CMF is top-heavy with a power-law index within the $\alpha=[-1.02;-0.83]$ range. The resulting 1$\,\sigma$ uncertainty is estimated to be about $\pm 0.08$, still excluding the Salpeter slope. \section{Discussion on the origin of stellar masses} \label{sect:discussion on the origin of stellar masses} In Sect.~\ref{sect:classical interpretation}, we compare the CMF of the W43-MM2\&MM3 mini-starburst to published CMF studies.
In the framework of several scenarios, we then predict the IMF that would result from the observed W43-MM2\&MM3 CMF. In particular, we apply various mass conversion efficiencies (Sects.~\ref{sect:classical interpretation}--\ref{sect:mass conversion eff}) and various subfragmentation scenarios (Sect.~\ref{sect:core subfrag}), and mention the other processes to consider (Sect.~\ref{sect:other processes}). Table~\ref{tab:tests cmf} lists the parameters of the W43-MM2\&MM3 IMFs derived and fitted under these various assumptions. \begin{figure}[ht] \centering \includegraphics[width=1.\linewidth]{CMF-comparison.png} \caption{Comparison of the W43-MM2\&MM3 CMF (blue histogram, see \cref{fig:cmfs software}a) with power-laws fitted to the high-mass end, $>$1~$\ensuremath{M_{\odot}}$, of CMFs measured in three star-forming regions. The prototypical CMF of low-mass star-forming regions, derived in Aquila \citep[green line,][]{konyves2015}, resembles the Salpeter slope of the canonical IMF \citep[dashed magenta line,][]{salpeter1955}. In contrast, the CMFs in W43-MM2\&MM3 and in the two high-mass star-forming protoclusters W43-MM1 and G28.37+0.07 \citep[red dot-dashed and orange dotted lines,][]{motte2018b,kong2019} are top-heavy.} \label{fig:CMF comparison} \end{figure} \subsection{In the framework of the classical interpretation} \label{sect:classical interpretation} CMFs measured in low-mass star-forming regions are generally strikingly similar to the IMF \citep[e.g.,][]{motte1998, enoch2008, konyves2015}. In contrast, the CMFs of Figs.~\ref{fig:cmfs software}a--b are much shallower than the high-mass end of the canonical IMF. The usual methodology to compare observed CMFs to the IMF is to assume a one-to-one correspondence between cores and stars and a given mass conversion efficiency of core mass into star mass.
CMF studies of low-mass, low-density cores, $10^5-10^7$~cm$^{-3}$, often derived mass conversion efficiencies of $\epsilon_{\rm core}\sim 30-40\%$ \citep[e.g.,][]{alves2007, konyves2015}. We could expect a larger mass conversion efficiency for our extreme-density cores, $\gtrsim5\times 10^7$~cm$^{-3}$ (see \cref{appendixtab:derived core table}). Therefore, we assume here a mass conversion efficiency of $\epsilon_{\rm core} =50\%$, following \cite{motte2018b}. With this efficiency, the mass range of $0.8-69.9$~\ensuremath{M_{\odot}}, where the \textsl{getsf} sample is 90\% complete, covers the progenitors of low- to high-mass stars, $0.4-35~\ensuremath{M_{\odot}}$. Fitting the CMF high-mass end, which would then formally start above $1~\ensuremath{M_{\odot}}$ or $2~\ensuremath{M_{\odot}}$, would lead to a slightly steeper slope, $\alpha$ values between $-0.98\pm 0.06$ and $-1.07\pm 0.07$, still shallower than the Salpeter slope of the canonical IMF (see \cref{tab:cmf and cores} for a fit above $2~\ensuremath{M_{\odot}}$). As shown in Figs.~\ref{fig:cmfs software}a and \ref{fig:cmf tests}d, the \textsl{getsf} CMFs for all cores and for those that should form low- to intermediate-mass stars are similarly flat (see \cref{tab:cmf and cores}). We refrain from fitting the CMF of high-mass cores alone because it has too few cores to be statistically robust. The flattening observed for the W43-MM2\&MM3 CMF is a general trend in all mass regimes. Therefore, it cannot solely be attributed to high-mass stars that could form by processes different from those of low-mass stars \citep[e.g.,][]{motte2018a}. Figure~\ref{fig:CMF comparison} compares the high-mass end, $>$1~$\ensuremath{M_{\odot}}$, of the W43-MM2\&MM3 CMF with a few reference CMF studies obtained in one low-mass star-forming region, Aquila \citep{konyves2015}, and two high-mass protoclusters, W43-MM1 and G28.37+0.07 \citep{motte2018b, kong2019}. 
All published studies of core populations found in the nearby, low-mass star-forming regions have argued for the interpretation that the shape of the IMF can simply be derived directly from the CMF \citep[e.g.,][]{motte1998, andre2014}. Here we use a core definition and extraction tools very similar to those used in these studies. In particular, \textsl{getsf} \citep{men2021getsf} has the same philosophy as the software used to extract cores from \textit{Herschel} images, \textsl{getsources} and \textsl{CuTEx} \citep{men2012multi, molinari2011}, and ground-based images, \textsl{MRE-GCL} \citep{motte2007}. Even so, the CMF measured for the W43-MM2\&MM3 ridge is different from the CMF found in low-mass star-forming regions, including Aquila, which was studied in detail with \textit{Herschel} \citep[][see \cref{fig:CMF comparison}]{konyves2015}. It has a high-mass end shallower than most published CMFs, and thus shallower than the IMF of \cite{salpeter1955}. It only resembles, for now, the CMFs observed for the W43-MM1 mini-starburst ridge \citep{motte2018b} and the G28.37+0.07 filament \citep{kong2019} (see \cref{fig:CMF comparison}). The CMF results obtained for both the W43-MM2\&MM3 and W43-MM1 ridges indicate that their IMF will be abnormally top-heavy and/or that the mapping between their core and star masses will not be direct. In the framework of the first interpretation, we assume that the shape of the IMF is directly inherited from the CMF. The results from these two mini-starbursts would thus call into question the universality of the IMF, which is now being debated \citep[e.g.,][]{hopkins2018}. In the framework of the second interpretation, several processes could, in principle, help reconcile the top-heavy CMF observed in the W43-MM2\&MM3 ridge with a Salpeter-like IMF. We investigate below the effect of several of them: mass conversion efficiency, core subfragmentation, star formation history, and disk fragmentation. \begin{figure}[htbp!]
\centering \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{fragmentation_CMF_efficiency.png} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{fragmentation_CMF_Jeans+Analytic.png} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.\textwidth]{fragmentation_CMF_eta1p5.png} \end{minipage} \caption{IMFs resulting from various mass conversion efficiencies and fragmentation scenarios, all applied to the W43-MM2\&MM3 CMF of \cref{fig:cmfs software}a (blue histogram). Panel \textsl{(a)}: IMFs predicted for a mass conversion efficiency that is constant at 50\% (red histogram), linear with the mass (yellow histogram), or dependent on the core density \citep[][cyan histogram]{louvet2014} (see Sect.~\ref{sect:mass conversion eff}). Panel \textsl{(b)}: IMFs predicted by the two extreme fragmentation scenarios in Sect.~\ref{sect:core subfrag}: thermally supported Jeans fragmentation (black) and the analytic fragmentation function leading to a Salpeter slope (green histogram). Panel \textsl{(c)}: IMFs predicted by the hierarchical cascade scenario of Thomasson et al. (subm.), leading to binary fragments. A 2:1 mass partition and two mass conversion efficiencies, either constant at 50\% (orange histogram) or linear with the mass (gray histogram), are assumed. The number of fragments is rounded down to the nearest integer, with a minimum of 1.} \label{fig:fragmentation} \end{figure} \subsection{Using different mass conversion efficiencies} \label{sect:mass conversion eff} In the present paper we define cores as emission peaks whose sizes are limited by their structured background and neighboring cores.
In dynamical clouds, however, the mass, the structure, and even the existence of these cores will evolve over time, as they are expected to accrete or dissolve gas from their background and split into several components or merge with their neighbors \citep[e.g.,][]{smith2014, motte2018a, vazquez2019}. To account for both these static and dynamical views of cores, we use different functions for the conversion efficiency of core mass into star mass and predict the resulting IMFs. We first assume a mass conversion efficiency that accounts for the mass loss associated with protostellar outflows in the core-collapse model \citep{matznerMckee2000}. With a mass conversion efficiency that is constant with core mass, the IMF has the same shape as the CMF and is simply shifted to lower masses. As mentioned in Sect.~\ref{sect:classical interpretation}, we choose a mass conversion efficiency of $\epsilon_{\rm core} =50\%$. Figure~\ref{fig:fragmentation}a displays the IMF resulting from cores whose distribution is shown in \cref{fig:cmfs software}a. The predicted IMF presents, as expected, the same high-mass end slope above $\sim$0.4$~\ensuremath{M_{\odot}}$ (see \cref{tab:tests cmf}). In the case of dynamical clouds, the competitive or gravitationally driven accretion process allows high-mass cores to more efficiently accrete gas mass from their surroundings than low-mass cores \citep[e.g.,][]{bonnell2006, clarkWhitworth2021}. This generally leads to efficiencies of the core formation and mass conversion that depend, to the first order, on the clump and core masses, respectively. We use two analytical models for the mass conversion efficiency. Since the gravitational force scales linearly with mass, as a first toy model we assumed a linear relation between the mass conversion efficiency and the core mass, normalized by its maximum value: $\epsilon_{\rm core} = \frac{M}{69.9\,\ensuremath{M_{\odot}}}\times 100\%$.
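The effect of this linear-efficiency toy model on a pure power-law CMF can be worked out analytically: since $m = \epsilon_{\rm core}\,M = M^2/M_{\rm max}$, one gets $N(>m)\propto m^{\alpha/2}$, so the slope is halved (flattened). A sketch, under the simplifying assumption of a single power-law CMF (the fitted value of $-0.59\pm0.04$ differs from $-0.475$ because the observed CMF is not a pure power law and the fit range is limited):

```python
def eps_core_linear(mass, m_max=69.9):
    """Toy model from the text: mass conversion efficiency linear in core
    mass, reaching 100% for the most massive (69.9 Msun) core."""
    return mass / m_max

def star_mass(mass, m_max=69.9):
    """Stellar mass under the linear-efficiency toy model: m = M^2 / M_max."""
    return eps_core_linear(mass, m_max) * mass

def predicted_imf_slope(cmf_alpha=-0.95):
    """For a pure power-law CMF, N(> M) ~ M^alpha, the mapping
    m = M^2 / M_max gives N(> m) ~ m^(alpha / 2): the slope is halved."""
    return cmf_alpha / 2.0
```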
The IMF resulting from this relation applied to the CMF of \cref{fig:cmfs software}a presents a much shallower high-mass end slope (see \cref{fig:fragmentation}a and \cref{tab:tests cmf}). As a second toy model, we assumed a mass conversion efficiency depending on the mean volume density of cores, normalized by its maximum value: $\epsilon_{\rm core} = \left(\frac{n_{\rm {H_2}}}{2.2\times 10^8\,{\rm cm}^{-3}}\right)^{0.9}\times 100\%$. This quasi-linear relation is an extrapolation at 3\,400~au scales (the typical size of our cores) of the relation observed in W43-MM1 for large cloud structures, $\sim$1~pc \citep{louvet2014}. The IMF resulting from this toy model has a high-mass end slope slightly shallower than that of the CMF in \cref{fig:cmfs software}a (see \cref{fig:fragmentation}a and \cref{tab:tests cmf}). Therefore, treating a core not as a static, isolated cloud structure but as a structure that accretes mass from its surrounding cloud at a rate depending on its mass and location in the cloud tends to flatten the high-mass end of the predicted IMF relative to the observed CMF of cores. This result is in qualitative agreement with analytical models following the evolution of the CMF through the growth of core mass expected in dynamical clouds \citep{dib2007, hatchellFuller2008, clarkWhitworth2021}. \subsection{Using different scenarios of core subfragmentation} \label{sect:core subfrag} The definition of a core is also closely associated with the angular resolution of the observed (or simulated) images of a protocluster \citep[see][]{leeHennebelle2018a, pelkonen2021, louvet2021simu}. The turbulent subfragmentation within these core entities cannot be neglected, but fragmentation functions are barely constrained. We therefore assumed three extreme fragmentation scenarios after applying a 50\% mass conversion efficiency to the W43-MM2\&MM3 CMF displayed in \cref{fig:cmfs software}a.
Figure~\ref{fig:fragmentation}b presents the resulting distribution of fragment masses, here called core fragmentation mass function, as in \cite{elmegreen2011}, and sometimes also called system mass function \citep{clarkWhitworth2021}. Since a mass conversion efficiency is applied beforehand, the core fragmentation mass function could directly correspond to the IMF. The first and most extreme fragmentation scenario is the Jeans fragmentation of a core only supported by its thermal pressure. Under this hypothesis and with a mass conversion efficiency of $\epsilon_{\rm core}=50\%$, we assume a mass equipartition between fragments; the number of fragments is thus half the ratio of the core mass to its Jeans mass, $N_{\rm frag}(M) = 0.5 \times \frac{M}{M_{\rm Jeans}}$. We took the measured temperature and FWHM size of our cores (see Tables~\ref{appendixtab:core detection table}-\ref{appendixtab:derived core table}) and computed the Jeans mass of fragments within cores with masses ranging from $2~\ensuremath{M_{\odot}}$ to $\sim$70$~\ensuremath{M_{\odot}}$. In the W43-MM2\&MM3 ridge, most cores are super-Jeans and the most massive cores, in the $16-70~\ensuremath{M_{\odot}}$ range, would fragment into $50-85$ objects. The resulting IMF is much steeper than the CMF of \cref{fig:cmfs software}a and even steeper than the Salpeter slope of the canonical IMF (see \cref{fig:fragmentation}b and \cref{tab:tests cmf}). As a second extreme scenario, we found that a fragmentation function of the form $N_{\rm frag}(M) = \left(\frac{\epsilon_{\rm core}\times M}{0.12\;\ensuremath{M_{\odot}}}\right)^{0.4}$ is necessary to steepen the high-mass end slope of the CMF to a core fragmentation mass function and IMF with a slope close to Salpeter (see \cref{fig:fragmentation}b).
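These two fragmentation prescriptions can be encoded directly. This is a sketch of the two formulas above only; the Jeans masses used in the paper come from the measured core temperatures and sizes, so the Jeans mass appearing in the check is an arbitrary illustrative value.

```python
def n_frag_jeans(core_mass, m_jeans):
    """First scenario: thermal Jeans fragmentation with mass equipartition
    and eps_core = 50%, i.e. N_frag(M) = 0.5 * M / M_Jeans."""
    return 0.5 * core_mass / m_jeans

def n_frag_analytic(core_mass, eps=0.5, m_min=0.12):
    """Second scenario: the analytic fragmentation function
    N_frag(M) = (eps * M / 0.12 Msun)^0.4."""
    return (eps * core_mass / m_min) ** 0.4
```

Evaluating the analytic function at $0.24$, $16$, and $\sim$70$~\ensuremath{M_{\odot}}$ returns one, about five, and about ten fragments, respectively.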
This analytical function predicts a single star in $0.24~\ensuremath{M_{\odot}}$ cores, about five stars in $16~\ensuremath{M_{\odot}}$ cores, and about ten stars in the $\sim$70~$\ensuremath{M_{\odot}}$ core of W43-MM2. These fragmentation prescriptions may apply to evolved cores (referred to as IR-bright), which are observed with a high level of fragmentation \citep{broganHunter2016, palau2018, tang2021}. However, most of the W43-MM2\&MM3 cores are expected to be much younger \citep{motte2003}. Since the subfragmentation of young cores (referred to as IR-quiet or IR-dark) is rarely observed and therefore very poorly constrained, we simply assume similar levels of fragmentation from young massive clumps to cores, that is from $\sim$20\,000~au to $\sim$2\,000~au scales, and from cores to fragments, that is from $\sim$2\,000~au to $\sim$500~au scales. If we follow studies by \cite{bontemps2010}, \cite{palau2013}, \cite{busquet2016}, and \cite{louvet2019} showing that high-mass clumps generally fragment into at most two cores, the two extreme fragmentation scenarios proposed above are both unlikely to be taking place in the W43-MM2\&MM3 ridge. A third fragmentation scenario is derived from a new type of model aimed at constraining the hierarchical cascade, also called the fragmentation cascade, in observed and simulated clouds (e.g., Thomasson et al. subm.). These studies are based on the finding that the density structure of molecular clouds is hierarchical, and more precisely multi-fractal \citep{elmegreen2001fract, robitaille2020}, and that the spatial distribution of stars is also hierarchical \citep{joncour2017, joncour2018}. Thomasson et al. (subm.) studied the fractal hierarchical cascade of the intermediate-mass star-forming region NGC~2264, using \textit{Herschel}-based column density maps.
The authors found, for clustered clumps, a fractal fragmentation index of $\eta \simeq 1.4\pm0.1$, from the clump to the core scales and more precisely from 13\,000~au to 5\,000~au. A fractal index of $\eta=1.4$ means that for every factor of $2$ decrease in physical scale, the number of fragments multiplies by $1.4$. If we use this fractal index to extrapolate to scales ranging from 2\,500~au to 500~au and generally apply it to all of our cores, we expect to find about two fragments at 500~au resolution within our $0.12-69.9~\ensuremath{M_{\odot}}$ cores. Below this 500~au scale, we assume that disk fragmentation dominates turbulent fragmentation and that therefore the hierarchical cascade stops. The distribution of the core mass between subfragments, hereafter called mass partition, is not yet well constrained; we assume below two different cases. The simplest case assumes a uniformly random mass distribution. As shown by \cite{swift2008}, among others, with this mass partition the high-mass end slopes of the core fragmentation mass function of fragments and the resulting IMF cannot change much from those of the CMF of their parental cores. For the second case we can assume a very unbalanced mass partition. A preliminary study of 11 W43-MM2\&MM3 core systems\footnote{ At a $2\,\Theta_{\rm beam}$ distance, paired systems are cores [\#1, \#7], [\#9, \#94], [\#12, \#28], [\#35, \#217], [\#80, \#103], [\#112, \#131], [\#135, \#142], [\#157, \#171], [\#155, \#285]. At a $4\,\Theta_{\rm beam}$ distance, multiple systems are cores [\#2, \#135, \#142], [\#3, \#43], [\#86, \#98], and [\#112, \#131, \#204].} identified within $<\,2\,\Theta_{\rm beam}$ distances (or 5\,000~au in \cref{fig:1.3mm and trichrone}a) suggests mass partition fractions close to 2:1. Interestingly, this is consistent with observations of other high-mass core systems \citep{busquet2016, motte2018b}.
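The fragment count implied by the fractal cascade described above can be checked with a one-liner; the extrapolation range and index are those quoted in the text.

```python
import math

def n_fragments_fractal(scale_from_au, scale_to_au, eta=1.4):
    """Fractal cascade described in the text: the number of fragments
    multiplies by eta for every factor-2 decrease in physical scale,
    i.e. N = eta ** log2(scale_from / scale_to)."""
    return eta ** math.log2(scale_from_au / scale_to_au)

# Extrapolating from 2500 au to 500 au with eta = 1.4 gives the
# "about two fragments" per core quoted in the text (N ~ 2.2).
n = n_fragments_fractal(2500, 500)
```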
Such an unbalanced mass partition is also predicted in the competitive accretion model of \cite{clarkWhitworth2021}, which shows that the large majority of the core mass is used to increase the masses of existing fragments. This unbalanced mass partition and a mass conversion efficiency of $\epsilon_{\rm core} = 50\%$, applied to the W43-MM2\&MM3 CMF, slightly steepen the high-mass end slope (see \cref{fig:fragmentation}c and \cref{tab:tests cmf}). As the last and most complex test, we assumed the third fragmentation scenario with a 2:1 mass partition and a mass conversion efficiency depending on the core mass, $\epsilon_{\rm core} \propto M$. The resulting IMF is top-heavy with a slope even shallower than that in \cref{fig:cmfs software}a. Interestingly, these assumptions tend to agree with the model of \cite{clarkWhitworth2021}, which combines turbulent fragmentation and competitive accretion. The high-mass end of the predicted core fragmentation mass functions is broadly invariant over time because the formation of new multiple cores balances the accretion of the gas mass onto existing cores. \subsection{In the framework of other processes} \label{sect:other processes} Beyond the turbulent fragmentation discussed in Sect.~\ref{sect:core subfrag}, disk fragmentation and N-body interactions could further alter the shape of the core fragmentation mass function and thus of the resulting IMF of single stars. Stellar multiplicity studies of low- to intermediate-mass systems have generally revealed mass equipartition \citep{duquennoy1991}, which would not impact the slope of the IMF high-mass end \citep[e.g.,][]{swift2008}. In contrast, given the low number statistics of high-mass star studies, the mass partition of stellar systems that contain high-mass stars is poorly constrained \citep{ducheneKraus2013}.
Because of the lack of constraints on disk fragmentation and on N-body interactions, we did not apply a model to the core fragmentation mass function to determine the IMF of single stars. The other process invoked to reconcile the observed top-heavy CMF high-mass end with a Salpeter-like CMF is the continuous formation of low-mass cores versus short bursts of formation of high-mass stars. In the case of dense clumps or ridges, most high-mass cores could indeed form in short bursts of $\sim$10$^5$~years, while lower-mass cores would form more continuously over longer periods of time. We recall that the IMF of young stellar clusters of a few $10^6$~years is the sum of several instantaneous CMFs built over one to two free-fall times with $\tau_{\rm free-fall} \simeq 10^5$~years. Before and after a burst with a single top-heavy CMF, about ten star formation events of more typical CMFs could develop, diluting the top-heavy IMF resulting from the star formation burst into an IMF with a close-to-canonical shape. Studying the evolution of the CMF shape over time is necessary to quantify this effect, and is one of the goals of the ALMA-IMF survey \citep[see Paper~I and Paper~V;][Louvet et al. in prep.]{motte2021}. In conclusion, it is difficult to predict the resulting IMF from the observed CMF in the W43-MM2\&MM3 ridge. However, the various mass conversion efficiencies and fragmentation scenarios discussed here suggest that the high-mass end of the IMF could remain top-heavy. This prediction will still need to be tested against more robust prescriptions for the mass conversion efficiency and core subfragmentation, as well as against better constrained disk fragmentation and burst-versus-continuous star formation scenarios. If it is confirmed that the predicted IMF of W43-MM2\&MM3 is top-heavy, this result will clearly challenge the IMF universality.
If we dare to generalize, the IMFs emerging from starburst events could inherit their shape from that of their parental CMFs and could all be top-heavy, disproving the IMF universality. \section{Summary and conclusion} \label{sect:conclusions} We used ALMA images of the W43-MM2\&MM3 mini-starburst to make an extensive census of cores and derive its CMF. Our main results and conclusions can be summarized as follows: \begin{itemize} \item We combined the 12~m array images of the W43-MM2 and W43-MM3 protoclusters that were individually targeted by the ALMA-IMF Large Program \citep[see Sect.~\ref{sect:obs and DR} and \cref{tab:observation table};][]{motte2021,ginsburg2021}. At 1.3~mm, the resulting $\rm 4.2~pc\times 3.2~pc$ mosaic has a spatial resolution of $\sim$0.46\arcsec, or 2\,500~au. The 3~mm mosaic is wider, $\rm 7.3~pc\times 5.3~pc$, with a similar angular resolution but a mass sensitivity about three times lower (see \cref{appendixfig:3mm image with cores}). % \item To have the most complete and most robust sample of cores possible, we used both the best-sensitivity and the line-free ALMA-IMF images and removed part of the cirrus noise with \textsl{MnGSeg} (see Sect.~\ref{sect:extraction of compact sources}). This new strategy proved to be efficient both in increasing the number of sources detected and in improving the accuracy of their measurements, when applied to present observations and synthetic images (see \cref{tab:sensivity stat} and \cref{appendixsect:mngseg}). In the end, it allows the $5\,\sigma$ detection of point-like cores with gas masses of $\sim$0.20~$\ensuremath{M_{\odot}}$ at 23~K (see \cref{fig:1.3mm and trichrone}a). % \item We extracted 1.3~mm compact sources using both the \textsl{getsf} and \textsl{GExt2D} software packages. \textsl{getsf} provides a catalog of 208 objects, which have a median FWHM size of 3\,400~au (see \cref{appendixtab:core detection table} and Figs.~\ref{fig:1.3mm and trichrone}--\ref{fig:fwhm distribution}). 
The 100 cores extracted by both \textsl{getsf} and \textsl{GExt2D} have sizes, and thus fluxes, that are on average consistent to within 30\%. % \item The nature of the W43-MM2\&MM3 sources is investigated to exclude free-free emission peaks and correct source fluxes for line contamination (see Figs.~\ref{fig:freefree}--\ref{fig:hotcore} and Sects.~\ref{sect:freefree contamination}--\ref{sect:line contamination}). The resulting catalog contains 205 \textsl{getsf} cores (see \cref{appendixtab:derived core table}). Their masses are estimated and, for the most massive cores, they are corrected for their optically thick thermal dust emission (see Eq.~\ref{eq:optically thick mass} in Sect.~\ref{sect:mass estimation} and \cref{appendixsect:detailed approach for the mass calculation}). The core mass range is $0.1-70~\ensuremath{M_{\odot}}$ and the \textsl{getsf} catalog is 90\% complete down to $0.8~\ensuremath{M_{\odot}}$ (see \cref{appendixsect:completeness simulation}). % \item The W43-MM2\&MM3 CMFs derived from the \textsl{getsf} and \textsl{GExt2D} core samples are both top-heavy with respect to the Salpeter slope of the canonical IMF (see Sect.~\ref{sect:top heavy cmf} and \cref{fig:cmfs software}). The high-mass end of the \textsl{getsf} CMF is well fitted, above its 90\% completeness limit, by a power-law of the form $N(>\log M)\propto M^{\alpha}$, with $\alpha = -0.95 \pm 0.04$ (see \cref{tab:cmf and cores}). The error bars include the effect of uncertainties on core mass, fit, and completeness level. The CMF high-mass end thus cannot be represented by a function resembling the Salpeter IMF (see also \cref{fig:bootstrap powerlaw}). We showed that the shape of the CMF is robust against flux differences arising from the map or software chosen to extract cores, and against variations of the dust emissivity and temperature (see Sect.~\ref{sect:robustness against our assumptions}, \cref{fig:cmf tests} and \cref{tab:tests cmf}).
Our result, in striking contrast with most CMF studies, argues against the universality of the CMF shape. % \item We used different functions of the conversion efficiency from core to stellar masses to predict the IMF resulting from the W43-MM2\&MM3 CMF (see Sect.~\ref{sect:discussion on the origin of stellar masses}). While in the framework of the core-collapse model the slope of the IMF high-mass end remains unchanged, it becomes shallower for competitive accretion or hierarchical global collapse models (see \cref{fig:fragmentation}a). We explored several fragmentation scenarios, which all slightly steepen the high-mass end of the predicted IMF (see \cref{fig:fragmentation}b--c). It is possible to construct an artificial analytical model that predicts an IMF with the Salpeter slope. However, the best-constrained fragmentation model, which is a hierarchical cascade with 2:1 mass partition, predicts an IMF slope that cannot be reconciled with the canonical value (see \cref{fig:fragmentation}c). % Most scenarios tested here suggest that the resulting IMF could remain top-heavy. More constrained functions of the mass conversion efficiency, core subfragmentation, disk fragmentation, and burst development are required to provide a more definitive prediction. However, if this result is confirmed, the IMFs emerging from starburst events could inherit their shape from that of their parental CMFs and be top-heavy, thus challenging the IMF universality. \end{itemize} \begin{acknowledgements} This paper makes use of the ALMA data ADS/JAO.ALMA\#2017.1.01355.L. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
% This project has received funding from the European Research Council (ERC) via the ERC Synergy Grant \textsl{ECOGAL} (grant 855130), from the French Agence Nationale de la Recherche (ANR) through the project \textsl{COSMHIC} (ANR-20-CE31-0009), and the French Programme National de Physique Stellaire and Physique et Chimie du Milieu Interstellaire (PNPS and PCMI) of CNRS/INSU (with INC/INP/IN2P3). % YP acknowledges funding from the IDEX Universit\'e Grenoble Alpes under the Initiatives de Recherche Strat\'egiques (IRS) “Origine de la Masse des \'Etoiles dans notre Galaxie” (OMEGa). % YP and GB acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, for the Project “The Dawn of Organic Chemistry” (DOC), grant agreement No 741002. % RGM and TN acknowledge support from UNAM-PAPIIT project IN104319. RGM is also supported by CONACyT Ciencia de Frontera project ID 86372. TN acknowledges support from the postdoctoral fellowship program of the UNAM. % SB acknowledges support from the French Agence Nationale de la Recherche (ANR) through the project \textsl{GENESIS} (ANR-16-CE92-0035-01). % FL acknowledges the support of the Marie Curie Action of the European Union (project \textsl{MagiKStar}, Grant agreement number 841276). % AGi acknowledges support from the National Science Foundation under grant No. 2008101. % PS and BW were supported by a Grant-in-Aid for Scientific Research (KAKENHI Number 18H01259) of the Japan Society for the Promotion of Science (JSPS). P.S. and H.-L.L. gratefully acknowledge the support from the NAOJ Visiting Fellow Program to visit the National Astronomical Observatory of Japan in 2019, February. % AS gratefully acknowledges funding support through Fondecyt Regular (project code 1180350), from the ANID BASAL project FB210003, and from the Chilean Centro de Excelencia en Astrof\'isica y Tecnolog\'ias Afines (CATA) BASAL grant AFB-170002. 
% TB acknowledges the support from S. N. Bose National Centre for Basic Sciences under the Department of Science and Technology, Govt. of India. % GB also acknowledges funding from the State Agency for Research (AEI) of the Spanish MCIU through the AYA2017-84390-C2-2-R grant and from the PID2020-117710GB-I00 grant funded by MCIN/ AEI /10.13039/501100011033 . % TCs has received financial support from the French State in the framework of the IdEx Universit\'e de Bordeaux Investments for the future Program. % % LB gratefully acknowledges support by the ANID BASAL projects ACE210002 and FB210003. % K.T. was supported by JSPS KAKENHI (Grant Number 20H05645). % DW gratefully acknowledges support from the National Science Foundation under Award No. 1816715. \end{acknowledgements} \bibliographystyle{aa}
\section*{Abstract} Our interest lies in the identification of hidden conducting permeable objects from measurements of the perturbed magnetic field in metal detection taken over a range of low frequencies. The magnetic polarizability tensor (MPT) provides a characterisation of a conducting permeable object using a small number of coefficients, has explicit formulae for the calculation of its coefficients and a well-understood frequency behaviour, which we call its spectral signature. However, to compute such signatures, and build a library of them for object classification, requires repeated solution of a direct (full order) problem, which is typically accomplished using a finite element discretisation. To overcome this issue, we propose an efficient reduced order model (ROM) using a proper orthogonal decomposition (POD) for the rapid computation of MPT spectral signatures. Our ROM benefits from output certificates, which give bounds on the accuracy of the predicted outputs with respect to the full order model solutions. To further increase the efficiency of the computation of the MPT spectral signature, we provide scaling results, which enable an immediate calculation of the signature under changes in the object size or conductivity. We illustrate our approach by application to a range of homogeneous and inhomogeneous conducting permeable objects. {\bf Keywords} Metal detection; Magnetic polarizability tensor; Reduced order model; Object classification. {\bf MSC CLASSIFICATION} 65N30; 35R30; 35B30 \section{Introduction} There is considerable interest in using the magnetic polarizability tensor (MPT) characterisation of conducting permeable objects to classify and identify hidden targets in metal detection. The MPT is a complex symmetric rank 2 tensor, which has $6$ independent coefficients, although the number of independent coefficients for objects with rotational or reflectional symmetries is smaller~\cite{LedgerLionheart2015}.
Its coefficients are a function of the exciting frequency, the object's size and shape, as well as its conductivity and permeability. Explicit formulae for computing the tensor coefficients have been derived~\cite{Ammari2014,LedgerLionheart2015,LedgerLionheart2018,LedgerLionheart2019} and validated against exact solutions and measurements~\cite{LedgerLionheart2016,LedgerLionheart2018}. Also, the way in which the tensor coefficients vary with the exciting frequency is theoretically well understood~\cite{LedgerLionheart2019}, offering improved object classification. The frequency (or spectral) behaviour of the MPT, henceforth called its spectral signature, has been exploited in a range of different classification algorithms including simple library classification for homogeneous~\cite{Ammari2015} and inhomogeneous objects~\cite{LedgerLionheartamad2019}, a $k$ nearest neighbours (KNN) classification algorithm~\cite{Makkonen2014} and machine learning approaches~\cite{WoutervanVerre2019}. The MPT classification of objects has already been applied in a range of different applications including airport security screening~\cite{marsh2014,Makkonen2014}, waste sorting~\cite{karimian2017} and anti-personnel landmine detection~\cite{rehim2016}. The aforementioned {\em supervised} classification techniques rely on a library of MPT spectral signatures to {\em learn} how to classify the objects. The purpose of this paper is to describe an efficient computational tool for computing this library. One approach to obtaining a library of spectral signatures is to use a metal detector or dedicated measurement device to obtain MPT coefficients of different objects~\cite{zhao2016,zhao2014,rehim2015}; however, doing so over a range of frequencies for a large number of objects is time consuming and will result in unavoidable measurement errors and noise. Therefore, there is considerable interest in their automated computation.
By post-processing finite element method (FEM) solutions to eddy current problems using commercial packages (e.g. with ANSYS, as in~\cite{rehim2015}), MPT coefficients can be obtained; however, improved accuracy, and a better understanding, can be gained by using the available explicit expressions for MPT coefficients, which rely on computing finite element (FE) approximations to a transmission problem~\cite{LedgerLionheart2015,LedgerLionheart2018,LedgerLionheart2019}. Nevertheless, to produce an accurate MPT spectral signature, the process must be repeated for a large number of excitation frequencies, leading to potentially expensive computations for fine discretisations (with small mesh spacing and high order elements). The present paper addresses this issue by proposing a reduced order model, in the form of a (projected) proper orthogonal decomposition (POD) scheme, that relies on full order model solutions computed using the established open source FE package, \texttt{NGSolve}, and the recently derived alternative explicit formulae for the MPT coefficients~\cite{LedgerLionheart2019}. The use of \texttt{NGSolve}~\cite{NGSolve,netgendet} ensures that the solutions to the underlying (eddy current type) transmission problems are accurately computed using high order $\bm{H}(\text{curl})$ conforming (high order edge element) discretisations (see~\cite{ledgerzaglmayr2010,SchoberlZaglmayr2005,zaglmayrphd} and references therein) and the POD technique ensures their rapid computation over sweeps of frequency.
Reduced order models (ROMs) based on POD have been successfully applied to efficiently generate solutions for new problem parameters using a small number of full order model snapshots in a range of engineering applications including mechanics~\cite{niroomandi2010model,radermacher2016pod}, thermal problems \cite{wang2012comparative,bialecki2005proper}, fluid flow \cite{luo2011reduced,pettit2002application} as well as electromagnetic problems with application to integrated circuits \cite{kerler2017model}. However, they have not been applied to the computation of MPT spectral signatures. A review of current POD techniques is provided in~\cite{hesthaven2016,Chatterjee2000}. The main novelty of this work is the application of a POD approach for the efficient and accurate computation of the MPT spectral signature and the derivation of output certificates that ensure accuracy of the reduced order predictions. This ROM approach is motivated by the previous success of POD approaches and the theoretical study~\cite{LedgerLionheart2019}, which shows that the spectral behaviour of the MPT is characterised by a small number of functions and, hence, has a sparse representation. The practical computation requires only computing full order model solution snapshots at a small number of frequencies and the evaluation of the MPT spectral signature follows from solving a series of extremely small linear systems. A second novelty is the presentation of simple scaling results, which enable the MPT spectral signature to be computed easily from an existing set of coefficients under the scaling of an object's conductivity or object size. The paper is organised as follows: In Section~\ref{sect:eddycurrent}, the eddy current model, which applies in metal detection, and the asymptotic expansion of the perturbed magnetic field in the presence of a conducting permeable object, which leads to the explicit expression of the MPT, are briefly reviewed.
Then, in Section~\ref {sect:fullorder}, the FE model used for full order model problem is described. Section~\ref{ROM} presents the POD reduced order model scheme. This is followed, in Section~\ref{sect:scaling}, by the derivation of results that describe the scaling of the MPT under parameter changes. Sections~\ref{sect:examplespodp} and~\ref{sect:examplesscale} present numerical examples of the POD scheme for computing the frequency behaviour of the MPT and examples of the scaling of the MPT under parameter changes, respectively. \section{The eddy-current model and asymptotic expansion}\label{sect:eddycurrent} We briefly discuss the eddy-current model along with stating the asymptotic expansion that forms the basis of the magnetic polarizability description of conducting objects in metal detection. \subsection{Eddy-current model} The eddy current model is a low frequency approximation of the Maxwell system that neglects the displacement currents, which is valid when the frequency is small and the conductivity of the body is high. A rigorous justification of the model involves the topology of the conducting body~\cite{ammaribuffa2000}. The eddy current model is described by the system \begin{subequations}\label{Eddy Current} \begin{align} \nabla\times\bm{E}_{\alpha}&=\mathrm{i} \omega\mu\bm{H}_{\alpha},\\ \nabla\times\bm{H}_{\alpha}&=\bm{J}_0+\sigma\bm{E}_{\alpha}. \end{align} \end{subequations} where $\bm{E}_{\alpha}$ and $\bm{H}_{\alpha}$ are the electric and magnetic interaction fields, respectively, $ \bm{J}_0$ is an external current source, $\mathrm{i}:=\sqrt{-1}$, $\omega$ is the angular frequency, $\mu$ is the magnetic permeability and $\sigma$ is the electric conductivity. We will use the eddy current model for describing the forward and inverse problems associated with metal detection. \subsubsection{Forward problem} \label{sect:forward} In the forward (or direct) problem, the position and materials of the conducting body $B_{\alpha}$ are known. 
The object has a high conductivity, $\sigma=\sigma_*$, and a permeability, $\mu=\mu_*$. For the purpose of this study, the conducting body is assumed to be buried in soil, which has a much lower conductivity, so that $\sigma \approx 0$, and a permeability $\mu=\mu_0:= 4 \pi \times 10^{-7}\text{H/m}$. A background field is generated by a solenoidal current source $\bm{J}_0$ with support in the air above the soil, which also has $\sigma=0$ and $\mu =\mu_0$. The region around the object is $B_{\alpha}^c\vcentcolon=\mathbb{R}^3\setminus B_{\alpha}$ as shown in Figure~\ref{metal detection}. Note that a similar model also applies in the situation of identifying hidden targets in security screening~\cite{marsh2014,Makkonen2014} and waste sorting~\cite{karimian2017}, amongst others. \begin{figure}[H] \begin{center} \includegraphics[width=0.7\textwidth, keepaspectratio]{MetalDetectionSoilSmall-eps-converted-to.pdf} \caption{A diagram showing a hidden conducting object $B_{\alpha}$, buried in soil, with a current source located in the air above.} \label{metal detection} \end{center} \end{figure} The forward model is described by the system (\ref{Eddy Current}), which holds in ${\mathbb R}^3$, with \begin{align} \mu (\bm{x}) = \left \{ \begin{array}{ll} \mu_* & \bm{x} \in B_{\alpha} \\ \mu_0 & \bm{x} \in B_{\alpha}^c \end{array} \right . , \qquad \sigma (\bm{x}) = \left \{ \begin{array}{ll} \sigma_* & \bm{x} \in B_{\alpha} \\ 0 & \bm{x} \in B_{\alpha}^c \end{array} \right ., \end{align} and the regions $B_\alpha$ and $B_\alpha^c $ are coupled by the transmission conditions \begin{align} \left [\bm{n} \times \bm{E}_\alpha\right ]_{\Gamma_{\alpha}}=\left [\bm{n} \times \bm{H}_\alpha\right ]_{\Gamma_{\alpha}}=\bm{0},\label{jump} \end{align} \noindent which hold on $\Gamma_\alpha:= \partial B_\alpha$.
In the above, $[u ]_{\Gamma_{\alpha}}:= u| _+ - u|_- $ denotes the jump, the $+$ refers to just outside of $B_\alpha$ and the $-$ to just inside and $\bm{n}$ denotes a unit outward normal to $\Gamma_{\alpha}$. The electric interaction field is non-physical in $B_\alpha^c$ and, to ensure uniqueness of this field, the condition $\nabla \cdot \bm{E}_\alpha =0$ is imposed in this region. Furthermore, we also require that $\bm{E}_{\alpha}=O(1/|\bm{x}|)$ and $\bm{H}_{\alpha}=O(1/|\bm{x}|)$ as $|\bm{x} | \to \infty$, denoting that the fields go to zero at least as fast as $1/|\bm{x}|$, although, in practice, this rate can be faster. \subsubsection{Inverse problem} \label{sect:inverseproblem} In metal detection, the inverse problem is to determine the location, shape and material properties ($\sigma_*$ and $\mu_*$) of the conducting object $B_\alpha$ from measurements of $(\bm{H}_\alpha - \bm{H}_0) (\bm{x})$ taken at a range of locations $\bm{x}$ in the air. As described in the introduction, there are considerable advantages in using spectral data, i.e. additionally measuring $(\bm{H}_\alpha - \bm{H}_0) (\bm{x})$ over a range of frequencies $\omega$, within the limit of the eddy current model. Here, $\bm{H}_0$ denotes the background magnetic field and $\bm{E}_0$ and $\bm{H}_0$ are the solutions of (\ref{Eddy Current}) with $\sigma =0$ and $\mu=\mu_0$ in ${\mathbb R}^3$. Similar to above, we also require the decay conditions $\bm{E}_{0}=O(1/|\bm{x}|)$ and $\bm{H}_{0}=O(1/|\bm{x}|)$ as $|\bm{x} | \to \infty$. Note that practical metal detectors measure a voltage perturbation, which corresponds to $\int_S \bm{n} \cdot (\bm{H}_\alpha - \bm{H}_0) (\bm{x}) \mathrm{d} \bm{x}$ over an appropriate surface $S$~\cite{LedgerLionheart2018}. For very small coils, this voltage perturbation is approximated by $\bm{m} \cdot (\bm{H}_\alpha - \bm{H}_0) (\bm{x})$ where $\bm{m}$ is the magnetic dipole moment of the coil~\cite{LedgerLionheart2018}. 
A traditional approach to this inverse problem involves creating a discrete set of voxels, each with unknown $\sigma$ and $\mu$, and posing the solution as an optimisation process in which $\sigma$ and $\mu$ are found through minimisation of an appropriate functional, e.g.~\cite{manuch2006}. From the resulting images of $\sigma$ and $\mu$, one then attempts to infer the shape and position of the object. However, this problem is highly ill-posed~\cite{brown2016} and presents considerable mathematical and computational challenges in the case of limited noisy measurement data. Instead, we seek an approximation of the perturbation $(\bm{H}_{\alpha}-\bm{H}_0)(\bm{x})$ at some point $\bm{x}$ exterior to $B_\alpha$. This allows objects to be characterised by the small number of coefficients of an MPT, which are easily obtained from the measurements of $(\bm{H}_\alpha - \bm{H}_0) (\bm{x})$ once the object position is known; the position can be found using a MUSIC algorithm, for example~\cite{Ammari2014}. The object identification then reduces to a classification problem, as discussed in the introduction. \subsection{The asymptotic expansion and MPT description} Following~\cite{Ammari2014,LedgerLionheart2015} we define $B_\alpha := \alpha B + \bm{z}$, where $B$ is a unit size object, $\alpha$ is the object size and $\bm{z}$ is the object's translation from the origin, as shown in Figure~\ref{BAlphaShifted}.
\begin{figure}[H] \begin{center} \includegraphics[width=0.8\textwidth, keepaspectratio]{BAlphaShifted-eps-converted-to.pdf} \caption{A diagram showing the physical description of $B_{\alpha}$ with respect to the coordinate axes.} \label{BAlphaShifted} \end{center} \end{figure} \noindent Then, using the asymptotic formula obtained by Ammari, Chen, Chen, Garnier and Volkov~\cite{Ammari2014}, Ledger and Lionheart~\cite{LedgerLionheart2015} have derived the simplified form \begin{align} (\bm{H}_{\alpha}-\bm{H}_0)(\bm{x})_i=(\bm{D}_{\bm{x}}^2G(\bm{x},\bm{z}))_{ij}(\mathcal{M})_{jk}(\bm{H}_0(\bm{z}))_k+O(\alpha^4), \label{eqn:asymp} \end{align} which holds as $\alpha\to 0$ and makes the MPT explicit. The relationship of the leading order term in the above to the dipole expansion of $(\bm{H}_{\alpha}-\bm{H}_0)(\bm{x})$ is discussed in~\cite{LedgerLionheart2018}. In the above, $G(\bm{x},\bm{z}) := 1/(4\pi |\bm{x}-\bm{z}|)$ is the free space Laplace Green's function, $\bm{D}^2_x G$ denotes the Hessian of $G$ and Einstein summation convention of the indices is implied. In addition, ${\mathcal M}=({\mathcal M})_{jk} {\bm e}_j \otimes {\bm e}_k$, where $\bm{e}_i$ denotes the $i$th orthonormal unit vector, is the symmetric rank 2 MPT, which describes the shape and material properties of the object $B_\alpha$ and is frequency dependent, but is independent of the object's position $\bm{z}$. We will sometimes write $\mathcal{M}[ \alpha B, \omega ]$ to emphasise this. The above formulation, and the definition of ${\mathcal M}$ below, are presented for the case of a single homogeneous object $B$; the extension to multiple inhomogeneous objects can be found in~\cite{LedgerLionheartamad2019,LedgerLionheart2019}. Using the derivation in~\cite{LedgerLionheart2019}, we state the explicit formulae for the computation of the coefficients of $\mathcal{M}$, which are particularly well suited to a FEM discretisation.
The earlier explicit expressions in~\cite{LedgerLionheart2015,LedgerLionheart2016,LedgerLionheart2018} are equivalent for exact fields. We use the splitting $(\mathcal{M})_{ij}:=(\mathcal{N}^0)_{ij}+(\mathcal{R})_{ij}+\mathrm{i}( \mathcal{I})_{ij}$ obtained in ~\cite{LedgerLionheart2019} with \begin{subequations} \label{eqn:NRI} \begin{align} (\mathcal{N}^0[ \alpha B] )_{ij}&:=\alpha^3\delta_{ij}\int_{B}(1-\mu_r^{-1})\mathrm{d} \bm{\xi}+\frac{\alpha^3}{4}\int_{B\cup B^c}\tilde{\mu}_r^{-1}\nabla\times\tilde{\bm{\theta}}_i^{(0)}\cdot\nabla\times\tilde{\bm{\theta}}_j^{(0)}\ \mathrm{d} \bm{\xi},\\ (\mathcal{R}[\alpha B, \omega])_{ij}&:=-\frac{\alpha^3}{4}\int_{B\cup B^c}\tilde{\mu}_r^{-1}\nabla\times\bm{\theta}_j^{(1)}\cdot\nabla\times\overline{\bm{\theta}_i^{(1)}}\ \mathrm{d} \bm{\xi},\\ (\mathcal{I}[\alpha B, \omega])_{ij}&:=\frac{\alpha^3}{4}\int_B\nu\Big(\bm{\theta}_j^{(1)}+(\tilde{\bm{\theta}}_j^{(0)}+\bm{e}_j\times\bm{\xi})\Big)\cdot\Big(\overline{\bm{\theta}_i^{(1)}+(\tilde{\bm{\theta}}_i^{(0)}+\bm{e}_i\times\bm{\xi})}\Big)\ \mathrm{d} \bm{\xi}, \end{align} \end{subequations} where $\mathcal{N}^0[ \alpha B]$, $\mathcal{R}[\alpha B, \omega]$ and $\mathcal{I}[\alpha B, \omega]$ are real symmetric rank 2 tensors, which each have real eigenvalues. In the above, \begin{align} \tilde{\mu}_r ( \bm{\xi} ) := \left \{ \begin{array}{ll} \mu_r :=\mu_*/\mu_0 & \bm{\xi} \in B\\ 1 & \bm{\xi} \in B^c \end{array} \right . \nonumber, \end{align} and $\nu:=\alpha^2\omega\mu_0\sigma_*$, $\delta_{ij}$ is the Kronecker delta and the overbar denotes the complex conjugate. 
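Note that, for fixed $\mu_r$, the solutions $\bm{\theta}_i^{(1)}$, and hence $\mathcal{R}$ and $\mathcal{I}$, depend on the frequency, conductivity and object size only through the parameter $\nu=\alpha^2\omega\mu_0\sigma_*$ and the volume factor $\alpha^3$; the scaling results of Section~\ref{sect:scaling} make this precise. A minimal numerical sketch of the implied frequency remapping (an illustration under these assumptions, not a substitute for the derivation in Section~\ref{sect:scaling}):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # permeability of free space, H/m

def nu(alpha: float, omega, sigma: float):
    """The parameter nu = alpha^2 * omega * mu_0 * sigma_*."""
    return alpha**2 * np.asarray(omega) * MU0 * sigma

def remapped_frequencies(omega, alpha, sigma, alpha_new, sigma_new):
    """Frequencies omega' at which a tabulated (alpha, sigma) signature attains
    the same nu as the (alpha_new, sigma_new) object does at frequencies omega;
    the tensor additionally picks up the volume factor (alpha_new/alpha)**3."""
    return (alpha_new**2 * sigma_new) / (alpha**2 * sigma) * np.asarray(omega)

# Example: doubling the conductivity at fixed size shifts the signature to
# half the frequency, i.e. the tensor at omega for 2*sigma equals the tensor
# at 2*omega for sigma; a size change alpha -> s*alpha additionally
# multiplies the tensor by s**3.
```

This means a single computed spectral signature can be reused for a whole family of objects differing only in size or conductivity.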
The computation of (\ref{eqn:NRI}) relies on the solution of the transmission problems~\cite{LedgerLionheart2019} \begin{subequations} \label{eqn:Theta0} \begin{align} \nabla\times\tilde{\mu}_r^{-1}\nabla\times\bm{\theta}_i^{(0)}&=\bm{0} &&\textrm{in }B\cup B^c,\\ \nabla\cdot\bm{\theta}_i^{(0)}&=0 &&\textrm{in }B\cup B^c,\\ [{\bm{n}}\times\bm{\theta}_i^{(0)}]_{\Gamma}&=\bm{0} &&\textrm{on }\Gamma,\\ [{\bm{n}}\times\tilde{\mu}_r^{-1}\nabla\times\bm{\theta}_i^{(0)}]_{\Gamma}&=\bm{0} &&\textrm{on }\Gamma,\\ \bm{\theta}_i^{(0)}-{\bm{e}}_i\times\bm{\xi}&=\bm{O}(|\bm{\xi}|^{-1}) &&\textrm{as }|\bm{\xi}|\rightarrow\infty, \end{align} \end{subequations} where $\Gamma:=\partial B$ and \begin{subequations} \label{eqn:Theta1} \begin{align} \nabla\times {\mu}_r^{-1}\nabla\times\bm{\theta}_i^{(1)}-\mathrm{i} \nu (\bm{\theta}_i^{(0)}+\bm{\theta}_i^{(1)})&=\bm{0}&&\textrm{in }B,\\ \nabla\times \nabla\times\bm{\theta}_i^{(1)} &=\bm{0}&&\textrm{in }B^c,\\ \nabla\cdot\bm{\theta}_i^{(1)}&=0&&\textrm{in }B^c,\\ [{\bm{n}}\times\bm{\theta}_i^{(1)}]_{\Gamma}&=\bm{0}&&\textrm{on }\Gamma,\\ [{\bm{n}}\times\tilde{\mu}_r^{-1}\nabla\times\bm{\theta}_i^{(1)}]_{\Gamma}&=\bm{0}&&\textrm{on }\Gamma,\\ \bm{\theta}_i^{(1)}&=\bm{O}(|\bm{\xi}|^{-1})&&\textrm{as }|\bm{\xi}|\rightarrow\infty. \end{align} \end{subequations} Note also that we choose to introduce $\tilde{\bm{\theta}}_i^{(0)}\vcentcolon=\bm{\theta}_i^{(0)}-{\bm{e}}_i\times\bm{\xi}$, which can be shown to satisfy the same transmission problem as (\ref{eqn:Theta0}) except with a non-zero jump condition for $[{\bm{n}}\times\tilde{\mu}_r^{-1}\nabla\times\tilde{\bm{\theta}}_i^{(0)}]_{\Gamma}$ and the decay condition $\tilde{\bm{\theta}}_i^{(0)}(\bm{\xi})=\bm{O}(|\bm{\xi}|^{-1})$ as $|\bm{\xi}|\rightarrow\infty$.
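Once the coefficients of $\mathcal{M}$ are available, evaluating the leading order term of (\ref{eqn:asymp}) is straightforward: the Hessian of the free space Green's function has the closed form $(\bm{D}^2_{\bm{x}}G)_{ij}=(3r_ir_j/|\bm{r}|^2-\delta_{ij})/(4\pi|\bm{r}|^3)$ with $\bm{r}=\bm{x}-\bm{z}$, so the perturbed field prediction reduces to a matrix contraction. A minimal numerical sketch (for illustration only; the inputs \texttt{M} and \texttt{H0\_at\_z} are assumed to be supplied by the computations described in this paper):

```python
import numpy as np

def greens_hessian(x, z):
    """Hessian with respect to x of G(x, z) = 1 / (4*pi*|x - z|)."""
    r = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    rn = np.linalg.norm(r)
    return (3.0 * np.outer(r, r) / rn**2 - np.eye(3)) / (4.0 * np.pi * rn**3)

def leading_order_perturbation(x, z, M, H0_at_z):
    """(H_alpha - H_0)(x) ~ D^2_x G(x, z) @ M @ H_0(z), up to O(alpha^4)."""
    return greens_hessian(x, z) @ np.asarray(M) @ np.asarray(H0_at_z)
```

Since $G$ is harmonic away from $\bm{z}$, the Hessian is symmetric and trace-free, which provides a quick sanity check on any implementation.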
\section{Full order model}\label{sect:fullorder} To approximate the solutions to the transmission problems (\ref{eqn:Theta0}) and (\ref{eqn:Theta1}) we truncate the unbounded domain $B^c$ at a finite distance from the object $B$ and create a bounded domain $\Omega$ containing $B$. On $\partial \Omega$, we approximate the decay conditions (\ref{eqn:Theta0}e) and (\ref{eqn:Theta1}f) by $\bm{n} \times \tilde{\bm{\theta}}_i^{(0)}= \bm{n} \times ( {\bm{\theta}}_i^{(0)} - \bm{e}_i \times \bm{\xi})=\bm{0}$ and $\bm{n} \times {\bm{\theta}}_i^{(1)} =\bm{0}$, respectively. On this finite domain, we approximate the associated weak variational statements of these problems using FEM with an $\bm{H}(\text{curl})$ conforming discretisation with mesh spacing $h$ and elements of order $p$, where \begin{equation} \bm{H}(\text{curl}) :=\left \{ {\bm{u}} :{\bm{u}} \in (L^2(\Omega))^3, \ \nabla \times {\bm{u}} \in (L^2(\Omega))^3 \right \}, \end{equation} and $L^2(\Omega)$ denotes the standard space of square integrable functions. In Section~\ref{sect:weak} we provide their weak formulations and, in Section~\ref{sect:fem}, their discretisation. Henceforth, we call this discrete approximation the full order model. \subsection{Weak formulation of the problem} \label{sect:weak} Following the approach advocated in~\cite{ledgerzaglmayr2010} for magnetostatic and eddy current problems, we add a regularisation term $\varepsilon \int_{\Omega} \tilde{\bm{\theta}}_i^{(0)} \cdot \bm{\psi} \mathrm{d} \bm{\xi}$, where $\varepsilon$ is a small regularisation parameter, to the weak variational statement of (\ref{eqn:Theta0}), written in terms of $\tilde{\bm{\theta}}_i^{(0)}$, in order to circumvent the Coulomb gauge $\nabla\cdot\tilde{\bm{\theta}}_i^{(0)}=0$. For details of the small error induced by this approximation see~\cite{ledgerzaglmayr2010,zaglmayrphd}.
Then, by choosing an appropriate set of $\bm{H}(\text{curl})$ conforming finite element functions in $W^{(hp)} \subset \bm{H}(\text{curl})$, we obtain the following discrete regularised weak form for (\ref{eqn:Theta0}): Find real solutions $\tilde{\bm{\theta}}_i^{(0,hp)} \in Y^{\varepsilon}\cap W^{(hp)} $ such that \begin{align}\label{Weak0} \int_{\Omega} \tilde{\mu}_r^{-1} \nabla \times \tilde{\bm{\theta}}_i^{(0,hp)} \cdot \nabla \times \bm{\psi}^{(hp)} \mathrm{d} \bm{\xi} &+ \varepsilon \int_\Omega \tilde{\bm{\theta}}_i^{(0,hp)} \cdot \bm{\psi}^{(hp)} \mathrm{d} \bm{\xi}\nonumber\\ &=2 \int_B(1-\mu_r^{-1}) \bm{e}_i \cdot \nabla \times \bm{\psi}^{(hp)} \mathrm{d} \bm{\xi}, \end{align} for all $\bm{\psi}^{(hp)} \in Y^{\varepsilon} \cap W^{(hp)}$, where $$Y^{\varepsilon} = \Big\{ \bm{u} \in \bm{H}(\text{curl}) : {\bm{n}} \times {\bm{u}} =\bm{0} \textrm{ on } \partial \Omega \Big\}.$$ In a similar manner, the discrete weak variational statement of (\ref{eqn:Theta1}) is: Find complex solutions ${\bm{\theta}}_i^{(1,hp)} \in Y^{\varepsilon}\cap W^{(hp)} $ such that \begin{align}\label{Weak1} \int_{\Omega}\big(\tilde{\mu}_r^{-1}\nabla\times\bm{\theta}_i^{(1,hp)}\big)&\cdot\big(\nabla\times\overline{\bm{\psi}^{(hp)}}\big) \mathrm{d} \bm{\xi}-\mathrm{i} \int_{B}\nu\bm{\theta}_i^{(1,hp)}\cdot\overline{\bm{\psi}^{(hp)}} \mathrm{d}\bm{\xi}\nonumber\\ &+\varepsilon\int_{\Omega\setminus B}\bm{\theta}_i^{(1,hp)}\cdot\overline{\bm{\psi}^{(hp)}}\mathrm{d} \bm{\xi}=\mathrm{i} \int_B\nu\bm{\theta}_i^{(0,hp)}\cdot\overline{\bm{\psi}^{(hp)}} \mathrm{d}\bm{\xi}, \end{align} for all $\bm{\psi}^{(hp)} \in Y^{\varepsilon} \cap W^{(hp)}$, where, as before, the overbar denotes the complex conjugate.
For what follows it is beneficial to restate (\ref{Weak1}) in the following form: Find ${\bm{\theta}}_i^{(1,hp)} \in Y^{\varepsilon}\cap W^{(hp)}$ such that \begin{equation}\label{bilinear} a\big(\bm{\theta}_i^{(1,hp)},\bm{\psi}^{(hp)};\bm{\omega}\big)=r\big(\bm{\psi}^{(hp)}; \bm{\theta}_i^{(0,hp)} ,\bm{\omega}\big), \end{equation} for all $\bm{\psi}^{(hp)} \in Y^{\varepsilon} \cap W^{(hp)}$ where \begin{subequations}\label{eqn:BilinearExpanded} \begin{align} a\big(\bm{\theta}_i^{(1,hp)},\bm{\psi}^{(hp)};\bm{\omega}\big)\vcentcolon&= \left < \tilde{\mu}_r^{-1} \nabla\times\bm{\theta}_i^{(1,hp)}, \nabla\times {\bm{\psi}^{(hp)}} \right>_{L^2(\Omega)} \nonumber\\ &- \mathrm{i} \left < \nu\bm{\theta}_i^{(1,hp)} , {\bm{\psi}^{(hp)}} \right >_{L^2(B)} \nonumber \\ &+ \varepsilon \left < \bm{\theta}_i^{(1,hp)}, \bm{\psi}^{(hp)} \right >_{L^2(\Omega\setminus B)}, \\ r\big(\bm{\psi}^{(hp)};\bm{\theta}_i^{(0,hp)}, \bm{\omega}\big)\vcentcolon&= \mathrm{i} \left < \nu \bm{\theta}_i^{(0,hp)}, {\bm{\psi}^{(hp)}} \right >_{L^2(B)}, \end{align} \end{subequations} $ \left < \bm{u},\bm{v} \right >_{L^2(\Omega)} := \int_\Omega {\bm{u}} \cdot \overline{\bm{v}} \mathrm{d} \bm {\xi}$ denotes the $L^2$ inner product over $\Omega$ and $\bm{\omega}$ indicates the list of the problem parameters $\{\omega,\sigma_*, \mu_r,\alpha\}$ that one might wish to vary. Note that $r\big(\cdot ;\cdot, \cdot \big)$ is a function of $\mu_r$ as $\bm{\theta}_i^{(0,hp)}$ depends on $\mu_r$. \subsection{Finite element discretisation} \label{sect:fem} For the implementation of the full order model, we use \texttt{NGSolve} \cite{NGSolvecode,NGSolve,netgendet,zaglmayrphd} along with the hierarchic set of $\bm{H}(\text{curl})$ conforming basis functions proposed by Sch\"{o}berl and Zaglmayr~\cite{SchoberlZaglmayr2005}, which are available in this software.
In the following, for simplicity, we focus on the treatment of $\bm{\theta}_i^{(1,hp)}$ and drop the index $i$, as each direction can be computed in a similar way (as can $\tilde{\bm{\theta}}_i^{(0,hp)}$). We denote these basis functions by $\bm{N}^{(k)}(\bm{\xi})\in W^{(hp)}$, leading to the expressions for the solution and weighting functions \begin{subequations}\label{FE deconstruct} \begin{align} \bm{\theta}^{(1,hp)}(\bm{\xi},\bm{\omega})&\vcentcolon=\sum_{k=1}^{N_d}\bm{N}^{(k)}(\bm{\xi})\mathrm{q}_k(\bm{\omega}),\\ \bm{\psi}^{(hp)}(\bm{\xi},\bm{\omega})&\vcentcolon=\sum_{k=1}^{N_d}\bm{N}^{(k)}(\bm{\xi})\mathrm{l}_k(\bm{\omega}), \end{align} \end{subequations} where $N_d$ is the number of degrees of freedom. Here, and in the following, the bold italic font denotes a vector field and the bold non-italic Roman font represents a matrix (upper case) or column vector (lower case). With this distinction, we rewrite (\ref{FE deconstruct}) in matrix form as \begin{subequations}\label{solution breakdown} \begin{align} \bm{\theta}^{(1,hp)}(\bm{\xi},\bm{\omega})&=\textbf{N}(\bm{\xi})\textbf{q}(\bm{\omega}),\\ \bm{\psi}^{(hp)}(\bm{\xi},\bm{\omega})&=\textbf{N}(\bm{\xi})\textbf{l}(\bm{\omega}), \end{align} \end{subequations} where $\textbf{N}(\bm{\xi})$ is the matrix constructed with the basis vectors $\bm{N}^{(k)}(\bm{\xi})$ as its columns, i.e.
$$\textbf{N}(\bm{\xi})\vcentcolon=\big [\bm{N}^{(1)}(\bm{\xi}),\bm{N}^{(2)}(\bm{\xi}),...,\bm{N}^{(N_{d})} (\bm{\xi} )\big ].$$ With this, we may also rewrite (\ref{bilinear}) as follows \begin{equation}\label{eqn:basisbilinear} \sum_{i=1}^{N_d}\sum_{j=1}^{N_d}\overline{l_i(\bm{\omega})}a\big(\bm{N}^{(j)}(\bm{\xi}),\bm{N}^{(i)}(\bm{\xi});\bm{\omega}\big)q_j(\bm{\omega})=\sum_{i=1}^{N_d}\overline{l_i(\bm{\omega})}r\big(\bm{N}^{(i)}(\bm{\xi}); \bm{\theta}^{(0,hp)}, \bm{\omega}\big), \end{equation} and, with a suitable choice of $l_i(\bm{\omega})$, we may rewrite (\ref{eqn:basisbilinear}) as the linear system of equations \begin{equation}\label{eqn:Linear} \textbf{A}(\bm{\omega})\textbf{q}(\bm{\omega})=\textbf{r}(\bm{\theta}^{(0,hp)} , \bm{\omega}), \end{equation} where the coefficients of $\bf{A}(\bm{\omega})$ and ${\bf r}({\bm \theta}^{(0,hp)}, \bm{\omega})$ are defined to be \begin{subequations} \label{eqn:definestiffandrhs} \begin{align} (\textbf{A}(\bm{\omega}))_{ij}&\vcentcolon=a\big(\bm{N}^{(j)}(\bm{\xi}),\bm{N}^{(i)}(\bm{\xi});\bm{\omega}\big),\\ (\textbf{r}( \bm{\theta}^{(0,hp)}, \bm{\omega}))_{i}&\vcentcolon=r\big(\bm{N}^{(i)}(\bm{\xi}); \bm{\theta}^{(0,hp)}, \bm{\omega}\big). \end{align} \end{subequations} \texttt{NGSolve} offers efficient approaches for the computational solution to (\ref{eqn:Linear}) using preconditioned iterative solvers~\cite{zaglmayrphd,ledgerzaglmayr2010}, which we exploit. Following the solution of (\ref{eqn:Linear}), we can obtain $\bm{\theta}^{(1,hp)}(\bm{\xi},\bm{\omega})$ using (\ref{solution breakdown}) and, by repeating the process for $i=1,2,3$, we get $\bm{\theta}_i^{(1,hp)}(\bm{\xi},\bm{\omega})$. Then $(\mathcal{M}[\alpha B, \omega, \mu_r, \sigma_*])_{ij}$, for the full order model, is found by using (\ref{eqn:NRI}). \section{Reduced order model (ROM)}\label{ROM} A traditional approach for the computation of the MPT spectral signature, i.e. 
the variation of the coefficients ${\mathcal M}[\alpha B, \omega]$ with frequency, would involve the repeated solution of the $N_d$ sized system (\ref{eqn:Linear}) for different $\omega$. To reduce the computational cost of this, we wish to apply a ROM in which the solution of (\ref{eqn:Linear}) is replaced by a surrogate problem of reduced size, thus reducing both the computational cost and the time taken to produce a solution for each new $\omega$. In particular, in Section~\ref{POD}, we describe a ROM based on the POD method~\cite{Chatterjee2000,AbdiWilliams2010,hesthaven2016,Seoane2019} and, in Section~\ref{sect:podp}, apply the variant called projection based POD (which we denote by PODP), which has already been shown to work well in the analysis of magneto-mechanical coupling applied to MRI scanners~\cite{Seoane2019}. To emphasise the generality of the approach, the formulation is presented for an arbitrary list of problem parameters denoted by $\bm{\omega}$. In Section~\ref{sect:outputcert} we derive a procedure for computing certificates of accuracy on the ROM solutions with negligible additional cost. \subsection{Proper orthogonal decomposition}\label{POD} Following the solution of (\ref{eqn:Linear}) for $\mathbf{q}(\bm{\omega})$ for different values of the set of parameters, $\bm{\omega}$, we construct a matrix $\mathbf{D}\in\mathbb{C}^{N_d\times N}$ with the vectors of solution coefficients as its columns in the form \begin{equation}\label{D} \mathbf{D}\vcentcolon=\big[\mathbf{q}(\bm{\omega}_1),\mathbf{q}(\bm{\omega}_2),...,\mathbf{q}(\bm{\omega}_{N})\big], \end{equation} where $N\ll N_d$ denotes the number of snapshots.
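To make the snapshot stage concrete, the following Python sketch assembles $\mathbf{D}$ by stacking the solution vectors column by column. The function \texttt{solve\_full\_order} is a hypothetical stand-in for the full order solve of (\ref{eqn:Linear}); here it returns a cheap analytic vector purely so that the bookkeeping can be run, not the FEM solution itself.

```python
import numpy as np

def build_snapshot_matrix(solve_full_order, omegas):
    """Stack the solution vectors q(omega_n) as the columns of D, shape (N_d, N)."""
    columns = [solve_full_order(om) for om in omegas]
    return np.column_stack(columns)

# Hypothetical stand-in for the full order solve of the N_d-sized system.
def solve_full_order(omega, Nd=50):
    k = np.arange(1, Nd + 1)
    return k / (k**2 + 1j * omega)   # smooth, omega-dependent coefficients

omegas = np.logspace(2, 8, num=13)   # N = 13 logarithmically spaced snapshots
D = build_snapshot_matrix(solve_full_order, omegas)
```

In the actual computation each column would come from one solve of (\ref{eqn:Linear}), so the off-line cost grows linearly with the number of snapshots $N$.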
Application of a singular value decomposition (SVD)~e.g.~\cite{bjorck,hansen} gives \begin{equation}\label{eq:SVD} \mathbf{D}=\mathbf{U\Sigma V}^*,\end{equation} where $\mathbf{U}\in\mathbb{C}^{N_d\times N_d}$ and $\mathbf{V}^*\in\mathbb{C}^{N\times N}$ are unitary matrices and $\mathbf{\Sigma}\in\mathbb{R}^{N_d\times N}$ is a diagonal matrix enlarged by zeros so that it becomes rectangular. In the above, $\mathbf{V}^*= \overline{\mathbf{V}}^T$ is the conjugate transpose of $\mathbf{V}$. The diagonal entries $(\mathbf{\Sigma})_{ii}=\sigma_i$~\footnote{Note that $\sigma_*$ is used for conductivity and $\sigma_i$ for a singular value; however, it should be clear from the context which definition applies.} are the singular values of $\mathbf{D}$ and they are arranged so that $\sigma_1\ge\sigma_2\ge...\ge\sigma_{N}$. Based on the sparse representation of the solutions to (\ref{eqn:Theta1}) as a function of $\nu$, and hence $\omega$, (and hence also the sparse representation of the MPT) found in~\cite{LedgerLionheart2019}, we expect the singular values to decay rapidly towards zero, which motivates the introduction of the truncated singular value decomposition (TSVD)~e.g.~\cite{bjorck,hansen} \begin{equation}\label{eq:truncSVD} \mathbf{D}\approx \mathbf{D}^M = \mathbf{U}^M\mathbf{\Sigma}^M(\mathbf{V}^M)^*, \end{equation} where $\mathbf{U}^M\in\mathbb{C}^{N_d\times M} $ are the first $M$ columns of $\mathbf{U}$, $\mathbf{\Sigma}^M\in{\mathbb R}^{M\times M}$ is a diagonal matrix containing the first $M$ singular values and $(\mathbf{V}^M)^*\in{\mathbb C}^{M\times N}$ are the first $M$ rows of $\mathbf{V}^*$. The computation of (\ref{eq:truncSVD}) constitutes the off-line stage of the POD. Using (\ref{eq:truncSVD}) we can recover an approximate representation for each of our solution snapshots as follows \begin{equation} \mathbf{q}(\bm{\omega}_j)\approx\mathbf{U}^M\mathbf{\Sigma}^M((\mathbf{V}^M)^*)_j, \end{equation} where $((\mathbf{V}^M)^*)_j$ refers to the $j$th column of $(\mathbf{V}^M)^*$.
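The TSVD step (\ref{eq:truncSVD}) can be sketched in a few lines with \texttt{numpy}; the small random complex matrix below is only a stand-in for the snapshot matrix $\mathbf{D}$, and, by the Eckart--Young theorem, the spectral-norm error of the rank-$M$ factorisation equals $\sigma_{M+1}$.

```python
import numpy as np

def truncated_svd(D, M):
    """Rank-M truncated SVD factors U^M, Sigma^M, (V^M)^* of D."""
    # Thin SVD: singular values are returned in decreasing order.
    U, s, Vh = np.linalg.svd(D, full_matrices=False)
    return U[:, :M], np.diag(s[:M]), Vh[:M, :]

# Small random stand-in for the snapshot matrix (N_d = 6, N = 4).
rng = np.random.default_rng(0)
D = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
UM, SM, VhM = truncated_svd(D, M=2)
DM = UM @ SM @ VhM   # the rank-2 approximation D^M of D
```

For the real snapshot matrix the cost of this factorisation is negligible compared with the $N$ full order solves needed to assemble $\mathbf{D}$.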
\subsection{Projection based proper orthogonal decomposition (PODP)} \label{sect:podp} In the online stage of PODP, $\mathbf{q}^{PODP} ( {\bm \omega}) \approx\mathbf{q}({\bm \omega})$ is obtained by taking a linear combination of the columns of ${\bf U}^M$ where the coefficients of this projection are contained in the vector ${\bf p}^M$. We choose to also approximate $\mathbf{l}({\bm \omega})$ in a similar way so that \begin{subequations}\label{eqn:solution:rebreakdown} \begin{align} \bm{\theta}^{(1,hp)}(\bm{\xi},\bm{\omega}) \approx(\bm{\theta}^{(1,hp)})^{\text{PODP}} ({\bm \xi}, \bm{\omega}) :=& \textbf{N}(\bm{\xi}) \mathbf{q}^{PODP} ( {\bm \omega}) = \textbf{N}(\bm{\xi}) \mathbf{U}^M\textbf{p}^M( \bm{\omega}) \in Y^{(PODP)} , \\ \bm{\psi}^{(hp)}(\bm{\xi},\bm{\omega})\approx(\bm{\psi}^{(hp)})^{\text{PODP}} ({\bm \xi}, \bm{\omega}) :=& \textbf{N}(\bm{\xi})\textbf{l}^{PODP} (\bm{\omega})= \textbf{N}(\bm{\xi})\mathbf{U}^M\textbf{o}^M(\bm{\omega}) \in Y^{(PODP)}, \end{align} \end{subequations} where $Y^{(PODP)}\subset Y^\varepsilon \cap W^{(hp)}$. Substituting these lower dimensional representations into (\ref{eqn:basisbilinear}) we obtain \begin{align} \sum_{i=1}^{M}\sum_{j=1}^{M}\overline{o^M_i(\bm{\omega})}a\big(\bm{N}^{(j)}(\bm{\xi})( \mathbf{U}^{M})_j &,\bm{N}^{(i)}(\bm{\xi})(\mathbf{U}^{M})_i;\bm{\omega}\big)p^M_j(\bm{\omega})\nonumber\\ &=\sum_{i=1}^{M}\overline{o^M_i(\bm{\omega})}r\big(\bm{N}^{(i)}(\bm{\xi})( \mathbf{U}^{M})_i; \bm{\theta}^{(0,hp)},\bm{\omega}\big), \nonumber \\ ({\mathbf{o}^M} (\bm{\omega}))^* ( (\mathbf{U}^M)^*\mathbf{A}(\bm{\omega}) \mathbf{U}^M )\mathbf{p}^M ( \bm{\omega}) & = ({\mathbf{o}^M}(\bm{\omega}))^*(\mathbf{U}^M)^* \mathbf{r}(\bm{\theta}^{(0,hp)}, \bm{\omega}).
\label{eqn:basisbilinearPOD} \end{align} Then, if we choose $\mathbf{o}^M (\bm{\omega})$ appropriately, we obtain the linear system \begin{equation}\label{eqn:ReducedA} \mathbf{A}^M(\bm{\omega})\mathbf{p}^M( \bm{\omega})=\mathbf{r}^M(\bm{\theta}^{(0,hp)}, \bm{\omega}), \end{equation} which is of size $M\times M$ where $\mathbf{A}^M(\bm{\omega})\vcentcolon=(\mathbf{U}^M)^*\mathbf{A}(\bm{\omega})\mathbf{U}^M$ and $\mathbf{r}^M(\bm{\theta}^{(0,hp)} , \bm{\omega})\vcentcolon=(\mathbf{U}^M)^*\mathbf{r} ( \bm{\theta}^{(0,hp)}, \bm{\omega})$. Note that, since $M<N \ll N_d$, this system is significantly smaller than (\ref{eqn:Linear}) and, therefore, substantially cheaper to solve. After solving this reduced system, and obtaining $\mathbf{p}^M(\bm{\omega})$, we obtain an approximate solution for $\bm{\theta}^{(1,hp)}(\bm{\xi},\bm{\omega})$ using (\ref{eqn:solution:rebreakdown}). Focusing on the particular case where $\bm{\omega}=\omega$, from (\ref{eqn:BilinearExpanded}) we observe that we can express $\mathbf{A}$ and $\mathbf{r}$ as the simple sums \begin{align} \mathbf{A}(\omega)=& \mathbf{A}^{(0)} + \omega \mathbf{A}^{(1)} , \nonumber \\ \mathbf{r}(\bm{\theta}^{(0,hp)},\omega)=&\omega \mathbf{r}^{(1)}(\bm{\theta}^{(0,hp)}) , \nonumber \end{align} where the definitions of $ \mathbf{A}^{(0)}$, $ \mathbf{A}^{(1)}$ and $\mathbf{r}^{(1)}(\bm{\theta}^{(0,hp)}) $ follow immediately from (\ref{eqn:definestiffandrhs}), (\ref{eqn:BilinearExpanded}) and the definition of $\nu$. Then, by computing and storing $(\mathbf{U}^M)^*\mathbf{A}^{(0)} \mathbf{U}^M$, $(\mathbf{U}^M)^*\mathbf{A}^{(1)} \mathbf{U}^M$ and $(\mathbf{U}^M)^*\mathbf{r}^{(1)}(\bm{\theta}^{(0,hp)})$, which are independent of $\omega$, it follows that $\mathbf{A}^M({\omega})$ and $\mathbf{r}^M(\bm{\theta}^{(0,hp)},{\omega})$ can be efficiently calculated for each new $\omega$ from the stored data.
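A minimal sketch of the off-line projection and the on-line $M\times M$ solve is given below. The matrices standing in for $\mathbf{A}^{(0)}$, $\mathbf{A}^{(1)}$, $\mathbf{r}^{(1)}$ and the POD basis $\mathbf{U}^M$ are random and purely illustrative, so only the linear algebra, not the physics, is represented.

```python
import numpy as np

def precompute_reduced(UM, A0, A1, r1):
    """Off-line: project the omega-independent blocks onto the POD basis."""
    UH = UM.conj().T
    return UH @ A0 @ UM, UH @ A1 @ UM, UH @ r1

def podp_solve(omega, A0_M, A1_M, r1_M, UM):
    """On-line: assemble and solve the M x M system for one frequency."""
    pM = np.linalg.solve(A0_M + omega * A1_M, omega * r1_M)
    return UM @ pM   # coefficients q^PODP in the full-order basis

# Hypothetical stand-ins for the assembled FEM blocks (N_d = 8, M = 3).
rng = np.random.default_rng(1)
Nd, M = 8, 3
A0 = rng.standard_normal((Nd, Nd)) + Nd * np.eye(Nd)   # well conditioned
A1 = 1j * np.diag(rng.random(Nd))
r1 = rng.standard_normal(Nd) + 0j
UM, _ = np.linalg.qr(rng.standard_normal((Nd, M)) + 1j * rng.standard_normal((Nd, M)))
A0_M, A1_M, r1_M = precompute_reduced(UM, A0, A1, r1)
q_pod = podp_solve(10.0, A0_M, A1_M, r1_M, UM)
```

A useful sanity check is the Galerkin property: the full order residual of the PODP solution is orthogonal to the columns of $\mathbf{U}^M$.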
In a similar manner, by precomputing appropriate data, the MPT coefficients in (\ref{eqn:NRI}) can also be rapidly evaluated for each new $\omega$ using the PODP solutions. This leads to further considerable computational savings. We emphasise that the PODP is only applied to obtain ROM solutions for $\bm{\theta}^{(1)}(\bm{\xi},{\omega})$ and not to $\bm{\theta}^{(0)}(\bm{\xi})$, which does not depend on $\omega$. \subsection{PODP output certification} \label{sect:outputcert} We follow the approach described in~\cite{hesthaven2016}, which enables us to derive and compute certificates of accuracy on the MPT coefficients obtained with PODP, with respect to those obtained with the full order model, as a function of $\omega$. To do this, we set $\bm{\epsilon}_i (\omega)= \bm{\theta}_i^{(1,hp)} (\omega)- (\bm{\theta}_i^{(1,hp)})^{\text{PODP}} (\omega) \in Y^{(hp)}$, where we have reintroduced the subscript $i$, as we need to distinguish between the cases $i=1,2,3$. Although $\bm{\epsilon}_i$ also depends on $\bm{\xi}$, we have chosen here, and in the following, to only emphasise its dependence on $\omega$. We have also introduced $Y^{(hp)}= Y^\varepsilon \cap W^{(hp)}$ for simplicity of notation, and note that this error satisfies \begin{align} a(\bm{\epsilon}_i (\omega), \bm{\psi}; {\omega} ) =r (\bm{\psi};\bm{\theta}_i^{(0,hp)} ,\omega) \qquad \forall \bm{\psi} \in Y^{(hp)}, \label{eqn:erroreqn} \end{align} which is called the error equation~\cite{hesthaven2016} and \begin{align} a(\bm{\epsilon}_i (\omega) , \bm{\psi} ; \omega) =0 \qquad \forall \bm{\psi} \in Y^{(PODP)}, \end{align} which is called Galerkin orthogonality~\cite{hesthaven2016}.
The Riesz representation~\cite{hesthaven2016} of $r (\cdot ;\bm{\theta}_i^{(0,hp)}, \omega)$ denoted by $\hat{\bm{r}}_i(\omega) \in Y^{(hp)}$ is such that \begin{align} (\hat{\bm{r}}_i (\omega) , \bm{\psi} )_{Y^{(hp)}} =r(\bm{\psi};\bm{\theta}_i^{(0,hp)},\omega) \qquad \forall \bm{\psi} \in Y^{(hp)}, \label{eqn:riesz} \end{align} so that \begin{align} a(\bm{\epsilon}_i(\omega), \bm{\psi}; {\omega} ) =(\hat{\bm{r}}_i(\omega) , \bm{\psi} )_{Y^{(hp)}} \qquad \forall \bm{\psi} \in Y^{(hp)}. \end{align} Then, by using the alternative set of formulae for the tensor coefficients~\cite{LedgerLionheart2019} \begin{subequations} \label{eqn:tensoraltform} \begin{align} (\mathcal{R}[\alpha B, \omega])_{ij}&=-\frac{\alpha^3}{4} \int_{B} \nu\text{Im} ( \bm{\theta}_j^{(1,hp)} ) \cdot \bm{\theta}_i^{(0,hp)} \mathrm{d} \bm{\xi} = -\frac{\alpha^3}{4} \left < \nu\text{Im} ( \bm{\theta}_j^{(1,hp)} ), \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)}, \\ (\mathcal{I}[\alpha B, \omega])_{ij}&= \frac{\alpha^3}{4} \left ( \int_{B} \nu\text{Re} ( \bm{\theta}_j^{(1,hp)} ) \cdot \bm{\theta}_i^{(0,hp)} \mathrm{d} \bm{\xi} + \int_{B} \nu \bm{\theta}_j^{(0,hp)} \cdot \bm{\theta}_i^{(0,hp)}\mathrm{d} \bm{\xi} \right ) \nonumber \\ & = \frac{\alpha^3}{4}\left ( \left < \nu\text{Re} ( \bm{\theta}_j^{(1,hp)} ), \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} + \left < \nu \bm{\theta}_j^{(0,hp)} , \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right ), \end{align} \end{subequations} written in terms of the full order solutions, we obtain the certificates for the tensor entries computed using PODP stated in the lemma below. Note that the formulae stated in (\ref{eqn:NRI}) are used for the actual POD computation of $(\mathcal{R}^{PODP}[\alpha B, \omega])_{ij}$ and $(\mathcal{I}^{PODP}[\alpha B, \omega])_{ij}$, but the form in (\ref{eqn:tensoraltform}) is useful for obtaining certificates. 
Also, as $(\mathcal{N}^0[\alpha B])_{ij}$ is independent of $\omega$, we have $(\mathcal{N}^{0,PODP}[\alpha B])_{ij}= (\mathcal{N}^{0}[\alpha B])_{ij}$ and we write $\mathcal{M}^{PODP}[\alpha B, \omega] = \mathcal{N}^{0,PODP}[\alpha B]+ \mathcal{R}^{PODP}[\alpha B, \omega]+ \mathrm{i} \mathcal{I}^{PODP}[\alpha B, \omega]$ for the MPT obtained by PODP. \begin{lemma} An error certificate for the tensor coefficients computed using PODP is \begin{subequations} \label{eqn:certifcate} \begin{align} \left | (\mathcal{R}[\alpha B, \omega])_{ij} - (\mathcal{R}^{PODP}[\alpha B, \omega])_{ij} \right |\le & (\Delta[\omega])_{ij} ,\\ \left | (\mathcal{I}[\alpha B, \omega])_{ij} - (\mathcal{I}^{PODP}[\alpha B, \omega])_{ij} \right | \le &(\Delta[\omega])_{ij}, \end{align} \end{subequations} where \begin{align} (\Delta[\omega])_{ij}: = \frac{\alpha^3}{8\alpha_{LB}} \left ( \| \hat{\bm{r}}_i (\omega) \|_{Y^{(hp)}}^2 + \| \hat{\bm{r}}_j (\omega) \|_{Y^{(hp)}}^2 + \| \hat{\bm{r}}_i (\omega) - \hat{\bm{r}}_j (\omega) \|_{Y^{(hp)}}^2 \right ) , \nonumber \end{align} and $\alpha_{LB}$ is a lower bound on a stability constant. \end{lemma} \begin{proof} We concentrate on the proof for $ \left | (\mathcal{R}[\alpha B, \omega])_{ij} - (\mathcal{R}^{PODP}[\alpha B, \omega])_{ij} \right |$ as the proof for the second bound is similar and leads to the same result.
Recalling the symmetry of $\mathcal{R}[\alpha B, \omega]$, we have $ (\mathcal{R}[\alpha B, \omega])_{ij} = \frac{1}{2} \left ( (\mathcal{R}[\alpha B, \omega])_{ij} + (\mathcal{R}[\alpha B, \omega])_{ji} \right )$ so that \begin{align} D:=\left | (\mathcal{R}[\alpha B, \omega])_{ij} - (\mathcal{R}^{PODP}[\alpha B, \omega])_{ij} \right | =& \frac{\alpha^3}{8} \left | \left < \nu\text{Im} ( \bm{\epsilon}_i ), \bm{\theta}_j^{(0,hp)} \right >_{L^2(B)}+ \left < \nu\text{Im} ( \bm{\epsilon}_j ), \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right | \nonumber \\ =& \frac{\alpha^3}{8} \left | \left < \nu\text{Im} ( \bm{\epsilon}_i ), \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)}+ \left < \nu\text{Im} ( \bm{\epsilon}_i ),\bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)}+ \right . \nonumber\\ &\left . \left < \nu\text{Im} ( \bm{\epsilon}_j ), \bm{\theta}_j^{(0,hp)} \right >_{L^2(B)} + \left < \nu\text{Im} ( \bm{\epsilon}_j ), \bm{\theta}_i^{(0,hp)} -\bm{\theta}_j^{(0,hp)} \right >_{L^2(B)} \right | \nonumber \\ = & \frac{\alpha^3}{8} \left | \left < \nu\text{Im} ( \bm{\epsilon}_i ), \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)}+ \left < \nu\text{Im} ( \bm{\epsilon}_i - \bm{\epsilon}_j ),\bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)}+ \right . \nonumber\\ &\left . \left < \nu\text{Im} ( \bm{\epsilon}_j ), \bm{\theta}_j^{(0,hp)} \right >_{L^2(B)} \right | \nonumber , \end{align} which follows since $\nu$ and $\bm{\theta}_i^{(0,hp)}$ are real valued and where we have dropped the dependence of $\bm{\epsilon}_i$ on $\omega$ for simplicity of presentation.
Thus, \begin{align} D\le \frac{\alpha^3}{8}\left ( \left | \left < \nu \bm{\epsilon}_i , \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right | + \left | \left < \nu (\bm{\epsilon}_i - \bm{\epsilon}_j ),\bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right | + \left | \left < \nu \bm{\epsilon}_j , \bm{\theta}_j^{(0,hp)} \right >_{L^2(B)} \right | \right ) .\nonumber \end{align} Next, using (\ref{eqn:erroreqn}), we make the observation that \begin{align} \left | \left < \nu \bm{\epsilon}_i , \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right | = \left | r(\bm{\epsilon}_i; \bm{\theta}_i^{(0,hp)} ,\omega ) \right | = \left | a(\bm{\epsilon}_i , \bm{\epsilon}_i ; {\omega} ) \right | = \| \bm{\epsilon}_i \|_\omega^2. \nonumber \end{align} Also, since $r( \bm{\psi}; \bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} ,\omega )= a( \bm{\theta}_j^{(1,hp)} (\omega) - \bm{\theta}_i^{(1,hp)} (\omega), \bm{\psi};\omega) = a( \bm{\epsilon}_j- \bm{\epsilon}_i, \bm{\psi};\omega) $ for all $\bm{\psi} \in Y^{(hp)}$, we have $ \left | \left < \nu \bm{\psi} , \bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right | = \left | a( \bm{\epsilon}_j- \bm{\epsilon}_i, \bm{\psi};\omega) \right | $ so that \begin{align} \left | \left < \nu ( \bm{\epsilon}_i -\bm{\epsilon}_j ) , \bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} \right >_{L^2(B)} \right | = \left | r(\bm{\epsilon}_i-\bm{\epsilon}_j; \bm{\theta}_j^{(0,hp)} - \bm{\theta}_i^{(0,hp)} , \omega ) \right | = \left | a(\bm{\epsilon}_j -\bm{\epsilon}_i , \bm{\epsilon}_i - \bm{\epsilon}_j ; {\omega} ) \right | = \| \bm{\epsilon}_i - \bm{\epsilon}_j \|_\omega^2, \nonumber \end{align} and hence \begin{align} D \le \frac{\alpha^3}{8}\left ( \| \bm{\epsilon}_i \|_\omega^2 + \| \bm{\epsilon}_i - \bm{\epsilon}_j \|_\omega^2 + \| \bm{\epsilon}_j \|_\omega^2\right ) \label{eqn:bderrors}. 
\end{align} Following similar steps to~\cite[pg47-50]{hesthaven2016}, and using the Riesz representation introduced in (\ref{eqn:riesz}), we find that \begin{align} \| \bm{\epsilon}_i \|_\omega^2 \le \frac{\| \hat{\bm{r}}_i (\omega) \|_{Y^{(hp)}}^2}{\alpha_{LB}} , \qquad \| \bm{\epsilon}_j \|_\omega^2 \le \frac{ \| \hat{\bm{r}}_j (\omega) \|_{Y^{(hp)}}^2}{\alpha_{LB}} , \qquad \| \bm{\epsilon}_i - \bm{\epsilon}_j \|_\omega^2 \le \frac{ \| \hat{\bm{r}}_i(\omega)- \hat{\bm{r}}_j (\omega) \|_{Y^{(hp)}}^2}{\alpha_{LB}} \nonumber , \end{align} and, combining this with (\ref{eqn:bderrors}), completes the proof. \end{proof} The efficient evaluation of (\ref{eqn:certifcate}) follows the approach presented in~\cite[pg52-54]{hesthaven2016}, adapted to complex matrices and with the simplification that we compute a Riesz representation $ \hat{\bm{r}}_i(\omega) \in Y^{(h0)}$ using lowest order elements for computational efficiency. The computations are split into those performed in the off-line stage and those in the on-line stage as follows. In the off-line stage, the following $(2M+1) \times (2M+1) $ Hermitian matrices are computed \begin{align} \mathbf{G}^{(i,j)} = \left ( \textbf{W}^{(i)} \right )^H \textbf{M}_0^{-1} \textbf{W}^{(j)} , \nonumber \end{align} where, since $\mathbf{G}^{(j,i)} = (\mathbf{G}^{(i,j)})^H$, it follows that, in practice, only the three matrices $\mathbf{G}^{(1,1)}$, $\mathbf{G}^{(2,2)}$ and $\mathbf{G}^{(3,3)}$ are required for computing the certificates on the diagonal entries of the tensors, and a further three matrices $\mathbf{G}^{(1,2)}$, $\mathbf{G}^{(1,3)}$ and $\mathbf{G}^{(2,3)}$ are needed for the off-diagonal terms.
In the above, $(\textbf{M}_0)_{ij} = \left < \bm{N}^{(i)} , \bm{N}^{(j)} \right >_{L^2(\Omega)}$ are the coefficients of a real symmetric FEM mass matrix for the lowest order elements, with $ \bm{N}^{(i)}, \bm{N}^{(j)}\in W^{(h0)}$ being typical lowest order basis functions, and \begin{align} \textbf{W}^{(i)}:=\textbf{P}_0^p \left (\begin{array}{ccc} \mathbf{r}^{(1)}( \bm{\theta}_i^{(0)}) & \mathbf{A} ^{(0)} \mathbf{U}^{(M,i)} & \mathbf{A} ^{(1)} \mathbf{U}^{(M,i)} \end{array} \right), \nonumber \end{align} where $\textbf{P}_0^p$ is a projection matrix of the FEM basis functions from order $p$ to the lowest order $0$, and $ \mathbf{U}^{(M,i)}$ is the $\mathbf{U}^M$ obtained in (\ref{eq:truncSVD}) for the $i$th direction. The stability constant $\alpha_{LB} =\lambda_{min}\text{min}(1,\frac{\omega}{\omega'})$ is obtained from the smallest eigenvalue of an eigenvalue problem~\cite[pg56]{hesthaven2016}, which, in practice, is only solved once for the smallest frequency of interest $\omega'$. In the on-line stage, we evaluate \begin{align} \| \hat{\bm{r}}_i (\omega) \|_{Y^{(hp)}}^2 =& (\mathbf{w}^{(i)}(\omega))^H \mathbf{G}^{(i,i)} \mathbf{w}^{(i)}(\omega) , \nonumber \\ \| \hat{\bm{r}}_i (\omega)- \hat{\bm{r}}_j (\omega) \|_{Y^{(hp)}}^2 =& \| \hat{\bm{r}}_i (\omega) \|_{Y^{(hp)}}^2 + \| \hat{\bm{r}}_j (\omega) \|_{Y^{(hp)}}^2 - 2 \text{Re}\big( (\mathbf{w}^{(i)}(\omega))^H \mathbf{G}^{(i,j)} \mathbf{w}^{(j)}(\omega) \big) , \nonumber \end{align} for each $\omega$ by updating the vector \begin{equation} \mathbf{w}^{(i)}(\omega) =\left ( \begin{array}{c} \omega \\- \mathbf{p}^M( {\omega}) \\ -\omega \mathbf{p}^M({\omega}) \end{array} \right ). \nonumber \end{equation} We then apply~(\ref{eqn:certifcate}) to obtain the output certificates.
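The on-line certificate evaluation is cheap: for each $\omega$ one updates $\mathbf{w}^{(i)}(\omega)$ and evaluates a handful of small quadratic forms. The sketch below assumes the Hermitian matrices $\mathbf{G}^{(i,j)}$ are already available; they are built here from random stand-ins for $\textbf{W}^{(i)}$ with $\textbf{M}_0$ replaced by the identity, purely to exercise the formulae.

```python
import numpy as np

def w_vector(omega, pM):
    """Assemble w(omega) = [omega; -p^M(omega); -omega*p^M(omega)]."""
    return np.concatenate(([omega], -pM, -omega * pM))

def residual_norms_sq(omega, pM_i, pM_j, G_ii, G_jj, G_ij):
    """On-line evaluation of ||r_i||^2, ||r_j||^2 and ||r_i - r_j||^2."""
    wi, wj = w_vector(omega, pM_i), w_vector(omega, pM_j)
    ni = np.real(wi.conj() @ G_ii @ wi)
    nj = np.real(wj.conj() @ G_jj @ wj)
    nij = ni + nj - 2.0 * np.real(wi.conj() @ G_ij @ wj)
    return ni, nj, nij

def certificate(alpha, alpha_LB, ni, nj, nij):
    """(Delta)_{ij} = alpha^3/(8*alpha_LB) * (||r_i||^2 + ||r_j||^2 + ||r_i - r_j||^2)."""
    return alpha**3 / (8.0 * alpha_LB) * (ni + nj + nij)

# Random stand-ins for the off-line data W^(i), with M_0 taken as the identity.
M = 2
rng = np.random.default_rng(2)
shape = (10, 2 * M + 1)
Wi = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
Wj = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
G_ii, G_jj, G_ij = Wi.conj().T @ Wi, Wj.conj().T @ Wj, Wi.conj().T @ Wj
pM_i = rng.standard_normal(M) + 1j * rng.standard_normal(M)
pM_j = rng.standard_normal(M) + 1j * rng.standard_normal(M)
ni, nj, nij = residual_norms_sq(0.5, pM_i, pM_j, G_ii, G_jj, G_ij)
delta = certificate(alpha=0.01, alpha_LB=1.0, ni=ni, nj=nj, nij=nij)
```

Since each quadratic form involves only $(2M+1)$-vectors, the cost per frequency is independent of $N_d$.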
\section{Scaling of the MPT under parameter changes}\label{sect:scaling} Two results are stated below that allow the frequency sweep of the MPT for an object with scaled conductivity, or with scaled object size, to be computed from an already known frequency sweep of the MPT for the same shaped object. \begin{lemma} \label{lemma:condscale} Given the MPT coefficients for an object $ \alpha B $ with material parameters $ \mu_r$ and $ \sigma_*$ at frequency $s\omega$, the coefficients of the MPT for an object, which has the same $B$, $\alpha $ and $\mu_r$, but with conductivity $s\sigma_* $, at frequency $\omega$, are given by \begin{align} ({\mathcal M}[\alpha B, \omega , \mu_r ,s\sigma_*])_{ij} =& ( {\mathcal M}[\alpha B, s\omega , \mu_r ,\sigma_*] )_{ij}, \label{eqn:condscale} \end{align} where $( {\mathcal M}[\alpha B, s\omega , \mu_r ,\sigma_*] )_{ij}$ denote the coefficients of the original MPT at frequency $s\omega$. \end{lemma} \begin{proof} This result immediately follows from (\ref{eqn:NRI}) and (\ref{eqn:Theta1}) since both are written in terms of $\nu=\alpha^2 \sigma_*\mu_0 \omega$. \end{proof} \begin{lemma}\label{lemma:alphascale} Given the MPT coefficients for an object $ \alpha B $ with material parameters $ \mu_r$ and $ \sigma_*$ at frequency $s^2\omega$, the coefficients of the MPT for an object $s \alpha B $, which is the same as $B$ apart from having size $s\alpha$, at frequency $\omega$, are given by \begin{align} ({\mathcal M}[s \alpha B, \omega , \mu_r ,\sigma_*])_{ij} =& s^3 ({\mathcal M}[\alpha B, s^2\omega , \mu_r ,\sigma_*] )_{ij},\label{eqn:alphascale} \end{align} where $( {\mathcal M}[\alpha B, s^2\omega , \mu_r ,\sigma_*] )_{ij}$ denote the coefficients of the original MPT at frequency $s^2\omega$. \end{lemma} \begin{proof} For the case of $\mu_r=1$ this result was proved by Ammari {\em et al.}~\cite{Ammari2015}.
We generalise this to $0<\mu_r<\mu_r^{max}<\infty$ as follows: We use the splitting $(\mathcal{M})_{ij}:=(\mathcal{N}^0)_{ij}-(\mathcal{C}^{\sigma_*})_{ij} + ( \mathcal{N}^{\sigma_*})_{ij}$ presented in~\cite{LedgerLionheart2016} and let $\bm{\theta}_{i,B}^{(0)}$ denote the solution to (\ref{eqn:Theta0}). Then, we find that \begin{equation} \frac{1}{s} \bm{\theta}_{i,sB}^{(0)} (s\bm{\xi}') = \bm{\theta}_{i,B}^{(0)} (\bm{\xi}'), \nonumber \end{equation} where $\bm{\theta}_{i,sB}^{(0)} $ is the solution to (\ref{eqn:Theta0}) with $B$ replaced by $sB$. If $ \bm{\theta}_{i,B}^{(1)}[s^2\nu]$ is the solution to (\ref{eqn:Theta1}) with $\nu$ replaced by $s^2\nu$, then we find that \begin{equation} \frac{1}{s} \bm{\theta}_{i,sB}^{(1)} [\nu ](s\bm{\xi}') = \bm{\theta}_{i,B}^{(1)}[ s^2\nu] (\bm{\xi}') , \nonumber \end{equation} where $\bm{\theta}_{i,sB}^{(1)} [\nu ]$ is the solution to (\ref{eqn:Theta1}) with $B$ replaced by $sB$. Using the above, the definitions in Lemma 1 of~\cite{LedgerLionheart2016}, and $\bm{\xi} = s {\bm \xi}'$ we find \begin{align} ({\mathcal C}^{\sigma_*}[ \alpha (sB), \omega, \mu_r,\sigma_* ] )_{ij} = & - \frac{\mathrm{i} \alpha^3\nu }{4} \int_{sB} \bm{e}_i \cdot \left ( \bm{\xi} \times \left ( \bm{\theta}_{i,sB}^{(1)}[\nu]+ \bm{\theta}_{i,sB}^{(0)} \right ) \right ) \mathrm{d} \bm{\xi} \nonumber\\ = & -\frac{\mathrm{i} s^3 \alpha^3\nu }{4} \int_{B} \bm{e}_i \cdot \left ( s \bm{\xi}' \times \left ( \bm{\theta}_{i,sB}^{(1)}[\nu](s \bm{\xi}')+ \bm{\theta}_{i,sB}^{(0)} (s \bm{\xi}') \right ) \right ) \mathrm{d} \bm{\xi}'\nonumber\\ = & -\frac{\mathrm{i} s^3 \alpha^3(s^2\nu) }{4} \int_{B} \bm{e}_i \cdot \left ( \bm{\xi}' \times \left ( \bm{\theta}_{i,B}^{(1)}[s^2\nu] + \bm{\theta}_{i,B}^{(0)} \right ) \right ) \mathrm{d} \bm{\xi}' = s^3 ({\mathcal C}^{\sigma_*} [ \alpha B, s^2\omega, \mu_r,\sigma_*] )_{ij} , \nonumber \end{align} \begin{align} ({\mathcal N}^0[ \alpha (sB), \mu_r] )_{ij} = & \frac{ \alpha^3}{2} [\tilde{\mu}^{-1} ]_\Gamma \int_{sB}
\bm{e}_i \cdot \nabla_\xi \times \bm{\theta}_{i,sB}^{(0)} \mathrm{d} \bm{\xi} \nonumber\\ = & \frac{ s^3 \alpha^3}{2} [\tilde{\mu}^{-1} ]_\Gamma \int_{B} \bm{e}_i \cdot\frac{1}{s} \nabla_{\xi'} \times (s \bm{\theta}_{i,B}^{(0)} ) \mathrm{d} \bm{\xi}' = s^3 ({\mathcal N}^0[ \alpha B, \mu_r] )_{ij} , \nonumber \end{align} \begin{align} ({\mathcal N}^{\sigma_*}[ \alpha (sB), \omega, \mu_r,\sigma_* ] )_{ij} = & \frac{ \alpha^3}{2} [\tilde{\mu}^{-1} ]_\Gamma \int_{sB} \bm{e}_i \cdot \nabla_\xi \times \bm{\theta}_{i,sB}^{(1)}[\nu] \mathrm{d} \bm{\xi} \nonumber\\ = & \frac{ s^3 \alpha^3}{2} [\tilde{\mu}^{-1} ]_\Gamma \int_{B} \bm{e}_i \cdot\frac{1}{s} \nabla_{\xi'} \times (s \bm{\theta}_{i,B}^{(1)} [s^2 \nu] ) \mathrm{d} \bm{\xi}' = s^3 ({\mathcal N}^{\sigma_*} [ \alpha B, s^2\omega, \mu_r,\sigma_*] )_{ij} , \nonumber \end{align} and the quoted result immediately follows. \end{proof} \section{Numerical examples of PODP} \label{sect:examplespodp} The PODP algorithm has been implemented in the Python interface to the high order finite element package \texttt{NGSolve}, led by the group of Sch\"oberl~\cite{NGSolve,zaglmayrphd,netgendet}, available at \texttt{https://ngsolve.org}. The snapshots are computed by solving (\ref{Weak0}) and (\ref{bilinear}) using \texttt{NGSolve} and its $\bm{H}(\text{curl})$ conforming tetrahedral finite element basis functions of order $p$ on meshes of spacing $h$~\cite{SchoberlZaglmayr2005}. Following the solution of (\ref{eqn:Linear}), and the application of (\ref{FE deconstruct}), the coefficients of ${\mathcal M}[\alpha B, \omega]$~\footnote{ In the following, when presenting numerical results for the PODP, we frequently choose to drop the superscript PODP on $\mathcal{M}[\alpha B,\omega]$, $\mathcal{R}[\alpha B,\omega]$, $\mathcal{I}[\alpha B,\omega]$ and $\mathcal{N}^0[B]$, introduced in Section~\ref{sect:outputcert}, for brevity of presentation where no confusion arises.
Also, we will return to using the notation ${\mathcal M}[\alpha B, \omega,\mu_r,\sigma_*]$, which illustrates the full parameter dependence, in Section~\ref{sect:examplesscale} when considering scaling of conductivity and object size.} follow by simple post-processing of (\ref{eqn:NRI}). If desired, PODP output certificates can also be efficiently computed using the approach described in Section~\ref{sect:outputcert}. The Python scripts for the computations presented can be accessed at \\\texttt{https://github.com/BAWilson94/MPT-Calculator}. \subsection{Conducting permeable sphere}\label{sect:ConductingSphere} We begin with the case where $B_{\alpha}=\alpha B$ is a permeable conducting sphere of radius $\alpha=0.01$ m and $B$ is the unit sphere centred at the origin. The sphere is chosen to have a relative permeability $\mu_r=1.5$ and conductivity $\sigma_*=5.96\times10^6$ S/m. To produce the snapshots of the full order model, we set $\Omega$ to be a ball 100 times the radius of $B$ and discretise it using a mesh of 26\,385 unstructured tetrahedral elements, refined towards the object, and a polynomial order of $p=3$. We have chosen this discretisation since it has already been found to produce an accurate representation of $\mathcal{M}[\alpha B, \omega]$ for $10^2 <\omega<10^8$ rad/s by comparing with the exact solution of the MPT for a sphere~\cite{wait1951,LedgerLionheart2018}. Indeed, provided that the geometry discretisation error is under control, performing $p$-refinement of the full order model solution results in exponential convergence to the true solution~\cite{LedgerLionheart2015}. We follow two different schemes for choosing frequencies $\omega$ for generating the solution vectors $\mathbf{q}(\omega)$ required for $\mathbf{D}$ in (\ref{D}).
Firstly, we consider linearly spaced frequencies $\omega_{min}\le \omega_{n}\le \omega_{max}$, $n=1,2,\ldots,N$, where, as in Section~\ref{POD}, $N$ is the number of snapshots, and denote this choice of samples by ``Lin'' in the results. Secondly, we consider logarithmically spaced frequencies $\omega_{min}\le \omega_{n}\le \omega_{max}$ and denote this regime by ``Log'' in the results. Considering both linearly and logarithmically spaced frequencies with $\omega_{min}= 1\times 10^2 \text{ rad/s}$, $\omega_{max}= 1\times 10^8 \text{ rad/s}$ and $N=9,13,17$, in turn, to generate the snapshots, the application of an SVD to $\mathbf{D}$ in (\ref{eq:SVD}) leads to the results shown in Figure~\ref{fig:Singular} where the values have been scaled by $\sigma_1$ and are strictly decreasing. We observe that the ``Log'' case produces singular values $\sigma_i/\sigma_1$, which tend to $0$ with increasing $i$, while the ``Lin'' case produces $\sigma_i/\sigma_1$, which tend to a finite constant with increasing $i$. Also shown is the tolerance $TOL=1\times 10^{-3}$, i.e. we define $M$ such that $\sigma_{M+1}/\sigma_1\leq TOL<\sigma_{M}/\sigma_1$ and create the matrices $\mathbf{U}^M$, $\mathbf{\Sigma}^M$ and $(\mathbf{V}^*)^M$ by taking the first $M$ columns of $\mathbf{U}$, the first $M$ rows and columns of $\mathbf{\Sigma}$ and the first $M$ rows of $\mathbf{V}^*$, respectively.
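The snapshot-frequency choices and the truncation rule above admit a compact illustration. The following is a minimal NumPy sketch, not the \texttt{MPT-Calculator} implementation itself, and the function names are illustrative: it generates ``Lin''- or ``Log''-spaced snapshot frequencies and retains the $M$ dominant modes of the snapshot matrix for a given $TOL$.

```python
import numpy as np

def snapshot_frequencies(w_min, w_max, N, scheme="Log"):
    """Linearly ("Lin") or logarithmically ("Log") spaced snapshot frequencies."""
    if scheme == "Lin":
        return np.linspace(w_min, w_max, N)
    return np.logspace(np.log10(w_min), np.log10(w_max), N)

def truncated_pod_basis(D, tol=1e-3):
    """SVD of the snapshot matrix D; retain the M modes with sigma_i/sigma_1 > tol,
    i.e. M such that sigma_{M+1}/sigma_1 <= tol < sigma_M/sigma_1."""
    U, s, Vh = np.linalg.svd(D, full_matrices=False)
    M = int(np.sum(s / s[0] > tol))  # singular values are returned in decreasing order
    return U[:, :M], s[:M], Vh[:M, :]
```

Applied to the snapshot matrix $\mathbf{D}$, the returned arrays play the roles of $\mathbf{U}^M$, the diagonal of $\mathbf{\Sigma}^M$ and $(\mathbf{V}^*)^M$ above.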
\begin{figure}[H] \begin{center} $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{LinearSingularValues.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{SingularValues.pdf} \\ \textrm{\footnotesize{(a) Linearly spaced snapshots}} & \textrm{\footnotesize{(b) Logarithmically spaced snapshots}} \end{array}$$ \caption{Sphere with $\mu_r=1.5$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ showing $\sigma_i/\sigma_1$ for $(a)$ linearly spaced snapshots and $(b)$ logarithmically spaced snapshots.} \label{fig:Singular} \end{center} \end{figure} The superior performance of logarithmically spaced frequency snapshots over those linearly spaced is illustrated in Figure~\ref{fig:LogvsLin}~$(a)$ where the variation of the condition number $\kappa(\mathbf{A}^M(\omega))$ with $\omega$ for the frequency range and snapshots presented in Figure~\ref{fig:Singular} is shown. Included in Figure~\ref{fig:LogvsLin}~$(b)$ is the corresponding error measure $|e(\Lambda_i(\omega))|:= |\Lambda_i^{exact}(\omega)-\Lambda_i^{PODP} (\omega)| / |\Lambda_i^{exact}(\omega)| $ with $\omega$, where $\Lambda_i(\omega)=\lambda_i(\mathcal{R}[\alpha B, \omega]+{\mathcal N}^0[\alpha B])+\textrm{i}\lambda_i(\mathcal{I}[\alpha B ,\omega])$ and $\lambda_i(\cdot)$ denotes the $i$th eigenvalue. Note that, since the results for $i=1,2,3$ are identical on this scale, only $i=1$ is shown. From the results shown in this figure, we see that, with the exception of $N=17$, the logarithmically spaced frequency snapshots result in a lower condition number compared to the linear ones and that all the logarithmically spaced snapshots result in a smaller error.
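The error measure $|e(\Lambda_i(\omega))|$ can be evaluated directly from the tensor parts. A minimal NumPy sketch, assuming the real and imaginary parts are supplied as real symmetric $3\times 3$ arrays and using illustrative helper names:

```python
import numpy as np

def complex_eigenvalues(R_plus_N0, I):
    """Lambda_i(omega) = lambda_i(R + N^0) + i*lambda_i(I), with the real and
    imaginary tensor parts given as real symmetric 3x3 arrays; eigvalsh
    returns the eigenvalues of each symmetric part in ascending order."""
    return np.linalg.eigvalsh(R_plus_N0) + 1j * np.linalg.eigvalsh(I)

def relative_error(Lam_exact, Lam_podp):
    """|e(Lambda_i)| = |Lambda_i^exact - Lambda_i^PODP| / |Lambda_i^exact|."""
    return np.abs(Lam_exact - Lam_podp) / np.abs(Lam_exact)
```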
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{PointsVsConditionNumber.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{PointsVsError.pdf}\\ \textrm{\footnotesize{(a) $\kappa(\mathbf{A}^M(\omega))$}} & \textrm{\footnotesize{(b) $e(\Lambda_i(\omega))$}} \end{array}$$ \caption{Sphere with $\mu_r=1.5$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ showing $(a)$ Variation of $\kappa(\mathbf{A}^M(\omega))$ with $\omega$ for linearly and logarithmically spaced snapshots $(b)$ Variation of $e(\Lambda_i(\omega))$ with $\omega$ for the same snapshots.} \label{fig:LogvsLin} \end{figure} Further tests reveal that the accuracy of the PODP using $N=9,13,17$ and logarithmically spaced snapshots remains similar to that shown in Figure~\ref{fig:LogvsLin}~$(b)$ for $TOL\le 1\times 10^{-3}$ for this problem. We complete the discussion of the sphere by showing a comparison of $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$, for the full order model, PODP using $N=9$ and the exact solution in Figure~\ref{Best}. Again, the results for $i=1,2,3$ are identical and, hence, only $i=1$ is shown. In this figure, we observe excellent agreement between PODP, the full order model solution and the exact solution.
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereReal.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereImag.pdf}\\ \textrm{\footnotesize{(a) $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$}} & \textrm{\footnotesize{(b) $\lambda_i(\mathcal{I}[\alpha B, \omega])$}} \end{array}$$ \caption{Sphere with $\mu_r=1.5$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $N=9$ and $TOL=1\times 10^{-4}$ showing $(a)$ $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $(b)$ $\lambda_i(\mathcal{I}[\alpha B, \omega])$ each with $\omega$.} \label{Best} \end{figure} In Figure~\ref{fig:sphereerror}, we show the output certificates $(\mathcal{R}^{PODP}[\alpha B, \omega]+{\mathcal N}^{0, PODP}[\alpha B ])_{ii}\pm (\Delta[\omega])_{ii}$ (summation of repeated indices is not implied) and $(\mathcal{I}^{PODP}[\alpha B, \omega])_{ii}\pm (\Delta[\omega])_{ii}$, each with $\omega$, obtained by applying the technique described in Section~\ref{sect:outputcert} for the case where $i=1$ and with $N=17, 21$ and $TOL=1\times10^{-6}$. Similar certificates can be obtained for the other tensor coefficients. We observe that the certificates are indistinguishable from the MPT coefficients obtained with PODP for low frequencies in both cases and the certificates rapidly tend to the MPT coefficients for all $\omega$ as $N$ is increased. Note that $TOL=1\times10^{-6}$ is chosen as larger tolerances lead to larger certificates; however, this reduction in tolerance does not substantially affect the computational cost of the ROM.
Although the effectivity indices $(\Delta [\omega])_{11}/|(\mathcal{R}[\alpha B, \omega]- \mathcal{R}^{PODP}[\alpha B, \omega] )_{11}|$ and $(\Delta [ \omega])_{11}/|(\mathcal{I}[\alpha B, \omega]- \mathcal{I}^{PODP}[\alpha B, \omega] )_{11}|$ of the PODP with respect to the full order model are clearly larger at higher frequencies, we emphasise that the certificates are computed at negligible additional cost, converge rapidly to the MPT coefficients obtained with PODP as $N$ is increased, and lend credibility to the PODP solution without the need to perform additional full order model solutions to validate the ROM. \begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereErrorBarsRealn_17.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereErrorBarsImagn_17.pdf} \\ \textrm{\footnotesize{(a) $(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{11}$, $N=17$}} & \textrm{\footnotesize{(b) $(\mathcal{I}[\alpha B, \omega])_{11}$, $N=17$} } \\ \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereErrorBarsRealn_21.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereErrorBarsImagn_21.pdf}\\ \textrm{\footnotesize{(c) $(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{11}$, $N=21$}} & \textrm{\footnotesize{(d) $(\mathcal{I}[\alpha B, \omega])_{11}$, $N=21$} } \end{array}$$ \caption{Sphere with $\mu_r=1.5$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $TOL=1\times 10^{-6}$ showing the PODP solution, full order model solutions and output certificates $(\cdot ) \pm (\Delta [ \omega])_{11}$ for $(a)$ $ (\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{11}$ using $N=17$, $(b)$ $ (\mathcal{I}[\alpha B, \omega])_{11}$ using $N=17$, $(c)$ $(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{11}$ using $N=21$ and $(d)$ $(\mathcal{I}[\alpha B, \omega])_{11}$ using $N=21$, each with $\omega$.}
\label{fig:sphereerror} \end{figure} The computational speed-ups offered by using the PODP compared to a frequency sweep performed with the full order model are shown in Figure~\ref{fig:Speedup} where $N=9,13,17$ and logarithmically spaced snapshots are chosen with $\omega_{min}= 1\times 10^2 \text{ rad/s}$, $\omega_{max}= 1\times 10^8 \text{ rad/s}$, as before. For the comparison, we vary the number of output points $N_0$ produced in a frequency sweep and measure the time taken to produce each of these frequency sweeps using a 2.9 GHz quad core Intel i5 processor and also show the percentage speed-up offered by each of these PODP sweeps. Also shown is the breakdown of the computational time for the offline and online stages of the PODP for the case where $N=13$. Note, in particular, that the computational cost increases very slowly with $N_0$ and that the additional cost involved in computing the output certificates is negligible. The breakdown of computational costs for other $N$ is similar.
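The origin of the speed-up can be captured by a simple cost model: a full order sweep pays the expensive solve at every one of the $N_0$ output frequencies, whereas PODP pays it only for the $N$ offline snapshots plus a cheap online evaluation per output frequency. The sketch below uses purely hypothetical timings for illustration; it is not a measurement of the runs reported above.

```python
def sweep_times(N, N_0, t_snapshot, t_online, t_svd=0.0):
    """Illustrative cost model (hypothetical timings): full order sweep cost
    versus PODP cost = fixed offline cost (snapshots + SVD) + per-output
    online cost."""
    t_full_sweep = N_0 * t_snapshot
    t_podp_sweep = t_svd + N * t_snapshot + N_0 * t_online
    return t_full_sweep, t_podp_sweep
```

For example, with hypothetical values $N=13$, $N_0=1000$, a 100 s full order solve and a 0.1 s online evaluation, the PODP sweep costs about 1400 s against 100\,000 s for the full order sweep, and doubling $N_0$ adds only 100 s, consistent with the slow growth with $N_0$ noted above.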
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{UpdatedTimings.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TimeBreakDown.pdf}\\ \textrm{\footnotesize{(a) Sweep Time}} & \textrm{\footnotesize{(b) Breakdown of PODP Timings}} \end{array}$$ \caption{Sphere with $\mu_r=1.5$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $N=13,17,21$ and $TOL=1\times 10^{-6}$ showing, for different numbers of outputs $N_0$, $(a)$ sweep computational time compared with full order and $(b)$ a typical breakdown of the offline and online computational times for $N=13$.} \label{fig:Speedup} \end{figure} \subsection{Conducting permeable torus}\label{sect:condtorus} Next, we consider $B_{\alpha}= \alpha B$ to be a torus where $B$ has major and minor radii $a=2$ and $b=1$, respectively, $\alpha=0.01$ m and the object is permeable and conducting with $\mu_r=1.5$, $\sigma_*=5\times10^5$ S/m. The object is centred at the origin so that it has rotational symmetry around the $\bm{e}_1$ axis and hence ${\mathcal M}[\alpha B, \omega]$ has independent coefficients $({\mathcal M}[\alpha B, \omega])_{11}$ and $({\mathcal M}[\alpha B, \omega])_{22}=({\mathcal M}[\alpha B, \omega])_{33}$, and thus $\mathcal{N}^0[\alpha B]$, $\mathcal{R}[\alpha B, \omega]$, $\mathcal{I}[\alpha B, \omega]$ each have $2$ independent eigenvalues. To compute the full order model, we set $\Omega$ to be a sphere of radius 100, centred at the origin, such that it contains $B$ and discretise it with a mesh of 26\,142 unstructured tetrahedral elements, refined towards the object, and a polynomial order of $p=3$. This discretisation has already been found to produce an accurate representation of $\mathcal{M}[\alpha B, \omega]$ for the frequency range $\omega_{min}=1 \times10^2\text{ rad/s}$ to $\omega_{max} = 1 \times10^8\text{ rad/s}$ with the full order model.
The reduced order model is constructed using $N=13$ snapshots at logarithmically spaced frequencies with $TOL=1 \times 10^{-4} $. Figure~\ref{fig:Torus} shows the results for $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$, for both the full order model and the PODP. The agreement is excellent in both cases. \begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{TorusRealEigenvalues.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TorusImaginaryEigenvalues.pdf}\\ \textrm{\footnotesize{(a) $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$}} & \textrm{\footnotesize{(b) $\lambda_i(\mathcal{I}[\alpha B, \omega])$}} \end{array}$$ \caption{Torus with major and minor radii of $a=2$ and $b=1$, respectively, and $\mu_r=1.5$, $\sigma_*=5\times10^5$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $N=13$ and $TOL=1 \times 10^{-4} $ showing $(a)$ $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $(b)$ $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$. } \label{fig:Torus} \end{figure} In Figure~\ref{fig:ErrorTorus}, we show the output certificates $(\mathcal{R}^{PODP}[\alpha B, \omega]+{\mathcal N}^{0, PODP}[\alpha B ])_{ii}\pm (\Delta[\omega])_{ii}$ (no summation over repeated indices implied) and $(\mathcal{I}^{PODP}[\alpha B, \omega])_{ii}\pm (\Delta[\omega])_{ii}$, each with $\omega$, obtained by applying the technique described in Section~\ref{sect:outputcert} for the case where $N=17$ and $TOL=1\times10^{-6}$. Note that we increased the number of snapshots from $N=13$ to $N=17$ and have reduced the tolerance to ensure tight certificate bounds.
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{TorusDiagonalReal.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TorusDiagonalImag.pdf} \\ \textrm{\footnotesize{(a) $(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega] )_{ii}$}} & \textrm{\footnotesize{(b) $(\mathcal{I}[\alpha B, \omega])_{ii} $}} \end{array}$$ \caption{ Torus with $\mu_r=1.5$, $\sigma_*=5\times10^5$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $TOL=1\times 10^{-6}$ and $N=17$ showing the PODP solution and output certificates $(\cdot ) \pm (\Delta [\omega])_{ii}$ for $(a)$ $ (\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{ii}$, $(b)$ $ (\mathcal{I}[\alpha B, \omega])_{ii}$, each with $\omega$. } \label{fig:ErrorTorus} \end{figure} \subsection{Conducting permeable tetrahedron}\label{sect:tetra} The third object considered is a conducting permeable tetrahedron $B_\alpha = \alpha B$. The vertices of the tetrahedron $B$ are chosen to be at the locations \begin{equation} v_1=\begin{pmatrix} 0\\0\\0\end{pmatrix},\ v_2=\begin{pmatrix} 7\\0\\0\end{pmatrix},\ v_3=\begin{pmatrix} 5.5\\4.6\\0\end{pmatrix}\ \textrm{and}\ v_4=\begin{pmatrix} 3.3\\2\\5\end{pmatrix}, \nonumber \end{equation} the object size is $\alpha = 0.01$ m and the tetrahedron is permeable and conducting with $\mu_r=2$ and $\sigma_*=5.96\times10^6$ S/m. The object does not have rotational or reflectional symmetries and, hence, ${\mathcal M}[\alpha B,\omega]$ has $6$ independent coefficients and, thus, $\mathcal{N}^0[\alpha B]$, $\mathcal{R}[\alpha B, \omega]$, $\mathcal{I}[\alpha B, \omega]$ each have $3$ independent eigenvalues. To compute the full order model, we set $\Omega$ to be a cube with sides of length 200 centred about the origin and discretise it with a mesh of 21\,427 unstructured tetrahedral elements, refined towards the object, and a polynomial order of $p=3$.
This discretisation has already been found to produce an accurate representation of $\mathcal{M}[\alpha B, \omega] $ for the frequency range $\omega_{min} = 1 \times 10^2\text{ rad/s}$ to $\omega_{max} = 1 \times 10^8\text{ rad/s}$. The reduced order model is constructed using $N=13 $ snapshots at logarithmically spaced frequencies with $TOL=1 \times 10^{-4} $. Figure~\ref{fig:Tetra} shows the results for $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$, for both the full order model and the PODP. The agreement is excellent in both cases. \begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraBestRealTol.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraBestImagTol.pdf}\\ \textrm{\footnotesize{(a) $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega] )$}} & \textrm{\footnotesize{(b) $\lambda_i(\mathcal{I}[\alpha B, \omega])$}} \end{array}$$ \caption{ Irregular tetrahedron with $\mu_r=2$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $N=13$ and $TOL=1 \times 10^{-4} $ showing $(a)$ $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $(b)$ $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$.} \label{fig:Tetra} \end{figure} In Figure~\ref{fig:ErrorTetra}, we show the output certificates $(\mathcal{R}^{PODP}[\alpha B, \omega]+{\mathcal N}^{0, PODP}[\alpha B ])_{ij}\pm (\Delta[\omega])_{ij}$ and \\ $(\mathcal{I}^{PODP}[\alpha B, \omega])_{ij}\pm (\Delta[\omega])_{ij}$, both with $\omega$, for $i=j$ and $i\ne j$ obtained by applying the technique described in Section~\ref{sect:outputcert} for the case where $N=21$ and $TOL=1\times10^{-6}$.
Once again, we increased the number of snapshots from $N=13$ to $N=21$ and have reduced the tolerance to ensure tight certificate bounds, except at large frequencies. \begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraDiagonalReal.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraDiagonalImag.pdf} \\ \textrm{\footnotesize{(a) $(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega] )_{ii}$}} & \textrm{\footnotesize{(b) $(\mathcal{I}[\alpha B, \omega])_{ii} $}} \\ \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraOffDiagonalReal.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraOffDiagonalImag.pdf} \\ \textrm{\footnotesize{(c) $ (\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega] )_{ij}, i\ne j$}} & \textrm{\footnotesize{(d) $(\mathcal{I}[\alpha B, \omega])_{ij}, i \ne j $}} \end{array}$$ \caption{ Irregular tetrahedron with $\mu_r=2$, $\sigma_*=5.96\times10^6$ S/m, $\alpha=0.01$ m: PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $TOL=1\times 10^{-6}$ and $N=21$ showing the PODP solution and output certificates $(\cdot)\pm (\Delta [ \omega])_{ij}$ for $(a)$ $ (\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{ij}$, with $i=j$, $(b)$ $ (\mathcal{I}[\alpha B, \omega])_{ij}$, with $i=j$, $(c)$ $ (\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{ij}$, with $i\ne j$, $(d)$ $ (\mathcal{I}[\alpha B, \omega])_{ij}$ with $i\ne j$, each with $\omega$. } \label{fig:ErrorTetra} \end{figure} \subsection{Inhomogeneous conducting bar} As a final example we consider $B_\alpha = \alpha B$ to be an inhomogeneous conducting bar made up of two different conducting materials. The size, shape and materials of this object are the same as those presented in Section 6.1.3 of~\cite{LedgerLionheartamad2019}.
This object has rotational and reflectional symmetries such that ${\mathcal M} [ \alpha B, \omega]$ has independent coefficients $({\mathcal M} [ \alpha B, \omega])_{11}$, $({\mathcal M} [ \alpha B, \omega])_{22}=({\mathcal M} [ \alpha B, \omega])_{33} $ and, thus, $\mathcal{N}^0[\alpha B]$, $\mathcal{R}[\alpha B, \omega]$, $\mathcal{I}[\alpha B, \omega]$ each have $2$ independent eigenvalues. To compute the full order model, we set $\Omega$ to be a sphere of radius 100 centred about the origin and discretise it with a mesh of 30\,209 unstructured tetrahedral elements, refined towards the object, and a polynomial order of $p=3$. This discretisation has already been found to produce an accurate representation of $\mathcal{M}[\alpha B, \omega] $ for the frequency range $\omega_{min} = 1 \times 10^2\text{ rad/s}$ to $\omega_{max} = 1 \times 10^8\text{ rad/s}$. The reduced order model is constructed using $N=13 $ snapshots at logarithmically spaced frequencies with $TOL=1 \times 10^{-4} $. Figure~\ref{fig:Bar} shows the results for $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$, for both the full order model and the PODP. The agreement is excellent in both cases.
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.48\textwidth, keepaspectratio]{DbarRealEigenvalues.pdf} & \includegraphics[width=0.48\textwidth, keepaspectratio]{DbarImaginaryEigenvalues.pdf}\\ \textrm{\footnotesize{(a) $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega] )$}} & \textrm{\footnotesize{(b) $\lambda_i(\mathcal{I}[\alpha B, \omega])$}} \end{array}$$ \caption{Inhomogeneous bar with two distinct conductivities (see Section 6.1.3 of~\cite{LedgerLionheartamad2019}): PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $N=13$ and $TOL=1 \times 10^{-4} $ showing $(a)$ $\lambda_i(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])$ and $(b)$ $\lambda_i(\mathcal{I}[\alpha B, \omega])$, each with $\omega$.} \label{fig:Bar} \end{figure} In Figure~\ref{fig:ErrorBar}, we show the output certificates $(\mathcal{R}^{PODP}[\alpha B, \omega]+{\mathcal N}^{0, PODP}[\alpha B ])_{ii}\pm (\Delta[\omega])_{ii}$ (no summation over repeated indices implied) and $(\mathcal{I}^{PODP}[\alpha B, \omega])_{ii}\pm (\Delta[\omega])_{ii}$, both with $\omega$, obtained by applying the technique described in Section~\ref{sect:outputcert} for the case where $N=23$ and $TOL=1\times10^{-6}$. Note that we increased the number of snapshots from $N=13$ to $N=23$ and have reduced the tolerance to ensure tight certificate bounds, except at large frequencies.
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{DbarDiagonalRealn_23.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{DbarDiagonalImagn_23.pdf} \\ \textrm{\footnotesize{(a) $(\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega] )_{ii}$}} & \textrm{\footnotesize{(b) $(\mathcal{I}[\alpha B, \omega])_{ii} $}} \end{array}$$ \caption{ Inhomogeneous bar with two distinct conductivities (see Section 6.1.3 of~\cite{LedgerLionheartamad2019}): PODP applied to the computation of $\mathcal{M}[\alpha B, \omega]$ with $N=23$ showing the PODP solution and output certificates $(\cdot ) \pm (\Delta [\omega])_{ii}$ for $(a)$ $ (\mathcal{N}^0[\alpha B]+\mathcal{R}[\alpha B, \omega])_{ii}$, $(b)$ $ (\mathcal{I}[\alpha B, \omega])_{ii}$, each with $\omega$. } \label{fig:ErrorBar} \end{figure} \section{Numerical examples of scaling}\label{sect:examplesscale} In this section, we illustrate the application of the results presented in Section~\ref{sect:scaling}. \subsection{Scaling of conductivity} As an illustration of Lemma~\ref{lemma:condscale}, we consider a conducting permeable sphere $B_\alpha=\alpha B $ where $\alpha=0.01$~m with material properties $\mu_r=1.5$ and $\sigma_{*}^{(1)}=1 \times 10^7$ S/m and a second object, which is the same as the first except that $\sigma_*^{(2)} = s \sigma_*^{(1)} = 10 \sigma_*^{(1)}$. In Figure~\ref{fig:SphereSigma}, we compare the full order computations of ${\mathcal M} [ \alpha B , \omega, \mu_r,\sigma_*^{(1)}]$ and ${\mathcal M} [ \alpha B , \omega, \mu_r,\sigma_*^{(2)}]$ with that obtained from (\ref{eqn:condscale}). We observe that the translation predicted by (\ref{eqn:condscale}) is in excellent agreement with the full order model solution for ${\mathcal M} [ \alpha B, \omega, \mu_r,\sigma_*^{(2)}] $.
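In practice, the translation predicted by (\ref{eqn:condscale}) lets an existing tabulated signature be reused for a rescaled conductivity. The following is a minimal NumPy sketch, assuming the translation takes the form ${\mathcal M}[\alpha B, \omega, \mu_r, s\sigma_*] = {\mathcal M}[\alpha B, s\omega, \mu_r, \sigma_*]$ (the precise statement of (\ref{eqn:condscale}) is given earlier in the paper, and the helper name is illustrative):

```python
import numpy as np

def signature_for_scaled_conductivity(omega_tab, M_tab, s, omega_out):
    """Evaluate the signature of the object with conductivity s*sigma_* at the
    frequencies omega_out, given complex values M_tab tabulated at omega_tab
    for conductivity sigma_*.  On a log-frequency axis the scaling is a pure
    translation; the tabulated range must cover s*omega_out."""
    x = np.log10(omega_tab)
    xq = np.log10(s * np.asarray(omega_out))
    return np.interp(xq, x, M_tab.real) + 1j * np.interp(xq, x, M_tab.imag)
```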
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereScaleSigmaReal.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{SphereScaleSigmaImaginary.pdf}\\ \textrm{\footnotesize{(a) $\lambda_i(\mathcal{N}^0[\alpha B,\mu_r]+\mathcal{R}[\alpha B, \omega,\mu_r,\sigma_*])$}} & \textrm{\footnotesize{(b) $\lambda_i(\mathcal{I}[\alpha B, \omega,\mu_r,\sigma_*])$}} \end{array}$$ \caption{Sphere with $\mu_r=1.5$, $\sigma_*^{(1)}=1\times10^7$ S/m, $\alpha=0.01$ m and second sphere, which is the same as the first except that $\sigma_*^{(2)} = s \sigma_*^{(1)} = 10 \sigma_*^{(1)}$: showing the translation predicted by (\ref{eqn:condscale}) compared with the full order model solutions for $(a)$ $\lambda_i(\mathcal{N}^0[\alpha B,\mu_r]+\mathcal{R}[\alpha B, \omega,\mu_r,\sigma_*])$ and $ (b)$ $\lambda_i(\mathcal{I}[\alpha B, \omega,\mu_r,\sigma_*])$.} \label{fig:SphereSigma} \end{figure} \subsection{Scaling of object size} To illustrate Lemma~\ref{lemma:alphascale}, we consider a conducting permeable tetrahedron $B_\alpha^{(1)}=\alpha^{(1)} B=0.01B $ with vertices as described in Section~\ref{sect:tetra} and material properties $\mu_r=1.5$ and $\sigma_*=1 \times 10^6$ S/m. Then, we consider a second object $B_\alpha^{(2)} = \alpha^{(2)}B = s\alpha^{(1)} B=0.015B$, which, apart from its size, is otherwise the same as $B_\alpha^{(1)}$. In Figure~\ref{fig:TetraAlpha}, we compare the full order computations of ${\mathcal M} [ \alpha^{(1)} B, \omega, \mu_r,\sigma_*]$ and ${\mathcal M} [ \alpha^{(2)} B, \omega, \mu_r,\sigma_*]$ with that obtained from (\ref{eqn:alphascale}). We observe that the translation and scaling predicted by (\ref{eqn:alphascale}) is in excellent agreement with the full order model solution for ${\mathcal M} [ \alpha^{(2)} B, \omega, \mu_r,\sigma_*]$.
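The translation and scaling predicted by (\ref{eqn:alphascale}) can likewise be applied to tabulated coefficients. A minimal NumPy sketch, using the $s^3$ amplitude scaling and $\omega\mapsto s^2\omega$ frequency translation established in the proof above, with an illustrative helper name:

```python
import numpy as np

def signature_for_scaled_size(omega_tab, M_tab, s, omega_out):
    """Evaluate the signature of the enlarged object (size scaled by s) at the
    frequencies omega_out, given complex values M_tab tabulated at omega_tab
    for the original object.  Each term of the splitting obeys
    T[alpha(sB), omega] = s^3 * T[alpha B, s^2*omega], so the signature is
    translated by s^2 in frequency and scaled by s^3; the tabulated range
    must cover s^2*omega_out."""
    x = np.log10(omega_tab)
    xq = np.log10(s**2 * np.asarray(omega_out))
    return s**3 * (np.interp(xq, x, M_tab.real) + 1j * np.interp(xq, x, M_tab.imag))
```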
\begin{figure}[H] $$\begin{array}{cc} \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraScaleAlphaReal.pdf} & \includegraphics[width=0.5\textwidth, keepaspectratio]{TetraScaleAlphaImaginary.pdf}\\ \textrm{\footnotesize{(a) $\lambda_i(\mathcal{N}^0[\alpha B,\mu_r]+\mathcal{R}[\alpha B, \omega,\mu_r,\sigma_*])$}} & \textrm{\footnotesize{(b) $\lambda_i(\mathcal{I}[\alpha B, \omega,\mu_r,\sigma_*])$}} \end{array}$$ \caption{Tetrahedron $B_\alpha^{(1)}=\alpha^{(1)} B=0.01B $ with $\mu_r=1.5$ and $\sigma_*=1 \times 10^6$~S/m, $\alpha=0.01$ m and a second tetrahedron, which is the same as the first except that $B_\alpha^{(2)} = \alpha^{(2)}B = s\alpha^{(1)} B=0.015B$: showing the translation and scaling predicted by (\ref{eqn:alphascale}) compared with the full order model solutions for $(a)$ $\lambda_i(\mathcal{N}^0[\alpha B,\mu_r]+\mathcal{R}[\alpha B, \omega,\mu_r,\sigma_*])$ and $ (b)$ $\lambda_i(\mathcal{I}[\alpha B, \omega,\mu_r,\sigma_*])$.} \label{fig:TetraAlpha} \end{figure} \section{Conclusions} An application of a ROM using PODP for the efficient computation of the spectral signature of the MPT has been studied in this paper. The full order model has been approximated by ${\bm H}(\hbox{curl})$ conforming discretisation using the \texttt{NGSolve} finite element package. The offline stage of the ROM involves computing a small number of snapshots of the full order model at logarithmically spaced frequencies; then, in the online stage, the spectral signature of the MPT is rapidly and accurately predicted to arbitrarily fine fidelity using PODP. Output certificates have been derived and can be computed in the online stage at negligible computational cost, ensuring the accuracy of the ROM prediction. If desired, these output certificates could be used to drive an adaptive procedure for choosing new snapshots, in a similar manner to the approach presented in~\cite{hesthaven2016}.
However, by choosing the frequency snapshots logarithmically, accurate spectral signatures of the MPT were already obtained with tight certificate bounds. In addition, simple scaling results, which enable the MPT spectral signature to be easily computed from an existing set of coefficients under the scaling of an object's conductivity or object size, have been derived. A series of numerical examples has been presented to demonstrate the accuracy and efficiency of our approach for homogeneous and inhomogeneous conducting permeable objects. Future work involves applying the presented approach to generate a dictionary of MPT spectral signatures for different objects for the purpose of metallic object identification using a classifier. \section*{Acknowledgements} B.A. Wilson gratefully acknowledges the financial support received from EPSRC in the form of a DTP studentship with project reference number 2129099. P.D. Ledger gratefully acknowledges the financial support received from EPSRC in the form of grant EP/R002134/1. The authors are grateful to Professors W.R.B. Lionheart and A.J. Peyton from The University of Manchester and Professor T. Betcke from University College London for research discussions at project meetings and to the group of Professor J. Sch\"oberl from the Technical University of Vienna for their technical support on \texttt{NGSolve}. {\bf EPSRC Data Statement:} All data is provided in Section~\ref{sect:examplespodp}. This paper does not have any conflicts of interest. \bibliographystyle{plain}
\section{Introduction} On August 17, 2017, advanced LIGO~\cite{TheLIGOScientific:2014jea} and advanced Virgo~\cite{TheVirgo:2014hva} detected gravitational waves from a binary neutron star (BNS) merger, GW170817, for the first time~\cite{TheLIGOScientific:2017qsa}. In this event, not only gravitational waves but also the electromagnetic signals in the gamma-ray~\cite{Monitor:2017mdv, Goldstein:2017mmi,Savchenko:2017ffs}, ultraviolet-optical-infrared~\cite{Evans:2017mmy,Drout:2017ijr,Kilpatrick:2017mhz,Kasliwal:2017ngb,Nicholl:2017ahq,Utsumi:2017cti,Tominaga:2017cgo,Chornock:2017sdf,Arcavi:2017vbi,Diaz:2017uch,Shappee:2017zly,Coulter:2017wya,Soares-Santos:2017lru,Valenti:2017ngx,Pian:2017gtc,Smartt:2017fuw}, X-ray~\cite{Haggard:2017qne,Margutti:2017cjl,Troja:2017nqp}, and radio bands~\cite{Alexander:2017aly,Hallinan:2017woc,Margutti:2018xqd,Dobie:2018zno,Mooley:2017enz,Mooley:2018dlz} were detected. This monumental event, GW170817, GRB170817A, and AT2017gfo, heralded the opening of multi-messenger astrophysics. Furthermore, advanced LIGO and advanced Virgo started a new observing run, O3, in April 2019; a new BNS merger event, GW190425, was reported~\cite{Abbott:2020uma} and seven BNS merger candidates have been detected as of Feb.\ 17, 2020~\cite{GCN}. One noteworthy finding in GW170817 is that tidal deformability of the neutron star (NS) was constrained for the first time. Due to a tidal field generated by a companion, NSs in a binary system could be deformed significantly in the late inspiral stage~\cite{Flanagan:2007ix}. The response to the tidal field, the tidal deformability, is imprinted as a phase shift in gravitational waves and its measurement gives a constraint on the equation of state (EOS) of NSs because the tidal deformability depends on EOSs.
GW170817 constrained the binary tidal deformability to the range $100 \lesssim \tilde{\Lambda} \lesssim 800$ for a binary total mass of $2.73^{+0.04}_{-0.01}M_\odot$~\cite{TheLIGOScientific:2017qsa,Abbott:2018exr,De:2018uhw,Abbott:2018wiz}, where the precise value depends on the analysis method. To extract information on the tidal deformability from observed gravitational wave data, a high-precision template for gravitational waveforms plays an essential role. Numerical relativity simulation is the unique tool for deriving high-precision gravitational waveforms in the late inspiral stage, during which the gravitational-wave phase shift due to the tidal deformation becomes prominent and all analytic techniques break down. Dietrich and his collaborators constructed a gravitational wave template for the inspiral stage based on numerical relativity simulations in a series of papers~\cite{Dietrich:2015pxa,Dietrich:2017feu,Dietrich:2017aum,Dietrich:2018uni,Dietrich:2018phi,Dietrich:2019kaq}, and their template was used in gravitational wave data analysis by the LIGO Scientific and Virgo Collaborations to infer the tidal deformability from GW170817~\cite{Abbott:2018exr}. However, the residual phase error, caused mainly by the finite grid resolution in their simulations, is $\approx 0.5$--$2.3$ rad~\cite{Dietrich:2019kaq}. A phase error of $O(1)$ rad could be an obstacle to constructing a high-quality inspiral gravitational waveform template (see also Refs.~\cite{Haas:2016cop,Foucart:2018lhe}). In Ref.~\cite{Kiuchi:2017pte}, we tackled this problem by using our numerical relativity code {\tt SACRA-MPI} and performed long-term simulations with the highest grid resolution to date (see also Refs.~\cite{Hotokezaka:2013mm,Hotokezaka:2015xka,Hotokezaka:2016bzh,Shibata:2005xz} for our effort in the early stage of this project).
In our numerical results, the gravitational-wave phase error caused by the finite grid resolution is less than $0.5$ rad for $31$--$32$ inspiral gravitational wave cycles. On the basis of these high-precision gravitational waveforms, Ref.~\cite{Kawaguchi:2018gvj} presented a waveform template, the SACRA inspiral gravitational waveform template, for BNS mergers. Specifically, we multiply the tidal-part phase of the $2.5$ Post-Newtonian (PN) order derived in Ref.~\cite{Damour:2012yf} by a correction term composed of the PN parameter and the binary tidal deformability. Then, we validated it by confirming that it reproduces the high-precision gravitational waveforms derived in Ref.~\cite{Kiuchi:2017pte}. We also validated a correction term in the tidal-part amplitude of the $1$ PN order derived in Refs.~\cite{Damour:2012yf,Vines:2011ud}. In Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}, we performed simulations for a limited class of BNS systems, i.e., two equal-mass and two unequal-mass systems. Thus, the applicable range of the SACRA inspiral gravitational waveform template has not yet been quantified precisely. In this paper, we derive a number of gravitational waveforms from BNS mergers by performing numerical-relativity simulations in a wider parameter space of EOSs, binary total mass, and mass ratio than in the previous papers~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}. For each binary parameter, we perform an in-depth resolution study to assess the accuracy of our waveforms. On the basis of the newly derived high-precision gravitational waveforms, we validate the template. In addition, we analyze post-merger gravitational wave signals derived in this paper. The post-merger signal in GW170817 was not detected~\cite{Abbott:2017dke}, but a post-merger signal could be detected in the near future for nearby events or by third-generation detectors such as Einstein Telescope or Cosmic Explorer~\cite{Punturo:2010zz,Evans:2016mbw}.
The signal could bring us information on the EOS complementary to that imprinted in the late inspiral signal. To extract such information, we should explore heuristic relations between post-merger signals and the tidal deformability/NS radius in numerical relativity simulations. In several previous papers, such an attempt has been made~\cite{Rezzolla:2016nxn,Read:2013zra,Zappa:2017xba,Bauswein:2011tp,Bauswein:2012ya,Bernuzzi:2014owa,Bernuzzi:2015rla}. However, the systematics contained in these relations are unclear because of the lack of resolution studies, the approximate treatment of relativistic gravity, the lack of an estimate of the systematics associated with the uncertainty of the NS EOS, and the narrow range of the BNS parameter space explored. In this paper, we assess to what extent the proposed universal relations between the post-merger gravitational wave signal and the tidal deformability/NS radius~\cite{Rezzolla:2016nxn,Read:2013zra,Zappa:2017xba,Bauswein:2011tp,Bauswein:2012ya,Bernuzzi:2014owa,Bernuzzi:2015rla} hold. To stimulate independent attempts by other researchers at constructing a gravitational waveform template based on numerical relativity simulations and/or comparisons with numerical relativity waveforms derived by other groups, we release our simulation data on a website, \href{https://www2.yukawa.kyoto-u.ac.jp/~nr_kyoto/SACRA_PUB/catalog.html}{SACRA Gravitational Waveform Data Bank}~\cite{DB}. This paper is organized as follows. Section~\ref{sec:model} describes our method, grid setup, and initial conditions of the simulations. Section~\ref{sec:result} is devoted to describing the accuracy of the inspiral gravitational waveforms. Section~\ref{sec:WFmodel} presents validation of the SACRA inspiral gravitational waveform template. Section~\ref{sec:universal-relation} describes the assessment of the universal relations of the post-merger signals. This section also presents the energy and angular momentum carried by gravitational waves.
We summarize this paper in Sec.~\ref{sec:summary}. Throughout this paper, we employ the geometrical units of $c=G=1$, where $c$ and $G$ are the speed of light and the gravitational constant, respectively. \section{Method, grid setup, and initial models}\label{sec:model} \subsection{Method and grid setup} We use our numerical relativity code, {\tt SACRA-MPI}~\cite{Yamamoto:2008js,Kiuchi:2017pte}, to simulate the long-term inspiral stage of BNSs up to the early post-merger stage. {\tt SACRA-MPI} implements the Baumgarte-Shapiro-Shibata-Nakamura-puncture formulation~\cite{SN,BS,Capaneli,Baker}, {\it locally} incorporating a Z4c-type constraint propagation prescription~\cite{Hilditch:2012fp}, to solve Einstein's equation. We discretize the field equations with 4th-order accuracy in both space and time. We also apply the 4th-order lopsided finite difference scheme to the advection term~\cite{Bruegmann:2006at}. In {\tt SACRA-MPI}, a conservation form of the general relativistic hydrodynamics equations is employed, and we implement a high-resolution shock capturing scheme proposed by Kurganov and Tadmor~\cite{Kurganov} together with the 3rd-order accurate cell reconstruction~\cite{Colella:1982ee}. We also implement a Berger-Oliger type adaptive mesh refinement (AMR) algorithm~\cite{BergerOliger} to enlarge the simulation domain to the local wave zone of gravitational waves while guaranteeing a high spatial grid resolution around the NSs. The simulation domain consists of two sets of 4 Cartesian AMR domains, which follow the orbital motion of each NS, and 6 Cartesian AMR domains whose centers are fixed to the coordinate origin throughout all the simulations. The grid spacing of a coarser refinement level is twice as large as that of its finer refinement level. Thus, the grid spacing of refinement level $l$ is given by $\Delta x_l = L/(2^{l}N)$ with $l=0,1,\cdots 9$. $L$ denotes the distance from the coordinate origin to the outer boundary along each coordinate axis.
$N$ is an even number and each of the AMR domains possesses $(2N+1,2N+1,N+1)$ grid points in the $(x,y,z)$ directions, where we assume the orbital plane symmetry. In this work, we performed simulations with $N=182,150,130,110,102,$ and $90$ for all the systems to check the convergence of gravitational waveforms with respect to the grid resolution. The values of $L$ and $\Delta x_9$ are summarized in Table~\ref{tb:model}. \subsection{Binary system parameters and gravitational wave extraction} Table~\ref{tb:model} shows the list of the binary systems as well as the grid setup for the simulations. \subsubsection{Equation of state} Following the previous papers~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}, we employ a parameterized piecewise polytropic EOS to describe the NS matter~\cite{rlof2009}. Specifically, we assume that the pressure and specific internal energy consist of two segments with respect to the rest-mass density: \begin{align} &P_\text{cold}(\rho) = \kappa_i \rho^{\Gamma_i},\nonumber\\ &\epsilon_\text{cold}(\rho) = \frac{\kappa_i}{\Gamma_i-1}\rho^{\Gamma_i-1} + \Delta \epsilon_i~(\rho_i \le \rho < \rho_{i+1}), \nonumber \end{align} with $i=0,1$, $\rho_0=0{\rm~g~cm^{-3}}$, and $\rho_2 = \infty$. $\rho_1$ is the rest-mass density which divides the pressure and specific internal energy into the two segments. Given the adiabatic indices $\Gamma_0,\Gamma_1$ and one of the polytropic constants, $\kappa_0$, the other polytropic constant $\kappa_1$ is calculated from the continuity of the pressure at $\rho=\rho_1$ by $\kappa_0\rho_1^{\Gamma_0}=\kappa_1\rho_1^{\Gamma_1}$. $\Delta \epsilon_1$ is also calculated from the continuity of the specific internal energy at $\rho=\rho_1$ by $\kappa_0\rho_1^{\Gamma_0-1}/(\Gamma_0-1)=\kappa_1\rho_1^{\Gamma_1-1}/(\Gamma_1-1)+\Delta \epsilon_1$. Note that $\Delta \epsilon_0=0$. Following Ref.~\cite{rlof2009}, we fix $\Gamma_0=1.3562395$, $\Gamma_1=3$, and $\kappa_0=3.594\times 10^{13}$ in cgs units.
By varying the remaining parameter $\rho_1$ over a wide range as shown in Table~\ref{tb:pwp}, we can derive plausible NS models with a variety of radii and tidal deformabilities (see Table~\ref{tb:eos_model}). In addition to the piecewise polytropic EOSs, we employ one tabulated EOS, SFHo~\cite{Steiner:2012rk}. To model an EOS for cold NS matter, we simply set $T=0.1$ MeV, which is the minimum temperature in the table of the SFHo EOS. We also impose the neutrinoless low-temperature $\beta$-equilibrium condition to set the value of $Y_e$. Then, the original tabulated EOS is reduced to a one-dimensional SFHo (tabulated) EOS, i.e., $P_\text{cold}(\rho)$ and $\epsilon_\text{cold}(\rho)$ (see also Table~\ref{tb:eos_model} for the NS radius and tidal deformability). During the simulations (in particular for the post-merger stage), we employ a hybrid EOS to capture the shock heating effect. Specifically, we assume that the pressure consists of cold and thermal parts: \begin{align} P = P_\text{cold}(\rho) + ( \Gamma_\text{th} - 1 )\rho( \epsilon - \epsilon_\text{cold} (\rho) ), \label{eq:pres} \end{align} where $\epsilon$ is the specific internal energy and we assume that the thermal part is described by the $\Gamma$-law EOS with index $\Gamma_\text{th}$. Following Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}, we fix $\Gamma_\text{th}=1.8$. We note that gravitational waveforms for the post-merger stage depend on the value of $\Gamma_{\rm th}$~\cite{Shibata:2005ss}, although inspiral waveforms do not. Since the major purpose of the present paper is to derive accurate inspiral waveforms, the choice of $\Gamma_{\rm th}$ is not of essential importance here. On the other hand, it has long been known that the post-merger waveform depends strongly on this value (see, e.g., Ref.~\cite{Shibata:2005ss}). Thus, we have to keep in mind that systematics exist due to the uncertainty of this value~\cite{Carbone:2019pkr}.
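The continuity conditions stated above fully determine the cold EOS once $\rho_1$ is chosen, and the hybrid pressure adds the $\Gamma$-law thermal part. As an illustration, the following minimal Python sketch evaluates this construction with the quoted parameters, using as an example the $\rho_1$ value of the H EOS from Table~\ref{tb:pwp}; it is an illustrative sketch, not the {\tt SACRA-MPI} implementation:

```python
# Sketch of the two-piece polytropic EOS described above (cgs units).
# Parameters quoted in the text; RHO1 selects the model (here the "H" EOS).
GAMMA0, GAMMA1 = 1.3562395, 3.0
KAPPA0 = 3.594e13
RHO1 = 1.2323e14  # dividing rest-mass density of the "H" EOS [g cm^-3]

# Continuity of P at rho1:  kappa0 rho1^Gamma0 = kappa1 rho1^Gamma1
KAPPA1 = KAPPA0 * RHO1**(GAMMA0 - GAMMA1)
# Continuity of eps at rho1 fixes Delta eps_1 (Delta eps_0 = 0)
DEPS1 = (KAPPA0 * RHO1**(GAMMA0 - 1.0) / (GAMMA0 - 1.0)
         - KAPPA1 * RHO1**(GAMMA1 - 1.0) / (GAMMA1 - 1.0))

def p_cold(rho):
    """Cold pressure P_cold(rho) of the two-piece polytrope."""
    if rho < RHO1:
        return KAPPA0 * rho**GAMMA0
    return KAPPA1 * rho**GAMMA1

def eps_cold(rho):
    """Cold specific internal energy eps_cold(rho)."""
    if rho < RHO1:
        return KAPPA0 * rho**(GAMMA0 - 1.0) / (GAMMA0 - 1.0)
    return KAPPA1 * rho**(GAMMA1 - 1.0) / (GAMMA1 - 1.0) + DEPS1

def p_hybrid(rho, eps, gamma_th=1.8):
    """Hybrid EOS of Eq. (\\ref{eq:pres}): cold part plus thermal part."""
    return p_cold(rho) + (gamma_th - 1.0) * rho * (eps - eps_cold(rho))
```

By construction, both $P_\text{cold}$ and $\epsilon_\text{cold}$ are continuous at $\rho=\rho_1$, and the thermal part vanishes whenever $\epsilon=\epsilon_\text{cold}(\rho)$.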
\subsubsection{Binary systems} In this paper, we consider 6 irrotational binary systems, assuming that the NSs have no spin before merger. We fix the chirp mass, ${\cal M}_c$, and symmetric mass ratio, $\eta$, to be $({\cal M}_c,\eta)=(1.1752M_\odot,0.2500)$, $(1.1752M_\odot,0.2485)$, $(1.1752M_\odot,0.2455)$, $(1.1752M_\odot,0.2450)$, $(1.0882M_\odot,0.2470)$, and $(1.0882M_\odot,0.2440)$. With this setting, the gravitational masses of the less massive and more massive components at infinite orbital separation are $(m_1,m_2)=(1.35M_\odot,1.35M_\odot)$, $(1.25M_\odot,1.46M_\odot)$, $(1.18M_\odot,1.55M_\odot)$, $(1.17M_\odot,1.56M_\odot)$, $(1.12M_\odot,1.40M_\odot)$, and $(1.07M_\odot,1.46M_\odot)$ (see Table~\ref{tb:model}). For the SFHo (tabulated) EOS, we only consider the equal-mass binary system with $m_1=1.35M_\odot$ and $m_2=1.35M_\odot$. Table~\ref{tb:model} also shows the binary tidal deformability for all the binary systems~\cite{Wade:2014vqa,Favata:2013rwa}: \begin{align} \tilde{\Lambda} &= \frac{8}{13}\Big[(1+7\eta-31\eta^2)(\Lambda_1 + \Lambda_2) \nonumber \\ &- \sqrt{1-4\eta}(1+9\eta-11\eta^2)(\Lambda_1-\Lambda_2)\Big], \end{align} where $\Lambda_1(\Lambda_2)$ is the tidal deformability of the less massive (more massive) component. The binary tidal deformability in this paper covers a wide range of $\approx 300$--$1800$. Figure~\ref{fig:model} plots the BNS systems simulated for long durations by our group to date. For the SFHo (tabulated) EOS case, interpolation of the thermodynamic variables is necessary in the simulations. Because we implement a linear interpolation scheme for this purpose, the associated truncation error can be a non-negligible error source for generating high-precision gravitational waveforms. This system is used to assess the error budget possibly caused by employing a tabulated EOS (see also Ref.~\cite{Foucart:2019yzo} for the gravitational-wave phase error stemming from different analytical descriptions of the EOSs).
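For reference, the combination above can be evaluated directly from $(\eta,\Lambda_1,\Lambda_2)$. A minimal Python sketch (our illustration; the sanity check against the equal-mass limit, where $\tilde{\Lambda}$ reduces to $\Lambda_1=\Lambda_2$, uses the SFHo value from Table~\ref{tb:eos_model}):

```python
def binary_tidal_deformability(eta, lam1, lam2):
    """Binary tidal deformability Lambda-tilde defined above.
    lam1 (lam2) is the dimensionless tidal deformability of the
    less (more) massive NS; eta is the symmetric mass ratio."""
    sym = (1.0 + 7.0 * eta - 31.0 * eta**2) * (lam1 + lam2)
    asym = ((1.0 - 4.0 * eta)**0.5
            * (1.0 + 9.0 * eta - 11.0 * eta**2) * (lam1 - lam2))
    return (8.0 / 13.0) * (sym - asym)

# Equal-mass limit (eta = 1/4): Lambda-tilde = Lambda, e.g. SFHo135-135
# with Lambda_1.35 = 460 gives Lambda-tilde = 460.
```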
We name all the systems according to the EOS, the mass of the less massive component, and that of the more massive component. For example, 15H125-146 refers to the system with the 15H EOS, $m_1= 1.25M_\odot$, and $m_2= 1.46 M_\odot$. We set the initial orbital angular velocity to be $m_0\Omega_0 = 0.0150$--$0.0155$ with $m_0=m_1+m_2$. With this, the BNSs experience $15$--$16$ orbits before the onset of merger for all the systems. To generate a high-precision inspiral waveform from BNS inspirals by a numerical relativity simulation, initial data with low orbital eccentricity are necessary because the orbital motion of a BNS in the late inspiral stage is circularized due to the gravitational-wave emission. We numerically obtain quasi-equilibrium sequences of the BNSs by a spectral-method library, LORENE~\cite{LORENE,Taniguchi}. Then, we reduce the orbital eccentricity by using the prescription in Ref.~\cite{Kyutoku:2014yba}. With this method, we confirm that the initial orbital eccentricity is reduced typically to $\approx 10^{-3}$, which is low enough to generate a high-precision inspiral waveform (see also the Appendix in Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}). \subsection{Gravitational wave extraction} We calculate the complex Weyl scalar $\Psi_4$ from the simulation data to derive gravitational waveforms~\cite{Yamamoto:2008js}. Given an extraction radius $r_0$, the Weyl scalar $\Psi_4$ is decomposed into $(l,m)$ modes with the spin-weighted spherical harmonics by \begin{align} \Psi_4(t_\text{ret},r_0,\theta,\phi) = \sum_{l,m} \Psi_4^{l,m}(t_\text{ret},r_0) _{-2}Y_{lm}(\theta,\phi), \end{align} where $t_\text{ret}$ is the retarded time defined by \begin{align} t_\text{ret} \equiv t - \left[ D + 2 m_0 \ln\left(\frac{D}{2m_0}-1\right) \right], \label{eq:tret} \end{align} with $D=\sqrt{A/4\pi}$, where $A$ is the proper area of the extraction sphere.
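As a concrete illustration of Eq.~(\ref{eq:tret}), the following Python sketch evaluates the retarded time in geometrical units ($c=G=1$). As a stand-in for the proper-area radius $D=\sqrt{A/4\pi}$, it substitutes the approximate mapping $D \approx r_0[1+m_0/(2r_0)]^2$ that we adopt for the extrapolation; this substitution is an assumption of the sketch:

```python
import math

def retarded_time(t, r0, m0):
    """Retarded time of Eq. (tret) in geometrical units (c = G = 1).
    The proper-area radius D = sqrt(A/4 pi) is approximated here by
    the mapping D ~ r0 [1 + m0/(2 r0)]^2 used in the text."""
    D = r0 * (1.0 + m0 / (2.0 * r0))**2
    return t - (D + 2.0 * m0 * math.log(D / (2.0 * m0) - 1.0))
```

The logarithmic term is the usual tortoise-coordinate correction, which grows only slowly with the extraction radius.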
We apply Nakano's method~\cite{Nakano:2015} to extrapolate $\Psi_4^{l,m}$ to infinity by \begin{align} D \Psi_4^{l,m,\infty}(t_\text{ret}) & \equiv C(r_0)\Big[D \Psi_4^{l,m}(t_\text{ret},r_0) \nonumber\\ & - \frac{(l-1)(l+2)}{2}\int^{t_\text{ret}} \Psi_4^{l,m}(t',r_0) dt' \Big], \end{align} where $C(r_0)$ is a function of $r_0$. Following Ref.~\cite{Kiuchi:2017pte}, we choose $D \approx r_0[1+m_0/(2r_0)]^2$ and $C(r_0)=1-2m_0/D$ because our coordinates are similar to the isotropic coordinates of a non-rotating black hole in the wave zone. Gravitational waves of each harmonic mode are calculated by integrating $\Psi_4^{l,m,\infty}$ twice in time: \begin{align} h^{l,m,\infty}(t_\text{ret}) &= h^{l,m,\infty}_+ (t_\text{ret}) - i h^{l,m,\infty}_\times(t_\text{ret}) \nonumber\\ &= - \int^{t_\text{ret}} dt'\int^{t'} \Psi_4^{l,m,\infty}(t'')dt''. \end{align} For the time integration, we employ the fixed frequency method~\cite{Reisswig:2010di} by \begin{align} h^{l,m,\infty}(t_\text{ret}) = \int df' \frac{\tilde{\Psi}_4^{l,m,\infty}(f')}{(2\pi\max[f',f_\text{cut}])^2} \exp(2\pi i f' t_\text{ret}), \end{align} where $\tilde{\Psi}_4^{l,m,\infty}(f)$ is the Fourier component of $\Psi_4^{l,m,\infty}(t)$ and $f_\text{cut}$ is set to $0.8m\Omega_0/(2\pi)$. To check the convergence with respect to the extraction radius $r_0$, we repeat this analysis for $r_0 =244\,m_0$, $199 \,m_0,$ and $155 \,m_0$ for ${\cal M}_c= 1.1752M_\odot$ and $r_0=262\,m_0$, $213\,m_0$, and $156\,m_0$ for ${\cal M}_c= 1.0882M_\odot$ (see Table~\ref{tb:model}). In general, gravitational waves of each $(l,m)$ mode are decomposed into the amplitude and phase as \begin{align} h^{l,m,\infty}(t_\text{ret}) = A^{l,m,\infty}(t_\text{ret}) e^{-i\Phi^{l,m}(t_\text{ret})}, \label{eq:freqGW} \end{align} and the instantaneous gravitational-wave frequency is defined by $d\Phi^{l,m}/dt_\text{ret}$.
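The fixed-frequency integral above can be implemented discretely with a fast Fourier transform: dividing the Fourier transform of $\Psi_4$ by $(2\pi\max[|f|,f_\text{cut}])^2$ suppresses the spurious secular drift of a naive double time integral. The following NumPy sketch is our illustration (not the actual {\tt SACRA} post-processing script), and the test signal is an assumed pure-frequency mode:

```python
import numpy as np

def fixed_frequency_integration(psi4, dt, f_cut):
    """Double time integration of a Psi4 time series via the
    fixed-frequency method: divide the Fourier transform by
    (2 pi max(|f|, f_cut))^2.  Frequencies below f_cut are clamped,
    which regularizes the f -> 0 divergence of the double integral."""
    psi4 = np.asarray(psi4, dtype=complex)
    psi4_f = np.fft.fft(psi4)
    f = np.fft.fftfreq(len(psi4), d=dt)
    denom = (2.0 * np.pi * np.maximum(np.abs(f), f_cut))**2
    return np.fft.ifft(psi4_f / denom)
```

For a pure mode $\Psi_4 = \ddot h$ with $h = e^{2\pi i f_0 t}$ and $f_0 > f_\text{cut}$, the routine recovers $-h$, i.e., the double time integral with the sign convention above.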
In Sec.~\ref{sec:result}, we explore the accuracy of the gravitational-wave phase of the $(l,m)=(2,2)$ mode, and simply refer to $\Phi^{2,2}$ as the gravitational-wave phase. With Eq.~(\ref{eq:freqGW}), the instantaneous frequency of the $(l,m)=(2,2)$ mode is calculated by \begin{align} f_\text{GW} = \frac{1}{2\pi}{\rm Im} \left(\frac{h^{*2,2,\infty}\dot{h}^{2,2,\infty}}{|h^{2,2,\infty}|^2}\right), \label{eq:GWfreq} \end{align} where the asterisk symbol denotes the complex conjugate of $h^{2,2,\infty}$. We also calculate the energy and angular momentum flux due to gravitational-wave emission by~\cite{Shibata:text} \begin{align} \frac{dE_\text{GW}^{l,m}}{dt} &= \lim_{r \to \infty} \frac{r^2}{16\pi} \left| \int^t \Psi_4^{l,m,\infty}(t')dt'\right|^2, \label{eq:EGW}\\ \frac{dJ_\text{GW}^{l,m}}{dt} &= - \lim_{r \to \infty} \frac{r^2}{16\pi} \text{Im} \Big[m \left(\int^t \Psi_4^{l,m,\infty}(t')dt' \right)^* \nonumber\\ &\times \int^t dt' \int^{t'}dt'' \Psi_4^{l,m,\infty}(t'')\Big]. \label{eq:JGW} \end{align} Thus, the energy and angular momentum carried by gravitational waves are calculated by \begin{align} E_\text{GW}^{l,m} &= \int^{t_\text{sim}} \frac{dE_\text{GW}^{l,m}}{dt}dt, \label{eq:EGW2}\\ J_\text{GW}^{l,m} &= \int^{t_\text{sim}} \frac{dJ_\text{GW}^{l,m}}{dt}dt, \label{eq:JGW2} \end{align} where $t_\text{sim}$ denotes the time we terminate the simulations. \begin{table*}[t] \centering \caption{List of the systems for which we performed new simulations. The names of the systems are given in the 1st column. The 2nd and 3rd columns show gravitational mass of less massive NS, $m_1$, and massive NS, $m_2$, respectively. The 4th column shows EOS. Dimensionless initial orbital angular velocity, $m_0\Omega_0$, with the total gravitational mass of the binary systems, $m_0=m_1+m_2$, is given in the 5th column. 
The 6th, 7th, and 8th columns show chirp mass, ${\cal M}_c=(m_1m_2)^{3/5}(m_1+m_2)^{-1/5}$, symmetric mass ratio, $\eta=m_1m_2(m_1+m_2)^{-2}$, and binary tidal deformability, $\tilde{\Lambda}$, respectively. Location of outer boundary in a computational domain, $L$, and grid spacing of a finest AMR level, $\Delta x_9$, are given in the 9th and 10th columns, respectively. The grid spacing with $N=182,150,130,110,102,$ and $90$ is shown in the parenthesis in the 10th column. The final column shows the extraction radii of gravitational waves. } \begin{tabular}{c|cccc|ccc|ccc} \hline\hline System & $m_1~[M_\odot]$ & $m_2~[M_\odot]$ & EOS & $m_0\Omega_0$ & ${\cal M}_{\rm c}$ & $\eta$ & ${\tilde \Lambda} $& $L\,[{\rm km}]$ & $\Delta x_9\,[{\rm m}]$ & $r_0/m_0$ \\\hline 15H125-146 & 1.25 & 1.46 & 15H & 0.0155 & 1.1752 & 0.2485 & 1200 & 7823 & (84,102,117,138,149,169) & (244,199,155) \\ 125H125-146 & 1.25 & 1.46 & 125H & 0.0155 & 1.1752 & 0.2485 & 858 & 7323 & (78,95,110,129,140,158) & (244,199,155) \\ H125-146 & 1.25 & 1.46 & H & 0.0155 & 1.1752 & 0.2485 & 605 & 6824 & (73,89,102,121,130,147) & (244,199,155) \\ HB125-146 & 1.25 & 1.46 & HB & 0.0155 & 1.1752 & 0.2485 & 423 & 6491 & (69,84,97,115,124,140) & (244,199,155) \\ B125-146 & 1.25 & 1.46 & B & 0.0155 & 1.1752 & 0.2485 & 290 & 5992 & (64,78,90,106,114,129) & (244,199,155) \\ 15H118-155 & 1.18 & 1.55 & 15H & 0.0155 & 1.1752 & 0.2455 & 1194 & 7889 & (84,102,118,139,150,170) & (242,198,154) \\ 125H118-155 & 1.18 & 1.55 & 125H & 0.0155 & 1.1752 & 0.2455 & 855 & 7390 & (79,96,111,131,141,159) & (242,198,154) \\ H118-155 & 1.18 & 1.55 & H & 0.0155 & 1.1752 & 0.2455 & 606 & 6990 & (75,91,105,124,133,151) & (242,198,154) \\ HB118-155 & 1.18 & 1.55 & HB & 0.0155 & 1.1752 & 0.2455 & 423 & 6491 & (69,84,97,115,124,140) & (242,198,154) \\ B118-155 & 1.18 & 1.55 & B & 0.0155 & 1.1752 & 0.2455 & 292 & 5992 & (64,78,90,106,114,129) & (242,198,154) \\ 15H117-156 & 1.17 & 1.56 & 15H & 0.0155 & 1.1752 & 0.2450 & 1170 & 7889 & 
(84,102,118,139,150,170) & (242,198,154) \\ 125H117-156 & 1.17 & 1.56 & 125H & 0.0155 & 1.1752 & 0.2450 & 837 & 7323 & (78,95,110,129,140,158) & (242,198,154) \\ H117-156 & 1.17 & 1.56 & H & 0.0155 & 1.1752 & 0.2450 & 592 & 6990 & (75,91,105,124,133,151) & (242,198,154) \\ HB117-156 & 1.17 & 1.56 & HB & 0.0155 & 1.1752 & 0.2450 & 414 & 6491 & (69,84,97,115,124,141) & (242,198,154) \\ B117-156 & 1.17 & 1.56 & B & 0.0155 & 1.1752 & 0.2450 & 285 & 6058 & (65,79,91,107,115,131) & (242,198,154) \\ 15H112-140 & 1.12 & 1.40 & 15H & 0.0150 & 1.0882 & 0.2470 & 1842 & 7989 & (85,104,120,141,152,172) & (262,214,167) \\ 125H112-140 & 1.12 & 1.40 & 125H & 0.0150 & 1.0882 & 0.2470 & 1332 & 7490 & (80,97,112,132,143,162) & (262,214,167) \\ H112-140 & 1.12 & 1.40 & H & 0.0150 & 1.0882 & 0.2470 & 955 & 6990 & (75,91,105,124,133,151) & (262,214,167) \\ HB112-140 & 1.12 & 1.40 & HB & 0.0150 & 1.0882 & 0.2470 & 677 & 6491 & (69,84,97,115,124,140) & (262,214,167) \\ B112-140 & 1.12 & 1.40 & B & 0.0150 & 1.0882 & 0.2470 & 475 & 6092 & (65,79,91,108,116,131) & (262,214,167) \\ 15H107-146 & 1.07 & 1.46 & 15H & 0.0150 & 1.0882 & 0.2440 & 1845 & 7989 & (85,104,120,141,152,172) & (261,213,166) \\ 125H107-146 & 1.07 & 1.46 & 125H & 0.0150 & 1.0882 & 0.2440 & 1335 & 7490 & (80,97,112,132,143,162) & (261,213,166) \\ H107-146 & 1.07 & 1.46 & H & 0.0150 & 1.0882 & 0.2440 & 957 & 6990 & (75,91,105,124,133,151) & (261,213,166) \\ HB107-146 & 1.07 & 1.46 & HB & 0.0150 & 1.0882 & 0.2440 & 684 & 6591 & (71,86,99,117,126,142) & (261,213,166) \\ B107-146 & 1.07 & 1.46 & B & 0.0150 & 1.0882 & 0.2440 & 481 & 6091 & (65,79,91,108,116,131) & (261,213,166) \\ SFHo135-135 & 1.35 & 1.35 & SFHo & 0.0155 & 1.1752 & 0.2500 & 460 & 6491 & (69,84,97,115,124,140) & (244,200,156) \\ \hline\hline \end{tabular}\label{tb:model} \end{table*} \begin{table} \centering \caption{List of $\rho_1$ in two-piecewise polytropic EOSs.} \begin{tabular}{c|c}\hline\hline EOS & $\rho_1[\rm g~cm^{-3}]$ \\\hline 15H & $9.3108 \times 
10^{13}$\\ 125H & $1.0711 \times 10^{14}$\\ H & $1.2323 \times 10^{14}$\\ HB & $1.4177 \times 10^{14}$\\ B & $1.6309 \times 10^{14}$\\ \hline \end{tabular}\label{tb:pwp} \end{table} \begin{table*} \centering \caption{The radius, $R_M$, and the dimensionless tidal deformability, $\Lambda_M$, for spherical NSs with gravitational mass $M=1.07$, $1.12$, $1.17$, $1.18$, $1.25$, $1.35$, $1.40$, $1.46$, $1.55$ and $1.56\,M_\odot$ for the given EOS. $R_M$ is listed in units of ${\rm km}$. For SFHo (tabulated) EOS, the quantities for the spherical star with $M=1.35\,M_\odot$ are listed. The last column in the upper table shows the maximum mass of the spherical NS in units of $M_\odot$.} \begin{tabular}{c|ccccccccccccccccccccc}\hline\hline EOS & $R_{1.07}$ & $R_{1.12}$ & $R_{1.17}$& $R_{1.18}$ & $R_{1.25}$ & $R_{1.35}$ & $R_{1.40}$ &$~R_{1.46}$& $~R_{1.55}$& $~R_{1.56}$ & $M_\text{max}$\\\hline 15H & 13.54 & 13.58 & 13.61 & 13.62 & 13.65 & 13.69 & 13.71 & 13.72 & 13.74 & 13.74 & 2.53\\ 125H & 12.86 & 12.89 & 12.91 & 12.92 & 12.94 & 12.97 & 12.98 & 12.99 & 12.98 & 12.98 & 2.38\\ H & 12.22 & 12.23 & 12.24 & 12.24 & 12.26 & 12.27 & 12.28 & 12.18 & 12.26 & 12.25 & 2.25\\ HB & 11.60 & 11.59 & 11.60 & 11.60 & 11.61 & 11.61 & 11.60 & 11.59 & 11.55 & 11.55 & 2.12\\ B & 10.97 & 10.97 & 10.98 & 10.98 & 10.98 & 10.96 & 10.95 & 10.92 & 10.87 & 10.86 & 2.00\\ SFHo & -- & -- & -- & -- & -- & 11.91 & -- & -- & -- & -- & 2.06\\ \hline\hline EOS &$~\Lambda_{1.07}$&$~\Lambda_{1.12}$&$~\Lambda_{1.17}$&$~\Lambda_{1.18}$&$~\Lambda_{1.25}$&$~\Lambda_{1.35}$&$~\Lambda_{1.40}$&$~\Lambda_{1.46}$ & $~\Lambda_{1.55}$ & $~\Lambda_{1.56}$\\\hline 15H & 4361 & 3411 & 2692 & 2575 & 1871 & 1211 & 975 & 760 & 530 & 509 \\ 125H & 3196 & 2490 & 1963 & 1875 & 1351 & 863 & 693 & 535 & 366 & 350 \\ H & 2329 & 1812 & 1415 & 1354 & 966 & 607 & 484 & 369 & 249 & 238 \\ HB & 1695 & 1304 & 1013 & 966 & 684 & 422 & 333 & 252 & 165 & 157 \\ B & 1216 & 933 & 719 & 681 & 477 & 289 & 225 & 168 & 107 & 101 \\ SFHo & -- & 
-- & -- & -- & -- & 460 & -- & -- & -- & -- \\\hline \end{tabular}\label{tb:eos_model} \end{table*} \begin{figure} \hspace{-13.6mm} \includegraphics[width=1.15\linewidth]{fig1.pdf} \caption{Symmetric mass ratio, $\eta$, and binary tidal deformability, $\tilde{\Lambda}$, of all the models simulated for long durations by our group. The circle and triangle symbols denote BNS systems with ${\cal M}_c = 1.1752 M_\odot$ and with ${\cal M}_c = 1.0882 M_\odot$, respectively. The open symbols denote the systems reported in Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}. The filled symbols are the systems newly simulated in this study. The purple, green, cyan, orange, and red colors are for the systems with EOS 15H, 125H, H, HB, and B, respectively. The blue cross symbol is for SFHo135-135. }\label{fig:model} \end{figure} \begin{figure*}[t] \includegraphics[width=.75\linewidth]{fig2a.pdf} \includegraphics[width=.75\linewidth]{fig2b.pdf} \caption{\label{fig:GW3} (Top) $h_+$ for the $(l,m)=(2,2)$ mode of the gravitational waveforms for binary systems with $m_1=1.12M_\odot$ and $m_2=1.40M_\odot$. (Bottom) The same as the top panel, but for 15H125-125, 15H112-140, and 15H107-146. In both panels, the grid resolution is $N=182$. } \end{figure*} \begin{figure*}[t] \includegraphics[width=.5\linewidth]{fig3a.pdf} \includegraphics[width=.5\linewidth]{fig3b.pdf} \caption{\label{fig:GW4} (Top) The same as Fig.~\ref{fig:GW3}, but for 15H112-140 with $N=182,110,$ and $90$. (Bottom) The gravitational-wave phase shift, $\delta\Phi^\text{shift}(t_\text{ret};{\rm EOS},{\rm B},182)$, for the binary systems with $m_1=1.12M_\odot$, $m_2=1.40M_\odot$, and $\text{EOS}={\rm 15H},{\rm 125H},{\rm H},{\rm HB}$. The shaded region shows $\delta \Phi^\text{error}(t_\text{ret};15{\rm H},150,182)$ (red) and $\delta \Phi^\text{error}(t_\text{ret};15{\rm H},90,182)$ (blue), respectively, for 15H112-140. The overlapping region is shown in purple.
The vertical dashed line denotes the peak time of the gravitational-wave amplitude for 15H112-140 with $N=182$ (see the text for details). } \end{figure*} \begin{figure*} \includegraphics[width=.47\linewidth]{fig4a.pdf} \includegraphics[width=.47\linewidth]{fig4b.pdf} \caption{ (Left) Gravitational-wave phase error, $\delta\Phi^\text{error}(t_\text{ret};{\rm B},N,182)$, with $N=150,130,110,102,$ and $90$ for B107-146. The vertical dashed line denotes the peak time of the gravitational-wave amplitude for $N=182$. (Right) Gravitational-wave phase error at the peak time, $\delta\Phi^\text{error}(t_\text{peak};{\rm B},N_\text{max},N)$, with a reference grid resolution denoted by $N_\text{max}$ and $N=90,102,\cdots,N_\text{max}$. The purple circles show the phase error with $N_\text{max}=182$ and $N=150,130,110,102,90$. The light-green color is for $N_\text{max}=150$ and $N=130,110,102,90$, and the cyan color is for $N_\text{max}=130$ and $N=110,102,90$. The fitting parameters $p$ and $\Delta \Phi^{2,2}_\text{peak}(N_\text{max})$ are listed in the legend. The purple, light-green, and cyan lines denote $\Delta \Phi^{2,2}_\text{peak} (N_\text{max}) [(N_\text{max}/N)^p-1]$ with these fitting parameters for $N_\text{max}=182$, 150, and 130, respectively. }\label{fig:dephase} \end{figure*} \section{Accuracy of waveforms}\label{sec:result} To date, we have performed long-term simulations of 46 binary systems with 6 grid resolutions for each model. 26 of these binary systems are newly reported in this paper, and 20 were reported in Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}. Our waveform data are publicly available on the website:\\ \href{https://www2.yukawa.kyoto-u.ac.jp/~nr_kyoto/SACRA_PUB/catalog.html}{SACRA Gravitational Waveform Data Bank}~\cite{DB}.\\ On the website, the waveform data are tabulated according to the system name, dimensionless initial orbital angular velocity, and grid resolution.
For example, 15H\_135\_135\_00155\_182 refers to the 15H EOS, $m_1= 1.35M_\odot$, $m_2= 1.35M_\odot$, $m_0 \Omega_0 = 0.0155$, and $N=182$ (see also Table~\ref{tb:model}). A user can download the data for $\Psi^{2,2}_4(t_\text{ret},r_0)$ extracted at several values of $r_0$ and $h^{2,2,\infty}_{+,\times}(t_\text{ret})$ from the link on the system name. \subsection{Overview of physical and numerical phase shifts}\label{subsec:overview} First, we briefly illustrate how the waveforms depend on the EOS and on the component masses of the binary systems. The top panel of Fig.~\ref{fig:GW3} shows the dependence of the gravitational waveforms on the EOSs for the binary systems with $m_1=1.12M_\odot$, $m_2=1.40M_\odot$ and $N=182$. It shows that the systems with the larger values of $\tilde{\Lambda}$ merge earlier than those with the smaller values of $\tilde{\Lambda}$ because the tidal force due to the companion induces a quadrupole moment and the resultant attractive force accelerates the orbital shrinkage. The bottom panel of Fig.~\ref{fig:GW3} shows the dependence of the gravitational waveforms on the symmetric mass ratio for the binary systems 15H125-125, 15H112-140, and 15H107-146 with $N=182$. It shows that the systems with the larger values of $\eta$ merge earlier than those with the smaller values of $\eta$ because the emissivity of gravitational waves decreases as the symmetric mass ratio decreases~\cite{Blanchet:2013haa}. The top panel of Fig.~\ref{fig:GW4} shows the dependence of the gravitational waveforms on the grid resolution for 15H112-140 with $N=182, 110$, and $90$. Errors in the amplitude and phase caused by the finite grid resolution become prominent in the late inspiral and post-merger stages. The bottom panel of Fig.~\ref{fig:GW4} plots the phase shift among the systems of different EOSs for $m_1=1.12M_\odot$, $m_2=1.40M_\odot$, and $N=182$.
The phase shift is defined by \begin{align} &\delta \Phi^\text{shift}(t_\text{ret};\text{EOS1},\text{EOS2},N) \nonumber\\ &= \Phi^{2,2}(t_\text{ret};{\rm EOS1},N) - \Phi^{2,2}(t_\text{ret};{\rm EOS2},N), \end{align} where $\Phi^{2,2}(t_\text{ret};\text{EOS},N)$ is the gravitational-wave phase of the $l=|m|=2$ mode derived from a simulation employing the given EOS and grid number $N$. Because we compare the phase among models with common component masses, we omit the masses from the argument. The shaded region shows the evolution of the phase error defined by \begin{align} & \delta \Phi^\text{error} (t_\text{ret};\text{EOS},N_1,N_2) \nonumber\\ & = \Phi^{2,2}(t_\text{ret};\text{EOS},N_1) - \Phi^{2,2}(t_\text{ret};\text{EOS},N_2), \end{align} where $N_1$ and $N_2$ denote the employed grid numbers. The red shaded region shows $\delta \Phi^\text{error}(t_\text{ret};15{\rm H},150,182)$ and the blue shaded region shows $\delta \Phi^\text{error}(t_\text{ret};15{\rm H},90,182)$, respectively, for 15H112-140. The overlapping region is shown in purple. The vertical dashed line denotes the peak time, $t_\text{peak}$, at which the gravitational-wave amplitude becomes maximal for 15H112-140 with $N=182$. Just after the peak time, burst-type gravitational waves are emitted for a short time, as shown in the upper panel of Fig.~\ref{fig:GW4}, i.e., for $58~{\rm ms}\lesssim t_\text{ret} \lesssim 59~{\rm ms}$. These waves cause a very rapid increase in phase during this short interval, and consequently the phase shift increases very rapidly. This feature can also be seen in the phase error, and the very rapid increase appears later in $\delta \Phi^\text{error}(t_\text{ret};15{\rm H},150,182)$ than in $\delta \Phi^\text{error}(t_\text{ret};15{\rm H},90,182)$ because the peak time becomes later as the grid resolution improves. The phase shift and the phase error up to the peak time are comparable, in particular for the coarser grid resolutions.
Therefore, unless a convergence study is carried out carefully, the capability of inspiral waveform models to measure the tidal deformability remains unclear. This is also the case for the post-merger stage. In particular, the phase evolution loses convergence, as found in the bottom panel of Fig.~\ref{fig:GW4}, i.e., $\delta \Phi^\text{error}(t_\text{ret};{\rm 15H},150,182)$ (red shaded region) is larger than $\delta \Phi^\text{error}(t_\text{ret};{\rm 15H},90,182)$ (blue shaded region). Therefore, time-domain post-merger gravitational waves derived in numerical-relativity simulations are not very reliable. Instead, we will discuss the post-merger signal in terms of the energy and angular momentum carried by gravitational waves and their spectrum amplitude. These quantities are calculated by a time integration of the gravitational waveforms, and the convergence in the phase could be subdominant as discussed in Sec.~\ref{sec:universal-relation}. \subsection{Estimation of the residual phase error in gravitational waves}\label{subsec:phase_error} Following Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}, we estimate the residual gravitational-wave phase error at the peak time in the simulations. The left panel of Fig.~\ref{fig:dephase} plots the evolution of the phase error, $\delta \Phi^\text{error}(t_\text{ret};{\rm B},N,182)$, with $N=150,130,110,102$, and $90$ for B107-146. The vertical dashed line denotes the peak time for B107-146 with $N=182$. Although the phase error accumulates with time, its value at the peak time decreases as the grid resolution is improved.
We estimate the residual phase error by assuming that the gravitational-wave phase at the peak time obeys the following functional form, \begin{align} &\Phi^{2,2}(t_\text{peak};\text{EOS},N) \nonumber\\ &= \Phi_\text{peak}^{2,2,\infty}(N_\text{max}) - \Delta \Phi^{2,2}_\text{peak} (N_\text{max}) \left(\frac{N_\text{max}}{N}\right)^p, \label{eq:dephase} \end{align} where $\Phi_\text{peak}^{2,2,\infty}(N_\text{max})$ and $p$ denote the gravitational-wave phase at the peak time in the continuum limit of the finite difference $(N \to \infty)$ and the order of convergence, respectively. $\Delta \Phi^{2,2}_\text{peak} (N_\text{max})$ should be recognized as the residual phase error for the simulation with $N=N_\text{max}$. $N_\text{max}$ denotes a reference value of $N$ used to estimate the unknown quantities $\Phi^{2,2,\infty}_\text{peak}(N_\text{max})$, $\Delta \Phi^{2,2}_\text{peak} (N_\text{max})$, and $p$. For example, with $N_\text{max}=182$, these unknowns are obtained by fitting the simulation results of $N=150,130,110,102,$ and $90$ with Eq.~(\ref{eq:dephase}) for a given EOS, chirp mass, and symmetric mass ratio. The right panel of Fig.~\ref{fig:dephase} plots the gravitational-wave phase error at the peak time, $\delta \Phi^\text{error}(t_\text{peak};\text{B},N_\text{max},N)$, as a function of $1/N^p$ with a reference grid number $N_\text{max}$ and $N=90,102,\cdots,N_\text{max}$. Assuming Eq.~(\ref{eq:dephase}), the phase error at the peak time in a binary system is given as \begin{align} & \delta \Phi^\text{error}(t_\text{peak};\text{EOS},N_\text{max},N)\nonumber\\ &= \Delta \Phi^{2,2}_\text{peak} (N_\text{max}) \left[ \left(\frac{N_\text{max}}{N}\right)^p - 1 \right]. \label{eq:dephase2} \end{align} The values of $\Delta \Phi^{2,2}_\text{peak} (N_\text{max})$ and $p$ are shown in the legend of this plot. It is clear that the order of convergence $p$ is improved and the residual gravitational-wave phase error is reduced as $N_\text{max}$ is increased.
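As a concrete illustration, the three unknowns in Eq.~(\ref{eq:dephase}) can be determined by a standard nonlinear least-squares fit. The sketch below is hypothetical: the peak-time phases are synthetic numbers generated from assumed parameters, not data from the simulations; only the grid numbers match those quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

N_MAX = 182  # reference grid number N_max

def phase_model(N, phi_inf, dphi_peak, p):
    """Eq. (dephase): peak-time phase vs. grid number N."""
    return phi_inf - dphi_peak * (N_MAX / N) ** p

# Synthetic peak-time phases generated from assumed "true" parameters
# (continuum phase 210 rad, residual error 0.3 rad, convergence order 3).
N = np.array([90.0, 102.0, 110.0, 130.0, 150.0, 182.0])
phi = phase_model(N, 210.0, 0.3, 3.0)

popt, _ = curve_fit(phase_model, N, phi, p0=[200.0, 1.0, 2.0])
phi_inf_fit, dphi_fit, p_fit = popt
print(f"residual phase error = {dphi_fit:.3f} rad, convergence order p = {p_fit:.2f}")
```

In practice the fit would use the measured phases at the peak time for each EOS, chirp mass, and symmetric mass ratio, as described above.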
Table~\ref{tb:residual_phase} summarizes the residual phase error and the order of convergence of the gravitational-wave phase at the peak time for all the systems. We estimate the residual phase error for three reference values of $N_\text{max}$: $182$, $150$, and $130$. In some systems, the residual phase error and the order of convergence show an irregular behavior. That is, the residual phase error (the order of convergence) for $N_\text{max}=130$ happens to be smaller (higher) than that for $N_\text{max}=150$. Nonetheless, the residual phase error (the order of convergence) for $N_\text{max}=182$ is smaller (higher) than that for $N_\text{max}=150$ except for 125H125-146. Thus, we adopt the values for $N_\text{max}=182$ as the residual phase error in our waveforms; it is in the range of $\approx 0.1$--$0.5$ rad. For the SFHo (tabulated) EOS, we find that the residual phase error still remains within sub-radian accuracy. Because SFHo135-135 and HB135-135 have nearly identical values of $\tilde{\Lambda}$~\cite{Kiuchi:2017pte}, the phase error due to the tabulated EOS is estimated by comparing the results for them. For HB135-135, the residual phase error and the order of convergence are $(\Delta \Phi^{2,2}_\text{peak}(182),p)=(0.17~\text{rad},3.6)$, $(\Delta \Phi^{2,2}_\text{peak}(150),p)=(0.48~\text{rad},3.2)$, and $(\Delta \Phi^{2,2}_\text{peak}(130),p)=(2.0~\text{rad},1.7)$~\cite{Kiuchi:2017pte}. For SFHo135-135, the residual phase error and the order of convergence are $(\Delta \Phi^{2,2}_\text{peak}(182),p)=(0.43~\text{rad},2.3)$, $(\Delta \Phi^{2,2}_\text{peak}(150),p)=(0.76~\text{rad},2.2)$, and $(\Delta \Phi^{2,2}_\text{peak}(130),p)=(0.33~\text{rad},4.2)$, respectively. Thus, the system with the SFHo (tabulated) EOS has a slightly larger residual phase error than that with the piecewise polytropic EOS. This indicates that the linear interpolation of the thermodynamic quantities could cause a phase error of $\approx 0.2$--$0.3$ rad.
Nonetheless, it is encouraging that our waveforms have the sub-radian accuracy even for the SFHo (tabulated) EOS. For a more detailed estimate of the error budget due to tabulated EOSs, we need to perform BNS simulations with a wide class of tabulated EOSs. In particular, we speculate that the phase error when using a tabulated EOS with a phase transition could be even larger. \begin{table*}[t] \centering \caption{Residual phase error (rad) and order of convergence of the gravitational-wave phase at the peak time calculated by Eq.~(\ref{eq:dephase}) for $N_\text{max}=182,150$, and $130$. } \begin{tabular}{c|ccc} \hline\hline System & $(\Delta \Phi^{2,2}_\text{peak}(182),p)$ & $(\Delta \Phi^{2,2}_\text{peak}(150),p)$ & $(\Delta \Phi^{2,2}_\text{peak}(130),p)$ \\ \hline 15H125-146 & (0.11,~4.1) & (0.58,~2.7) & (5.44,~0.7) \\ 125H125-146 & (0.31,~2.6) & (0.15,~4.5) & (0.45,~3.6) \\ H125-146 & (0.17,~3.4) & (0.78,~2.2) & (0.73,~2.8) \\ HB125-146 & (0.13,~3.7) & (1.10,~1.7) & (1.00,~2.2) \\ B125-146 & (0.12,~3.8) & (0.28,~3.7) & (0.45,~3.8) \\ 15H118-155 & (0.22,~3.1) & (0.75,~2.2) & (0.47,~3.5) \\ 125H118-155 & (0.26,~2.9) & (0.83,~2.1) & (1.44,~1.7) \\ H118-155 & (0.23,~3.1) & (0.48,~3.0) & (0.56,~3.4) \\ HB118-155 & (0.44,~2.3) & (1.21,~1.6) & (0.79,~2.5) \\ B118-155 & (0.29,~2.7) & (0.69,~2.2) & (0.47,~3.3) \\ 15H117-156 & (0.26,~2.9) & (0.36,~3.2) & (0.39,~4.0) \\ 125H117-156 & (0.28,~2.8) & (0.38,~2.8) & (0.92,~2.4) \\ H117-156 & (0.24,~3.0) & (0.31,~3.5) & (0.74,~2.9) \\ HB117-156 & (0.22,~3.0) & (0.84,~2.0) & (1.42,~1.7) \\ B117-156 & (0.42,~2.3) & (0.43,~2.8) & (0.23,~4.8) \\ 15H112-140 & (0.19,~3.4) & (0.70,~2.5) & (0.66,~3.2) \\ 125H112-140 & (0.21,~3.4) & (0.53,~3.0) & (0.66,~3.3) \\ H112-140 & (0.17,~3.5) & (0.92,~2.1) & (1.00,~2.4) \\ HB112-140 & (0.42,~2.5) & (0.48,~3.0) & (0.21,~5.5) \\ B112-140 & (0.19,~3.6) & (0.34,~3.7) & (39.59,~0.13) \\ 15H107-146 & (0.38,~2.6) & (0.86,~2.2) & (0.43,~3.9) \\ 125H107-146 & (0.54,~2.2) & (2.93,~1.0) & (0.61,~3.2) \\
H107-146 & (0.41,~2.4) & (0.60,~2.5) & (1.03,~2.3) \\ HB107-146 & (0.35,~2.8) & (0.44,~3.3) & (0.43,~4.2) \\ B107-146 & (0.33,~2.8) & (0.73,~2.4) & (1.05,~2.4) \\ SFHo135-135 & (0.43,~2.3) & (0.76,~2.2) & (0.33,~4.2) \\ \hline\hline \end{tabular}\label{tb:residual_phase} \end{table*} \section{Inspiral gravitational waveform modeling} \label{sec:WFmodel} \subsection{SACRA inspiral gravitational waveform template} In the previous paper~\cite{Kawaguchi:2018gvj}, we developed a frequency-domain gravitational waveform model for inspiralling BNSs (with $l=|m|=2$) based on high-precision numerical-relativity data. In this section, we extend the examination of the inspiral waveform model to a parameter space wider than the previous papers~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj} by employing new waveforms obtained in this paper. Before moving on to the comparison, we briefly review our inspiral waveform model. First we calculate the Fourier component for the quadrupole mode of gravitational waves for all the systems by \begin{align} \tilde{h}_{+,\times}(f)=\int_{t_{\rm i}}^{t_{\rm f}} h^{2,2,\infty}_{+,\times}(t)e^{-2\pi i f t}dt, \label{eq:deffreqdom} \end{align} where $t_{\rm i}$ and $t_{\rm f}$ are the initial and final time of the waveform data, respectively. Then, we decompose $\tilde{h}_+(f)$ in Eq.~(\ref{eq:deffreqdom}) into the frequency-domain amplitude, $A\left(f\right)$, and phase, $\Psi\left(f\right)$, (with an ambiguity in the origin of the phase) by \begin{align} {\tilde h}_+\left(f\right)=A\left(f\right) {\rm e}^{-i\Psi\left(f\right)}. \end{align} We only use $h^{2,2,\infty}_+$ for modeling the inspiral gravitational waveforms because the difference between $h^{2,2,\infty}_+$ and $h^{2,2,\infty}_\times$ is approximately only the phase difference of $\pi/2$. 
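The discrete counterpart of Eq.~(\ref{eq:deffreqdom}) and the decomposition into amplitude and phase can be sketched as follows. The time-domain signal here is a windowed toy sinusoid standing in for $h^{2,2,\infty}_+(t)$, not a numerical-relativity waveform.

```python
import numpy as np

# Toy (hypothetical) time-domain waveform standing in for h^{2,2,inf}_+(t):
# a 100 Hz sinusoid, Hann-windowed to suppress spectral leakage.
dt = 1.0 / 4096.0                      # sampling interval [s]
t = np.arange(0.0, 1.0, dt)
h_plus = np.sin(2.0 * np.pi * 100.0 * t) * np.hanning(t.size)

# Discrete version of Eq. (deffreqdom): h~(f) = int h(t) e^{-2 pi i f t} dt ~ dt * FFT.
h_tilde = dt * np.fft.rfft(h_plus)
f = np.fft.rfftfreq(t.size, dt)

# Decomposition h~_+(f) = A(f) e^{-i Psi(f)} into amplitude and (unwrapped) phase.
A = np.abs(h_tilde)
Psi = -np.unwrap(np.angle(h_tilde))

print(f"peak of A(f) at f = {f[np.argmax(A)]:.1f} Hz")
```

For the actual waveform data, the integration limits $t_{\rm i}$ and $t_{\rm f}$ correspond to the start and end of the (hybridized) waveform, and the phase origin ambiguity noted in the text remains.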
We define the corrections due to the NS tidal deformation to the gravitational-wave amplitude and phase by \begin{align} A^{\rm tidal}\left(f\right)=A\left(f\right)-A_{\rm BBH}\left(f\right)\label{eq:defAtidal} \end{align} and \begin{align} \Psi^{\rm tidal}\left(f\right)=\Psi\left(f\right)-\Psi_{\rm BBH}\left(f\right),\label{eq:defphitidal} \end{align} respectively. Here, $A_{\rm BBH}\left(f\right)$ and $\Psi_{\rm BBH}\left(f\right)$ are the gravitational-wave amplitude and phase of a binary black hole with the same mass as the BNS, respectively (hereafter referred to as the point-particle parts: see Ref.~\cite{Kawaguchi:2018gvj} for details). Our numerical-relativity waveforms only contain the waveforms for the frequency higher than $\approx 400\,{\rm Hz}$. Thus, we employ the effective-one-body waveforms of Refs.~\cite{Hinderer:2016eia,Steinhoff:2016rfi,Lackey:2018zvw,Taracchini:2013rva} (SEOBNRv2T) to model the low-frequency part waveforms, in which the effect of dynamical tides is taken into account, and construct hybrid waveforms combining them with the numerical-relativity waveforms. The hybridization of the waveforms is performed in the time-domain by the procedure described in Refs.~\cite{Hotokezaka:2016bzh,Kawaguchi:2018gvj} and we set the matching region to be from $t_\text{ret} \approx 7.38$ ms to $14.78$ ms. After the hybridization, the waveforms are transformed into the frequency domain employing Eq.~\eqref{eq:deffreqdom}, and the tidal-part amplitude and phase are extracted by Eqs.~\eqref{eq:defAtidal} and~\eqref{eq:defphitidal}. 
For modeling the tidal-part phase and amplitude, we employ the following functional forms motivated by the 2.5 PN order formula~\cite{Damour:2012yf}: \begin{align} &\Psi_{\rm model}^{\rm tidal}=\frac{3}{128\eta}\left[-\frac{39}{2}{\tilde \Lambda}\left(1+a\,{\tilde \Lambda}^{2/3} x^p \right)\right]x^{5/2}\nonumber \\&\times\left(1+\frac{3115}{1248}x-\pi x^{3/2}+\frac{28024205}{3302208}x^2 -\frac{4283}{1092}\pi x^{5/2}\right)\label{eq:phimodel} \end{align} for the phase correction and \begin{align} A_{\rm model}^{\rm tidal}&=\sqrt{\frac{5\pi\eta}{24}}\frac{m_0^2}{D_{\rm eff}} {\tilde \Lambda} x^{-7/4}\nonumber\\ &\times \left(-\frac{27}{16}x^{5}-\frac{449}{64}x^{6}-b\,x^q\right)\label{eq:Amodel} \end{align} for the amplitude correction where $D_{\rm eff}$ is the effective distance to the binary~\cite{Hotokezaka:2016bzh} and $x\equiv (\pi m_0 f)^{2/3}$. $a$, $p$, $b$, and $q$ are the free parameters of the models. To focus on the inspiral waveform and to avoid the contamination from the post-merger waveforms of high frequency, which would have large uncertainties, we restrict the gravitational-wave frequency range in $10$--$1000\,{\rm Hz}$. The fitting parameters were determined by employing the hybrid waveforms of 15H125-125, which has the largest value of binary tidal deformability in the systems studied in the previous study~\cite{Kawaguchi:2018gvj}. By performing the least square fit with respect to the phase shift and relative difference of the amplitude, we obtained $a=12.55$, $p=4.240$, $b=4251$, and $q=7.890$. In Ref.~\cite{Kawaguchi:2018gvj}, the validity of the inspiral waveform model was examined employing hybrid waveforms which were not used for the parameter determination. We should stress again that the parameters $a,p,b$, and $q$ in Eqs.~(\ref{eq:phimodel}) and (\ref{eq:Amodel}) were determined by the particular system 15H125-125. 
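For reference, the tidal phase correction of Eq.~(\ref{eq:phimodel}) with the fitted parameters can be evaluated directly. The binary parameters in the example below are illustrative inputs chosen for this sketch, not one of the simulated systems.

```python
import numpy as np

# Geometric-unit conversion: 1 solar mass expressed in seconds (G*Msun/c^3).
MSUN_S = 4.925491e-6

def psi_tidal(f_hz, m0_msun, eta, lam_tilde, a=12.55, p=4.240):
    """Tidal phase correction of Eq. (phimodel), the 2.5PN-motivated form.

    a and p are the fitted values quoted in the text; f_hz is the
    gravitational-wave frequency, m0_msun the total mass in solar masses."""
    m0 = m0_msun * MSUN_S
    x = (np.pi * m0 * f_hz) ** (2.0 / 3.0)
    pn = (1.0 + 3115.0 / 1248.0 * x - np.pi * x ** 1.5
          + 28024205.0 / 3302208.0 * x ** 2 - 4283.0 / 1092.0 * np.pi * x ** 2.5)
    return (3.0 / (128.0 * eta)
            * (-39.0 / 2.0 * lam_tilde * (1.0 + a * lam_tilde ** (2.0 / 3.0) * x ** p))
            * x ** 2.5 * pn)

# Illustrative example: an equal-mass 2.7 Msun binary with lam_tilde = 300 at 1000 Hz.
val = psi_tidal(1000.0, 2.70, 0.25, 300.0)
print(f"Psi_tidal(1000 Hz) = {val:.2f} rad")
```

The correction is negative in this sign convention, i.e., the tidal interaction accumulates phase faster than the point-particle baseline.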
We found that the tidal-part waveform model always reproduced the tidal-part phase and amplitude of the hybrid waveforms within $\sim 0.1\,{\rm rad}$ and $15\%$, respectively, for the equal-mass and unequal-mass cases with ${\cal M}_{\rm chirp}= 1.1752\,M_\odot$ and the equal-mass cases with ${\cal M}_{\rm chirp}= 1.0882\,M_\odot$, covering the parameter space of $0.244 \le \eta \le 0.250$ and $300\lesssim \tilde{\Lambda} \lesssim 1800$. \subsection{Validation of SACRA inspiral gravitational waveform template} While the validity of our inspiral waveform model was already examined in the most interesting part of the parameter space of BNSs~\cite{Kawaguchi:2018gvj}, there still remain some important cases which were not examined in the previous study~\cite{Kawaguchi:2018gvj}. First, the dependence of the error of the tidal correction on the mass ratio has to be checked for less massive BNSs. While unequal-mass cases with a total mass of $\approx\,2.7\,M_\odot$ were checked in the previous study~\cite{Kawaguchi:2018gvj}, it is important to check whether our inspiral waveform models are also applicable to unequal-mass cases with smaller total mass, for which the tidal effect is enhanced due to the increase of the tidal deformability. Second, the systematics due to the simplification of the high-density part of the EOS should be checked. For the inspiral waveforms, we expect that the high-density part of the EOS has a minor effect, and thus, we employ simplified two-piecewise polytropic EOS models. However, we should confirm that this assumption is indeed valid. To check the points listed above, we compare our inspiral waveform model with hybrid waveforms employing the numerical-relativity waveforms obtained in this paper. Hybrid waveforms are constructed in the same manner as in the previous study~\cite{Kawaguchi:2018gvj} employing the SEOBNRv2T waveforms as the low-frequency part waveforms.
In particular, we focus on the validity of the tidal correction model to the waveform, comparing it with the tidal-part phase and amplitude of the hybrid waveforms computed based on Eqs.~\eqref{eq:defAtidal} and~\eqref{eq:defphitidal} using the SEOBNRv2 waveforms with no-tides as the point-particle parts. \begin{figure*}[t] \hspace{-25.0mm} \begin{center} \includegraphics[width=0.49\linewidth]{fig5a.pdf} \includegraphics[width=0.49\linewidth]{fig5b.pdf} \end{center} \caption{(Left) Difference in the tidal-part phase between the hybrid waveforms and the model given by Eq.~\eqref{eq:dphi} for the binary systems with ${\cal M}_c=1.1752M_\odot$. Phase differences are plotted after the alignment in the frequency range of $10$--$1000\,{\rm Hz}$. (Right) Relative difference of the tidal-part amplitude between the hybrid waveforms and the model given by Eq.~\eqref{eq:reldA}.}\label{fig:model_comp_27} \end{figure*} \begin{figure*}[t] \hspace{-25.0mm} \begin{center} \includegraphics[width=.49\linewidth]{fig6a.pdf} \vspace{-15mm} \includegraphics[width=.49\linewidth]{fig6b.pdf} \end{center} \caption{The same as in Fig.~\ref{fig:model_comp_27} but for the models with ${\cal M}_c= 1.0882\,M_\odot$.}\label{fig:model_comp_25} \end{figure*} Figures~\ref{fig:model_comp_27} and \ref{fig:model_comp_25} show the difference of the tidal-part phase and amplitude between our inspiral waveform model [Eqs.~\eqref{eq:phimodel} and \eqref{eq:Amodel}] and the hybrid waveforms for the models with ${\cal M}_c= 1.1752\,M_\odot$ and ${\cal M}_c= 1.0882\,M_\odot$.
Here, the phase difference between the tidal-part phase of the hybrid waveforms, $\Psi_{\rm Hybrid}^{\rm tidal}$, and that of our inspiral waveform model, $\Psi_{\rm model}^{\rm tidal}$, is computed by \begin{align} \Delta\Psi(f)=\Psi_{\rm Hybrid}^{\rm tidal}(f)-\Psi_{\rm model}^{\rm tidal}(f)-2\pi f t_0+\phi_0,\label{eq:dphi} \end{align} where $t_0$ and $\phi_0$ are the free parameters which correspond to the degrees of freedom in choosing the origins of time and phase, respectively, and are determined by minimizing $\int |\Delta\Psi(f)|^2 df$ integrated in the range of $f=10$--$1000\,{\rm Hz}$. For the comparison of the tidal-part amplitude, the relative difference of the amplitude, \begin{align} \Delta A(f)/A(f)=(A_{\rm Hybrid}^{\rm tidal}(f)-A_{\rm model}^{\rm tidal}(f))/A_{\rm model}(f),\label{eq:reldA} \end{align} is computed, where $A_{\rm Hybrid}^{\rm tidal}$ and $A_{\rm model}=A_{\rm model}^{\rm tidal}+A_{\rm BBH}$ are the tidal-part amplitude of the hybrid waveforms and the amplitude of the model waveforms including the point-particle part, respectively. Again, we employ the amplitude of the SEOBNRv2 waveforms with no-tides for $A_{\rm BBH}$. The systems of mass $1.25M_\odot$--$1.46M_\odot$, $1.18M_\odot$--$1.55M_\odot$, and $1.17M_\odot$--$1.56M_\odot$ are within the parameter space which we studied in the previous study~\cite{Kawaguchi:2018gvj}, and thus, we expect that those waveforms are well reproduced by our inspiral waveform model. Indeed, Fig.~\ref{fig:model_comp_27} shows that the differences in both phase and amplitude are within the error which we observed in the previous study~\cite{Kawaguchi:2018gvj}. Figure~\ref{fig:model_comp_27} also shows that the tidal-part phase and amplitude for the system SFHo135-135 are well reproduced by our inspiral waveform model. This confirms that, at least for the frequency range and $m_0$ we focus on, employing an EOS whose high-density part is simplified has only a minor effect on the systematics of the model.
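Because Eq.~\eqref{eq:dphi} is linear in $t_0$ and $\phi_0$, the minimization of $\int |\Delta\Psi(f)|^2 df$ reduces to a linear least-squares problem. A minimal sketch with a hypothetical raw phase difference (a pure origin shift, for illustration):

```python
import numpy as np

def align_phase(f, dpsi_raw):
    """Remove the time- and phase-origin freedom of Eq. (dphi):
    fit dpsi_raw(f) ~ 2*pi*f*t0 - phi0 by linear least squares and
    return the aligned residual together with (t0, phi0)."""
    A = np.column_stack([2.0 * np.pi * f, -np.ones_like(f)])
    (t0, phi0), *_ = np.linalg.lstsq(A, dpsi_raw, rcond=None)
    return dpsi_raw - 2.0 * np.pi * f * t0 + phi0, t0, phi0

f = np.linspace(10.0, 1000.0, 200)           # fitting band used in the text [Hz]
dpsi_raw = 2.0 * np.pi * f * 1.0e-3 - 0.5    # pure origin shift: t0 = 1 ms, phi0 = 0.5 rad
aligned, t0, phi0 = align_phase(f, dpsi_raw)
print(f"t0 = {t0 * 1e3:.3f} ms, phi0 = {phi0:.3f} rad, "
      f"max |residual| = {np.max(np.abs(aligned)):.1e}")
```

For the actual comparison, `dpsi_raw` would be the sampled difference $\Psi_{\rm Hybrid}^{\rm tidal}-\Psi_{\rm model}^{\rm tidal}$; only the residual after alignment is physically meaningful.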
Figure~\ref{fig:model_comp_25} shows the results in the unequal-mass cases with ${\cal M}_c= 1.0882\,M_\odot$. The difference in the tidal-part phase is larger than for the cases with ${\cal M}_c= 1.1752\,M_\odot$. This is reasonable because we found in the previous study~\cite{Kawaguchi:2018gvj} that the error of the tidal-part model becomes relatively large for a small mass ratio or a large value of the tidal deformability. Nevertheless, the phase error is always smaller than $\approx0.1\,{\rm rad}$, which is smaller than the systematics in the waveforms stemming from the finite grid resolution, as shown in the previous section. The deviation for the amplitude model is also at the same level as for the models with ${\cal M}_c= 1.1752\,M_\odot$. To quantify the deviation of our inspiral waveform model from the new sets of hybrid waveforms, we calculate the mismatch between those waveforms, ${\bar F}$, defined by \begin{align} {\bar F}=1-\max_{\phi_0,t_0}\frac{\left({\tilde h}_1\middle|{\tilde h}_2{\rm e}^{2\pi i f t_0 +i \phi_0}\right)}{||{\tilde h}_1||\,||{\tilde h}_2||},\label{eq:mismatch} \end{align} where $(\cdot|\cdot)$ and $||\cdot||$ are defined by \begin{align} \left({\tilde h}_1\middle|{\tilde h}_2\right)=4{\rm Re}\left[\int_{f_{\rm min}}^{f_{\rm max}} \frac{{\tilde h}_1\left(f\right){\tilde h}^*_2\left(f\right)}{S_{\rm n}\left(f\right)}df\right],\label{eq:inp} \end{align} with $f_{\rm min}=10$~Hz and $f_{\rm max}=1000$~Hz, and \begin{align} ||{\tilde h}||=\sqrt{\left({\tilde h}\middle|{\tilde h}\right)}. \end{align} Here, $h_1$ and $h_2$ denote the hybrid waveforms and our inspiral waveform models, respectively. The inspiral waveform model employs Eqs.~(\ref{eq:phimodel}) and (\ref{eq:Amodel}) as the tidal part and the SEOBNRv2 waveforms with no-tides as the point-particle baseline.
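A direct numerical transcription of Eqs.~(\ref{eq:mismatch}) and (\ref{eq:inp}) might look as follows. The flat noise curve and the toy spectra are placeholders for this sketch: the actual analysis uses the detector noise curve and the hybrid/model waveforms. The maximization over $\phi_0$ is done analytically by taking the modulus of the complex overlap; the maximization over $t_0$ is done on a discrete grid.

```python
import numpy as np

def inner(h1, h2, f, Sn):
    """Noise-weighted inner product, Eq. (inp), on a uniform frequency grid."""
    df = f[1] - f[0]
    return 4.0 * np.real(np.sum(h1 * np.conj(h2) / Sn)) * df

def mismatch(h1, h2, f, Sn, t_shifts):
    """Eq. (mismatch): 1 - overlap, maximized over phi0 analytically (|.|)
    and over the time shift t0 on the supplied grid."""
    df = f[1] - f[0]
    norm = np.sqrt(inner(h1, h1, f, Sn) * inner(h2, h2, f, Sn))
    best = max(
        np.abs(4.0 * np.sum(h1 * np.conj(h2) * np.exp(-2j * np.pi * f * t0) / Sn) * df)
        for t0 in t_shifts
    )
    return 1.0 - best / norm

f = np.linspace(10.0, 1000.0, 1000)
Sn = np.ones_like(f)                                       # flat PSD, illustration only
h1 = (f / 100.0) ** (-7.0 / 6.0) * np.exp(-1j * 0.02 * f)  # toy inspiral-like spectrum
h2 = h1 * np.exp(2j * np.pi * f * 5.0e-4)                  # same signal shifted by 0.5 ms
t_shifts = np.linspace(-1.0e-3, 1.0e-3, 2001)
print(f"mismatch = {mismatch(h1, h2, f, Sn, t_shifts):.1e}")
```

For identical waveforms differing only by a time shift, the mismatch vanishes (up to floating-point error) once the grid of time shifts resolves the offset.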
$S_{\rm n}$ denotes the one-sided noise spectrum density of the detector, and we employ the noise spectrum density of the {\tt ZERO\_DETUNED\_HIGH\_POWER} configuration of Advanced LIGO~\cite{aLIGOnoise}. \begin{table*} \centering \caption{Mismatch between the inspiral waveform model and hybrid waveforms.} \begin{tabular}{c|c}\hline\hline System&$~{\bar F}\,(\times10^{-5})$\\\hline 15H125-146 & 0.83\\ 125H125-146 & 0.36\\ H125-146 & 0.29\\ HB125-146 & 0.28\\ B125-146 & 0.22\\ 15H118-155 & 0.82\\ 125H118-155 & 0.26\\ H118-155 & 0.30\\ HB118-155 & 0.32\\ B118-155 & 0.31\\ 15H117-156 & 0.97\\ 125H117-156 & 0.31\\ H117-156 & 0.25\\ HB117-156 & 0.30\\ B117-156 & 0.17\\ \hline 15H112-140 & 0.88\\ 125H112-140 & 0.24\\ H112-140 & 0.37\\ HB112-140 & 0.71\\ B112-140 & 0.91\\ 15H107-146 & 1.82\\ 125H107-146 & 0.45\\ H107-146 & 0.30\\ HB107-146 & 0.79\\ B107-146 & 1.12\\ \hline SFHo135-135 & 0.45\\ \hline \end{tabular}\label{tb:mismatch} \end{table*} We summarize the values of the mismatch between our inspiral waveform model and the hybrid waveforms in Table~\ref{tb:mismatch}. For all the cases, the value of the mismatch is smaller than $\approx 2\times10^{-5}$. According to our previous results~\cite{Kawaguchi:2018gvj}, these results indicate that the signal-to-noise ratio of the difference between our inspiral waveform model and the hybrid waveforms is as small as $1$ even for the case in which the total signal-to-noise ratio is as large as $200$. \section{Assessment of universal relation for late inspiral and post-merger gravitational waves}\label{sec:universal-relation} \subsection{Frequency and amplitude} The instantaneous gravitational-wave frequency, defined by Eq.~(\ref{eq:GWfreq}), at some characteristic time in the late inspiral or post-merger stage is reported to be correlated with the tidal deformability or the tidal coupling constant~\cite{Read:2013zra,Rezzolla:2016nxn,Bernuzzi:2015rla,Bernuzzi:2014owa}.
In addition, characteristic peak frequencies imprinted in the spectrum amplitude of post-merger gravitational waves are reported to be correlated with the tidal coupling constant or NS radius~\cite{Rezzolla:2016nxn,Shibata:2005xz,Hotokezaka:2013iia,Bauswein:2011tp}. We assess these proposed universal relations using our waveform data, for which the systematic study has been conducted in a wide range of the binary parameters with a wide range of the grid resolution of the simulations. We also propose new relations in terms of the binary tidal deformability. \subsubsection{Peak frequency and binary tidal deformability relation} Reference~\cite{Read:2013zra} reported that the instantaneous gravitational-wave frequency (of $l=|m|=2$ mode) at the peak time $(t_\text{peak})$, $f_\text{peak}$, has a tight correlation with the binary tidal deformability $\tilde{\Lambda}$ (see also Refs.~\cite{Rezzolla:2016nxn,Bernuzzi:2015rla,Bernuzzi:2014owa} for the relation with the tidal coupling constant: In Ref.~\cite{Rezzolla:2016nxn}, they referred to it as $f_\text{max}$). Figure~\ref{fig:fpeak} plots the dependence of $f_\text{peak}$ on the grid resolution where $f_\text{peak,ave}$ is the average of $f_\text{peak}$ over the results with different grid resolutions. $f_\text{peak}$ does not converge perfectly with respect to the grid resolution, but the fluctuation around the averaged value is less than 2$\%$ for a wide range of the grid resolution. This is also the case for all the binary systems. Thus, we estimate a relative error due to the finite grid resolution in $f_\text{peak}$ to be 2$\%$ and tabulate the values of $f_\text{peak}$ in Table~\ref{tb:fpeak}. The right panel of Fig.~\ref{fig:fpeak} plots $m_0 f_\text{peak}$ as a function of $\tilde{\Lambda}^{1/5}$. The error bar shows the systematics associated with the finite grid resolution in $f_\text{peak}$. 
We also plot the universal relations reported in Refs.~\cite{Read:2013zra} (black dashed line) and \cite{Rezzolla:2016nxn} (black dotted line). We find that the universal relation in Ref.~\cite{Rezzolla:2016nxn} holds only for the symmetric binary systems with ${\cal M}_c= 1.1752M_\odot$ and ${\cal M}_c= 1.0882M_\odot$ (see also Table~\ref{tb:fpeak}). Given an EOS and a chirp mass, $f_\text{peak}$ shifts to a lower value as the symmetric mass ratio decreases. This is attributed to the following three facts. First, given the total mass $m_0$ and $f_\text{GW}$, $df_\text{GW}/dt$ decreases as the symmetric mass ratio decreases because the gravitational-wave luminosity is proportional to $\eta^2$~\cite{Blanchet:2013haa}. Second, the time at which the two NSs come into contact becomes earlier as the symmetric mass ratio decreases because the less massive companion is more subject to the tidal elongation and the resultant mass accretion onto the massive component starts earlier than for the symmetric binary. Third, the difference between the peak time and the contact time becomes small as the symmetric mass ratio decreases because the peak time corresponds to the moment at which the dumbbell-like density structure with two dense cores, formed after the contact, disappears, as discussed in Ref.~\cite{Kiuchi:2017pte}, and this dumbbell-like density structure becomes less prominent in the asymmetric binary systems. Due to these effects, $f_\text{peak}$ becomes lower as the symmetric mass ratio decreases. In short, the $m_0f_\text{peak}$--$\tilde{\Lambda}^{1/5}$ relation depends strongly on the symmetric mass ratio, and the universal relations reported in Refs.~\cite{Read:2013zra} and \cite{Rezzolla:2016nxn} suffer from this systematic effect (see also Ref.~\cite{Kiuchi:2017pte}). This finding is consistent with a discussion in Ref.~\cite{Rezzolla:2016nxn}.
They mentioned that the mass asymmetry could break the universality in the $m_0f_\text{peak}$--$\tilde{\Lambda}^{1/5}$ relation for a {\it possibly unrealistic} mass ratio. We find that {\it realistic} values of the mass ratio break the universality, as the symmetric mass ratios adopted in this paper are consistent with that inferred for GW170817~\cite{TheLIGOScientific:2017qsa}. The scatter from the proposed universal relation in Ref.~\cite{Rezzolla:2016nxn} is as large as $\approx$ 18--19$\%$ at the maximum for $0.244\le\eta\le 0.250$. We propose an improved fitting formula: \begin{align} &\log_{10}\left[\left(\frac{f_\text{peak}}{\rm Hz}\right)\left(\frac{m_0}{M_\odot}\right)\right] = a_0(\eta) + a_1(\eta) \tilde{\Lambda}^{1/5},\nonumber\\ &a_0(\eta) = 4.536 - 1.230 \eta,\nonumber\\ &a_1(\eta) = - 0.929 + 3.120 \eta. \label{eq:fpeak} \end{align} With $\eta=0.2500$, $a_0(\eta)$ and $a_1(\eta)$ approximately reduce to $a_0$ and $a_1$~\cite{footnote1} reported in Ref.~\cite{Rezzolla:2016nxn}. Figure~\ref{fig:fpeak2} plots the improved relation with the simulation data, and we confirm that the relative error between the data and the fitting formula~(\ref{eq:fpeak}) is smaller than $3\%$. We should keep in mind that this relation could still suffer from systematics associated with physical effects that are not taken into account in the simulations. Because of the spin-orbit coupling, a high NS spin could change $f_\text{peak}$ compared to the non-spinning case. NS magnetic fields could also produce systematics in Eq.~(\ref{eq:fpeak}) because at the contact of the two NSs, which occurs before the peak time, the magnetic field could be exponentially amplified by the Kelvin-Helmholtz instability within a very short timescale $\ll 1$\,ms~\cite{Kiuchi:2014hja,Kiuchi:2015sga} and the magnetic pressure could locally approach equipartition with the matter pressure, affecting the value of $f_\text{peak}$. These points should be explored in future work.
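As a usage example, the fitting formula of Eq.~(\ref{eq:fpeak}) can be evaluated directly; the binary parameters below are illustrative inputs, not one of the tabulated systems.

```python
def fpeak_hz(m0_msun, eta, lam_tilde):
    """Peak gravitational-wave frequency from the fit of Eq. (fpeak):
    log10[(f_peak/Hz)(m0/Msun)] = a0(eta) + a1(eta) * Lambda_tilde^(1/5)."""
    a0 = 4.536 - 1.230 * eta
    a1 = -0.929 + 3.120 * eta
    return 10.0 ** (a0 + a1 * lam_tilde ** 0.2) / m0_msun

# Illustrative equal-mass 2.7 Msun binary with an assumed lam_tilde of 300.
f_peak = fpeak_hz(2.70, 0.25, 300.0)
print(f"f_peak = {f_peak:.0f} Hz")
```

Note that $a_1(\eta)<0$ for the values of $\eta$ considered here, so a larger binary tidal deformability gives a lower peak frequency, consistent with the trend in the data.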
\subsubsection{Peak amplitude and binary tidal deformability relation} References~\cite{Read:2013zra,Kiuchi:2017pte} reported that the gravitational-wave amplitude at the peak time, $h_\text{peak}$, correlates with $f_\text{peak}$, i.e., with $\tilde{\Lambda}^{1/5}$. Because we do not find a perfectly convergent result for $h_\text{peak}$ with respect to the grid resolution, we first assess the deviation of $h_\text{peak}$ relative to the averaged value of $h_\text{peak}$ (the average of the results with different grid resolutions) in the left panel of Fig.~\ref{fig:hpeak} for the binary systems with $m_1= 1.07 M_\odot$ and $m_2= 1.46 M_\odot$. It is found that the fluctuation around the averaged value is $\approx 1$--$2\%$. This is also the case for all the binary systems. Thus, we adopt $2\%$ as the systematics associated with the finite grid resolution in $h_\text{peak}$ and summarize the values of $h_\text{peak}$ in Table~\ref{tb:fpeak}. The right panel of Fig.~\ref{fig:hpeak} plots $D h_\text{peak}/m_0$ as a function of $\tilde{\Lambda}^{1/5}$. The error bar shows the systematics associated with the finite grid resolution in $h_\text{peak}$. This figure shows that the relation depends strongly on the symmetric mass ratio. That is, the relation proposed in Refs.~\cite{Read:2013zra,Kiuchi:2017pte} is not in general satisfied. We propose a fitting formula for $D h_\text{peak}/m_0$: \begin{align} &\frac{D h_\text{peak}}{m_0} = b_0(\eta) + b_1(\eta) \tilde{\Lambda}^{1/5},\nonumber\\ &b_0(\eta) = -0.0583 + 1.896 \eta,\nonumber\\ &b_1(\eta) = -0.1602 + 0.454 \eta. \label{eq:hpeak} \end{align} Figure~\ref{fig:hpeak2} plots the improved relation with the simulation data. We find that the relative error between the data and the fitting formula~(\ref{eq:hpeak}) is within $4\%$. Again, note that this relation is calibrated for a limited class of binary systems, i.e., non-magnetized, non-spinning binary systems.
We should keep in mind this point in using this relation to infer the tidal deformability from observational data. \begin{figure*} \includegraphics[width=.41\linewidth]{fig7a.pdf} \includegraphics[width=.45\linewidth]{fig7b.pdf} \caption{(Left) A deviation of instantaneous gravitational-wave frequency at the peak time $f_\text{peak}$ relative to $f_\text{peak,ave}$ as a function of $1/N$ for the binary systems with $m_1= 1.17 M_\odot$ and $m_2= 1.56 M_\odot$. $f_\text{peak,ave}$ is an average of $f_\text{peak}$ over the results with different grid resolutions. (Right) $m_0f_\text{peak}$ as a function of $\tilde{\Lambda}^{1/5}$. Meaning of the color and symbols is the same as that in Fig.~\ref{fig:model}. The error bar of $\pm 2\%$ comes from the systematics associated with the finite grid resolution in $f_\text{peak}$. The proposed universal relations in Refs.~\cite{Read:2013zra,Rezzolla:2016nxn} are shown. }\label{fig:fpeak} \end{figure*} \begin{figure} \includegraphics[width=.9\linewidth]{fig8.pdf} \caption{An improved $m_0f_\text{peak}$--$\tilde{\Lambda}^{1/5}$ relation with $a_0(\eta)$ and $a_1(\eta)$ in Eq.~(\ref{eq:fpeak}). }\label{fig:fpeak2} \end{figure} \begin{figure*} \includegraphics[width=.41\linewidth]{fig9a.pdf} \includegraphics[width=.45\linewidth]{fig9b.pdf} \caption{(Left) A deviation of the gravitational-wave amplitude at the peak time, $h_\text{peak}$, relative to $h_\text{peak,ave}$ as a function of $1/N$ for the binary systems with $m_1= 1.07M_\odot$ and $m_2= 1.46 M_\odot$. $h_\text{peak,ave}$ is an average of $h_\text{peak}$ over the results with different grid resolutions. (Right) $D h_\text{peak}/m_0$ as a function of $\tilde{\Lambda}^{1/5}$. Meaning of the color and symbols is the same as Fig.~\ref{fig:model}. The error bar of $\pm 2\%$ comes from the uncertainty associated with the finite grid resolution in $h_\text{peak}$. 
}\label{fig:hpeak} \end{figure*} \begin{figure} \includegraphics[width=.9\linewidth]{fig10.pdf} \caption{An improved $D h_\text{peak}/m_0$--$\tilde{\Lambda}^{1/5}$ relation with $b_0(\eta)$ and $b_1(\eta)$ in Eq.~(\ref{eq:hpeak}). }\label{fig:hpeak2} \end{figure} \begin{table*}[t] \centering \caption{Binary tidal deformability $\tilde{\Lambda}$, $f_\text{peak}$, $h_\text{peak}$, $f_2$, $E^{2,2}_{\rm GW,i}$, $E^{2,2}_{\rm GW,p}$, $J^{2,2}_{\rm GW,p}$, $J_\text{rem}$, and $m_0-M_\text{ADM,0}$. $M_\text{ADM,0}$ is the Arnowitt-Deser-Misner mass of the initial condition of the simulations. We adopt 2$\%$ relative error for $f_{\rm peak}$ and $h_\text{peak}$ and $5\%$ relative error for $f_2$, respectively, as a typical value. For $f_2$, we exclude binary systems which collapse to a black hole within a few ms after the merger. For $E^{2,2}_{\rm GW,i}$ and $J_\text{rem}$, we adopt $2\%$ and $1\%$ relative error, respectively. $E_\text{GW}$ and $m_0-M_\text{ADM,0}$ are given in units of $M_\odot$. $J_\text{GW}$ and $J_\text{rem}$ are in units of $M_\odot^2$. 
} \begin{tabular}{c|cccccccccc} \hline\hline System & $\tilde{\Lambda}^{1/5}$ & $f_{\rm peak}$ [Hz] & $D h_\text{peak} / m_0$ & $f_2 [{\rm Hz}]$ & $E^{2,2}_{\text{GW},{\rm i}}$ & $E^{2,2}_{\text{GW},{\rm p}} $ & $J^{2,2}_{\text{GW},{\rm p}}$ & $J_\text{rem}$ & $m_0-M_\text{ADM,0}$ \\ \hline\ 15H135-135 & 4.14 & 1503$\pm$30 & 0.226$\pm$0.005 & 2321$\pm$116 & (7.90$\pm 0.16)\times 10^{-3}$ & 1.35$\times 10^{-2}$ & 0.40 & 6.64$\pm 0.07$ & $1.65\times 10^{-2}$\\ 125H135-135 & 3.87 & 1652$\pm$33 & 0.236$\pm$0.005 & 2517$\pm$126 & (9.04$\pm 0.18)\times 10^{-3}$ & 1.76$\times 10^{-2}$ & 0.48 & 6.54$\pm 0.07$ & $1.64\times 10^{-2}$\\ H135-135 & 3.60 & 1820$\pm$36 & 0.249$\pm$0.005 & 2790$\pm$139 & (1.03$\pm 0.02)\times 10^{-2}$ & 2.32$\times 10^{-2}$ & 0.56 & 6.46$\pm 0.06$ & $1.63\times 10^{-2}$\\ HB135-135 & 3.35 & 1986$\pm$40 & 0.261$\pm$0.005 & 3243$\pm$162 & (1.17$\pm 0.02)\times 10^{-2}$ & 2.89$\times 10^{-2}$ & 0.59 & 6.39$\pm 0.06$ & $1.64\times 10^{-2}$\\ B135-135 & 3.11 & 2133$\pm$43 & 0.274$\pm$0.005 & -- & (1.30$\pm 0.03)\times 10^{-2}$ & 7.39$\times 10^{-3}$ & 0.13 & 6.33$\pm 0.06$ & $1.65\times 10^{-2}$\\ 15H121-151 & 4.13 & 1356$\pm$27 & 0.212$\pm$0.004 & 2261$\pm$163 & (7.47$\pm 0.15)\times 10^{-3}$ & 5.47$\times 10^{-3}$ & 0.17 & 6.66$\pm 0.07$ & $1.66\times 10^{-2}$\\ 125H121-151 & 3.86 & 1490$\pm$30 & 0.224$\pm$0.004 & 2379$\pm$119 & (8.53$\pm 0.17)\times 10^{-3}$ & 8.24$\times 10^{-3}$ & 0.23 & 6.57$\pm 0.07$ & $1.66\times 10^{-2}$\\ H121-151 & 3.60 & 1637$\pm$33 & 0.236$\pm$0.005 & 2749$\pm$137 & (9.70$\pm 0.19)\times 10^{-3}$ & 1.05$\times 10^{-2}$ & 0.26 & 6.49$\pm 0.06$ & $1.66\times 10^{-2}$\\ HB121-151 & 3.35 & 1809$\pm$36 & 0.249$\pm$0.005 & 3268$\pm$161 & (1.10$\pm 0.02)\times 10^{-2}$ & 2.26$\times 10^{-2}$ & 0.48 & 6.41$\pm 0.06$ & $1.66\times 10^{-2}$\\ B121-151 & 3.11 & 1994$\pm$40 & 0.263$\pm$0.005 & -- & (1.23$\pm 0.02)\times 10^{-2}$ & 6.85$\times 10^{-3}$ & 0.13 & 6.35$\pm 0.06$ & $1.66\times 10^{-2}$\\ 15H125-125 & 4.51 & 1450$\pm$29 & 
0.211$\pm$0.004 & 2159$\pm$108 & (6.26$\pm 0.13)\times 10^{-3}$ & 7.98$\times 10^{-3}$ & 0.25 & 5.95$\pm 0.06$ & $1.53\times 10^{-2}$\\ 125H125-125 & 4.23 & 1568$\pm$31 & 0.222$\pm$0.004 & 2350$\pm$118 & (7.19$\pm 0.14)\times 10^{-3}$ & 9.29$\times 10^{-3}$ & 0.27 & 5.87$\pm 0.06$ & $1.53\times 10^{-2}$\\ H125-125 & 3.95 & 1710$\pm$34 & 0.234$\pm$0.005 & 2749$\pm$137 & (8.15$\pm 0.16)\times 10^{-3}$ & 1.67$\times 10^{-2}$ & 0.42 & 5.80$\pm 0.06$ & $1.52\times 10^{-2}$\\ HB125-125 & 3.69 & 1900$\pm$38 & 0.245$\pm$0.005 & 2873$\pm$144 & (9.35$\pm 0.19)\times 10^{-3}$ & 1.66$\times 10^{-2}$ & 0.39 & 5.74$\pm 0.06$ & $1.53\times 10^{-2}$\\ B125-125 & 3.43 & 2099$\pm$42 & 0.257$\pm$0.005 & 3353$\pm$168 & (1.06$\pm 0.02)\times 10^{-2}$ & 2.19$\times 10^{-2}$ & 0.44 & 5.69$\pm 0.06$ & $1.53\times 10^{-2}$\\ 15H116-158 & 4.12 & 1273$\pm$26 & 0.205$\pm$0.004 & 2148$\pm$107 & (7.19$\pm 0.14)\times 10^{-3}$ & 4.63$\times 10^{-3}$ & 0.15 & 6.84$\pm 0.07$ & $1.65\times 10^{-2}$\\ 125H116-158 & 3.85 & 1406$\pm$28 & 0.214$\pm$0.004 & 2276$\pm$124 & (8.20$\pm 0.16)\times 10^{-3}$ & 1.01$\times 10^{-2}$ & 0.28 & 6.76$\pm 0.07$ & $1.65\times 10^{-2}$\\ H116-158 & 3.60 & 1540$\pm$31 & 0.227$\pm$0.005 & 2767$\pm$138 & (9.30$\pm 0.19)\times 10^{-3}$ & 1.23$\times 10^{-2}$ & 0.31 & 6.69$\pm 0.07$ & $1.66\times 10^{-2}$\\ HB116-158 & 3.35 & 1709$\pm$34 & 0.240$\pm$0.005 & 3242$\pm$162 & (1.05$\pm 0.02)\times 10^{-2}$ & 1.40$\times 10^{-2}$ & 0.30 & 6.63$\pm 0.06$ & $1.65\times 10^{-2}$\\ B116-158 & 3.11 & 1885$\pm$37 & 0.254$\pm$0.005 & -- & (1.18$\pm 0.02)\times 10^{-2}$ & 4.64$\times 10^{-3}$ & 0.10 & 6.58$\pm 0.07$ & $1.65\times 10^{-2}$\\ 15H125-146 & 4.13 & 1401$\pm$28 & 0.214$\pm$0.004 & 2336$\pm$117 & (7.62$\pm 0.15)\times 10^{-3}$ & 1.01$\times 10^{-2}$ & 0.30 & 6.81$\pm 0.07$ & $1.66\times 10^{-2}$\\ 125H125-146 & 3.86 & 1560$\pm$31 & 0.226$\pm$0.005 & 2576$\pm$129 & (8.77$\pm 0.18)\times 10^{-3}$ & 1.26$\times 10^{-2}$ & 0.34 & 6.73$\pm 0.07$ & $1.66\times 10^{-2}$\\ H125-146 &
3.60 & 1691$\pm$34 & 0.238$\pm$0.005 & 2827$\pm$141 & (9.91$\pm 0.20)\times 10^{-3}$ & 1.89$\times 10^{-2}$ & 0.45 & 6.66$\pm 0.07$ & $1.66\times 10^{-2}$\\ HB125-146 & 3.35 & 1856$\pm$37 & 0.252$\pm$0.005 & 3251$\pm$163 & (1.12$\pm 0.02)\times 10^{-2}$ & 2.50$\times 10^{-2}$ & 0.52 & 6.60$\pm 0.07$ & $1.66\times 10^{-2}$\\ B125-146 & 3.11 & 2039$\pm$41 & 0.265$\pm$0.005 & -- & (1.26$\pm 0.03)\times 10^{-2}$ & 7.99$\times 10^{-3}$ & 0.14 & 6.56$\pm 0.06$ & $1.66\times 10^{-2}$\\ 15H118-155 & 4.12 & 1308$\pm$26 & 0.206$\pm$0.004 & 2161$\pm$108 & (7.31$\pm 0.15)\times 10^{-3}$ & 5.72$\times 10^{-3}$ & 0.18 & 6.83$\pm 0.07$ & $1.66\times 10^{-2}$\\ 125H118-155 & 3.86 & 1441$\pm$29 & 0.218$\pm$0.004 & 2358$\pm$118 & (8.35$\pm 0.17)\times 10^{-3}$ & 7.12$\times 10^{-3}$ & 0.21 & 6.75$\pm 0.07$ & $1.67\times 10^{-2}$\\ H118-155 & 3.60 & 1590$\pm$32 & 0.230$\pm$0.005 & 2782$\pm$139 & (9.49$\pm 0.19)\times 10^{-3}$ & 1.59$\times 10^{-2}$ & 0.39 & 6.68$\pm 0.07$ & $1.66\times 10^{-2}$\\ HB118-155 & 3.35 & 1759$\pm$35 & 0.243$\pm$0.005 & 3259$\pm$163 & (1.08$\pm 0.02)\times 10^{-2}$ & 2.03$\times 10^{-2}$ & 0.43 & 6.62$\pm 0.07$ & $1.66\times 10^{-2}$\\ B118-155 & 3.11 & 1942$\pm$39 & 0.257$\pm$0.005 & -- & (1.20$\pm 0.02)\times 10^{-2}$ & 5.54$\times 10^{-3}$ & 0.11 & 6.66$\pm 0.07$ & $1.66\times 10^{-2}$\\ 15H117-156 & 4.11 & 1293$\pm$26 & 0.204$\pm$0.004 & 2161$\pm$108 & (7.26$\pm 0.15)\times 10^{-3}$ & 5.09$\times 10^{-3}$ & 0.17 & 6.83$\pm 0.07$ & $1.66\times 10^{-2}$\\ 125H117-156 & 3.84 & 1425$\pm$29 & 0.216$\pm$0.004 & 2416$\pm$121 & (8.30$\pm 0.17)\times 10^{-3}$ & 8.09$\times 10^{-3}$ & 0.23 & 6.76$\pm 0.07$ & $1.66\times 10^{-2}$\\ H117-156 & 3.58 & 1574$\pm$32 & 0.229$\pm$0.005 & 2775$\pm$139 & (9.43$\pm 0.19)\times 10^{-3}$ & 1.39$\times 10^{-2}$ & 0.34 & 6.69$\pm 0.07$ & $1.66\times 10^{-2}$\\ HB117-156 & 3.34 & 1724$\pm$35 & 0.242$\pm$0.005 & 3201$\pm$160 & (1.06$\pm 0.02)\times 10^{-2}$ & 1.61$\times 10^{-2}$ & 0.35 & 6.62$\pm 0.07$ & $1.66\times 10^{-2}$\\
B117-156 & 3.10 & 1933$\pm$38 & 0.256$\pm$0.005 & -- & (1.20$\pm 0.02)\times 10^{-2}$ & 5.26$\times 10^{-3}$ & 0.11 & 6.58$\pm 0.06$ & $1.64\times 10^{-2}$\\ 15H112-140 & 4.50 & 1281$\pm$26 & 0.197$\pm$0.004 & 2188$\pm$109 & (5.91$\pm 0.12)\times 10^{-3}$ & 5.37$\times 10^{-3}$ & 0.17 & 5.97$\pm 0.06$ & $1.49\times 10^{-2}$\\ 125H112-140 & 4.21 & 1412$\pm$28 & 0.208$\pm$0.004 & 2269$\pm$113 & (6.80$\pm 0.14)\times 10^{-3}$ & 4.80$\times 10^{-3}$ & 0.15 & 5.89$\pm 0.06$ & $1.49\times 10^{-2}$\\ H112-140 & 3.94 & 1558$\pm$31 & 0.220$\pm$0.004 & 2470$\pm$123 & (7.78$\pm 0.16)\times 10^{-3}$ & 6.18$\times 10^{-3}$ & 0.17 & 5.82$\pm 0.06$ & $1.50\times 10^{-2}$\\ HB112-140 & 3.68 & 1717$\pm$34 & 0.231$\pm$0.005 & 2791$\pm$140 & (8.84$\pm 0.18)\times 10^{-3}$ & 9.52$\times 10^{-3}$ & 0.23 & 5.76$\pm 0.06$ & $1.50\times 10^{-2}$\\ B112-140 & 3.43 & 1890$\pm$38 & 0.244$\pm$0.005 & 3271$\pm$164 & (9.98$\pm 0.20)\times 10^{-3}$ & 1.59$\times 10^{-2}$ & 0.33 & 5.71$\pm 0.06$ & $1.52\times 10^{-2}$\\ 15H107-146 & 4.50 & 1203$\pm$24 & 0.189$\pm$0.004 & 2054$\pm$103 & (5.70$\pm 0.11)\times 10^{-3}$ & 3.63$\times 10^{-3}$ & 0.13 & 5.99$\pm 0.06$ & $1.51\times 10^{-2}$\\ 125H107-146 & 4.22 & 1328$\pm$27 & 0.200$\pm$0.004 & 2291$\pm$115 & (6.57$\pm 0.13)\times 10^{-3}$ & 4.56$\times 10^{-3}$ & 0.14 & 5.91$\pm 0.06$ & $1.50\times 10^{-2}$\\ H107-146 & 3.94 & 1475$\pm$30 & 0.212$\pm$0.004 & 2546$\pm$127 & (7.49$\pm 0.15)\times 10^{-3}$ & 7.82$\times 10^{-3}$ & 0.21 & 5.84$\pm 0.06$ & $1.49\times 10^{-2}$\\ HB107-146 & 3.69 & 1620$\pm$32 & 0.224$\pm$0.004 & 2870$\pm$143 & (8.51$\pm 0.17)\times 10^{-3}$ & 1.02$\times 10^{-2}$ & 0.25 & 5.78$\pm 0.06$ & $1.50\times 10^{-2}$\\ B107-146 & 3.44 & 1786$\pm$36 & 0.237$\pm$0.005 & 3298$\pm$165 & (9.60$\pm 0.19)\times 10^{-3}$ & 1.29$\times 10^{-2}$ & 0.27 & 5.73$\pm 0.06$ & $1.51\times 10^{-2}$\\ SFHo135-135 & 3.41 & 1987$\pm$40 & 0.261$\pm$0.005 & 3250$\pm$163 & (1.17$\pm 0.02)\times 10^{-2}$ & 2.91$\times 10^{-2}$ & 0.61 & 6.60$\pm 0.07$ &
$1.68\times 10^{-2}$\\ \hline\hline \end{tabular}\label{tb:fpeak} \end{table*} \subsubsection{$f_1,f_2$ and binary tidal deformability relation} Reference~\cite{Rezzolla:2016nxn} reported that several gravitational-wave frequencies associated with the main peaks in the spectrum amplitude for post-merger gravitational waves correlate with the tidal coupling constant. Figures~\ref{fig:PSD}--\ref{fig:PSD2} show the spectrum amplitudes of the quadrupole mode of gravitational waves for all the systems, defined by \begin{align} h_\text{eff}(f)=f\sqrt{\frac{|\tilde{h}_+(f)|^2+|\tilde{h}_\times(f)|^2}{2}}, \label{eq:deffreqdom2} \end{align} with $\tilde{h}_+(f)$ and $\tilde{h}_\times(f)$ in Eq.~(\ref{eq:deffreqdom}). In Figs.~\ref{fig:PSD}--\ref{fig:PSD2}, the vertical dashed lines indicate the so-called $f_1$ frequency obtained from the fitting formula in Ref.~\cite{Rezzolla:2016nxn}. This peak is a side band of the main peak at $f=f_2$, and it is naturally understood as a result of the modulation of the main peak. According to Ref.~\cite{Takami:2014tva}, the remnant might be represented by a mechanical toy model composed of a rotating disk with two spheres. In this model, the two spheres, which mimic the double dense cores appearing after merger, are connected with a spring and oscillate freely (see their Fig.~17). Assuming angular momentum conservation, the $f_1$ frequency corresponds to the spin frequency at which the separation between the two spheres becomes largest. They proposed this scenario as the interpretation of the $f_1$ frequency. In Ref.~\cite{Rezzolla:2016nxn}, the $f_1$ frequency is determined by identifying one of the main peaks in the spectrum amplitude and the spectrogram of post-merger gravitational waves. For the symmetric binary systems, the $f_1$ peak can be identified in our numerical results using the same method.
However, the structure of the spectrum amplitude around $f=f_1$ depends strongly on the grid resolution (see, e.g., the 125H135-135 and H135-135 systems). For a sequence with fixed EOS and chirp mass, e.g., 15H135-135, 15H125-146, 15H121-151, 15H118-155, 15H117-156, and 15H116-158, we find it more difficult to identify the $f_1$ peak as the symmetric mass ratio decreases. This was also pointed out in Ref.~\cite{Dietrich:2015iva}, although their grid resolution was much lower than that in our present study and a resolution study of the spectrum amplitude of gravitational waves was not performed (see their Fig.~13). As demonstrated in Fig.~\ref{fig:PSD}, the $f_1$ peak cannot be clearly identified for the asymmetric binary systems. Figure~\ref{fig:PSDb} shows that this is also the case for binary systems of relatively small mass $\sim 2.5M_\odot$, as discussed in Refs.~\cite{Foucart:2015gaa,Bauswein:2015yca,Rezzolla:2016nxn}. We also analyze the spectrogram of post-merger gravitational waves and confirm that there is no prominent peak around $f_\text{GW}=f_1$ for the asymmetric binary systems. Therefore, we conclude that the universal relation for $f_1$ is applicable only to nearly symmetric binary systems: for asymmetric systems, essentially no universal relation is present. We speculate that for the asymmetric binary systems, the mechanical toy model proposed in Ref.~\cite{Takami:2014tva} cannot describe the merger remnant because the less massive NS is tidally disrupted before merger and there are no prominent double dense cores. We also note that the method for constraining the EOS proposed in Ref.~\cite{Takami:2014zpa} cannot be applied unless the symmetric mass ratio is measured precisely to be $0.25$, because this method relies on the $f_1$ universal relation. In Ref.~\cite{Rezzolla:2016nxn}, the peak frequency, $f_2$, in the spectrum amplitude~\cite{footnote2} is reported to have a correlation with the tidal coupling constant.
This peak frequency approximately corresponds to the f--mode oscillation of the remnant massive NS (see also Refs.~\cite{Shibata:2005xz,Hotokezaka:2013iia,Bauswein:2011tp,Shibata:2005ss}). The left panel of Fig.~\ref{fig:f2} plots the fluctuation of $f_2$ around its averaged value (average of the results with different grid resolutions) for the binary systems with $m_1= 1.12M_\odot$ and $m_2= 1.40 M_\odot$. We measure $f_2$ in the spectrum amplitude as a prominent peak for $f \ge 2$ kHz. The fluctuation is within $\approx 4$--$5\%$, and we find that this is also the case for all the binary systems. Thus, we adopt $5\%$ as the relative error of $f_2$ (see also Table~\ref{tb:fpeak}). The right panel of Fig.~\ref{fig:f2} shows $f_2$ as a function of $\tilde{\Lambda}^{1/5}$. We exclude the systems which collapse to a black hole within a few ms after merger because the peak associated with $f_2$ is not prominent or is absent in the spectrum amplitude. We also overplot the fitting formula proposed in Ref.~\cite{Rezzolla:2016nxn}. With this fitting formula, the scatter is $\approx 14\%$ at the maximum. Thus, we propose an improved fitting formula for $m_0f_2$; \begin{align} &\log_{10}\left[\left(\frac{f_2}{\rm Hz}\right)\left(\frac{m_0}{M_\odot}\right)\right] = c_0(\eta) + c_1(\eta) \tilde{\Lambda}^{1/5},\nonumber\\ &c_0(\eta) = 11.363 - 27.418 \eta,\nonumber\\ &c_1(\eta) = -2.158 + 7.941 \eta. \label{eq:f2} \end{align} Even with this formula, the relative error is as large as $9\%$ (see also Fig.~\ref{fig:f2v2}). This implies that even if the value of $f_2$ is determined precisely in the data analysis of gravitational waves, $\tilde{\Lambda}^{1/5}$ will be constrained with an error of $\approx \pm 0.1$. \subsubsection{$f_2$ and NS radius with $1.6M_\odot$ relation} References~\cite{Bauswein:2011tp,Bauswein:2012ya} reported that the $f_2$ frequency has a tight correlation with the radius of a $1.6M_\odot$ NS (see Eq.~(3) in Ref.~\cite{Bauswein:2012ya}).
In Ref.~\cite{Hotokezaka:2013iia}, we assessed their relation by using our numerical-relativity results and found that the scatter in the relation is larger than that reported in Ref.~\cite{Bauswein:2012ya}. We revisit this assessment because the initial orbital eccentricity reduction was not implemented in Ref.~\cite{Hotokezaka:2013iia}. In addition, the grid resolution in Ref.~\cite{Hotokezaka:2013iia} is much lower than that in this paper. These ingredients could modify the post-merger dynamics and the resulting gravitational waveforms. Because the relation in Ref.~\cite{Bauswein:2012ya} holds only for symmetric binary systems of $m_0 = 2.7M_\odot$, we first assess this relation by employing binary systems of $(m_1,m_2)=(1.35M_\odot,1.35M_\odot)$ and find that the error is $\approx 6\%$~\cite{crust_comment}. Second, we assess the relation by employing binary systems of $(m_1,m_2)=(1.25M_\odot,1.46M_\odot)$, $(1.21M_\odot,1.51M_\odot)$, $(1.18M_\odot,1.55M_\odot)$, $(1.17M_\odot,1.56M_\odot)$, and $(1.16M_\odot,1.58M_\odot)$. We find that the scatter from their fitting formula is $\approx 10\%$. Therefore, the scatter in excess of that reported in Ref.~\cite{Bauswein:2012ya} stems from the mass asymmetry of the binary. Our numerical results suggest that the fitting formula in Ref.~\cite{Bauswein:2012ya} can infer the radius of a $1.6M_\odot$ NS to within $1$ km accuracy only if the symmetric mass ratio is well constrained to be $0.25$.
Otherwise, we constrain the radius of the $1.6M_\odot$ NS with an accuracy of $\approx \pm 1$ km if the value of $f_2$ is determined precisely. \begin{figure*}[t] \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11a.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11b.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11c.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11d.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11e.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11f.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11g.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11h.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11i.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11j.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11k.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11l.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11m.pdf} \end{center}
\end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11n.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11o.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11p.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11q.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11r.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11s.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11t.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11u.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11v.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11w.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11x.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11y.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11z.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11aa.pdf} 
\end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11ab.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11ac.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig11ad.pdf} \end{center} \end{minipage}\\ \caption{\label{fig:PSD}Spectrum amplitudes of gravitational waves for the binary systems with ${\cal M}_c= 1.1752M_\odot$. The number attached in the right-hand side vertical axis is the symmetric mass ratio $\eta$. We also show $f_1$ frequency proposed in Ref.~\cite{Rezzolla:2016nxn} with vertical dashed lines. For completeness, we also show the systems reported in Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}. } \end{figure*} \begin{figure*}[t] \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12a.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12b.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12c.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12d.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12e.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12f.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12g.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} 
\includegraphics[width=4.5cm,angle=0]{fig12h.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12i.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12j.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12k.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12l.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12m.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12n.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig12o.pdf} \end{center} \end{minipage}\\ \caption{\label{fig:PSDb}The same as Fig.~\ref{fig:PSD}, but for the binary systems with ${\cal M}_c= 1.0882M_\odot$. } \end{figure*} \begin{figure}[t] \includegraphics[width=.9\linewidth]{fig13} \caption{\label{fig:PSD2}The same as Fig.~\ref{fig:PSD}, but for the SFHo (tabulated) EOS case. } \end{figure} \begin{figure*} \includegraphics[width=.41\linewidth]{fig14a.pdf} \includegraphics[width=.45\linewidth]{fig14b.pdf} \caption{(Left) A deviation of $f_2$ frequency in the spectrum amplitude relative to $f_\text{2,ave}$ as a function of $1/N$ for the binary systems with $m_1= 1.12 M_\odot$ and $m_2= 1.40 M_\odot$. $f_\text{2,ave}$ is an average of $f_{2}$ over the results with different grid resolutions. (Right) $f_2$--$\tilde{\Lambda}^{1/5}$ relation for the binary systems except for those which collapse to a black hole within a few ms after merger. 
The error bar of $\pm 5\%$ comes from the systematics associated with the finite grid resolution in $f_2$. }\label{fig:f2} \end{figure*} \begin{figure} \includegraphics[width=.9\linewidth]{fig15.pdf} \caption{An improved $m_0 f_2$--$\tilde{\Lambda}^{1/5}$ relation with $c_0(\eta)$ and $c_1(\eta)$ in Eq.~(\ref{eq:f2}). }\label{fig:f2v2} \end{figure} In Table~\ref{tb:universal}, we summarize to what extent the so-called universal relations hold. \subsection{Energy and angular momentum} \label{subsec:energy} Using Eqs.~(\ref{eq:EGW})--(\ref{eq:JGW2}), we calculate the energy and angular momentum carried by gravitational waves. We define $E_{\rm GW,i}^\text{tot}$ and $E_{\rm GW,p}~(J_{\rm GW,p})$ as the energy (angular momentum) emitted in the inspiral stage and in the post-merger stage, respectively. The subscripts $\rm i$ and $\rm p$ in these quantities denote the inspiral and the post-merger stage, respectively. The peak time introduced in Sec.~\ref{subsec:overview} defines the boundary between the inspiral and post-merger stages. In the following, we summarize the energy and angular momentum emitted in each stage for all the systems. Their values are presented in Table~\ref{tb:fpeak}. \subsubsection{Inspiral stage} Table~\ref{tb:fpeak} and Fig.~\ref{fig:EGWi} show the energy, $E_{\rm GW,i}^{2,2}$, carried by gravitational waves in the $(l,m)=(2,2)$ mode during the inspiral stage. In the left panel of Fig.~\ref{fig:EGWi}, we find that the error of $E^{2,2}_{\rm GW,i}$ relative to its averaged value (average of the results with different grid resolutions) never exceeds $2\%$ for a wide range of the grid resolution. This is also the case for all the binary systems. Thus, we adopt this fluctuation as an error in $E_{\rm GW,i}^{2,2}$. Note that the contributions of the other modes, such as $(l,m)=(2,1)$ and $(3,3)$, are $\lesssim 0.1\%$ and $\lesssim 0.5\%$ of $E^{2,2}_{\rm GW,i}$, respectively.
The right panel of Fig.~\ref{fig:EGWi} plots $E^\text{tot}_{\rm GW,i}/(m_0\eta)$ as a function of $\tilde{\Lambda}^{1/5}$. We include the contribution due to the gravitational-wave emission during evolution from infinite separation to the initial orbital separation of the simulation, $m_0-M_\text{ADM,0}$ in Table~\ref{tb:fpeak}, by $E^\text{tot}_{\rm GW,i} \approx 2E_{\rm GW,i}^{2,2}+m_0-M_\text{ADM,0}$. $M_\text{ADM,0}$ is the Arnowitt-Deser-Misner mass of the initial condition of the simulations. As proposed in Ref.~\cite{Zappa:2017xba}, this quantity correlates with the tidal coupling constant. We explicitly derive a fitting formula with the binary tidal deformability as \begin{align} &\log_{10} \left[\frac{E^\text{tot}_{\rm GW,i}}{m_0\eta}\right] = - 0.869 - 0.111 \tilde{\Lambda}^{1/5}. \label{eq:EGWi} \end{align} It is reasonable that $E^\text{tot}_{\rm GW,i}$ decreases as $\tilde{\Lambda}$ increases because the binary systems with larger values of $\tilde{\Lambda}$ merge earlier than those with smaller values of $\tilde{\Lambda}$. This fitting formula reproduces the simulation data of $E^\text{tot}_{\rm GW,i}$ within an error of $\approx 4\%$. In the binary black hole limit $(\tilde{\Lambda}\to 0)$, the fitting formula predicts $E^\text{tot}_\text{GW,i} \approx 0.034m_0$ for $\eta=0.250$ and $E^\text{tot}_\text{GW,i} \approx 0.033m_0$ for $\eta=0.244$. On the other hand, high-precision binary black hole merger simulations for non-spinning systems suggest $E^\text{tot}_\text{GW,i} \approx 0.03 m_0$ for $0.247 \le \eta \le 0.250$~\cite{Blackman:2017dfb,Boyle:2019kee}. We conclude that the fitting formula Eq.~(\ref{eq:EGWi}) reproduces the binary black hole result with $\approx 10\%$ error.
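As a sanity check, Eq.~(\ref{eq:EGWi}) is easy to evaluate directly. The following Python sketch (function and variable names are illustrative, not part of the paper's analysis pipeline) reproduces the binary black hole limit quoted above.

```python
import math

def egw_inspiral_total(m0, eta, lam_tilde):
    """Fitting formula (eq:EGWi):
    log10[E_GW,i^tot / (m0 * eta)] = -0.869 - 0.111 * lam_tilde**(1/5).
    m0 is the total mass in solar masses; returns E in the same units."""
    return m0 * eta * 10.0 ** (-0.869 - 0.111 * lam_tilde ** 0.2)

# Binary black hole limit (lam_tilde -> 0):
e1 = egw_inspiral_total(1.0, 0.250, 0.0)  # ~0.034 m0
e2 = egw_inspiral_total(1.0, 0.244, 0.0)  # ~0.033 m0
```

For a tabulated case such as 15H135-135 ($\tilde{\Lambda}^{1/5}=4.14$, $\eta=0.25$, $m_0=2.7M_\odot$), the formula gives $E^\text{tot}_{\rm GW,i}\approx 0.032M_\odot$, consistent with the table value within the quoted $\approx 4\%$.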
\subsubsection{Post-merger stage} We estimate the angular momentum of the remnant, $J_\text{rem}$, at the peak time of the gravitational-wave amplitude in the retarded time~(\ref{eq:tret}) by performing a surface integral on the sphere of $r=r_0$; \begin{align} J_\text{rem} = \frac{1}{8\pi}\epsilon^{zjk}\oint_{r=r_0} x_j ({K^l}_k-K{\delta^l}_k)dS_l. \label{eq:Jrem2} \end{align} $K_{ij}$, $K$, ${\delta^i}_j$, and $dS_l$ are the extrinsic curvature, its trace part, the Kronecker delta, and an element of the surface integral, respectively. We typically integrate it on the sphere of $r_0 = 200 m_0$ and $214m_0$ for the binary systems with ${\cal M}_c=1.1752M_\odot$ and $1.0882M_\odot$, respectively. Table~\ref{tb:fpeak} and Fig.~\ref{fig:jrem} show the result. In the left panel of Fig.~\ref{fig:jrem}, we estimate the residual error in $J_\text{rem}$ for HB$118$--$155$. We again assume that the numerical result obeys the following form; \begin{align} J_\text{rem}(N) = J_\text{rem}^\infty(N_\text{max}) - \Delta J_\text{rem}(N_\text{max}) \left(\frac{N_\text{max}}{N}\right)^p, \label{eq:jrem_num} \end{align} where $J_\text{rem}^\infty(N_\text{max})$ is the angular momentum of the remnant in the continuum limit of the finite difference. We estimate the three unknowns, $J_\text{rem}^\infty(N_\text{max})$, $\Delta J_\text{rem}(N_\text{max})$, and $p$, by fitting the numerical data with $N=90, 102, \cdots, N_\text{max}$ using Eq.~(\ref{eq:jrem_num}). By comparing the $N_\text{max}=150$ and $182$ cases, we confirm that adding a higher-resolution result reduces the residual error (see the legend of Fig.~\ref{fig:jrem} for $p$ and $\Delta J_\text{rem}(N_\text{max})$). We find that $\Delta J_\text{rem}(N_\text{max})$ is $\lesssim 1\%$ of the continuum limit, $J_\text{rem}^{\infty}(N_\text{max})$, for $N_\text{max}=182$. This is also the case for all the binary systems. Thus, we adopt $1\%$ as the systematic error associated with the finite grid resolution in $J_\text{rem}$.
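The ansatz~(\ref{eq:jrem_num}) is a Richardson-type extrapolation. As a minimal sketch (with synthetic numbers, not simulation data; the paper instead fits all resolutions $N=90, 102, \cdots, N_\text{max}$), three resolutions in a fixed geometric ratio determine the three unknowns in closed form:

```python
import math

def richardson_extrapolate(N, J):
    """Solve J(N_i) = Jinf - dJ * (Nmax / N_i)**p for (Jinf, dJ, p),
    given three resolutions N = (N1, N2, N3) with N2/N1 == N3/N2
    (geometric spacing) and Nmax = N3 taken as the reference grid."""
    r = N[1] / N[0]
    # The difference ratio (J2-J1)/(J3-J2) equals r**p for this ansatz:
    p = math.log((J[1] - J[0]) / (J[2] - J[1])) / math.log(r)
    dJ = (J[2] - J[1]) / (r ** p - 1.0)
    Jinf = J[2] + dJ  # (Nmax / N3)**p == 1 at the finest grid
    return Jinf, dJ, p

# Synthetic data generated from the model with Jinf = 6.62, dJ = 0.05, p = 2:
N = (64.0, 96.0, 144.0)
J = tuple(6.62 - 0.05 * (144.0 / n) ** 2 for n in N)
Jinf, dJ, p = richardson_extrapolate(N, J)  # recovers (6.62, 0.05, 2.0)
```

With noisy data or more than three resolutions, a least-squares fit of the same model (as done in the paper) is the more robust choice.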
Because $J_\text{rem}$ could correlate with $\tilde{\Lambda}^{1/5}$, we propose a fitting formula for $J_\text{rem}/(m_0^2\eta)$: \begin{align} &\log_{10}\left[\frac{J_\text{rem}}{m_0^2\eta}\right] = d_0(\eta) + d_1(\eta) \tilde{\Lambda}^{1/5},\nonumber\\ &d_0(\eta) = 1.552 - 4.275 \eta,\nonumber\\ &d_1(\eta) = -0.141+0.642 \eta. \label{eq:Jrem} \end{align} The right panel of Fig.~\ref{fig:jrem} plots this relation, and we confirm that it is accurate to within a $3\%$ error. Figures~\ref{fig:EGWp} and \ref{fig:JGWp} plot $E_{\rm GW,p}^{2,2}$ and $J_{\rm GW,p}^{2,2}$ emitted in the post-merger stage. It is worth noting that the energy and angular momentum radiated by gravitational waves in the $(l,m)=(2,1)$ and $(3,3)$ modes are $\lesssim 2.5\%$ of $E^{2,2}_{\rm GW,p}$ and $\lesssim 2.4\%$ of $J^{2,2}_{\rm GW,p}$, respectively, even for the highly asymmetric binary systems, e.g., 15H107-146 (see also the upper panel of Fig.~\ref{fig:dEdJ3}). The left panels in these figures show that a perfect convergence is hard to achieve and that the scatter is rather large compared to that of $E_{\rm GW,i}^{2,2}$, although the scatter never exceeds $50\%$ in $E_{\rm GW,p}^{2,2}$ and $J_{\rm GW,p}^{2,2}$. This is also the case for all the binary systems. The right panels in Figs.~\ref{fig:EGWp} and \ref{fig:JGWp} show $E_{\rm GW,p}^{2,2}/(m_0\eta)$ and $J_{\rm GW,p}^{2,2}/(m_0^2\eta)$ as a function of $\tilde{\Lambda}^{1/5}$. As discussed in Ref.~\cite{Zappa:2017xba}, the energy and angular momentum radiated in the post-merger stage peak around $\tilde{\Lambda} \approx 400$ because the binary systems with $\tilde{\Lambda} \lesssim 350$ collapse to a black hole within a few ms after the peak time. However, $\tilde{\Lambda}$ at the peak in $E_{\rm GW,p}^{2,2}$ and $J_{\rm GW,p}^{2,2}$ could decrease for general EOSs because, as discussed in Ref.~\cite{Kiuchi:2019lls}, the remnant would survive for more than $20$ ms after the peak time even for the binary systems with $\tilde{\Lambda} \lesssim 300$.
For $\tilde{\Lambda}\gtrsim 400$, the correlation between $E_{\rm GW,p}^{2,2}$ and the binary tidal deformability is not as tight as that in the $E^\text{tot}_{\rm GW,i}/(m_0\eta)$--$\tilde{\Lambda}^{1/5}$ relation. For $J_{\rm GW,p}^{2,2}$, the correlation with the binary tidal deformability is also not very tight. Note that $E_{\rm GW,p}^{2,2}$ and $J_{\rm GW,p}^{2,2}$ could increase from the values listed in Table~\ref{tb:fpeak} because we artificially terminated the simulations at $10$--$15$ ms after the peak time. At that moment, the gravitational-wave amplitude is still comparable to that in the late inspiral stage, except for the systems which collapse to a black hole within a few ms after the peak time. We should also keep in mind that we might miss relevant physics, such as the effective turbulent viscosity generated by the magneto-hydrodynamical instabilities during the merger~\cite{Kiuchi:2014hja,Kiuchi:2015sga,Kiuchi:2017pte} and/or the neutrino cooling~\cite{Sekiguchi:2011zd,Foucart:2015gaa}, in modeling the post-merger signal. Reference \cite{Shibata:2017xht} suggests that the post-merger signal could be significantly suppressed in the presence of efficient angular momentum transport by the viscous effect inside the remnant NS. As already mentioned, the post-merger gravitational wave signal is dominated by the f--mode oscillation with $(l,m)=(2,2)$ of the remnant massive NS~\cite{Bauswein:2011tp,Hotokezaka:2013iia}. Thus, it is natural to expect that a relation holds between the energy emission rate and the angular momentum emission rate, Eqs.~(\ref{eq:EGW})--(\ref{eq:JGW}), with the instantaneous gravitational-wave frequency~(\ref{eq:GWfreq}); \begin{align} \frac{dE_{\rm GW}^\text{post}}{dt} \approx \pi f_\text{GW} \frac{dJ_{\rm GW}^\text{post}}{dt}, \label{eq:dEdJ} \end{align} where $dE_{\rm GW}^\text{post}/dt=\sum_{l,m}dE_{\rm GW}^{l,m}/dt$ and $dJ_{\rm GW}^\text{post}/dt=\sum_{l,m}dJ_{\rm GW}^{l,m}/dt$ for $t\ge t_\text{peak}$ in Eqs.~(\ref{eq:EGW}) and (\ref{eq:JGW}).
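Relation~(\ref{eq:dEdJ}) is exact for a single monochromatic $(l,m)=(2,2)$ mode: for $h_{22}\propto e^{-2\pi i f t}$ one has $dE/dt\propto |\dot{h}_{22}|^2$ and $dJ/dt\propto m\,{\rm Im}[h_{22}\dot{\bar{h}}_{22}]$, so the ratio is $2\pi f/m=\pi f$. A short numerical sketch of this statement (the amplitude and frequency are illustrative; the common prefactor $r^2/16\pi$ cancels in the ratio):

```python
import cmath
import math

# Monochromatic (l, m) = (2, 2) mode: h(t) = A * exp(-2*pi*i*f*t).
A, f, m = 1.0, 2500.0, 2  # illustrative amplitude and frequency [Hz]
omega = 2.0 * math.pi * f

def h(t):
    return A * cmath.exp(-1j * omega * t)

# Centered finite-difference time derivative:
dt = 1.0e-9
t0 = 1.0e-4
hdot = (h(t0 + dt) - h(t0 - dt)) / (2.0 * dt)

# Single-mode emission rates, dropping the common prefactor r**2/(16*pi):
dEdt = abs(hdot) ** 2
dJdt = m * (h(t0) * hdot.conjugate()).imag

ratio = dEdt / dJdt  # equals pi * f, as in Eq. (eq:dEdJ)
```

For the actual simulation data, the relation is only approximate because several modes and a slowly drifting frequency contribute, which is what the comparison below quantifies.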
To investigate to what extent this relation is satisfied, we generate Figs.~\ref{fig:dEdJ}--\ref{fig:dEdJ2}. In these figures, the solid curve shows the left-hand side of Eq.~(\ref{eq:dEdJ}) and the dashed curve the right-hand side. We find that they agree with each other with a relative error $\lesssim 8\%$ at all times. Because the emissivity reduces quickly to zero at $t_\text{ret}-t_\text{peak}\approx 0.5$ ms as shown in Figs.~\ref{fig:dEdJ}--\ref{fig:dEdJ2}, we estimate the error for $t_\text{ret}-t_\text{peak}\gtrsim 1$ ms. We also find that the time-integrated values of Eq.~(\ref{eq:dEdJ}) agree with each other with a relative error $\lesssim 1\%$. This is also the case for the relation $E_{\rm GW,p}\approx \pi f_2 J_{\rm GW,p}$. We also confirm that a contribution from the one-arm spiral instability in the post-merger stage~\cite{Paschalidis:2015mla,Radice:2016gym} is negligible because the energy flux for the $(l,m)=(2,1)$ mode is $\lesssim 0.5\%$ of that for the $(l,m)=(2,2)$ mode even for the symmetric binary systems as shown in the bottom panel of Fig.~\ref{fig:dEdJ3}. Thus, we conclude that Eq.~(\ref{eq:dEdJ}) is well satisfied and confirm that the main gravitational-wave emission mechanism during the post-merger stage is the f--mode oscillation of the remnant massive NS, i.e., $f_\text{GW} \approx f_2$ (see also Figs.~\ref{fig:PSD}--\ref{fig:PSD2}). These findings encourage us to build a model for the post-merger gravitational-wave emission (see Ref.~\cite{Shibata:2019ctb}). \begin{figure*} \includegraphics[width=.45\linewidth]{fig16a.pdf} \includegraphics[width=.45\linewidth]{fig16b.pdf} \caption{(Left) The deviation of $E_{\rm GW,i}^{2,2}$ relative to $E_{\rm GW,i,ave}^{2,2}$ as a function of $1/N$ for binary systems with $m_1= 1.25M_\odot$ and $m_2= 1.46 M_\odot$. $E^{2,2}_{\rm GW,i,ave}$ is an average of $E^{2,2}_{\rm GW,i}$ over the results with different grid resolutions.
(Right) $E^\text{tot}_{\rm GW,i}/(m_0\eta)$--$\tilde{\Lambda}^{1/5}$ relation with a fitting formula~(\ref{eq:EGWi}). In the right panel, the error bar of $\pm 2\%$ comes from the systematics associated with the finite grid resolution in $E_{\rm GW,i}^{2,2}$. }\label{fig:EGWi} \end{figure*} \begin{figure*} \includegraphics[width=.45\linewidth]{fig17a.pdf} \includegraphics[width=.45\linewidth]{fig17b.pdf} \caption{(Left) Convergence of $J_\text{rem}$ with respect to the grid resolution for HB$118$--$155$. (Right) $J_\text{rem}/(m_0^2\eta)$--$\tilde{\Lambda}^{1/5}$ relation with $d_0(\eta)$ and $d_1(\eta)$ in Eq.~(\ref{eq:Jrem}). The error bar of $\pm1\%$ comes from the systematics associated with the finite grid resolution in $J_\text{rem}$. }\label{fig:jrem} \end{figure*} \begin{figure*} \includegraphics[width=.45\linewidth]{fig18a.pdf} \includegraphics[width=.45\linewidth]{fig18b.pdf} \caption{(Left) A deviation of $E_{\rm GW,p}^{2,2}$ relative to $E_\text{GW,p,ave}^{2,2}$ as a function of $1/N$ for binary systems with $m_1= 1.25M_\odot$ and $m_2= 1.46 M_\odot$. $E^{2,2}_\text{GW,p,ave}$ is an average of $E^{2,2}_{\rm GW,p}$ over the results with different grid resolutions. (Right) $E_{\rm GW,p}^{2,2}/(m_0\eta)$--$\tilde{\Lambda}^{1/5}$ relation. In the right panel, the error bar of $\pm 50\%$ comes from the systematics associated with the finite grid resolution in $E_{\rm GW,p}^{2,2}$. }\label{fig:EGWp} \end{figure*} \begin{figure*} \includegraphics[width=.45\linewidth]{fig19a.pdf} \includegraphics[width=.45\linewidth]{fig19b.pdf} \caption{The same as Fig.~\ref{fig:EGWp}, but for $J_{\rm GW,p}^{2,2}$. The left panel is for the binary systems with $m_1= 1.12 M_\odot$ and $m_2= 1.40 M_\odot$. 
}\label{fig:JGWp} \end{figure*} \begin{figure*}[t] \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20a.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20b.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20c.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20d.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20e.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20f.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20g.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20h.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20i.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20j.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20k.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20l.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20m.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} 
\includegraphics[width=4.5cm,angle=0]{fig20n.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20o.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20p.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20q.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20r.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20s.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20t.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20u.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20v.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20w.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20x.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20y.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20z.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20aa.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} 
\begin{center} \includegraphics[width=4.5cm,angle=0]{fig20ab.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20ac.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig20ad.pdf} \end{center} \end{minipage}\\ \caption{\label{fig:dEdJ} Energy (solid) and angular momentum (dashed) emission rate by gravitational waves~(\ref{eq:dEdJ}) for the binary systems with ${\cal M}_c= 1.1752M_\odot$. The time axis is set to be zero at the peak time of the gravitational-wave amplitude. For completeness, we also show the systems reported in Refs.~\cite{Kiuchi:2017pte,Kawaguchi:2018gvj}. } \end{figure*} \begin{figure*}[t] \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21a.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21b.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21c.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21d.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21e.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21f.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21g.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21h.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} 
\includegraphics[width=4.5cm,angle=0]{fig21i.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21j.pdf} \end{center} \end{minipage}\\ \vspace{-9mm} \hspace{-18.0mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21k.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21l.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21m.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21n.pdf} \end{center} \end{minipage} \hspace{-13.35mm} \begin{minipage}{0.27\hsize} \begin{center} \includegraphics[width=4.5cm,angle=0]{fig21o.pdf} \end{center} \end{minipage}\\ \caption{\label{fig:dEdJb} The same as Fig.~\ref{fig:dEdJ}, but for ${\cal M}_c= 1.0882M_\odot$. } \end{figure*} \begin{figure}[t] \includegraphics[width=.9\linewidth]{fig22.pdf} \caption{\label{fig:dEdJ2}The same as Fig.~\ref{fig:dEdJ}, but for the SFHo (tabulated) EOS case. } \end{figure} \begin{figure*}[t] \includegraphics[width=.5\linewidth]{fig23a.pdf} \includegraphics[width=.5\linewidth]{fig23b.pdf} \caption{\label{fig:dEdJ3} (Top) Gravitational-wave energy flux~(\ref{eq:EGW}) for $(l,m)=(2,2)$, $(2,1)$, and $(3,3)$ modes for 15H107-146 with $N=182$. (Bottom) The same as the top panel, but for $(l,m)=(2,2)$ and $(2,1)$ modes for 125H125-125 with $N=182$. } \end{figure*} In Table~\ref{tb:universal}, we summarize to what extent $E^\text{tot}_{\rm GW,i}/(m_0\eta)$--$\tilde{\Lambda}^{1/5}$ and $J_\text{rem}/(m_0^2\eta)$--$\tilde{\Lambda}^{1/5}$ relations of Eqs.~(\ref{eq:EGWi}) and (\ref{eq:Jrem2}) hold. \begin{table*} \centering \caption{Summary of the assessment of the universal relations for the non-spinning and non-magnetized binary systems. 
Neutrino radiation is not taken into account. We show the maximum relative errors produced by the original relation (upper row) and by the improved relation derived in this paper (lower row). For $f_1$, the error cannot be estimated because of the absence of the $f_1$ peak in the asymmetric binary systems. Therefore, we conclude there is no universal relation between $f_1$ and $\tilde{\Lambda}$. For the $f_2$--$R_{1.6}$ relation, we do not propose an improved relation, and sym.\ (asym.) in parentheses denotes the symmetric (asymmetric) binaries. For $E^{2,2}_{\rm GW,p}$ and $J^{2,2}_{\rm GW,p}$, we do not propose an improved relation because uncertainties in the lifetime of the merger remnant NSs are large. } \begin{tabular}{ccccccc}\hline\hline $m_0f_\text{peak}$--$\tilde{\Lambda}^{1/5}$ & $D h_\text{peak}/m_0$--$\tilde{\Lambda}^{1/5}$ & $f_1$--$\tilde{\Lambda}^{1/5}$ & $m_0 f_2$--$\tilde{\Lambda}^{1/5}$ & $f_2$--$R_{1.6}$ & $E^\text{tot}_{\rm GW,i}/(m_0\eta)$--$\tilde{\Lambda}^{1/5}$ & $J_\text{rem}/(m_0^2\eta)$--$\tilde{\Lambda}^{1/5}$\\ \hline $\approx 17\%$ & N/A & -- & $\approx 14\%$ & $\approx 6\%~\text{(sym.)}$ and $\approx 10\%~\text{(asym.)}$ & N/A & N/A\\ $\approx 3 \%$ & $\approx 4 \%$ & -- & $\approx 9 \%$ & -- & $\approx 4\%$ & $\approx 3\%$\\ \hline \end{tabular}\label{tb:universal} \end{table*} \section{Summary} \label{sec:summary} We performed long-term simulations for 26 new systems of non-spinning BNS mergers in numerical relativity. To derive high-precision gravitational waveforms in a large parameter space, we systematically varied the EOS of the NS, the chirp mass, and the mass ratio. To assess the gravitational-wave phase error stemming from a finite grid resolution, we varied the grid spacing by a factor of two for each binary system. First, we found that the residual gravitational-wave phase error at the peak time of the gravitational-wave amplitude is $\lesssim 0.5$ rad irrespective of the binary mass and NS EOS.
By comparing the results for the piecewise polytropic and SFHo (tabulated) EOS systems, we also found that the interpolation of the thermodynamic quantities during the simulations could generate a phase error of $\approx 0.2$--$0.3$ rad. However, the gravitational-wave phase error for the SFHo (tabulated) EOS system still remains within the sub-radian accuracy level. Second, we validated our SACRA inspiral gravitational waveform template~\cite{Kawaguchi:2018gvj} by comparing it with the high-precision gravitational waveforms derived in this paper. We found that for a variety of BNS the error in our inspiral waveform model is less than $0.1$ rad in the gravitational-wave phase and less than $20\%$ in the amplitude up to $f_\text{GW}=1000$ Hz. This template can be used for a new gravitational wave data analysis for extracting the tidal deformability from GW170817~\cite{Narikawa:2019xng} and for future events of BNS mergers. Third, we assessed the universal relations between the gravitational-wave related quantities and the binary tidal deformability/NS radius proposed in the literature~\cite{Rezzolla:2016nxn,Read:2013zra,Zappa:2017xba,Bauswein:2011tp,Bauswein:2012ya,Bernuzzi:2014owa,Bernuzzi:2015rla}. We found that the gravitational-wave frequency at the peak time $f_\text{peak}$, the gravitational-wave amplitude at the peak time $h_\text{peak}$, and the peak frequency $f_2$ associated with the f--mode oscillation of the remnant massive NS in the spectrum amplitude of post-merger gravitational waves depend strongly on the symmetric mass ratio and/or the grid resolution. This clearly illustrates that the universal relations proposed in the literature~\cite{Rezzolla:2016nxn,Read:2013zra,Zappa:2017xba,Bauswein:2011tp,Bauswein:2012ya,Bernuzzi:2014owa,Bernuzzi:2015rla} are not as universal as proposed.
We proposed improved fitting formulae~(\ref{eq:fpeak}) for $m_0f_\text{peak}$--$\tilde{\Lambda}^{1/5}$, (\ref{eq:hpeak}) for $D h_\text{peak}/m_0$--$\tilde{\Lambda}^{1/5}$, and (\ref{eq:f2}) for $m_0f_2$--$\tilde{\Lambda}^{1/5}$. However, these fitting formulae may still suffer from systematics such as NS spin, NS magnetic fields, and the neutrino radiation, which are not taken into account in our simulations. In addition, the EOS of the NS, in particular for the high-density part, is still uncertain, and hence, the systematics due to this uncertainty should be kept in mind. We also note that we assessed the errors of these formulae only with our simulation data. A close comparison among the results of independent BNS simulations with the existing numerical relativity codes is necessary to better understand the systematic error in these formulae. This should be done as a future project. We also found that the $f_1$ frequency in the spectrum amplitude could be extracted only for the nearly symmetric binary systems. Unless we can determine the symmetric mass ratio accurately, using the universal relation for $f_1$ could lead to a misleading result in the gravitational-wave data analysis. Finally, we assessed the energy, $E_\text{GW}$, and angular momentum, $J_\text{GW}$, carried by gravitational waves in the inspiral and post-merger stages. As proposed in Ref.~\cite{Zappa:2017xba}, the correlation between $E_{\rm GW,i}^\text{tot}$ and the binary tidal deformability is tight, and it does not depend significantly on the symmetric mass ratio. We found that the relation $E_\text{GW} \approx \pi f_2 J_\text{GW}$ is well satisfied in the post-merger gravitational wave signal irrespective of the binary mass and NS EOS because the signal from the remnant NSs is emitted approximately monochromatically by the f--mode oscillation. The angular momentum of the remnant massive NS, $J_\text{rem}$, correlates with the binary tidal deformability.
This quantity is relevant to build a model of post-merger evolution of merger remnants~\cite{Shibata:2019ctb}. \acknowledgments Numerical computation was performed on K computer at AICS (project numbers hp160211, hp170230, hp170313, hp180179, hp190160), on Cray XC50 at cfca of National Astronomical Observatory of Japan, Oakforest-PACS at Information Technology Center of the University of Tokyo, and on Cray XC40 at Yukawa Institute for Theoretical Physics, Kyoto University. This work was supported by Grant-in-Aid for Scientific Research (16H02183, 16H06342, 16H06341, 16K17706, 17H01131, 17H06361, 17H06363, 18H01213, 18H04595, 18H05236, 18K03642, 19H14720) of JSPS and by a post-K computer project (Priority issue No.~9) of Japanese MEXT. Our waveform data is publicly available on the web page.
\section{Introduction} \label{Introduction} \subsection{Human coronaviruses} The novel human coronavirus SARS-CoV-2 (formerly 2019-nCoV), the causative agent of the Coronavirus Disease-2019 (COVID-19) pandemic, first emerged in Wuhan, China, in December 2019 and has claimed 1.38 million lives globally as of Nov.~24, 2020 \citep{owidcoronavirus}. Understanding the molecular structure and evolution of the SARS-CoV-2 genome is urgent for tracing the origin of the virus and provides insights into vaccine development and drug design for controlling the current COVID-19 pandemic. Human coronaviruses (CoVs) are common viral respiratory pathogens that cause mild to moderate upper-respiratory tract illnesses. Two common CoVs, 229E and OC43, were identified in 1965 and can cause the common cold. Four typical human CoVs found in recent years are the Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) in 2002, NL63 in 2004, HKU1 in 2005, and the Middle East respiratory syndrome coronavirus (MERS-CoV) in 2012. Among these human CoVs, SARS-CoV and MERS-CoV are highly pathogenic and have caused severe and fatal infections. MERS symptoms are very severe, usually including fever, cough, and shortness of breath, which often progress to pneumonia. About 30\% of patients with MERS died. SARS symptoms often include fever, chills, and body aches, which usually progress to pneumonia. About 10\% of patients with SARS died. The current coronavirus SARS-CoV-2, which causes the worldwide COVID-19 pandemic, is milder than SARS-CoV but can cause severe syndromes and fatalities in people with cardiopulmonary disease, people with weakened immune systems, infants, and older adults. SARS-CoV-2 is a beta coronavirus, like MERS-CoV and SARS-CoV. All three of these coronaviruses have their origins in bats. Yet the zoonotic origin of SARS-CoV-2 is still unconfirmed.
The studies of \cite{zhou2020pneumonia,zhou2020pneumoniaB} showed that the bat SARS-related coronavirus strain SARSr-CoV/RaTG13, identified from a bat \textit{Rhinolophus affinis} in Yunnan province, China, in July 2012, shares 96.2\% nucleotide identity with SARS-CoV-2. A recent study identified a new SARSr-CoV/RmYN02 (2019) from \textit{Rhinolophus malayanus}, which is closely related to SARS-CoV-2 \citep{zhou2020novel}. SARSr-CoV/RmYN02 shares 93.3\% nucleotide identity with SARS-CoV-2 and comprises natural insertions at the S1/S2 cleavage site of the Spike protein. The unique S1/S2 cleavage in the Spike protein of SARS-CoV-2 may confer the zoonotic spread of SARS-CoV-2. However, the evolutionary relationships among these CoVs are not entirely clear. \subsection{Coding structures of SARS-CoV-2 genome} The SARS-CoV-2 coronavirus contains a linear single-stranded positive RNA genome (Fig.1). The SARS-CoV-2 RNA genome of 29.9 kb has a total of 11 genes with 11 open reading frames (ORFs) \citep{yoshimoto2020proteins}, consisting of the leader sequence (5'UTR), the coding regions, and the 3'UTR pseudoknot stem-loop \citep{wu2020new}. The coding regions include ORF1ab and genes encoding 16 non-structural proteins \citep{finkel2020coding}, structural proteins (spike (S), envelope (E), membrane (M), and nucleocapsid (N)) \citep{gordon2020sars}, and several accessory proteins. ORF1ab encodes replicase polyproteins required for viral RNA replication and transcription \citep{chen2020emerging,cavasotto2020functional}. Nonstructural protein 1 (nsp1) likely inhibits host translation by interacting with the 40S ribosomal subunit, leading to host mRNA degradation through cleavage near their 5'UTRs. Nsp1 promotes viral gene expression and immunoevasion in part by interfering with interferon-mediated signaling. Nonstructural protein 2 (nsp2) interacts with the host factors prohibitin 1 and prohibitin 2, which are involved in many cellular processes including mitochondrial biogenesis.
The third non-structural protein (nsp3) is the Papain-like proteinase. Nsp3 is an essential and the largest component of the replication and transcription complex. The Papain-like proteinase cleaves non-structural proteins 1-3 and blocks the host's innate immune response, promoting cytokine expression \citep{serrano2009nuclear,lei2018nsp3}. Nsp4, encoded in ORF1ab, is responsible for forming the double-membrane vesicle (DMV). The other non-structural proteins are the 3CLPro protease (3-chymotrypsin-like proteinase, 3CLpro) and nsp6. The 3CLPro protease is essential for RNA replication. The 3CLPro proteinase accounts for processing the C-terminus of nsp4 through nsp16 in coronaviruses \citep{anand2003coronavirus}. Together, nsp3, nsp4, and nsp6 can induce DMV \citep{angelini2013severe}. The SARS-coronavirus has a unique RNA replication machinery, including two RNA-dependent RNA polymerases (RNA pol). The first RNA polymerase is the primer-dependent non-structural protein 12 (nsp12), and the second RNA polymerase is nsp8, which has the primase capacity for \textit{de novo} replication initiation without primers \citep{te2012sars}. Nsp7 and nsp8 are essential proteins in the replication and transcription of SARS-CoV-2. Nsp7 is responsible for nuclear transport. The SARS-coronavirus nsp7-nsp8 complex is a multimeric RNA polymerase for both \textit{de novo} initiation and primer extension \citep{prentice2004identification, te2012sars}. Nsp8 also interacts with the ORF6 accessory protein. The nsp9 replicase protein of SARS-coronavirus binds RNA and interacts with nsp8 for its functions \citep{sutton2004nsp9}. Helicase (nsp13) possesses helicase activity, thus catalyzing the unwinding of dsRNA or structured RNA into single strands. Importantly, nsp14 may function as a proofreading exoribonuclease for virus replication; hence, the SARS-CoV-2 mutation rate remains low.
\begin{figure}[tbp] \centering {\includegraphics[width=4.0in]{SARS-CoV2-genome.png}}\quad% \caption{The structural diagram of SARS-CoV-2 genome (GenBank: NC\_045512). The diagram of SARS-CoV-2 genome was made using DNA Feature Viewer \citep{zulkower2020dna}.} \label{fig:sub1} \end{figure} Furthermore, the SARS-CoV-2 genome encodes several structural proteins. The structural proteins possess much higher immunogenicity for T cell responses than the non-structural proteins \citep{li2008t}. The structural proteins include the spike (S), envelope (E), membrane protein (M), and nucleoprotein (N) \citep{marra2003genome,ruan2003comparative}. The Spike glycoprotein has two domains, S1 and S2. Spike protein S1 attaches the virion to the host cell membrane through the receptor ACE2, initiating the infection \citep{wan2020receptor, wong2004193}. After being internalized into the endosomes of the cells, the S glycoprotein is then cleaved by the cathepsin CTSL. The spike protein domain S2 mediates fusion of the virion and cellular membranes by acting as a class I viral fusion protein. Notably, the spike glycoprotein of coronavirus SARS-CoV-2 contains a furin-like cleavage site \citep{coutard2020spike}. A recent study indicates that SARS-CoV-2 is more infectious than SARS-CoV based on the changes in the S protein--ACE2 binding affinity \citep{chen2020mutations}. The envelope (E) protein interacts with the membrane protein M in the budding compartment of the host cell. The M protein holds dominant cellular immunogenicity \citep{liu2010membrane}. Nucleoprotein (ORF9a) packages the positive-strand viral RNA genome into a helical ribonucleocapsid (RNP) during virion assembly through its interactions with the viral genome and the membrane protein M \citep{he2004characterization}. Nucleoprotein plays an important role in enhancing the efficiency of subgenomic viral RNA transcription and viral replication.
\subsection{Non-coding structures of the SARS-CoV-2 genome} In addition to the coding regions, the SARS-CoV-2 genome contains hidden structures that can retain genome stability, regulate gene replication and expression, and control virus life cycles. The non-coding genome structures include leader sequences, transcriptional regulatory sequences (TRS), G-quadruplex structures, frame-shifting regions, and repeats. The first non-coding structure, the 5' leader sequence of about 265 bp, is a unique characteristic of coronavirus replication and plays critical roles in the gene expression of coronavirus during its discontinuous sub-genomic replication \citep{li2005sirna}. SARS-CoV-2 contains G-quadruplex structures \citep{ji2020discovery}. It is well established that sequences with G-blocks (adjacent runs of guanines) can potentially form non-canonical G-quadruplex (G4) structures \citep{choi2011conformational,metifiot2014g}. The G4 structures are formed by stacking two or more G-tetrads through Hoogsteen hydrogen bonds and are often the sites of genomic instability, serving one or more biological functions \citep{bochman2012dna}. An inverted repeat is a single-stranded sequence of nucleotides followed downstream by its reverse complement. The intervening sequence between the initial sequence and the reverse complement is called a spacer. When the spacer length is zero, the inverted repeat is called a palindrome. For example, the inverted repeat 5'-ATTCGCGAAT-3' is a palindrome; the palindrome-first sequence is 5'-ATTCG-3', and the palindrome-second sequence is 5'-CGAAT-3'. When the spacer in an inverted repeat is non-empty, the repeat is called a generally inverted repeat. In a generally inverted repeat, we still denote the initial sequence as the palindrome-first sequence and the downstream reverse complement as the palindrome-second sequence.
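The naming convention above can be made concrete with a short Python sketch (the helper names are purely illustrative) that tests whether two sequences form an inverted repeat, i.e., whether the palindrome-second sequence is the reverse complement of the palindrome-first sequence:

```python
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Reverse complement of a DNA sequence written 5'->3'."""
    return seq.translate(_COMPLEMENT)[::-1]

def is_inverted_repeat(first, second):
    """True if `second` (palindrome-second) is the reverse complement of
    `first` (palindrome-first), so the two halves can pair into a stem."""
    return second == reverse_complement(first)

# The palindrome example from the text, 5'-ATTCGCGAAT-3':
assert is_inverted_repeat("ATTCG", "CGAAT")
```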
For example, in the generally inverted repeat 5'-TTTAGGT...ACCTAAA-3', the palindrome-first sequence is 5'-TTTAGGT-3', and the palindrome-second sequence is 5'-ACCTAAA-3'. Through self-complementary base pairing, an inverted repeat can form a stem-loop (hairpin) structure in an RNA molecule, where the palindrome-first and palindrome-second sequences make a stem, and the spacer sequence makes a loop. It should be noted that an inverted repeat may not have perfect complementary base pairing between the palindrome-first and palindrome-second sequences, so the stem formed by an imperfect inverted repeat can have mismatches, insertions, or deletions. Inverted repetitive sequences are principal components of the archaeal and bacterial CRISPR-CAS systems \citep{mojica2005intervening}, which function as adaptive antiviral defense systems. Inverted repeats have important biological functions in viruses. Inverted repeats delimit the boundaries of transposons in genome evolution and form stem-loop structures that retain genome stability and flexibility. Inverted repeats are described as hotspots of eukaryotic and prokaryotic genomic instability \citep{voineagu2008replication}, replication \citep{pearson1996inverted}, and gene silencing \citep{selker1999gene}. Therefore, inverted repeats are involved in cellular evolution and genetic diversity, mutations, and diseases. Despite the paramount roles of the non-coding structures, they are not as immediately visible as the coding regions. This study aims to identify one of the crucial non-coding structures, the inverted repeats in the SARS-CoV-2 genome, and to investigate the connection between the inverted repeats and virus evolution. \section{Materials and methods} \subsection{Identification of inverted repeats} The complete genomes of coronaviruses were scanned for inverted repeats using Palindrome analyzer \citep{brazda2016palindrome}.
Palindrome analyzer (http://bioinformatics.ibp.cz/) is a web-based server for retrieving palindromic and inverted repeats in DNA or RNA sequences. The Palindrome server describes the features of inverted repeats, including similarity analysis, localization, and visualization. \subsection{Inverted repeat analysis} To ensure consistency in comparing coronavirus genomes, we only extracted the inverted repeats with perfect complementary base pairing of the palindrome-first and palindrome-second sequences. Note that a short inverted repeat of length $P$ can be inside a long inverted repeat of length $Q$ ($Q>P$); in this case, we only extracted the inverted repeats of length $Q$ and excluded the inverted repeats of length $P$. The retrieved inverted repeats were mapped onto the protein genes in a genome according to the positions of the palindrome-first and palindrome-second sequences of the inverted repeats. The distributions of inverted repeats on protein genes in the different genomes are assessed by the Wasserstein distance, known as the earth mover's distance. The Wasserstein distance corresponds to the minimum amount of work required to transform one distribution into the other. The $p$-th Wasserstein distance between two probability distributions $\mu$ and $\nu$ is defined as follows \citep{vallender1974calculation}: \[ W_p (\mu ,\nu ) = \left( \inf_{\pi \in \Gamma (\mu ,\nu )} \int_{\mathbb{R} \times \mathbb{R}} \left| x - y \right|^p \, d\pi (x,y) \right)^{1/p}, \] where $\Gamma (\mu ,\nu )$ denotes the set of probability distributions on $\mathbb{R} \times \mathbb{R}$ with marginals $\mu$ and $\nu$.
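For discrete distributions supported on an evenly spaced grid, such as repeat counts indexed by gene position, $W_1$ reduces to the area between the two cumulative distribution functions. A minimal Python sketch follows; the count vectors in the example are invented for illustration and are not the actual per-gene repeat counts of the paper:

```python
def wasserstein_1d(weights_a, weights_b):
    """W1 distance between two discrete distributions supported on the
    same unit-spaced grid, via W1 = sum_i |F_a(i) - F_b(i)|, where F is
    the normalized cumulative distribution function."""
    total_a, total_b = float(sum(weights_a)), float(sum(weights_b))
    cdf_a = cdf_b = 0.0
    dist = 0.0
    # The last grid point is skipped: both CDFs equal 1 there.
    for wa, wb in zip(weights_a[:-1], weights_b[:-1]):
        cdf_a += wa / total_a
        cdf_b += wb / total_b
        dist += abs(cdf_a - cdf_b)
    return dist

# Hypothetical repeat counts over five genes for two genomes:
d = wasserstein_1d([3, 7, 2, 5, 1], [4, 5, 3, 4, 2])
```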
\subsection{Genome data} The following complete genomes of SARS-CoVs and SARS-related coronaviruses (SARSr-CoVs) were downloaded from NCBI GenBank: SARS-CoV-2 (GenBank: NC\_045512.2) \citep{wu2020new}, SARS-COV/BJ01 (GenBank: AY278488), SARSr-CoV/RaTG13 (GenBank: MN996532) \citep{zhou2020pneumonia}, SARSr-CoV/RmYN02 (GISAID: EPI\_ISL\_412977) \citep{zhou2020novel,shu2017gisaid}, and MERS-CoV (GenBank: NC\_019843) \citep{zaki2012isolation}. \section{Results} \subsection{Inverted repeats in SARS-CoV-2 genome} Long inverted repeats are deemed to greatly influence the stability of the genomes of various organisms. The longest inverted repeat identified in the SARS-CoV-2 genome is a 15 bp sequence; the palindrome-first sequence 5'-ACTTACCTTTTAAGT-3' is at 8474-8489 (nsp3 gene), and the palindrome-second sequence 5'-ACTTAAAAGGTAAGT-3' is at 13295-13310 (nsp10 gene). The repeats of 11-15 bp are predominantly located in the gene of the Spike (S) protein (Fig.2(a) and (b)). The other three protein genes (nsp3, RdRp, and N protein) are also enriched with long inverted repeats. Long inverted repeats often contribute to the stability of a genome because of the stable stems formed by the long inverted repeats. The results also suggest that recombinations took place in the gene of the Spike protein during evolution. Together, the four protein genes (S, nsp3, RdRp, and N protein) with abundant inverted repeats are evolving dramatically and are critical for virus survival; therefore, they can be pharmaceutical targets \citep{gao2020machine}. \begin{figure}[tbp] \centering \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig1_NC_045512.png}}\quad \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig2_NC_045512.png}}\quad \caption{Distributions of inverted repeats consisting of palindrome-first and palindrome-second sequences on SARS-CoV-2 genome (NC\_045512). (a) Inverted repeats of 11-15 bp.
(b) Repeat numbers of inverted repeats of 12-15 bp in the protein genes of the genome. In (b), the repeat numbers are counted by both palindrome-first and palindrome-second sequences.} \end{figure} The relation of virus genomes may provide insights on the zoonotic origin and evolution of the viruses. To examine the close relevance of human and bat CoVs, we evaluate and compare the distributions of inverted repeats of 11-15 bp in five CoV genomes: SARS-CoV-2 (Fig.2(a)), SARS-CoV (Fig.3(a)), MERS-CoV (Fig.4(a)), SARSr-CoV/RaTG13 (Fig.5(a)), and SARSr-CoV/RmYN02 (Fig.6(a)). The repeat numbers of the inverted repeats of 11-15 bp on each protein gene in the genomes are shown in Fig.2(b), Fig.3(b), Fig.4(b), Fig.5(b), and Fig.6(b). The repeat numbers are counted by both the palindrome-first and palindrome-second sequences of the inverted repeats. Taking into account the inverted repeats over the wider range of 8-15 bp, we computed the pairwise Wasserstein distances of the repeat numbers of protein genes in three closely related SARSr-CoVs: the distance between SARS-CoV-2 and SARSr-CoV/RaTG13 is 6.8571, the distance between SARS-CoV-2 and SARSr-CoV/RmYN02 is 5.7143, and the distance between SARSr-CoV/RaTG13 and SARSr-CoV/RmYN02 is 6.3571. Therefore, we conclude that the SARS-CoV-2 strain is more closely related to SARSr-CoV/RaTG13 (2013) than to SARSr-CoV/RmYN02 (2019). Both SARS-CoV-2 and SARSr-CoV/RmYN02 may have evolved from SARSr-CoV/RaTG13. We also observe that the Spike protein gene in SARSr-CoV/RmYN02 (Fig.6(b)) has more long inverted repeats than its counterparts in SARS-CoV-2 (Fig.2(b)) and SARSr-CoV/RaTG13 (Fig.5(b)). Unsurprisingly, the Spike protein in SARSr-CoV/RmYN02 contains natural insertions at the S1/S2 cleavage site. This cleavage site may originate from recombination events of the Spike genes mediated by inverted repeats. 
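Since the protein genes are ordered along the genome, the $p=1$ Wasserstein distance between two per-gene repeat-count histograms reduces to the accumulated absolute difference of their normalized cumulative sums. The following is a minimal sketch of that computation, assuming unit-spaced bins; the per-gene counts below are made up for illustration and are not the values behind the distances quoted above:

```python
def wasserstein_1d(u_counts, v_counts):
    """First (p = 1) Wasserstein distance between two histograms given as
    counts over the same ordered, unit-spaced bins (here: repeat numbers
    per protein gene, ordered along the genome). Equals the accumulated
    absolute difference of the normalized cumulative distributions."""
    u_total, v_total = float(sum(u_counts)), float(sum(v_counts))
    cum, dist = 0.0, 0.0
    for u, v in zip(u_counts, v_counts):
        cum += u / u_total - v / v_total
        dist += abs(cum)
    return dist

# hypothetical per-gene counts for two genomes (illustration only)
genome_a = [3, 1, 0, 5]
genome_b = [1, 2, 2, 4]
d = wasserstein_1d(genome_a, genome_b)
```

For histograms on arbitrary (non-unit-spaced) supports, `scipy.stats.wasserstein_distance` computes the same quantity.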
\begin{figure}[tbp] \centering \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig1_AY278488.png}}\quad \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig2_AY278488.png}}\quad \caption{Distributions of inverted repeats consisting of palindrome-first and palindrome-second sequences on the SARS-CoV genome (AY278488). (a) Inverted repeats of 11-15 bp. (b) Repeat numbers of inverted repeats of 12-15 bp in the protein genes of the genome.} \end{figure} \begin{figure}[tbp] \centering \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig1_NC_019843.png}}\quad \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig2_NC_019843.png}}\quad \caption{Distributions of inverted repeats consisting of palindrome-first and palindrome-second sequences on the MERS-CoV genome (NC\_019843). (a) Inverted repeats of 11-15 bp. (b) Repeat numbers of inverted repeats of 12-15 bp in the protein genes of the genome.} \end{figure} \begin{figure}[tbp] \centering \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig1_MN996532.png}}\quad \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig2_MN996532.png}}\quad \caption{Distributions of inverted repeats consisting of palindrome-first and palindrome-second sequences on the SARSr-CoV/RaTG13 genome (MN996532). (a) Inverted repeats of 11-15 bp. (b) Repeat numbers of inverted repeats of 12-15 bp in the protein genes of the genome.} \end{figure} \begin{figure}[tbp] \centering \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig1_RmYN02.png}}\quad \subfloat[]{\includegraphics[width=3.75in]{IRepeats_SARS-CoV-2_11212020_fig2_RmYN02.png}}\quad \caption{Distributions of inverted repeats consisting of palindrome-first and palindrome-second sequences on the SARSr-CoV/RmYN02 genome (EPI\_ISL\_412977). (a) Inverted repeats of 11-15 bp. 
(b) Repeat numbers of inverted repeats of 12-15 bp in the protein genes of the genome.} \end{figure} The total frequencies of inverted repeats of different lengths in the human and bat CoVs also suggest that SARS-CoV-2 is closely related to SARSr-CoV/RaTG13 (Fig.7). Notably, Fig.7 shows that the frequencies of inverted repeats of all lengths increase from SARS-CoV (2003) to SARS-CoV-2 (2019). From these repeat analyses, we may infer that during evolution recombinations occurred and produced an accumulation of inverted repeats under natural selection. We see that recombination can be one of the driving forces for fast evolution. \begin{figure}[tbp] \centering {\includegraphics[width=4.0in]{IRepeats_SARS-CoV-2_11212020_fig3.png}}\quad% \caption{Frequencies of inverted repeats of different lengths in the coronavirus genomes: SARS-CoV-2, SARS-CoV, MERS-CoV, SARSr-CoV/RaTG13, and SARSr-CoV/RmYN02. The repeat numbers are counted by palindrome-first sequences only.} \label{fig:sub1} \end{figure} \section{Discussion} The COVID-19 pandemic has caused substantial health emergencies and economic stress in the world. Vaccine development is critical to mitigating the pandemic. The finding of this study that three proteins, nsp3, RdRp, and the Spike protein, are rich in inverted repeats suggests that these proteins are of functional significance for virus survival and should be targets of drug design and vaccine development. If we relax the perfect matching requirement in the inverted repeats, we expect that much longer inverted repeats can be identified, and the number of inverted repeats in the virus genome will increase significantly. The imperfect inverted repeats are the natural forms of the repeats that maintain the genome structures. Because the distribution and types of perfect inverted repeats in a genome are unique and extracting the perfect inverted repeats is parameter-free, the perfect inverted repeats can be considered a genomic signature. 
The signatures derived from perfect inverted repeats are consistent and can therefore be used to distinguish closely related viruses and to differentiate virus mutation variants. The quantitative comparison of the signatures can also support phylogenetic taxonomy once appropriate numerical metrics for the signatures are established. Therefore, the perfect inverted repeats can serve as an effective barcode to delimit species and genotypes. \section*{Acknowledgments} We sincerely appreciate the researchers worldwide who sequenced and shared the complete genome data of SARS-CoV-2 and other coronaviruses via GISAID (https://www.gisaid.org/). This research is partially supported by the National Natural Science Foundation of China (NSFC) grant (91746119, to S.S.-T. Yau), Tsinghua University Spring Breeze Fund (2020Z99CFY044, to S.S.-T. Yau), Tsinghua University start-up fund, and Tsinghua University Education Foundation fund (042202008, to S.S.-T. Yau). \section*{Competing interests} We declare we have no competing interests. \section*{Abbreviations} \begin{itemize} \item COVID-19: coronavirus disease 2019 \item SARS: severe acute respiratory syndrome \item SARS-CoV-2: severe acute respiratory syndrome coronavirus 2 \item MERS-CoV: Middle East Respiratory Syndrome coronavirus \item CRISPR: clustered regularly interspaced short palindromic repeats \item ACE2: angiotensin-converting enzyme 2 \item NCBI: National Center for Biotechnology Information (USA) \end{itemize} \clearpage \bibliographystyle{elsarticle-harv}
\section{Introduction} Since Witten's discovery [1] that singular solutions to the string vacuum equations of motion [2] can be represented by exact two dimensional conformal field theories known as gauged Wess-Zumino-Novikov-Witten models (GWZM) [3], a great deal of work has been devoted to the subject in recent years, with special attention paid to solutions of relevance in black hole physics and cosmology [4]. In $1+1$ dimensions the ``famous'' $SU(1,1)/U(1)$ coset representing a Schwarzschild-like black hole has been exhaustively analyzed. Generalizations of this model as $SO(d-1,2)/SO(d-1,1)$ cosets were considered in [5,6], where a guess leading to the exact (to all orders in $1/k$) backgrounds was given. Of course we are ultimately interested in realistic four dimensional models. Some of them, obtained essentially by taking tensor products of $SU(1,1)$'s and $U(1)$'s, were considered in [7,8]. A possible classification of cosets leading to effective target spaces with one time direction was given in [9]. In this paper we consider a model based on gauging the maximal compact subgroup $U(2)$ of the non compact group $SU(2,1)$. The interest is at least two-fold. First, the backgrounds by themselves represent a highly non trivial solution to the string equations, or matter coupled to gravity system; from general arguments the one loop solution should have euclidean signature and thus represent some kind of gravitational instanton, but this view could change on considering the exact solution. Second, we think it is an instructive algebraic exercise to explicitly work out non abelian groups other than those related to the $A_1$ Lie algebra. The techniques, in particular the parametrizations, used here for $SU(2,1)$ are in principle extendable to general $U(p,q)$. The paper is organized as follows. In Section 2 we set up general definitions and conventions, while Section 3 is devoted to the $SU(2,1)$ parametrizations. 
In Section 4 we describe the computation of the one loop effective backgrounds, and in Section 5 the curvature and the equations they satisfy. In Section 6 we compute the (presumably) exact backgrounds and conjecture possible interpretations. In Section 7 we quote the expressions of the one loop dual solution. Section 8 is devoted to the conclusions. An appendix divided into three sections is added, where we collect some useful formulae. \newpage \section{Conventions} A bosonic string that sweeps out an euclidean genus $g$ world-sheet $\gS$ embedded in a gravity-axion-dilaton $d$ dimensional background on target space ${\cal M} $ is described by the action \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger S[X;G,B,D] = \frac{k}{4\pi} \int_{\Sigma} \, \left(\, (G_{ab} (X) *\, +\, i B_{ab} (X) ) dX^a \wedge dX^b - \frac{1}{2 k} D(X)*R^{(2)} \,\right) \ee where ``$*$'' stands for the Hodge mapping with respect to some metric on $\gS$, $R^{(2)}$ being its Ricci scalar that satisfies $\;\; \int_{\gS}*R^{(2)} = 8 \pi ( 1 - g ) \;\;$. The Weyl invariance condition of this two dimensional sigma model imposes that, at one loop \footnote{ Strictly speaking, at first order in $\frac{1}{k} \equiv \alpha} \def\gb{\beta '$, see e.g. [10]. } the backgrounds satisfy the set of equations [2] \begin{eqnarray} 0 &=& R_{ab} - \, \nabla_a \nabla_b D - \frac{1}{4} H_{acd} H_{b}{}^{cd} \cr 0 &=& -2\; \nabla_a \nabla^a D - \nabla_a D \nabla^a D + R - \frac{1}{12} H_{abc} H^{abc} + \gL\cr 0 &=& \nabla^{c}( e^{D} H_{abc}) \end{eqnarray} where $\; H\equiv dB\;$ and $\;\gL=\frac{26-d}{3} k\,$ (our definitions for curvature, etc., are those of ref. [11]). These equations follow from the $d$-dimensional action on ${\cal M}$ \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger I[G,B,D] = \int_{\cal M} \, e^{D}\; (*R + \,\nabla D \wedge *\nabla D - \frac{1}{12} H \wedge *H + *\gL) \ee A GWZM is defined as follows. 
Let $G$ be a Lie group, $H$ a subgroup of $G$, and ${\cal G}, {\cal H}$ their respective Lie algebras. If $g: \Sigma \mapsto G$, $\omega} \def\gW{\Omega} \def\gK{\chi (g) \equiv g^{-1}dg = - {\overline \omega} \def\gW{\Omega} \def\gK{\chi}(g^{-1}) \in {\cal G}$ stand for the Maurer-Cartan forms, and ${\cal A} \in {\cal H}$ is a gauge connection, then the defining action of a GWZM is [3] \begin{eqnarray} S[g,{\cal A}] &\equiv& \frac{k}{4\pi} ( I_{WZ}[g] + I_{G}[g,{\cal A}] ) \equiv \frac{k}{4\pi} ( I_{0}[g] + i \Gamma [g] + I_{G}[g,{\cal A}] ) \cr I_{WZ} [g] &=& \frac{1}{2} \int_{\Sigma} tr( \omega} \def\gW{\Omega} \def\gK{\chi (g) \wedge *\omega} \def\gW{\Omega} \def\gK{\chi (g)) + \frac{i}{3} \int_{ {\cal B}, \partial {\cal B}= \Sigma } tr( \omega} \def\gW{\Omega} \def\gK{\chi (g) \wedge \omega} \def\gW{\Omega} \def\gK{\chi (g) \wedge \omega} \def\gW{\Omega} \def\gK{\chi (g) ) \cr I_{G} [g,{\cal A}] &=& \int_\Sigma tr \big( - {\cal A} \wedge (* + i1) \,\omega} \def\gW{\Omega} \def\gK{\chi(g) \; +\; {\cal A} \wedge (* - i1) \,{\overline \omega} \def\gW{\Omega} \def\gK{\chi} (g) \cr &-& g \, {\cal A}\; g^{-1} \wedge (* +i1) {\cal A} \; +\; {\cal A} \wedge * {\cal A} \big) \end{eqnarray} where ``$tr$'' is normalized in such a way that the length of a long root of $\cal{G}$ is $2$ [12]. \ignore{ \footnote{ For $A_n$ algebras the trace in the fundamental representation works.} } This action is invariant under the gauge transformations \footnote{A slightly modified version of the action (2.4) and gauge transformations (2.5), the so called ``axial'' gauging, is possible if $\cal{H}$ contains abelian subalgebras; the effective target is different but both theories are equivalent (dual), in agreement with current algebra arguments.} \newpage \begin{eqnarray} g^{h}&=&h \, g \, h^{-1} \cr {\cal A}^{h}&=&h \, {\cal A} \,h^{-1} - {\overline\omega} \def\gW{\Omega} \def\gK{\chi}(h) \end{eqnarray} for an arbitrary map $\;h : \gS\mapsto H \,$. 
\noindent If we pick a basis $\,\{T_a,\, a=1,\ldots, {\rm dim} H \}$ in $\cal H$, then by integrating out the gauge fields in $I_G$ we obtain the one loop order effective action \begin{eqnarray} S_{eff}[g] &=& \frac{k}{4\pi} W[g] - \frac{1}{8\pi} \int_\Sigma D(g) *R^{(2)} \cr W[g] &=& I_{WZ}[g] + {\tilde I}[g] \cr {\tilde I}[g] &=& -2 \int_\Sigma\, \frac{1}{l} ({\gl^c})^{ab} \, a_a\wedge(*-i1) b_b \end{eqnarray} where $l=l(g)$ and $\gl^c =\gl^c(g)$ are the determinant and the cofactor matrix of \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger \gl_{ab}(g) = \frac{1}{2}\; tr( T_a T_b - g T_a g^{-1} T_b) \ee and \begin{eqnarray} i\, 2\, a_a = tr( T_a \,\omega} \def\gW{\Omega} \def\gK{\chi(g)) \cr i\, 2\, b_a = tr( T_a \,{\overline \omega} \def\gW{\Omega} \def\gK{\chi(g)}) \end{eqnarray} Clearly the gauge invariance condition $\;S_{eff}[g^h]=S_{eff}[g] \;$ makes the effective target dependent on $\; d= {\rm dim}G-{\rm dim}H\;$ gauge invariant field variables constructed from $g$. The $d$ dimensional metric and torsion are read from $W[g]$. The dilaton field \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger D(g) = \ln | l(g)| \ee comes from the determinant in the Gaussian integration leading to (2.6) after convenient regularization [9]. \footnote{Because $\gl$ transforms as a 2-tensor in the adjoint representation of $H$, $D(g)$ will be gauge invariant for subgroups with semisimple Lie algebra; for non semisimple subalgebras action (2.6) does not exist, see below.} It is undoubtedly of major importance to get $d=4$ target spaces since they can represent realistic backgrounds for string theory, with implications in cosmology and black hole physics in particular. Models with one time direction have been classified in [9]. Most of them consist of products of $SU(1,1)$'s and $U(1)$'s (see however [5], where the only ``less'' trivial SO(3,1)/SO(2,1) coset is briefly considered). 
Unfortunately one of the most interesting targets, the ``stringy'' Schwarzschild solution (and more generally, geometries with a high degree of isometry), has evaded us. A naive explanation of this fact could be the following one: since at one loop $R_{ab} = 0$ and $D=const.$ for this solution, we would have to have (up to $g$-independent normalizations) $l(g)=1$. But from (2.7) we see that the $\gl$ matrix is null as we approach $g=1$, and certainly it cannot have a non-vanishing determinant. More generally speaking, if $G$ is semisimple we can always choose an orthogonal set of generators in $\cal G$ of non-zero norm; if we write $g=T U$ with $U\in H$ and $T\in G/H$ then from (2.7) we get \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger \gl (g) = (1 - S(T) R(U))^t h \ee where $R(U)$ is the adjoint representation matrix of $U$, $S(T)$ contains the adjoint action of the coset element $T$ on the $\cal H$ generators and $h$ is the Killing-Cartan metric on $\cal H$. For elements in $H$ ($S(1)=1$), $ \gl$ becomes singular on some submanifold (the target space nature of it to be elucidated) and $l(g) \neq 1$. If $G$ is not semisimple, then the Killing-Cartan form has null eigenvalues and $\lambda $ does not exist in general. In any case it is hard to see how a singular target space (and, to one loop at least, it should be!) could arise with a constant dilaton in the present context of GWZM. Maybe the non abelian duality transformations recently introduced [13,14] could indirectly lead to an exact conformal field theory representation of the stringy (and other) Schwarzschild black holes. \footnote{We point out that the S-duality [15] recently introduced in the context of superstring theory does not hold here; in fact our solution has a non zero cosmological constant.} \newpage \section{The SU(2,1)/U(2) model} Coming back to our problem, a certainly non trivial four dimensional target is obtained by considering $G=SU(2,1)$ and $H=U(2)$. 
{}From general arguments it will have (at one loop!) euclidean signature [9], and so it could represent some kind of ``gravitational instanton'' in the general sense of reference [16]. \footnote{By means of a Wick rotation we can get $(++--)$ signature; it corresponds to gauging the $U(1,1)$ subgroup.} So let us concentrate on this model. In view of the gauge invariance of the theory, it will be of the utmost importance to fix a convenient parametrization. We will denote vectors with bold-type letters; matrices will be understood from the context. An arbitrary element $g \in SU(2,1)$ admits the coset decomposition with respect to its maximal compact subgroup $U(2)$, \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger g = T({\bf c}) H(U,u^* ) \ee where $T,H$ are given in eqns. (A.4). Clearly the SU(2,1) topology is $\Re ^4 \times S^3 \times S^1$. Now, outside the origin of $\CC ^2$ the complex 2-vector ${\bf c}$ can be uniquely written as ${\bf c}=s\, {\bf n}$, with $ s\equiv ({\bf c}^\dag {\bf c})^\frac{1}{2} $ being the radial coordinate of $\Re ^4$ and ${\bf n}^{\dag} {\bf n} = 1$. The unit vector ${\bf n}$ is in one-to-one correspondence with an $SU(2)$ matrix \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger {\bf n} \equiv \left( \matrix{ n_1 \cr n_2 \cr } \right) \Leftrightarrow N\equiv \left( \matrix{ n_1^* &n_2^* \cr -n_2 &n_1 \cr } \right) \ee Since an arbitrary element of $U(2)$ can be written as \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger U=\left( \matrix{ u &0 \cr 0 &1} \right) P \ee with $P \in SU(2)\,$ and $u\equiv e^{i\gvfi}=detU \,$, we can parametrize the $ U \in U(2) $ in (3.1) as \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger U = N^{\dag} \left( \matrix{ u &0 \cr 0 &1 \cr}\right) P \, N \ee \noindent and then we rewrite $g$ in the form \footnote{ From now on we will use the variable $t \in [0,\infty )$ and the symbols $\; s\equiv \sinh t \; ,\; c\equiv \cosh t $. 
} \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger g = H(N^\dag ,1)\, e^{t \gl_4}\,e^{i \frac{\gvfi}{2}(\gl_3 + \sqrt{3}\gl_8)} H(P,1)\, H(N,1) \ee where the relations $ N\,{\bf n} = \left( \matrix{ 1\cr 0\cr} \right)$ and (A.6) were used. Finally, if according to (C.3) we introduce \newpage \begin{eqnarray} X &\equiv& e^{i \frac{\gvfi}{2} \gs_3 } \, P = e^{i \frac{\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa}{2} \gs_3} \; {\overline X} \; e^{-i \frac{\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa}{2}\gs_3} \cr V &\equiv& e^{i \frac{\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa}{2} (1 - \gs_3 ) } N \;\; \in U(2) \end{eqnarray} we obtain \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger g = H(V^{\dag} ,1) \, e^{t \gl_4}\, e^{i \frac{\sqrt{3}}{2} \gvfi\gl_8 } \, H({\overline X},1) H(V,1) \ee It is clear from this parametrization that $V$ is a gauge variable and decouples from the model. The remaining four gauge invariant variables (for example, $(t,\gvfi, x_0, x_3)$) locally parametrize the effective target manifold whose topology might be naively identified with $\Re ^2\times {\cal D}\,$ where $\cal D$ is a disk. This can be seen from the fact that according to (3.1,4) and (A.6), the (complex) variables \begin{eqnarray} {\bf c}^\dag \, N^\dag \, U\, N \, {\bf c} &=& s^2 \, (x_0 + i x_3) \, e^{i\frac{\gvfi}{2}} = s^2 \, (p_0 + i p_3) u \cr tr U &=& 2\, x_0 \, e^{i\frac{\gvfi}{2}} = p_0 (1 + u) - i \, (1-u) \,p_3 \end{eqnarray} are the gauge invariant ones, and belong to $\Re ^2 $ and $\cal D$ respectively. \footnote{The complex variable $trU$ (that encodes $det\, U = u $) is the gauge invariant variable describing the coset $U(2)/Adj\, U(2)\equiv {\cal D}$. We thank M. Blau for a discussion on this point.} However, as follows from (2.10), the origin of $\Re ^2$ as well as the boundary of the disk will become singular. 
We remark that $X$ belongs to $SU(2)$ only ``locally'', but not globally as $P$ does; it arises from parametrizing a $U(2)$ matrix as an $SU(2) \times U(1)$ element in (3.3), $ U = e^{i \frac{\gvfi}{2} } X $. It is useful for carrying out computations and we will use it in what follows, together with \begin{eqnarray} V &=& e^{i \frac{\phi} \def\gr{\rho} \def\gvfi{\varphi} \def\gh{\eta}{2} } \; (v_0 1 + i \, {\bf v} \cdot {\bf \gs} ) \cr 1 &=& v_0{}^2 + {\bf v}\cdot {\bf v} \end{eqnarray} in Section 6. \newpage \section{Computation of the one loop metric} In this section we will describe in some detail the calculations of the one loop backgrounds. The parametrization (3.7) (with $V=1$) will be assumed. First of all, we have to choose a convenient basis in $\cal H$. We take the following generators ($\; (\check{e}_i )_j = \delta} \def\gD{\Delta} \def\ge{\epsilon_{ij} \;$) \begin{eqnarray} T_i &=& \gl_i - \frac{1}{\sqrt{3}} \,\gl_8 \;\delta} \def\gD{\Delta} \def\ge{\epsilon_{i,3}\; ,\;\;\; i=1,2,3\cr T_4 &=& -\frac{2}{\sqrt{3}} \,\gl_8 \end{eqnarray} In the notation of Appendix B, we compute from (2.7) the matrix $\gl$ to be \begin{eqnarray} M &=& 1 - R^t A \, , \;\;\;\;\; A = c\, 1 - (c-1) Q \cr {\bf m_1} &=& s^2 \, (R^t - 1) \, \check{e}_3 \cr {\bf m_2} &=& {\bf 0} \cr m_0 &=& - 2 \, s^2 \end{eqnarray} where $R\equiv R(X)$ is given in (C.7) and $\; Q=\check{e}_3 \check{e}_3{}^t $. Now from (B.2) we get \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger \gl^c = \left( \matrix{ m_0 M^c &0\cr -{\bf m_1}{}^t M^c &m\cr} \right) \ee where (c.f. (B.4) ) \begin{eqnarray} M^c&=&(1 + (c-1) R_{33} - c\, trR ) \, 1 + c R + c R^t + c(c-1)\, R^t Q - (c-1) Q R \cr m&=&- s^2 (1 - R_{33} ) \end{eqnarray} The next step is to compute the vectors in (2.8). 
They are given by \begin{eqnarray} {\bf a}&=&{\bf U} - \frac{1}{2} d\varphi \, \check{e}_3 \cr a_4&=& - d\varphi \cr {\bf b}&=&A \; {\overline {\bf U}} - \frac{1}{2} d\varphi \, \check{e}_3 \cr b_4&=& -( 1 + \frac{3}{2} s^2 )\, d\varphi - s^2\, {\overline U}_3 \end{eqnarray} On the other hand, the Wess-Zumino action (2.4) reduces to \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger I_{WZ}[g] = \int_\gS \, (dt \wedge *dt - \frac{3}{4} d\varphi\wedge * d \varphi ) + I_{WZ}[X] \ee With (4.3,6) and after some calculations we get (2.6) in the form \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger W[g] = \int_\gS \big( dt\wedge * dt + \frac{1}{s^2 (1 -R_{33})}\, ( L_{\varphi\varphi} d\varphi\wedge *d\varphi + L_{XX} - L_{X\varphi} ) \big) + i \Gamma [X] \ee where ( $S\equiv (1- c \, trR)\, 1 + c \, R^t + c^2 R $ ) \begin{eqnarray} L_{\gvfi\gvfi}&=&\frac{1}{2} (R M^c)_{33} + \frac{1}{4} (1-R_{33}) (1 + 3 c^2 ) \cr L_{XX}&=&- s^2 (1 - R_{33}) \, {\bf U}\cdot \wedge * {\bf U} + 2 \, {\bf U}\cdot \wedge (* - i1) M^c \, A \,{\overline {\bf U}} \cr L_{X\gvfi}&=& \check{e}_3 \cdot S \, {\bf U} \wedge (* - i1) d\gvfi + {\overline {\bf U}} \cdot S \, \check{e}_3 \wedge (* + i1) d\gvfi \end{eqnarray} and after repeatedly using formulae collected in Appendix C we get \begin{eqnarray} L_{\gvfi\gvfi}&=&-1+ R_{33} + \frac{1}{4} (5-3R_{33}) (c-1)^2 + \frac{c}{2}\, (5-2R_{33}-trR)\cr L_{XX} &=& 2\, (c-1)^2 dx_3\wedge * dx_3 + 2\, (c+1)^2 dx_0 \wedge * dx_0 \cr &+& i 2 s^2 (1 - R_{33}) (x_0 dx_3 - x_3dx_0) \wedge d\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa \cr L_{X\gvfi}&=&2\, d\gvfi \wedge * ( (c-1)^2 x_0\, dx_3 - (c+1)^2 x_3\, dx_0 ) + i s^2 (1-R_{33}) \, d\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa \wedge d\gvfi \cr \Gamma[X]&=& -2\,\int_\gS \, (x_0 \, dx_3 - x_3 \, dx_0)\wedge d\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa \end{eqnarray} {}From these results we learn two important facts: \begin{itemize} \item the last term in $L_{XX}$ 
cancels the $\Gamma$ contribution; \item the last term in $L_{X\gvfi}$ drops out because it gives a total derivative contribution to $W$; \end{itemize} which leads us to conclude that: \begin{enumerate} \item the $\theta} \def\gl{\lambda} \def\gL{\Lambda} \def\gk{\kappa$ variable in $X$ decouples, as it should. As we saw in Section 3 this is just a non trivial check of gauge invariance; \item the three terms that cancel are those that could give rise to the axionic field $B$, in other words the target obtained is \emph{torsionless}. \end{enumerate} This last fact is not expected ``a priori''. To our knowledge, a classification of torsionless groups in GWZM is not available. {}From the model considered here we can argue that the key fact for this to happen lies in the possibility of going to a gauge in which the Wess-Zumino term is zero \footnote{In WZM we certainly have zero torsion if $\Gamma = 0$.} (which is made explicit in 1.), but a more general argument is lacking. If the backgrounds are defined as in (2.1), we read from (4.7,9) the non-zero metric components in the $(t,\gvfi , x_0, x_3)$ variables ( $ \gr \equiv +\, \sqrt{1 - x_0{}^2 - x_3{}^2}$ ) \begin{eqnarray} G_{tt}&=&1\cr G_{\gvfi\gvfi}&=&\frac{c^2}{s^2} + \frac{c-1}{4(c+1)} \frac{x_0{}^2}{\gr ^2 } + \frac{c+1}{4(c-1)} \frac{x_3{}^2}{\gr ^2} \cr G_{00}&=&\frac{c+1}{c-1} \frac{1}{\gr ^2} \cr G_{33}&=&\frac{c-1}{c+1} \frac{1}{\gr ^2} \cr G_{0\gvfi}&=& \frac{c+1}{2(c-1)} \frac{x_3}{\gr ^2 } \cr G_{3\gvfi}&=& -\frac{c-1}{2(c+1)} \frac{x_0}{\gr ^2 } \end{eqnarray} and from (2.9), (B.3) and (4.2,4) the dilaton field \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger D = \ln( s^4 \gr ^2 ) + D_0 \ee We notice here the existence of a manifest isometry, a translation in the $\gvfi$ variable with Killing vector $K_\gvfi = \partial_\gvfi\;$. 
If we go back to $P$ variables (3.6) by means of the rotation ($0\leq R \leq \pi /2 \,$) \ignore{ \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger \left( \matrix{x_3 \cr x_0 \cr} \right) = e^{i \frac{\gvfi}{2} \gs_2 } \left( \matrix{p_3 \cr p_0 \cr} \right) \ee } \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger x_0 + i\, x_3 = (p_0 + i\, p_3 ) \, e^{i\frac{\gvfi}{2}} = \sin R \, e^{i\psi} = \sin R \, e^{i (\psi_P + \frac{\gvfi}{2} )} \ee the metric takes the form \footnote{ $2\, dx\, dy \equiv dx\otimes dy + dy\otimes dx $.} \begin{eqnarray} G &=& dt^2 \, +\, \frac{c^2}{s^2} \,\, d\gvfi ^2\, +\, \frac{1}{s^2 \, \gr ^2} \, ( \; | c \; e^{i\gvfi} + 1 |^2 \, dp_0{}^2 + | c \; e^{i\gvfi} - 1 |^2 \, dp_3{}^2 \cr &-& 4\, c \sin\gvfi \; dp_0 \, dp_3 \; ) \cr &=& dt^2 \, +\, \frac{c^2}{s^2} \,\, d\gvfi ^2\, +\, \frac{1}{s^2} \;(\; |c\, e^{i2\psi}+1| ^2 \, dR^2 + |c\, e^{i2\psi}-1| ^2 \, \tan ^2 R \, d \psi_P{}^2 \cr &-& \, 4\, c\, \tan R \, \sin 2\psi \; dR\, d\psi_P \; ) \end{eqnarray} \ignore{ where we have introduced $\; p_0 + i p_3\equiv \sin R e^{i\psi_P}\; $ and $\; x_0 +ix_3\equiv \sin R e^{i\psi} \; ,0\leq R\leq \frac{\pi}{2}$. } In these coordinates the metric looks simpler (in particular, it has only one non-diagonal term), but the isometry is not manifest. \newpage \section{The curvature and the equations of motion.} It is convenient in what follows to introduce an orthonormal basis $\, \{\omega} \def\gW{\Omega} \def\gK{\chi^a\} \,$ in the cotangent space of $\cal M$, $G=\delta} \def\gD{\Delta} \def\ge{\epsilon_{ab} \; \omega} \def\gW{\Omega} \def\gK{\chi^a \otimes \omega} \def\gW{\Omega} \def\gK{\chi^b $. 
We choose \begin{eqnarray} \omega} \def\gW{\Omega} \def\gK{\chi^1 &=& \frac{c}{s} \, d\gvfi \cr \omega} \def\gW{\Omega} \def\gK{\chi^2 &=& \frac{c+1}{s\, \gr } \, (dx_0 + \frac{x_3}{2} d\gvfi) \cr \omega} \def\gW{\Omega} \def\gK{\chi^3 &=& \frac{c-1}{s\, \gr } \, (dx_3 - \frac{x_0}{2} d\gvfi) \cr \omega} \def\gW{\Omega} \def\gK{\chi^4 &=& dt \end{eqnarray} and its dual in the tangent space $(\,\omega} \def\gW{\Omega} \def\gK{\chi_a (\omega} \def\gW{\Omega} \def\gK{\chi^b ) = \delta} \def\gD{\Delta} \def\ge{\epsilon_a{}^b \, )$ \begin{eqnarray} \omega} \def\gW{\Omega} \def\gK{\chi_1 &=& \frac{s}{c}\, (\partial_\gvfi \, + \,\frac{x_0}{2} \partial_3 \, - \,\frac{x_3}{2} \partial_0 )\cr \omega} \def\gW{\Omega} \def\gK{\chi_2 &=& \frac{c-1}{s} \gr \; \partial_0 \cr \omega} \def\gW{\Omega} \def\gK{\chi_3 &=& \frac{c+1}{s} \gr \; \partial_3 \cr \omega} \def\gW{\Omega} \def\gK{\chi_4 &=& \partial_t \end{eqnarray} {}From the first Cartan's structure equation (torsionless condition) \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger T^a \equiv d\omega} \def\gW{\Omega} \def\gK{\chi^a +\omega} \def\gW{\Omega} \def\gK{\chi^a{}_b \wedge \omega} \def\gW{\Omega} \def\gK{\chi^b = 0 \ee \noindent we read the non vanishing connections \footnote{Remember that in an orthonormal basis the metricity condition $\,\omega} \def\gW{\Omega} \def\gK{\chi^a{}_b =-\omega} \def\gW{\Omega} \def\gK{\chi^b{}_a\, $ holds, as well as the general symmetry properties: $ R_{abcd}=R_{cdab} =-R_{bacd} $ [11].} \begin{eqnarray} \omega} \def\gW{\Omega} \def\gK{\chi^1{}_2 &=& \omega} \def\gW{\Omega} \def\gK{\chi^3{}_4 = \frac{1}{s} \,\omega} \def\gW{\Omega} \def\gK{\chi^3 \cr \omega} \def\gW{\Omega} \def\gK{\chi^1{}_3 &=&-\omega} \def\gW{\Omega} \def\gK{\chi^2{}_4 = \frac{1}{s} \,\omega} \def\gW{\Omega} \def\gK{\chi^2 \cr \omega} \def\gW{\Omega} \def\gK{\chi^2{}_3 &=& \frac{c^2 + 1}{2\, s\, c} \, \omega} \def\gW{\Omega} \def\gK{\chi^1 \,+\, \frac{c+1}{s} \frac{x_3}{\gr}\,\omega} \def\gW{\Omega} 
\def\gK{\chi^2 \,-\, \frac{c-1}{s} \frac{x_0}{\gr}\,\omega} \def\gW{\Omega} \def\gK{\chi^3 \cr \omega} \def\gW{\Omega} \def\gK{\chi^1{}_4 &=& -\frac{1}{sc} \, \omega} \def\gW{\Omega} \def\gK{\chi^1 \end{eqnarray} Using now the second Cartan structure equation \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger \gW^a{}_b \equiv d \omega} \def\gW{\Omega} \def\gK{\chi^a{}_b + \omega} \def\gW{\Omega} \def\gK{\chi^a{}_c\wedge \omega} \def\gW{\Omega} \def\gK{\chi^c{}_b = \frac{1}{2} R^a{}_{bcd}\,\omega} \def\gW{\Omega} \def\gK{\chi^c\wedge \omega} \def\gW{\Omega} \def\gK{\chi^d \ee \noindent we read the Riemann curvature tensor \begin{eqnarray} R_{1212} &=& R_{1234} = R_{3434} = \frac{1}{c+1} \cr R_{1324} &=&- R_{1313} = -R_{2424} = \frac{1}{c-1} \cr R_{1223} &=& R_{2334} = \frac{2}{c+1}\frac{x_0}{\gr}\cr R_{1323} &=&- R_{2324} =-\frac{2}{c-1}\frac{x_3}{\gr}\cr R_{2323}&=& \frac{2}{s^2} + R_{22} \cr R_{1423} &=&- R_{1414} = \frac{2}{s^2} \end{eqnarray} and contracting, the Ricci tensor $R_{ab} \equiv R_{cacb} = R_{ba}$ \begin{eqnarray} R_{11} &=& R_{44} = - \frac{4}{s^2} \cr R_{12} &=&-R_{34} = - \frac{2}{c-1} \frac{x_3}{\gr} \cr R_{13} &=& R_{24} = - \frac{2}{c+1} \frac{x_0}{\gr} \cr R_{14} &=& R_{23} = 0 \cr R_{22} &=& R_{33} = -2\; ( 1 + \frac{2}{s^2} + \frac{c-1}{c+1} \frac{x_0{}^2}{\gr ^2} + \frac{c+1}{c-1} \frac{x_3{}^2}{\gr ^2}\; ) \end{eqnarray} Finally, the scalar curvature $R\equiv R_a{}^a$ is \begin{equation}} \def\ee{\end{equation}} \def\dag{\dagger -\frac{1}{4} \, R = 1 + \frac{4}{s^2} + \frac{c-1}{c+1} \frac{x_0{}^2}{\gr ^2} + \frac{c+1}{c-1} \frac{x_3{}^2}{\gr ^2} \ee With these results at hand it is straightforward to verify that the graviton-dilaton system given by equations (4.10,11) satisfies the consistency equations (2.2) with $B=0$ and $\gL=12$. We do not know if the torsion remains null at higher orders, but we speculate that it is indeed the case. 
As we anticipated, $t=0$ and $\gr = 0 $ are true singularities of the geometry, where the parametrization (3.7) breaks down. Here a little digression is in order. The value of $\gL$ suggests that the model is conformally invariant at one loop iff $k=\frac{18}{11}\simeq 1.64$. On the other hand, from current algebra arguments [12] the \emph{exact} central charge of the model is \begin{eqnarray} c_{SU(2,1)/U(2)} &=& c_{su(3)} - c_{su(2)} - c_{u(1)} = 8 \frac{k}{k-3} - 3 \frac{k}{k-2} - 1 \cr &=& 4\, +\, 6\, \frac{3k-5}{(k-2)(k-3)} \end{eqnarray} Then imposing the cancellation against the ghost contribution, $c_{SU(2,1)/U(2)} = 26$, i.e. $11\, k^2 - 64\, k + 81 = 0$, we obtain the values $k_{\pm} = \frac{32 \pm \sqrt{133}}{11}$, that is $k_+ \simeq 3.96\;$ and $\; k_- \simeq 1.86$. The second one is near the value obtained perturbatively at first order. It is believed that by taking into account all loop corrections the value of $\gL$ should lead to $k_+$ or $k_-$; however $k$ does not seem to be big enough to assert that the perturbative theory necessarily corresponds to $k_-$. Moreover, in analogy with the condition that $-k = n $ be a positive integer needed for the quantum consistency of the compact models it is speculated that unitarity would allow only $\; k>3 $, and if true (the subject is far from being well understood by now) $k_+$ should be the right value to be considered. We will take $k \in \Re^+$ for which at least the one loop path integral seems to be well defined [17]; see the next section for more on this. As a last observation, if we consider the ``non critical'' GWZM, i.e., with a dynamical Liouville field, the allowed values of $k$ are rational: $k_{\pm} = 4,\; \frac{13}{7}$. In order to compare with euclidean Einstein gravity, we introduce the metric $G^E\equiv e^{D} G$. 
Then the backgrounds $(G^E , D)$ are classical solutions of the action \begin{equation} S[G^E,D] = \int_{\cal M} \, (*R^E - \frac{1}{2} \,\nabla^E D \wedge *\nabla^E D + *\gL e^{-D} ) \end{equation} which describes a Liouville field coupled to gravity in $d=4$, and may then be interpreted as a ``pseudo-instanton'' of this theory. In fact the solution is singular at $t=0$ and $\rho = 0 \,$ as expected, and the $R_{14}$ and $R_{23}$ components fail to be (anti) self-dual, as the usually known instantons are [18]. What is more, it is not asymptotically flat in the usual sense (at least in the standard range of the coordinates of the model that we assume), and gives an infinite value for the action \begin{equation} I_{inst} = 12 \, \pi^2 \, \sinh^4 T\, e^{D_0} \end{equation} where $T$ is a cut-off in the $t$-integration. In the compact coset $SU(3)/U(2)$ the variable $t$, or better its continuation to imaginary values $\, \tau \equiv i\, t \, $, is naturally bounded to the interval $[0,\, \pi /2 ]\,$, and the action is finite. A possible interpretation of the solution is as follows. For $t\gg 1$ we have \begin{eqnarray} G &\rightarrow& \, dt^2 + d\gvfi ^2 + dR^2 + \tan^2 R \, d\psi_P{}^2 \cr D &\rightarrow& \, 4\, t + 2 \ln \cos R \cr R &\rightarrow& \, - 4 \, \sec ^2 R \end{eqnarray} which describes the product topology of a cylinder (a plane in the compact case, for $\tau$ near $\pi /2$ ) and a ``trumpet''. On the other hand it may be thought of as a euclidean continuation of the non-singular cosmological solution \begin{equation} G_{cs} = -dx_0{}^2 + \tanh^2 x_0 \, dx_1{}^2\, +\, dx_2{}^2 \, +\, dx_3{}^2 \end{equation} arising from the $SL(2,\Re)\times SO(1,1)^2 / SO(1,1)$ model [19].
Then it is tempting to interpret the instanton as a path in ``euclidean time'' $t$ that interpolates between two universes, one in a ``big bang'' phase (singularity at $t=0$) and the other smoothly evolving according to (5.13). We will see in the next section that for finite $k$ very different (and appealing) possibilities arise. As a final remark we note that, the string coupling constant being [4] $$ g_{st} = e^{-D/2} $$ then from (5.11,12) we have $$ I_{inst} = \frac{3\;\pi^2}{4 \; g_{st}{}^{2}} $$ exhibiting the usual non-perturbative behaviour characterizing the ``tunneling amplitude'' $\;\exp (-I_{inst})$ for the process described by the instanton. \newpage \section{The exact backgrounds} \noindent {\bf The computation} In references [5,6] an ansatz to obtain the exact metric and dilaton backgrounds was proposed. Here we summarize it in a few items. \noindent {\bf A)} Let $\{X_a\}$ be a basis of ${\cal G}$, simple and compact, satisfying the algebra \begin{equation} [ X_a,X_b ] = i f_{ab}{}^c X_c \end{equation} and let $g \in G$. We define left and right currents (which certainly satisfy (6.1)) as linear operators acting on $G$ according to \begin{eqnarray} {\hat J}_a^R g &\equiv& - g X_a \cr {\hat J}_a^L g &\equiv& X_a \; g \end{eqnarray} \noindent {\bf B)} Once we read ${\hat J}_a^{L,R}$ from (6.2), we construct the quadratic Casimir operators in this $G$-realization, \begin{equation} {\hat \gD}_G^{L,R} \equiv g^{ab} {\hat J}_a^{L,R} {\hat J}_b^{L,R} \end{equation} where $g^{ab}$ is the inverse of the Cartan metric $g_{ab} = tr(X_a X_b )$ (for normalizations, see Section 2), and in the same way we construct the Casimir operators ${\hat \gD}_H^{L,R}$ associated with the subgroup $H$, by restricting (6.3) to the $\cal H$ generators.
Then we define the Virasoro-Sugawara operators \begin{equation} {\hat L}_0^{L,R} \equiv \frac{1}{k + C_{\cal G} } {\hat \gD}_G^{L,R} - \frac{1}{k + C_{\cal H} } {\hat \gD}_H^{L,R} \end{equation} where $C_{ {\cal G},{\cal H} }$ are the respective dual Coxeter numbers. If ${\cal H}$ is semisimple then we will have sums with prefactors corresponding to each simple component [12]. \noindent {\bf C)} We identify the subspace of functions on $G$ dictated by the gauge invariance conditions \footnote{ We recall that $\hat{V_a} = \hat{J}_a ^L + \hat{J}_a^R$ are the generators of the vector transformations (A.6).} \begin{equation} ({\hat J}_a^L + {\hat J}_a^R) f(g) = 0 \; , \;\;\;a=1,...,\dim {\cal H} \end{equation} \noindent {\bf D)} Finally we apply the hypothesis of [6] \begin{eqnarray} ({\hat L}_0^L + {\hat L}_0^R ) f(g) &\equiv& - (k + C_{\cal G} )^{-1} \gK^{-1} \partial_\mu(\gK G^{\mu\gn}\partial_\gn )f(g) \cr \gK &=& e^D \; \sqrt{|\det G|} \end{eqnarray} from which we can directly read off $G^{\mu\gn}$ by looking at the quadratic terms, and a system of first-order differential equations determining $\gK$ (and so $D$) from the linear terms. Turning to our model, we take $X_a\equiv \gl_a$, the Gell-Mann matrices, and consider the parametrization (3.7,8) and (C.4). Let us introduce the commuting linear operators $ {\hat {\bf X}}$ and ${\hat {\bf V}}$ \begin{eqnarray} {\hat X}_1 &=& -i\;(x_2 \, \partial_3 - x_3 \, \partial_2 ) \cr {\hat X}_2 &=& -i\; x_0\; \partial_2 \cr {\hat X}_3 &=& -i\; x_0\; \partial_3 \cr {\hat V}_i &=& \frac{i}{2} ( v_0 \, \partial_i - \ge_{ijk} \, v_j \, \partial_k ) , \;\;\; i=1,2,3 \end{eqnarray} which satisfy (6.1) with $ f_{ij}{}^{k}=\ge_{ijk} $.
Then from (6.2) we read \footnote{The index $\alpha=1,2$ refers to the combinations $\; \gl_1^{\pm} = \frac{1}{2} (\gl_4 \pm i\gl_5)\; , \;\; \gl_2^{\pm} = \frac{1}{2} (\gl_6 \pm i\gl_7) $.} \noindent \underline{Right currents} \begin{eqnarray} {\hat R}_i &=& - R(V)_{ji}\; ( {\hat X}_j + u_j ( {\hat V}_3 - i \partial_\phi ) ) \; ,\;\; i=1,2,3 \cr {\bf u} &=& \frac{1}{x_2} ( x_0 \; \check{e}_1 + x_3\; \check{e}_2 - x_2 \; \check{e}_3 ) \cr {\hat R}_\alpha^{+} &=& - \frac{u^{3/2}}{2} (XV)_{1\alpha}\; (\partial_t + i \frac{s}{c} \partial_\gvfi ) + A_0^V \partial_\phi + i\; \bf{A^V} \cdot {\bf \hat{V}} + i \, {\bf A^X} \cdot {\bf \hat{X}} \cr i\, 2\, s\,u^{-3/2} A_0^V &=& \frac{c^2 + 3}{2c} (XV)_{1\alpha} + \frac{cz - z^* }{x_2}\; (XV)_{2\alpha} \cr i\, 2\, s\, u^{-3/2}\bf{A^V} &=& 2\, (XV)_{2\alpha}\; ( -\check{e}_1 + i \check{e}_2\, ) + ( \frac{s^2}{2c} (XV)_{1\alpha} + \frac{cz - z^*}{x_2} (XV)_{2\alpha} )\, \check{e}_3 \cr i\, 2\, s\, u^{-3/2}{\bf A^X} &=& (c+1) (XV)_{2\alpha} \; \check{e}_1\, - i\, (c-1) (XV)_{2\alpha} \; \check{e}_2\, + \frac{s^2}{2c} (XV)_{1\alpha} \; \check{e}_3\cr {\hat R}_\alpha^- &=& ( {\hat R}_\alpha^+ )^* \cr {\hat R}_8 &=& i \frac{2}{\sqrt{3}}\, \partial_\gvfi \end{eqnarray} \noindent \underline{Left currents} \begin{eqnarray} {\hat L}_i &=& -{\hat R}_i + 2 \, R(V)_{ji} {\hat V}_j \; ,\;\;\; i=1,2,3 \cr {\hat L}_\alpha^+ &=& \frac{1}{2} V_{1\alpha} (\partial_t - i \frac{s}{c} \partial_\gvfi ) + A_0^V \partial_\phi + i \bf{A^V} \cdot \bf{\hat V} + i \bf{A^X} \cdot \bf{\hat X} \cr i\, 2\, s\, A_0^V &=& -\frac{7 c^2 - 3}{2c} V_{1\alpha} + \frac{cz^* - z}{x_2}
V_{2\alpha} \cr i\, 2\, s\, \bf{A^V} &=& c\, V_{2\alpha}\, (\check{e}_1 - i \, \check{e}_2 \, ) +\, ( \frac{s^2}{2c} V_{1\alpha} + \frac{cz^* - z}{x_2} V_{2\alpha} )\, \check{e}_3 \cr i\, 2\, s\, \bf{A^X} &=& -(c+1) V_{2\alpha}\, \check{e}_1 - i(c-1) V_{2\alpha} \, \check{e}_2 + \frac{s^2}{2c}\, V_{1\alpha} \, \check{e}_3 \cr {\hat L}_\alpha^- &=& ({\hat L}_\alpha^+)^* \cr {\hat L}_8 &=& - {\hat R}_8 + i \, 2\, \sqrt{3}\, \partial_\phi \end{eqnarray} Clearly the first and last equations in (6.9) translate the gauge conditions (6.5) into independence of $\phi$ and ${\bf v}$, i.e., of the gauge variable $V$. Restricting ourselves to the gauge-invariant subspace, we get the laplacians \begin{equation} {\hat \gD}_G^L = {\hat \gD}_G^R = {\hat \gD}_{U(1)} + {\hat \gD}_{SU(2)} + \{ {\hat R}_\alpha^+, {\hat R}_\alpha^- \} \end{equation} and according to (6.4) we have \footnote{ The first equality follows from $ \gD_G^L = \gD_G^R$ and $\gD_H^L = \gD_H^R $, the latter valid on gauge-invariant functions [5]. Also the usual change $\; k \rightarrow -k\; $ coming from (2.4) for non-compact groups is made [1].
} \begin{equation} {\hat L}_0^L = {\hat L}_0^R = \frac{1}{k-3} {\hat \gD}_{SU(2,1)} - \frac{1}{k-2} {\hat \gD}_{SU(2)} - \frac{1}{k} {\hat \gD}_{U(1)} \end{equation} Carrying out the computations and applying (6.6) we read off the inverse metric; the modified basis (5.1) reads \ignore{ \begin{eqnarray} G^{tt} &=& 1 \cr G^{\gvfi\gvfi} &=& \frac{s^2}{c^2} - \frac{4}{k} \cr G^{3\gvfi} &=& \frac{s^2}{2 c} x_0 \cr G^{22} &=& ( \frac{c-1}{c+1} - \frac{1}{k-2} ) x_0{}^2 + ( \frac{c+1}{c-1} - \frac{1}{k-2} ) x_3{}^2 \cr G^{33} &=& ( \frac{s^2}{4 c^2} - \frac{1}{k-2} ) x_0{}^2 + ( \frac{c+1}{c-1} - \frac{1}{k-2} ) x_2{}^2 \cr G^{23} &=& - ( \frac{c+1}{c-1} - \frac{1}{k-2} ) x_2 x_3 \end{eqnarray} } \begin{eqnarray} \omega^1 &=& ( \frac{s^2}{c^2} - b)^{-\frac{1}{2}}\, d\gvfi \cr \omega^2 &=& \frac{c+1}{s\,\gr} \, \gb^{\frac{1}{2}}\, ( dx_0 + \frac{f}{1 - b\,\frac{c^2}{s^2}}\, \frac{x_3}{2} \, d\gvfi - (f-1) \frac{x_3}{x_0} \, dx_3 ) \cr \omega^3 &=& \frac{c-1}{s\,\gr} \, \left( \frac{f}{1- a \frac{c-1}{c+1}} \right)^{\frac{1}{2}}\, ( dx_3 - (1 - b\,\frac{c^2}{s^2} )^{-1}\, \frac{x_0}{2} \, d\gvfi ) \cr \omega^4 &=& dt \end{eqnarray} and, after solving the differential equations, the dilaton \begin{equation} D = D_0 + \ln \frac{s^3 \; c}{|\det \, G |^{\frac{1}{2}} } \end{equation} where \begin{eqnarray} \det\, G &=& \frac{\gb \; f}{(1 - a\, \frac{c-1}{c+1}) (\frac{s^2}{c^2} - b) \, \gr^4 } \cr \gb^{-1} &=& 1 - \frac{c+1}{c-1} \left( a + ( f - 1) ( \frac{c+1}{c-1} - a) \,\frac{x_3{}^2}{x_0{}^2} \right) \cr f^{-1} &=& 1 - \frac{a\, b}{\ge} \; (\frac{c+1}{c-1} - a)^{-1}\; \frac{ (1-\ge) c^2 -1}{(1- b) c^2 -1}\; \frac{x_0{}^2}{\gr^2} \end{eqnarray} and $\; a=\frac{1}{k-2},\; b=\frac{4}{k},\; \ge = \frac{2}{k-1}\,$ .
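As a consistency check (our remark; this is just the semiclassical limit, not part of the derivation in [5,6]), for $k\rightarrow\infty$ the parameters $a$, $b$ and $\ge$ vanish, hence $f,\,\gb \rightarrow 1$ and the exact basis (6.12) collapses to

```latex
% k -> infinity limit of (6.12): all 1/k corrections disappear
\begin{equation}
\omega^1 \rightarrow \frac{c}{s}\, d\gvfi \; , \quad
\omega^2 \rightarrow \frac{c+1}{s\,\gr}\, \Big( dx_0 + \frac{x_3}{2}\, d\gvfi \Big) \; , \quad
\omega^3 \rightarrow \frac{c-1}{s\,\gr}\, \Big( dx_3 - \frac{x_0}{2}\, d\gvfi \Big) \; , \quad
\omega^4 = dt
\end{equation}
```

which should be compared with the one-loop basis (5.1).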
As usual the exact results are not very enlightening, and in general the singularity structure becomes highly complicated. Also regions of different signature appear, a fact related to the signs of the arguments of the square roots in (6.12), giving rise to bizarre geometries and possible topologies. For example, for $\, 0<k<2\, $ it is easy to see that the signature is strictly minkowskian (within the natural range of the group parameters), with $\gvfi$ being the time-like coordinate. However some interesting interpretations can be given. \noindent {\bf The black plane metrics} Let us consider metrics of the form \begin{equation} G_{bp} = -f(x) \; d\gt^2 + f(x)^{-1}\; dx^2 + dy^2 + dz^2 \end{equation} Obviously the topology is $P\times Q$, where $P$ is a plane (or some compactified version of it) and $Q$ an indefinite-signature submanifold parametrized by the $(\gt ,x)$ coordinates, where the geometry is characterized by the function $f$. Let us first analyze a ``regular'' case with \begin{equation} f_r (x) = 1 - \frac{\cosh^2 a x_h}{\cosh^2 a x} \end{equation} where $a, x_h$ are positive real constants, and introduce the ``distorted'' coordinate \begin{equation} x_* = x \; +\; \frac{1}{2\, a\, \tanh a x_h}\; \ln\frac {\sinh a|x-x_h|}{\sinh a|x+x_h |} \end{equation} The inverse relation $x(x_*)$ distinguishes three patches: I for $x>x_h$, II for $|x|<x_h$ and III for $x<-x_h$.
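In fact (our remark, easily checked from the definitions above) $x_*$ is nothing but the tortoise coordinate associated with $f_r$:

```latex
% Check that the ``distorted'' coordinate (6.17) satisfies dx_* = dx / f_r(x):
\begin{equation}
\frac{dx_*}{dx} \,=\, 1 \,+\, \frac{\cosh^2 a x_h}{\sinh a(x-x_h)\,\sinh a(x+x_h)}
\,=\, \frac{\cosh^2 a x}{\cosh^2 a x \,-\, \cosh^2 a x_h} \,=\, \frac{1}{f_r (x)}
\end{equation}
```

where we used $\sinh a(x-x_h)\,\sinh a(x+x_h) = \cosh^2 a x - \cosh^2 a x_h$.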
By defining null coordinates $ u = \gt + x_* \; , \; v = \gt - x_* $ in regions I and III, and $ u = x_* + \gt \; , \; v = x_* - \gt $ in region II, the metric takes the general form \begin{equation} G_{rbp} = - |f_r (x)|\; du \; dv + dy^2 + dz^2 \end{equation} The metric is regular in all three patches, as can be seen from the scalar curvature (which characterizes the full curvature tensor) \begin{equation} R = 2 \; a^2 \; \cosh^2 a x_h \;\; \frac{-3 + 2 \cosh^2 a x}{\cosh^4 a x} \end{equation} Then we can glue them as is usually done, and the maximally extended conformal Penrose diagram for $Q$ (where each point represents $P$) so obtained is similar to that of the Kerr solution of general relativity (for $M^2 > a^2, \theta = 0 $) \footnote{See for example figure 27 in page 312 of reference [20].} with $ r_\pm \sim \pm x_h $, and the manifold described by it is geodesically complete. Clearly $x\rightarrow \pm \infty$ are asymptotically flat regions, and $x = \pm x_h$ are horizons for observers there (in regions I/III); the geometry is then naturally interpreted as a ``regular black plane'' hidden in region II. Its Hawking temperature can be computed by standard methods [4] \begin{equation} T_r = \frac{a}{2\pi} \tanh a x_h \end{equation} Let us consider now a ``singular'' case defined by \begin{equation} f_s (x) = 1 - \frac{\sinh^2 a x_h}{\sinh^2 a x} \end{equation} The distorted coordinate is now defined as in (6.17) with the replacement $$ \tanh a x_h \rightarrow ({\tanh a x_h})^{-1} $$ But now the curvature is \begin{equation} R = 2 \; a^2\; \sinh^2 a x_h \;\; \frac{3 + 2 \sinh^2 a x}{\sinh^4 a x} \end{equation} which, together with (6.21), reveals the existence of flat regions for $|x|\rightarrow \infty$, but also displays a true singularity at $x=0$.
Due to this crucial fact we can follow the standard procedure as before and write $G_{sbp}$ as in (6.18), but now we can only glue region I with ``half'' of region II (up to the singularity; remember that here $x$ is timelike), because we cannot go beyond the singularity, where analyticity breaks down; similar remarks apply to region III and the other half of region II, which are ``parity''-reflected patches of the first ones. The maximally extended conformal Penrose diagram is then similar to Schwarzschild's. We can say that the singularity at $x=0$ separates two worlds; we certainly cannot pass through the singular black plane, and once we cross the horizon at $x_h$ we will die there after finite proper time. The Hawking temperature for this ``singular black plane'' is \begin{equation} T_s = \frac{a}{2\pi} \coth a x_h \end{equation} Now let us establish what these geometries have to do with our model. Let us consider the general case of finite $k \neq 2,3,4$. Then it is not difficult to show that there exists $0<t_k <\infty$ such that the exact solution given by (6.12,13) has the limit \begin{equation} G\;\stackrel{t\gg t_k }{\longrightarrow}\; dt^2 + \frac{k}{k-4}\; d\gvfi^2 + \frac{k-2}{k-3}\; \frac{dr^2}{1- r^2} + \frac{k-4}{k-3}\; \frac{r^2}{\frac{k-4}{k-2} - r^2} \; d\psi '_P{}^2 \end{equation} \begin{equation} D\;\stackrel{t\gg t_k }{\longrightarrow}\; 4\; t + \frac{1}{2} \ln |(1 - r^2 ) (\frac{k-4}{k-2} - r^2)| \end{equation} where polar coordinates ($r\equiv\sin R ,\psi $) as in (4.12) have been introduced and $$ \psi '_P = \psi - \frac{k}{k-4}\;\; \frac{\gvfi}{2} $$ Now let us take $0<k<2$ (for example, the conformal value $k=k_{-}$ discussed after (5.9)).
Then by making the change of variables \begin{equation} r^2 = 1 - \frac{2}{2-k} \; \sinh^2 a x \end{equation} with $a = \frac{1}{\sqrt{|k-2|}}$, it is easy to show that the {\it line element} $$ ds^2 = (k-3)\; G $$ tends to the regular black plane metric with the further identifications \begin{eqnarray} y &=& i \;\sqrt{|3-k|}\;\; t \cr z &=& \sqrt{k\; \frac{|3-k|}{|4-k|}} \;\; \gvfi \cr \gt &=& i \;\sqrt{|4-k|} \;\; \psi '_P \end{eqnarray} and $x_h$ defined by $\; \sinh^2 a x_h = 1 - k/2 $. On the other hand, in the case $4<k<\infty $ the change of variables \begin{equation} r^2 = 1 - \frac{2}{k-2} \; \cosh^2 a x \end{equation} leads to \begin{equation} ds^2 \; \stackrel{t\gg t_k }{\longrightarrow}\; - G_{sbp} \end{equation} with $a$ as before, $ \sinh^2 a x_h = -2 + k/2 $, and the identifications are (6.27) with the replacement $z\rightarrow i z $. The dilaton field in both cases is given by \begin{equation} D\;\stackrel{t\gg t_k}{\longrightarrow}\; 4\; t + \ln \sinh 2 a |x| \end{equation} From these results we are in a position to interpret the {\it exact} solutions (6.12), as we did in the $k=\infty$ case, as some kind of instantons that ``tunnel'' from highly singular universes at $t\rightarrow 0$ (whose expressions, being little illuminating, we do not write) to static black-plane-like universes for $t\gg t_k $. We also notice from (6.30) that $t\gg t_k $ is a weak-coupling phase, except near the black plane $x \rightarrow 0 $ where we enter a strong-coupling region. Let us finally remark that the $\gK$ field introduced in (6.6) turns out to be $k$-independent, as verified for some models in [5]. This result gives further strong support to the non-renormalization theorem conjectured there for any GWZM on the basis of conformal invariance of the path-integral measure.
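As a cross-check of the temperatures (6.20) and (6.23) (our remark), both follow from the standard surface-gravity formula $T = \frac{1}{4\pi} f'(x_h)$ applied to a metric of the form (6.15), with $f_r$ and $f_s$ as in (6.16) and (6.21):

```latex
% Surface-gravity check of the two Hawking temperatures
\begin{eqnarray}
f_r{}'(x_h) &=& \frac{2\, a\, \cosh^2 a x_h\, \sinh a x_h}{\cosh^3 a x_h}
\,=\, 2\, a\, \tanh a x_h
\quad\Longrightarrow\quad T_r = \frac{a}{2\pi}\, \tanh a x_h \cr
f_s{}'(x_h) &=& \frac{2\, a\, \sinh^2 a x_h\, \cosh a x_h}{\sinh^3 a x_h}
\,=\, 2\, a\, \coth a x_h
\quad\Longrightarrow\quad T_s = \frac{a}{2\pi}\, \coth a x_h
\end{eqnarray}
```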
\newpage \section{The dual backgrounds} In reference [21] it was shown that it is possible to obtain another solution of the one-loop equations (2.2) starting from one which has an isometry. Explicitly, if $(G, B, D)$ are backgrounds satisfying (2.2) that in some coordinate system are independent of the coordinate $\gvfi$, then \begin{eqnarray} \tilde G_{\gvfi\gvfi} &=& G_{\gvfi\gvfi}{}^{-1} \cr \tilde G_{\gvfi\alpha} &=& \frac{B_{\gvfi\alpha}}{G_{\gvfi\gvfi}} \cr \tilde G_{\alpha\gb} &=& G_{\alpha\gb} - \frac{1}{G_{\gvfi\gvfi}} ( G_{\gvfi\alpha}\; G_{\gvfi\gb} \; -\; B_{\gvfi\alpha}\; B_{\gvfi\gb} ) \cr \tilde B_{\gvfi\alpha} &=& \frac{G_{\gvfi\alpha}}{G_{\gvfi\gvfi}} \cr \tilde B_{\alpha\gb} &=& B_{\alpha\gb} + \frac{1}{G_{\gvfi\gvfi}} ( G_{\gvfi\alpha}\; B_{\gvfi\gb} \; -\; B_{\gvfi\alpha}\; G_{\gvfi\gb} ) \cr \tilde D &=& D + \ln |G_{\gvfi\gvfi}| \end{eqnarray} where $\alpha,\gb \ne \gvfi$, is also a solution. Its existence is sometimes referred to as ``target space duality'' or ``abelian duality''. As we saw in Section 4, (4.10,11) fulfills the requirements, and a dual solution may then be straightforwardly obtained from (7.1).
For the sake of completeness we present it: \begin{eqnarray} \tilde G &=& dt^2 \; +\; \frac{1}{G_{\gvfi\gvfi}} \; d\gvfi ^2 + \frac{1}{4\; G_{\gvfi\gvfi} \; \gr ^2} \; (\;\; ( \frac{4\; c^2}{(c-1)^2} + \frac{x_0{}^2}{\gr ^2} )\; dx_0{}^2 + 2\;\; \frac{x_0\, x_3}{\gr ^2}\;\; dx_0 \,dx_3 \cr &+& (\, \frac{4\; c^2}{(c+1)^2} + \frac{x_3{}^2}{\gr ^2}\,)\; d x_3{}^2 \; )\cr \tilde B &=& \frac{1}{2 \; G_{\gvfi\gvfi}\; \gr ^2} d\gvfi \wedge ( \frac{c+1}{c-1}\; x_3 \; dx_0 \;- \; \frac{c-1}{c+1} \; x_0 \; dx_3 ) \cr \tilde D &=& \tilde D_0 + \ln |s^4 \; \gr ^2 \; G_{\gvfi\gvfi} | \end{eqnarray} We notice that the crossing terms of (4.3) do not appear in (7.2), at the expense of the axionic field. Also the metric presents a submetric in the $(t,x_0,x_3 )$ variables; formally the Cotton-Darboux theorem [20] assures us that it is possible to diagonalize it, but unfortunately we have not succeeded in doing so. In [22] it was shown that if the coordinate $\gvfi$ is periodic, then both solutions are equivalent, i.e., they describe the same conformal theory. In the natural range of our parameters $\gvfi$ is in fact periodic, and then both (4.10,11) and (7.2) should be equivalent. This can be understood from the GWZM point of view by noting that, having gauged a subgroup with a semisimple algebra containing a $u(1)$ subalgebra, there exists the possibility of considering another model by gauging the $u(1)$ axially (see footnote 2). We then conclude that the one-loop backgrounds (7.2) are those of the $SU(2,1)/SU(2)_{vector}\times U(1)_{axial} \; $ GWZM. \newpage \section{Conclusions} We have presented in this paper a study of the possible effective geometries underlying a coset model based on the pseudo-unitary group $SU(2,1)$, to our knowledge the first one to consider $SU(p,q)$ groups with $p+q>2$.
In the natural range of the parameters the one-loop metric is strictly positive definite, and so it does not present ``horizons'', but it is singular on the two-dimensional manifolds $t=0$ (disk) and $\gr =0$ ($\Re^2$). It may be possible that by changing the topology (e.g., limiting the range of coordinates or compactifying some dimensions) a ``regular'' gravitational instanton could be obtained. For example, if we introduce in (4.13) the $x$ variable by \begin{equation} \sin \, R = e^{- x + t^\gn} \;,\;\; 0<\gn<1 \end{equation} then we have, for $t\gg 1\,$, \begin{equation} G \rightarrow dt^2 + d\gvfi ^2 - dx^2 - d\psi_P{}^2 \end{equation} that is, $G$ becomes asymptotically flat on $\Re^2\times T^2$ (the Riemann tensor in fact vanishes). Anyway it does not seem that any such modified theory would be fully represented by an exact conformal field theory, because only some patch would be covered by the GWZM considered here. For finite $k$ (the physical case) the picture drastically changes. Regions of different signature appear, and the structure of the singularities becomes highly complicated. In the examples considered we remain with them, differing from the $2-d$ black hole model where a possible mechanism to avoid the singularity seems to work [23]. A question not addressed in this paper is the global topology of the exact target manifold; we have in fact loosely ignored the ranges of the coordinates in the discussions of Section 6, although it is clear that (6.12,13) is presumably a solution of the (unknown) exact background field equations independently of them. In our opinion only the study of the quantum theory of the model and of the possible consistency conditions (e.g., identification of field operators with current algebra primary fields, renormalization, unitarity, etc.) needed for its existence can shed light on the problem.
Finally we remark that, as occurs with other string solutions, the existence of event horizons with topology different from $S^2$ (in our case, a plane) is not in contradiction with Hawking's theorem, because our solution has $\gL =12 >0$, which gives a negative Liouville potential in (5.10) that violates the dominant energy condition [4]. \newpage
\section{Introduction}\label{s-intro} Modern theory and simulations agree that the star formation of galaxies and the properties of their circumgalactic medium (CGM, defined here as the gas between the inner regions of galaxies and the diffuse intergalactic medium, IGM) should be intimately connected. This is especially true for the dense flows through the CGM: feedback from star formation is understood to drive outflows that carry mass and metals away from galaxies, while infall from the IGM is thought to bring in fresh gas to fuel on-going star formation. In fact, each of these is a necessary component for our current understanding of galaxy evolution. Without significant feedback, most baryons would cool into the centers of halos to form prodigious quantities of stars \citep[e.g.,][]{white78,keres09a}, but with feedback, the baryon content of stars and cold gas in galaxies can be matched \citep[$<20$\% of their cosmic baryons; e.g.,][]{fukugita98,conroy09} by driving matter into the CGM and beyond. Similarly, without continued infall of IGM material, star-forming galaxies would consume their interstellar gas in $\sim$1 Gyr \citep[e.g.,][]{genzel10,prochaska05}. The absence of star formation in some galaxies may be explained by the strangulation of IGM infall, wherein the hot ambient coronal matter in high-mass galaxies is sufficient to heat the infalling gas to temperatures that make it unavailable for immediate star formation (\citealt{dekel06,keres09b}). These exchanges of matter, both in and out, through the CGM thus play critical roles in the evolution of galaxies. 
The competition between these large-scale inflows and outflows and its behavior with galactic mass is thought to shape such disparate properties of galaxies as the galactic mass-metallicity relation, the galaxy color bimodality, the maintenance of star formation in galaxies over billions of years, and the (stellar) baryonic mass fraction of galaxies \citep[e.g.,][]{keres05,dekel06,faucher-giguere11}. It has, however, been difficult to verify these predictions. There is good reason to believe feedback-driven outflows are important carriers of mass and metals through the CGM since ubiquitous outflows are observed toward galaxy centers \citep[e.g.,][]{pettini01,shapley03,steidel04,steidel10,weiner09,rubin14}. The COS-Halos and COS-Dwarfs surveys have demonstrated that the CGM is a massive reservoir of galactic metals, with galaxies having ejected at least as much metal mass as they have retained (\citealt{tumlinson11a,werk14,peeples14,bordoloi14}, and see also, e.g., \citealt{stocke13,liang14,lehner15} for other works). Similarly, characterizing the infall of matter requires that the accreting gas first be found. It is not often seen in absorption against the galaxies themselves \citep[e.g.,][]{martin12,rubin12} and has been difficult to observe directly in the CGM. To study the relationship between galaxy and CGM properties requires the development of methods for identifying gas infall, outflows, or other phenomena. Our team has approached this problem by using absorption lines toward background QSOs, searching for CGM gas with an H$\;${\small\rm I}\relax\ selection technique and determining the gas metallicity as a ``tracer'' of the origin(s) of the gas \citep{ribaudo11,fumagalli11a,fumagalli11b,lehner13}. Selection based only on the H$\;${\small\rm I}\relax\ column density avoids biases that can be present with metal-line selection (e.g., via Mg$\;${\small\rm II}\relax\ absorption).
We target absorbers with a detectable break at the Lyman limit and/or with the Lyman series so that the H$\;${\small\rm I}\relax\ column density is in the interval $16\la \log N_{\rm HI} \la 19$. These are known as partial Lyman limit systems (pLLSs, defined in this work as $16\le \log N_{\rm HI} < 17.2$) and LLSs (defined in this work as $17.2\le \log N_{\rm HI} < 19$). The reasons for targeting these absorbers are twofold. First, in cosmological simulations, the LLSs have been shown to be good tracers of cold flows at $z\sim 2$--3 \citep[e.g.,][]{fumagalli11a,fumagalli14,faucher-giguere11,faucher-giguere15,vandevoort12b}. Second, empirically, at $z\la 1$, the pLLSs and LLSs have been associated with the dense CGM \citep{lanzetta95,penton02,bowen02,chen05}, and in particular each pLLS and LLS with some galaxy information has been found well within the virial radius of a galaxy (typically at impact parameter $<130$ kpc; \citealt{lehner13}, hereafter \citetalias{lehner13}). Higher redshift studies can only observe the most luminous galaxies, but notably the Keck Baryonic Structure Survey (KBSS) shows that at $z\sim 2$--3 there is a strong incidence of absorbers with $\log N_{\rm HI} >14.5$ with galaxies at transverse physical distance $\le 300$ kpc and velocity separation between the absorber and galaxy redshifts $\le 300 $ ${\rm km\,s}^{-1}$, but not for the lower $N_{\rm HI}$\ absorbers \citep{rudie12}. The same survey also found that nearly half of the absorbers with $\log N_{\rm HI} >15.5$ are found in the CGM of (massive) galaxies, which also implies that some of the absorbers (especially the pLLSs) may probe more diffuse gas or the CGM of less massive galaxies at high $z$.
In any case, at all $z$, by definition of their H$\;${\small\rm I}\relax\ column densities, the pLLSs/LLSs are at the interface between the IGM probed by Ly$\alpha$ forest\ (LYAF) absorbers with $\log N_{\rm HI} \la 15.5$ and virialized structures traced by super-LLSs (SLLS; $19\le \log N_{\rm HI} <20.3$) and damped Ly$\alpha$\ absorbers (DLAs; $ \log N_{\rm HI} \ge 20.3$). Recently, we have shown that the dense CGM of $z<1$ galaxies traced by pLLSs and LLSs has a bimodal metallicity distribution function (MDF) with two well-separated peaks at $Z\simeq 0.02 Z_{{\sun}}$ and $0.5 Z_{{\sun}}$ and with about equal proportions in each branch (\citetalias{lehner13}). We have now doubled the initial sample of pLLSs and LLSs at $z<1$ and found the same MDF (\citealt{wotta16}, hereafter \citetalias{wotta16}). However, as shown in \citetalias{wotta16}, the bimodal nature of the MDF is dominated by the pLLS population and may start to transition to a unimodal distribution in the LLS regime. As argued in these papers, the metal-rich branch must trace expelled matter: galactic winds, recycled outflows, and tidally-stripped gas, i.e., given the relatively large metal enrichment of the gas, it traces gas that has previously been inside a galaxy. On the other hand, the metallicities of pLLSs and LLSs in the metal-poor branch are extremely low for the $z<1$ universe, lower than the metallicities of dwarf galaxies accreting onto central massive galaxies \citep[e.g.,][]{skillman89,tremonti04,nicholls14,jinmy15} and much lower than the lowest metallicities observed for the typical DLAs at similar redshift (\citetalias{lehner13}; \citetalias{wotta16}). These metal-poor LLSs appear to have all the properties of those expected for infalling matter, including the temperature, ionization structure, kinematic properties, and metallicity \citep{fumagalli11a,vandevoort12b,shen13}.
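The $N_{\rm HI}$\ classes used throughout this work can be summarized schematically as follows (an illustrative sketch, not part of our analysis pipeline; the function name is ours, and the lower boundary of the pLLS regime is nominal since LYAF absorbers formally have $\log N_{\rm HI} \la 15.5$):

```python
# Illustrative classification of H I absorbers by column density, using the
# log N_HI boundaries adopted in this work (N_HI in cm^-2).  The boundary
# below 16.0 is nominal (LYAF absorbers have log N_HI <~ 15.5).
def classify_absorber(log_N_HI):
    if log_N_HI < 16.0:
        return "LYAF"   # Lyman-alpha forest / diffuse IGM
    elif log_N_HI < 17.2:
        return "pLLS"   # partial Lyman limit system
    elif log_N_HI < 19.0:
        return "LLS"    # Lyman limit system
    elif log_N_HI < 20.3:
        return "SLLS"   # super Lyman limit system
    else:
        return "DLA"    # damped Lyman-alpha absorber

print(classify_absorber(16.17))  # -> pLLS
```

For instance, the absorber shown in Figure~\ref{f-example1}, with $\log N_{\rm HI} \simeq 16.17$, falls in the pLLS class.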
Having identified low-metallicity gas in the halos of galaxies at low redshift, we now want to determine how the metallicity of the pLLSs and LLSs evolves with $z$ and $N_{\rm HI}$\ at $z>2$ using the same selection criteria and the same method to derive the metallicity. This program directly builds on our Keck Observatory Database of Ionized Absorbers towards Quasars (KODIAQ) survey \citep{lehner14,omeara15}, which has used the NASA Keck Observatory Archive (KOA) to characterize the properties of the highly ionized gas associated with pLLSs and LLSs. With our new KODIAQ Z program, we will expand this effort to determine the MDF and physical properties of the pLLSs and LLSs at $z\ga 2$ in an unprecedentedly large sample. In this paper, we present the results from a pilot study of a subset of the KODIAQ Z sample, with the goal of assembling a sample of pLLSs and LLSs at $2.3 < z < 3.3$ of similar size to that of \citetalias{lehner13} at $z<1$. The total sample consists of 32 H$\;${\small\rm I}\relax\ selected pLLSs and LLSs (19 pLLSs and 13 LLSs); the statistical sample for the metallicity distribution analysis is 31 (18 pLLSs and 13 LLSs; two pLLSs have similar metallicities and are separated by only $\sim$50 ${\rm km\,s}^{-1}$\ in the redshift rest-frame of the absorbers, and are therefore treated as a single absorber). We emphasize that our study contrasts with the recent HD-LLS survey at $z>2$ (\citealt{prochaska15}; \citealt{fumagalli16}, hereafter \citetalias{fumagalli16}) and with the survey of low-metallicity LLSs at $3.2\la z \la 4.4$ \citep{cooper15,glidden16}. The HD-LLS survey targets H$\;${\small\rm I}\relax-selected LLSs and SLLSs with $\log N_{\rm HI} > 17.2$ at $z\sim 2.5$--3.0, but only 9 LLSs have $\log N_{\rm HI} \sim 17.5$, while all the others have $\log N_{\rm HI} \ga 18$.
Similarly, the \citeauthor{cooper15} study targeted a sample of 17 high-$N_{\rm HI}$\ LLSs (typically $\log N_{\rm HI} \sim 17.5$), but selected them based on the absence of metal absorption in Sloan Digital Sky Survey (SDSS) spectra, i.e., they targeted a priori low-metallicity LLSs. These programs are therefore complementary to ours, and we will use their results for comparison with our samples. Our paper is organized as follows. In \S\ref{s-data} we describe the new and archival pLLS and LLS samples. In \S\ref{s-metallicity}, we describe the different steps to estimate the metallicities of the absorbers, with additional technical details (including the description of each absorber) provided in the Appendix for interested readers. Our main results are presented in \S\S\ref{s-results} and \ref{s-prop}, where we discuss the metallicity distribution of the pLLSs and LLSs at $2.3<z<3.3$ and the evolution of their properties. In \S\ref{s-disc} we discuss some of the implications of our new observational results. Finally, in \S\ref{s-sum} we summarize our main results. \begin{figure*} \epsscale{1} \plotone{f1.pdf} \caption{Example of normalized H$\;${\scriptsize\rm I}\relax\ (left) and metal-line (right) profiles of a pLLS with $\log N_{\rm HI} \simeq 16.17$. The red lines are the profile fits to the H$\;${\scriptsize\rm I}\relax\ lines; in this case the most constraining transitions are $\lambda$$\lambda$926, 923, 916, 915. For this pLLS, the metal-line absorption is simple, with a single component observed between $-25 \le v\le +20$ ${\rm km\,s}^{-1}$\ that aligns well with the H$\;${\scriptsize\rm I}\relax\ transitions (we note that C$\;${\scriptsize\rm IV}\relax\ is slightly shifted in this case by 4 ${\rm km\,s}^{-1}$). The absorption features observed outside the velocity range $-25 \le v\le +20$ ${\rm km\,s}^{-1}$\ are unrelated to this pLLS.
} \label{f-example1} \end{figure*} \section{Data, sample selection and definition}\label{s-data} With this pilot study, we assemble a sample of pLLSs and LLSs at $2<z<3.5$ similar in size and $N_{\rm HI}$\ coverage to the original sample of pLLSs and LLSs in \citetalias{lehner13}. Our final sample for this study consists of 25 new H$\;${\small\rm I}\relax-selected absorbers with $16.1 \le \log N_{\rm HI} \le 18.4$ and 7 from the literature with $16.4 \le \log N_{\rm HI} \le 18.6$. We note that some of the high $N_{\rm HI}$\ absorbers in the new sample were part of the LLS survey by \citet{steidel90}, but, in the present work, all the H$\;${\small\rm I}\relax\ and metal column densities were estimated using high resolution Keck spectra; the \citeauthor{steidel90} study used much lower resolution (35--80 ${\rm km\,s}^{-1}$) observations, so the metallicities were typically only crudely estimated. For the literature sample, we searched for H$\;${\small\rm I}\relax-selected absorbers with $\log N_{\rm HI} \ga 16.1$, where we carefully excluded any absorbers that were selected for D/H studies or preselected using metal diagnostics. Two pLLSs are drawn from \citet{crighton13,crighton15}. The rest of the sample comes from our KODIAQ survey of O$\;${\small\rm VI}\relax\ absorption in H$\;${\small\rm I}\relax-selected LLSs, which contributes five LLSs ($17.75 \la \log N_{\rm HI} \la 18.60$) \citep{lehner14}. Many of the other pLLSs/LLSs found in the KODIAQ database could not be used to study O$\;${\small\rm VI}\relax\ owing to the contamination of the Ly$\alpha$ forest\ near the O$\;${\small\rm VI}\relax\ doublet transitions, but they are useful for studying the metallicity distribution of these absorbers.
In this sample, we selected pLLSs and LLSs for which we could derive $N_{\rm HI}$\ reasonably well (specifically with a 1$\sigma$ error less than 0.3 dex, see \S\ref{s-nhi}) and estimate column densities (or column density limits) for Si$\;${\small\rm II}\relax, Si$\;${\small\rm III}\relax, and Si$\;${\small\rm IV}\relax\ (at least two of these ions are required to be uncontaminated), which are key ions to derive the metallicity of the pLLSs and LLSs at $z\sim 2$--3 (see \S\ref{s-nmetal}). All the new data presented here are from our KODIAQ database as part of our new KODIAQ Z survey \citep{lehner14,omeara15}. In short, these data were acquired with the HIgh Resolution Echelle Spectrometer (HIRES) \citep{vogt94} on the Keck\,I telescope on Mauna Kea. These data were obtained by different PIs from different institutions with Keck access, and hundreds of spectra of QSOs at $0<z<6$ (most being at $z\sim 2$--$4$) were collected. As part of our previous NASA KODIAQ program, we have uniformly reduced, coadded, and normalized the Keck HIRES QSO spectra (for full information on the data processing, see \citealt{omeara15}). A significant fraction of the reduced KODIAQ data is now publicly available from the KOA \citep{omeara15}.\footnote{Available online at http://koa.ipac.caltech.edu/Datasets/.} Before proceeding to our main analysis, we emphasize two aspects of our sample of pLLSs and LLSs. First, there is no proximate pLLS or LLS in our sample, i.e., all the absorbers in our sample have velocity separations from the QSO redshifts well above 3000 ${\rm km\,s}^{-1}$. Second, as we emphasize further below, we derive the column densities of H$\;${\small\rm I}\relax\ and the metal lines in the main absorption associated with the pLLSs or LLSs, so the velocity profiles are integrated over about 40 to 130 ${\rm km\,s}^{-1}$.
This contrasts with the HD-LLS survey \citep{prochaska15}, which considers a LLS to be all of the optically thick gas within a velocity interval of 500 ${\rm km\,s}^{-1}$\ from the redshift of the LLS. Because our survey uses higher resolution spectra and the $N_{\rm HI}$\ values are typically below $10^{18}$ cm$^{-2}$, we can reliably consider smaller velocity intervals. However, we note there is one case in our sample where a pLLS shows evidence for two pLLSs ($z_{\rm abs}=2.46714$ toward J144453+291905), but the signal-to-noise (S/N) level is not good enough to accurately model them separately. There is also one case where two pLLSs are separated by only 50 ${\rm km\,s}^{-1}$\ ($z_{\rm abs}=2.43307$ and 2.43359 toward J170100+641209) and where we find a similar metallicity for each absorber; in that case we only kept one for our analysis of the metallicity distribution (there is also one similar case in \citealt{crighton15}, but there we adopted their results based on the total column density since there was little variation in the metallicity across the velocity profile). Finally, for two cases, a pLLS is associated with a SLLS, i.e., there is a velocity separation less than 300 ${\rm km\,s}^{-1}$\ between the pLLS and SLLS (one in our new sample -- $z_{\rm abs} = 2.66586$ toward J012156+144823, see Appendix, and one in \citealt{crighton13}). It is unclear at this stage if this could bias the sample in any way, but since there are only two such cases presently, any effect would be marginal (in the case of the \citealt{crighton13} sample, the metallicity of the pLLS differs by a factor of 50 from that of the SLLS, and hence the two absorbers do not have the same origin). In the future, with larger samples, we will be able to investigate more systematically pLLSs in the redshift vicinity of SLLSs or DLAs. \begin{figure*} \epsscale{1} \plotone{f2.pdf} \caption{Same as Fig.~\ref{f-example1} but for a stronger pLLS with $\log N_{\rm HI} \simeq 16.63$.
Although the H$\;${\scriptsize\rm I}\relax\ transitions are all contaminated to some degree, the use of many transitions allows us to accurately determine $N_{\rm HI}$. For this pLLS, the metal-line absorption consists of two main components observed between $-45 \le v\le +35$ ${\rm km\,s}^{-1}$. Note that in this case, there is evidence for weaker H$\;${\scriptsize\rm I}\relax\ absorption and metal-line features below $-45$ ${\rm km\,s}^{-1}$\ and above $+35$ ${\rm km\,s}^{-1}$\ (in particular C$\;${\scriptsize\rm IV}\relax\ and O$\;${\scriptsize\rm VI}\relax\ have strong absorption from about $-160$ to $+100$ ${\rm km\,s}^{-1}$). For our analysis of the metal lines, we only consider the absorption at $-45 \le v\le +35$ ${\rm km\,s}^{-1}$, which is associated with the main component of the pLLS. } \label{f-example2} \end{figure*} \section{Estimation of the metallicity}\label{s-metallicity} The most robust approach to measure the metallicity of the pLLSs and LLSs would be to use the O$\;${\small\rm I}\relax/H$\;${\small\rm I}\relax\ ratio, given that charge exchange reactions with hydrogen ensure that the ionization fractions of H$\;${\small\rm I}\relax\ and O$\;${\small\rm I}\relax\ are strongly coupled. However, for absorbers with $\log N_{\rm HI} \la 17.5$, O$\;${\small\rm I}\relax\ is rarely detected, and the limit that can be placed on $N_{\rm O\,I}$ is generally not sensitive enough. Hence to determine the metallicity of the pLLSs and LLSs, we have to compare the column densities of metal ions with H$\;${\small\rm I}\relax. Since the pLLSs and LLSs are not predominantly neutral like DLAs, but nearly completely ionized, we need to constrain the ionization of this gas to be able to derive its metallicity (e.g., \citealt{prochaska99,lehner09,lehner13,prochaska15}; \citetalias{fumagalli16}; and see below for more details).
LLSs and pLLSs are often multiphase, with absorption seen in different ionization stages, and the low to intermediate ions (e.g., Si$\;${\small\rm II}\relax, Si$\;${\small\rm III}\relax, Si$\;${\small\rm IV}\relax, C$\;${\small\rm II}\relax, C$\;${\small\rm III}\relax, and sometimes C$\;${\small\rm IV}\relax) and high ions (O$\;${\small\rm VI}\relax) often show distinct kinematics (e.g., \citealt{lehner09,lehner13,fox13,crighton13}; \citetalias{fumagalli16}). This is illustrated in Figs.~\ref{f-example1} and \ref{f-example2}, where we show two examples of pLLSs at $z\sim 3$ from our new sample with $\log N_{\rm HI} \simeq 16.17$ and 16.63, respectively. In the left panel of these figures, the H$\;${\small\rm I}\relax\ transitions used to determine the H$\;${\small\rm I}\relax\ column density are shown; the right panel shows some of the metal ions used to determine the metallicity. Other examples of high-$z$ LLS absorption profiles can be found, for example, in \citet{lehner14}, \citet{prochaska15}, and \citet{crighton13,crighton15}, as well as in the Appendix for the metal lines. For the ionizing radiation field and for pLLSs with typical metallicities at $z \sim 2$--3 (about 1\% solar or ${[\rm X/H]} = -2$, see below and \citetalias{fumagalli16}), even strong transitions like C$\;${\small\rm II}\relax\ $\lambda$1334 and Si$\;${\small\rm II}\relax\ $\lambda$1260 are often not detected, so we have to use Si$\;${\small\rm III}\relax\ and Si$\;${\small\rm IV}\relax\ to determine the metallicity. However, as in our study at low redshift \citep{lehner13}, we typically do not use high ions (specifically O$\;${\small\rm VI}\relax\ at $z\sim 2$--3) because the distinct kinematics of these ions (see Fig.~\ref{f-example2} and \citealt{lehner14}) imply that the bulk of the highest ions (i.e., O$\;${\small\rm VI}\relax) is not produced by the same mechanism that ionizes the lower ions in the pLLSs/LLSs, or not at the same density.
In order to estimate the metallicity, we therefore need accurate column densities of H$\;${\small\rm I}\relax\ and the metal ions. We describe in \S\ref{s-nmetal} and \S\ref{s-nhi} how we estimate the column densities of the metal ions and H$\;${\small\rm I}\relax. To correct for the large ionization when comparing H$\;${\small\rm I}\relax\ to metal ions (e.g., Si$\;${\small\rm II}\relax, Si$\;${\small\rm III}\relax, Si$\;${\small\rm IV}\relax, C$\;${\small\rm II}\relax, C$\;${\small\rm III}\relax, C$\;${\small\rm IV}\relax) to determine the metallicity, we use Cloudy \citep{ferland13} models; a full description of this method and its limitations is presented in \S\ref{s-cloudy}. \subsection{Metals and their column densities}\label{s-nmetal} The main ions and transitions used in our study are Si$\;${\small\rm II}\relax\ $\lambda$$\lambda$1190, 1193, 1260, 1304, 1526, Si$\;${\small\rm III}\relax\ $\lambda$1206, Si$\;${\small\rm IV}\relax\ $\lambda$$\lambda$1393, 1402, C$\;${\small\rm II}\relax\ $\lambda$$\lambda$1036, 1334, C$\;${\small\rm III}\relax\ $\lambda$977, and C$\;${\small\rm IV}\relax\ $\lambda$$\lambda$1548, 1550. In some cases, we can also use O$\;${\small\rm I}\relax\ $\lambda$$\lambda$1039, 1302, Al$\;${\small\rm II}\relax\ $\lambda$1670, Fe$\;${\small\rm II}\relax\ $\lambda$1608, and Fe$\;${\small\rm III}\relax\ $\lambda$1122. We also consider O$\;${\small\rm VI}\relax\ $\lambda$$\lambda$1031, 1037 and N$\;${\small\rm V}\relax\ $\lambda$$\lambda$1238, 1242 in order to assess whether C$\;${\small\rm IV}\relax\ is likely to arise in the same gas phase as the low ions. In the Appendix, we show for each pLLS or LLS the normalized profiles of the metal ions or atoms and discuss the specific ions used to determine the metallicity. We emphasize that understanding the physical conditions of all the gas phases is beyond the scope of this paper.
However, determining the metallicity requires the column densities of the metal ions that trace the ionized gas associated with the H$\;${\small\rm I}\relax\ of the pLLS or LLS. Following \citetalias{lehner13}, the preferred species to constrain the ionization parameter (see below) are those whose velocity structure best follows the H$\;${\small\rm I}\relax\ velocity profiles and that are mostly produced by a single-phase ionization model. To estimate the column density of the metal ions, we use the apparent optical depth (AOD) method described by \citet{savage91}. The absorption profiles are converted into apparent column densities per unit velocity, $N_a(v) = 3.768\times 10^{14} \ln[F_c(v)/F_{\rm obs}(v)]/(f\lambda)$ cm$^{-2}$\,(${\rm km\,s}^{-1}$)$^{-1}$, where $F_c(v)$ and $F_{\rm obs}(v)$ are the modeled continuum and observed fluxes as a function of velocity, respectively, $f$ is the oscillator strength of the transition, and $\lambda$ is the wavelength in \AA\ (the atomic parameters are from \citealt{morton03}). Although the KODIAQ spectra are normalized \citep{omeara15}, we still model the continuum with a Legendre polynomial within $\pm 500$--2000 ${\rm km\,s}^{-1}$\ of the absorption feature of interest, since the original continuum model may sometimes have over- or under-fitted some regions of the spectrum.\footnote{In this paper, we use high S/N data, so the continuum errors are typically at the 5\% level or less, depending on the redshift and on whether the feature of interest is deep in the LYAF. \label{foot-cont}} The velocity ranges used to model the continuum depend on the number of absorbing features and the overall complexity of the continuum in this region of the spectrum. To determine the total column densities, we integrate the profiles over the velocities that correspond to the main absorption of the H$\;${\small\rm I}\relax\ of the pLLS or LLS.
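As a concrete illustration, the AOD conversion above can be sketched numerically as follows. The Gaussian optical depth profile, $b$-value, and atomic data (an approximate $f$-value for Si$\;${\small\rm III}\relax\ $\lambda$1206) are illustrative stand-ins, not measurements from the survey.

```python
import numpy as np

def apparent_column_density(v, F_obs, F_c, f, wav):
    """Apparent column density per unit velocity, N_a(v), from the AOD
    method (Savage & Sembach 1991):
    N_a(v) = 3.768e14 * ln[F_c(v)/F_obs(v)] / (f * lambda),
    in cm^-2 (km/s)^-1, with lambda in Angstroms."""
    tau_a = np.log(F_c / F_obs)            # apparent optical depth profile
    return 3.768e14 * tau_a / (f * wav)

# Illustrative synthetic, unsaturated line (Gaussian optical depth profile);
# f and lambda approximate Si III 1206 and are for illustration only.
v = np.linspace(-100.0, 100.0, 401)        # velocity grid in km/s
tau0, b = 0.8, 15.0                        # central optical depth, b-value
F_obs = np.exp(-tau0 * np.exp(-(v / b) ** 2))   # normalized flux, continuum = 1
N_a_v = apparent_column_density(v, F_obs, np.ones_like(v), f=1.63, wav=1206.5)

# Integrate only over the main absorption, as done for the pLLSs/LLSs.
sel = np.abs(v) <= 50.0
N_a = np.trapz(N_a_v[sel], v[sel])         # total column density in cm^-2
```

For this synthetic profile the analytic answer is $\tau_0\, b\sqrt{\pi}\times 3.768\times10^{14}/(f\lambda) \approx 4\times10^{12}$ cm$^{-2}$, which the numerical integration recovers.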
In the Appendix, we discuss for each pLLS/LLS the velocity structure of the metals and H$\;${\small\rm I}\relax\ and show the integration range used to estimate $N_a$ (see the listed values in Table~\ref{t-metal}, which can vary somewhat between different ions); typically the integration range is $\la \pm 50$ ${\rm km\,s}^{-1}$\ in the rest-frame of the absorber. There can be several velocity components within that velocity range, but we do not consider higher-velocity components that correspond to typically weaker H$\;${\small\rm I}\relax\ absorbers clustered around the pLLSs or LLSs, since the metallicity can be substantially different in these higher velocity components relative to the pLLSs or LLSs \citep[e.g.,][]{prochter10,crighton13}. For doublets or ions with several available atomic transitions (e.g., C$\;${\small\rm IV}\relax, Si$\;${\small\rm IV}\relax, Si$\;${\small\rm II}\relax), the levels of contamination or saturation can be assessed directly by comparing the $N_a$ values. In those cases, if there is no evidence of contamination, the absorption is typically resolved, i.e., there is no hidden saturation in the absorption profiles. For ions or atoms with only a single transition available, we require similar velocity structures between different species in the velocity intervals used for integrating $N_a(v)$ to rule out contamination from unrelated absorbers. If the absorption reaches zero flux, the absorption is saturated, and we can only estimate a lower limit on the column density using the AOD method. If the peak optical depth is $\tau_\lambda \la 2$ or similar to that of absorption lines observed with two or more transitions where there is no evidence of saturation, we infer that the absorption is not saturated. For strong absorption ($\tau_\lambda \ga 1$--2), however, we allow for the possibility in the photoionization modeling that the line is saturated if the models require it (i.e., we treat the column densities as possible lower limits).
In many cases, absorption from an ion or atom is not detected. If there is no contamination, we can estimate 2$\sigma$ upper limits on the equivalent widths, simply defined as the 1$\sigma$ error times 2. The 1$\sigma$ error is determined by integrating the spectrum over a velocity interval similar to that of a detected ion, or over $\pm 20$ ${\rm km\,s}^{-1}$\ when no metals are detected in the absorber, based on the typical smallest velocity intervals in other pLLSs/LLSs with detected metals. The 2$\sigma$ upper limit on the column density is then derived assuming the absorption line lies on the linear part of the curve of growth. In Table~\ref{t-metal}, we summarize our apparent column density estimates of the metals as well as the velocity interval used to integrate the profiles. For species with more than one transition, we list the results for each transition and, in the row with no wavelength information, the adopted weighted-average column densities and velocities (see the notes in this table for more information). Note that the errors are $1\sigma$ errors and include statistical and continuum placement errors following the methodology described in \citet{sembach92}. These errors do not, however, include errors arising from the original continuum fits used to coadd the data (see \citealt{omeara15} and footnote~\ref{foot-cont}). \subsection{H$\;${\small\rm I}\relax\ column density}\label{s-nhi} \begin{figure} \epsscale{1.2} \plotone{f3.pdf} \caption{Example of an unusual pLLS with $\log N_{\rm HI} \simeq 16.39$ where a large number of transitions show little contamination (note that at $z<1$, it is typically not possible to model H$\;${\scriptsize\rm I}\relax\ transitions below 916 \AA\ because the lower resolution of the data blends these transitions). } \label{f-example3} \end{figure} The estimation of $N_{\rm HI}$\ for each LLS ($\log N_{\rm HI} \ge 17.2$) was made using a procedure similar to that described in \citet{lehner14}.
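Before moving on to the H$\;${\small\rm I}\relax\ analysis, the $2\sigma$ upper-limit conversion described in \S\ref{s-nmetal} can be sketched as follows. It assumes the linear part of the curve of growth; the equivalent-width error and atomic data (an approximate $f$-value for Si$\;${\small\rm II}\relax\ $\lambda$1260) are illustrative only.

```python
import numpy as np

def logN_upper_limit(sigma_ew_mA, f, wav_A, nsigma=2.0):
    """Upper limit on log N (cm^-2) from a non-detection, assuming the
    line lies on the linear part of the curve of growth:
    N = 1.13e17 * W_lambda(mA) / (f * lambda^2), with lambda in Angstroms."""
    W_lim = nsigma * sigma_ew_mA           # equivalent-width limit in mA
    return np.log10(1.13e17 * W_lim / (f * wav_A ** 2))

# Illustrative: a 1-sigma equivalent-width error of 3 mA for an undetected
# line; f and lambda approximate Si II 1260 and are for illustration only.
logN_lim = logN_upper_limit(3.0, f=1.18, wav_A=1260.42)
```

The same conversion applies to any transition; for strong transitions (large $f\lambda^2$), a given equivalent-width limit translates into a more sensitive column density limit.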
We use the graphical package {\sc x\_fitlls}\footnote{As part of the {\sc xidl} distribution package available at http://www.ucolick.org/$\sim$xavier/IDL/xidl\_doc.html} that allows us to create Voigt profiles to model the data. We iteratively varied the redshift, $b$-value, and $N_{\rm HI}$\ of each system until a good fit was obtained. In many cases, the absorption in a LLS is complicated, requiring multiple absorption lines to produce a good fit. For the LLSs presented here, we consider all components that produce significant absorption (normalized flux at line center $> 0.5$) through at least Lyman-5 (i.e., all components with $\log N_{\rm HI}>15.0$) that might affect our total $N_{\rm HI}$\ estimate. In most cases, such absorption impacts the total $N_{\rm HI}$\ estimate at a level well below our $1\sigma$ error estimate on $N_{\rm HI}$, but in some cases multiple components of similar strength in $N_{\rm HI}$\ are seen and cannot be ignored in the final $N_{\rm HI}$\ estimate. Since we are fitting the absorption of the LLSs by eye (as opposed to using a reduced-$\chi^2$ approach, see below), we adopt very conservative errors, with a minimum error of $\sigma=0.15$ dex on $N_{\rm HI}$\ for any LLS fitted using this methodology. We finally note that we must appeal to further constraints to accurately determine $N_{\rm HI}$\ for the strong LLSs, as the higher order Lyman series lines remain saturated for many more transitions than for the pLLSs or weak LLSs (see below). We have, however, two important constraints. First, the onset of weak damping features in the Ly$\alpha$\ line can be used to constrain $N_{\rm HI}$\ from above, since, if $N_{\rm HI}$\ is too large, excess absorption appears on either side of the line-center. Second, the break in flux level below the Lyman limit can be used to determine $N_{\rm HI}$\ if there is enough S/N in the data and no nearby absorption from other strong H$\;${\small\rm I}\relax\ systems.
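The second constraint, the flux break at the Lyman limit, can be illustrated with a short numerical sketch. It uses the standard hydrogenic photoionization cross section ($\sigma \simeq 6.30\times10^{-18}$ cm$^2$ at the Lyman limit) for a single absorber; it is a simplified illustration, not part of our fitting code.

```python
import numpy as np

def lyman_limit_decrement(logN_HI, wav_A):
    """Flux decrement exp(-tau) just blueward of the Lyman limit for a
    single H I absorber, using the hydrogenic cross section
    sigma(lambda) ~ 6.30e-18 (lambda/911.76)^3 cm^2 (valid for
    lambda < 911.76 A)."""
    sigma = 6.30e-18 * (np.asarray(wav_A) / 911.76) ** 3   # cm^2
    tau = 10.0 ** logN_HI * sigma
    return np.exp(-tau)

# tau ~ 1 at the limit for log N_HI = 17.2 (the LLS definition), so the
# flux breaks to ~1/e; a pLLS with log N_HI = 16.2 gives only ~10% absorption.
break_LLS  = lyman_limit_decrement(17.2, 911.76)
break_pLLS = lyman_limit_decrement(16.2, 911.76)
```

This makes quantitative why the Lyman-limit break is a useful $N_{\rm HI}$\ diagnostic for the strong LLSs but a weak one for the pLLSs, for which the higher order Lyman series lines carry most of the constraint.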
For the pLLSs ($16.2 \le \log N_{\rm HI} < 17.2$) and one LLS, the higher order Lyman series transitions are the primary tool used to constrain $N_{\rm HI}$\ (see Figs.~\ref{f-example1}, \ref{f-example2}, \ref{f-example3}). Two authors (O'Meara, Lehner) undertook the analysis of the pLLSs, where the continuum placement near each H$\;${\small\rm I}\relax\ transition and the profile fits to the pLLSs were independently assessed.\footnote{The only exception is the pLLS at $z=2.90711$ toward J212912-153841, where the S/N is too low to use the high order Lyman series transitions. In that case, we use the combined information of the Lyman series transitions and the flux decrement at the Lyman limit.} O'Meara used the same method described above for the LLSs, but instead fitted high order Lyman series transitions. For example, at the resolution of our HIRES data, a pLLS absorber with $\log N_{\rm HI}=16.35$ and $b =20$ ${\rm km\,s}^{-1}$\ becomes unsaturated (the normalized flux at the line-center being $>0.1$) at Lyman-9. This and higher order Lyman series transitions can then be used to accurately determine the combination of $N_{\rm HI}$, $b$, and $z$ (or $v$ in the redshift rest-frame of the absorber) that best fits the observed absorption (see Fig.~\ref{f-example3}). Lehner fitted the H$\;${\small\rm I}\relax\ profiles with a minimum reduced-$\chi^2$ method using a modified version of the code described by \citet{fitzpatrick97}. The best-fit values describing the pLLSs were determined by comparing the model profiles convolved with a Gaussian instrumental LSF with the data. The three parameters $N_i$, $b_i$, and $v_i$ for each component $i$ (typically $i=1,2$) are input as initial guesses and were subsequently varied to minimize $\chi^2$. Since the Lyman series transitions are often blended with the Ly$\alpha$\ and Ly$\beta$\ forest absorbers, the fitting was an iterative process of selecting transitions that were unblended or minimally blended.
In the case of small blends, we iteratively masked the blended regions. Figs.~\ref{f-example1} and \ref{f-example2} show two pLLSs with various levels of contamination, while Fig.~\ref{f-example3} shows a rare pLLS where 10 Lyman series transitions have little contamination. Despite some contamination, the use of different H$\;${\small\rm I}\relax\ transitions with small oscillator strengths allows us to accurately determine $N_{\rm HI}$. For each pLLS, the independently derived $N_{\rm HI}$\ values were in excellent agreement. We adopted the $N_{\rm HI}$\ values and errors from the Voigt profile fitting with the minimum reduced $\chi^2$. \begin{figure} \epsscale{1.2} \plotone{f4.pdf} \caption{Distribution of the H$\;${\scriptsize\rm I}\relax\ column density in our sample at $2.3<z<3.3$. For comparison, in the same redshift interval, the HD-LLS survey has 9/38 (24\%) LLSs around $\log N_{\rm HI} \sim 17.5$ and 29/38 (76\%) with $18\la \log N_{\rm HI} \la 18.5$. \label{f-nhdist}} \end{figure} Our results are summarized in Table~\ref{t-nhi} and in Fig.~\ref{f-nhdist}, where we show the H$\;${\small\rm I}\relax\ column density distribution for the entire sample of pLLSs and LLSs. There are 32 H$\;${\small\rm I}\relax-selected absorbers listed in Table~\ref{t-nhi}: 19 pLLSs ($16.2 \le \log N_{\rm HI} < 17.2$) and 13 LLSs ($\log N_{\rm HI} \ge 17.2$). However, two pLLSs are at essentially the same redshift (separated by about 50 ${\rm km\,s}^{-1}$) and have similar metallicities; we therefore treat these pLLSs as one, so that our total sample for the rest of the paper is 31. This is similar in size to the \citetalias{lehner13} sample of pLLSs and LLSs at $z<1$ (28 absorbers in total, 24 pLLSs and 4 LLSs). Our newer sample at $z<1$ has now doubled in size, with 44 pLLSs and 11 LLSs \citepalias{wotta16}.
Our sample is also complementary to the HD-LLS survey, which by construction targets only LLSs, with all but 9 LLSs at $z\sim 2.5$--3.3 having $\log N_{\rm HI} \ga 18$ (\citealt{prochaska15}; \citetalias{fumagalli16}). \subsection{Photoionization modeling and metallicity determination}\label{s-cloudy} With the column densities of H$\;${\small\rm I}\relax\ and the metals determined, we can estimate the metallicity of each pLLS or LLS. This requires large ionization corrections since the fraction of H that is ionized always exceeds 90\% and is often close to 100\% (i.e., $N_{\rm HII} \gg N_{\rm HI}$). To determine the metallicity we follow closely \citetalias{lehner13}, modeling the ionization using Cloudy \citep[version c13.02;][]{ferland13} and assuming a uniform slab geometry photoionized by the Haardt-Madau background radiation field from quasars and galaxies (HM05, as implemented within Cloudy -- see also \citealt{haardt96,haardt12}; by adopting HM05 we also reduce any systematics in the comparison with the low redshift pLLSs/LLSs studied by \citetalias{lehner13} and \citetalias{wotta16}). For each absorber, we vary the ionization parameter, defined as the ratio of the H ionizing photon density to the total hydrogen number density ($U =n_\gamma/n_{\rm H}$), and the metallicity (we use the usual notation for the metallicity, ${[\rm X/H]} \equiv \log N_{\rm X}/N_{\rm H} - \log ({\rm X/H})_{{\sun}} $, where X is a given element) to search for models that are consistent with the constraints set by the column densities determined from the observations. We assume solar relative heavy element abundances from \citet{asplund09}, i.e., we do not include a priori the effects of dust or nucleosynthesis on the relative abundances. We note that for the main elements (C, Si, see below) that we use to model the photoionization, and for the densities that the pLLSs and LLSs typically probe, the dust depletion levels of C and Si are expected to be small.
In the Milky Way, the depletions observed in the so-called ``warm-disk" and ``cool-halo" clouds for Si and C are $\la 0.3$ dex \citep[e.g.,][]{savage96,welty99,jenkins09}. At the redshift intervals studied in our survey, even smaller depletion levels of Si are typically observed in the denser environments probed by DLAs and SLLSs \citep[e.g.,][]{ledoux02,prochaska03a,rafelski12,quiret16}; e.g., \citet{rafelski12} found on average ${\rm [Si/S]}\simeq 0.0 \pm 0.2$ for gas metallicities $-2.3 \la {\rm [S/H]}\la -0.3$. Furthermore, \citetalias{fumagalli16} has shown that the strong LLSs typically reside in dust-poor environments. We nevertheless consider these possibilities a posteriori (especially for carbon, which can have a different nucleosynthesis history than $\alpha$ elements such as silicon or oxygen). This can be done a posteriori because dust depletion or nucleosynthesis effects should affect all the ionization levels of a given element by the same factor. A posteriori, we find that dust depletion typically does not need to be invoked to explain the relative abundances of the pLLSs and LLSs in our sample, a finding consistent with the results from \citetalias{fumagalli16}. The metallicity for each pLLS or LLS is determined using $\alpha$ elements (usually Si), but the ionization model is constrained using the suite of Si and C ions (Si$\;${\small\rm II}\relax, Si$\;${\small\rm III}\relax, Si$\;${\small\rm IV}\relax, C$\;${\small\rm II}\relax, C$\;${\small\rm III}\relax, C$\;${\small\rm IV}\relax), and sometimes other atoms or ions (e.g., O$\;${\small\rm I}\relax, Al$\;${\small\rm II}\relax, etc.). In the Appendix, we provide the set of ions that determines $U$ and ${[\rm X/H]}$ for each LLS or pLLS.
In Table~\ref{t-nhi}, we list the derived metallicities, while in Table~\ref{t-cloudy} of the Appendix, we provide for each pLLS and LLS the Cloudy output parameters from our models (total column density of H -- $N_{\rm H}$, ${[\rm X/H]}$, $[{\rm C}/\alpha]$, $U$, ionized fraction -- $N_{\rm HII}$/$N_{\rm H}$, temperature -- $T$, and the linear scale of the absorber -- $l \equiv N_{\rm H}/n_{\rm H}$). The errors on the metallicity and $U$ (listed in Table~\ref{t-nhi} and the Appendix) reflect the range of values allowed by the $1\sigma$ uncertainties on the observed column densities. They do not include errors from the limitations of the models used to estimate the ionization corrections, which are about 0.3--0.5 dex on the metallicity (see \citetalias{lehner13}; \citetalias{wotta16}). As discussed in \citetalias{lehner13}, uncertainties in the assumed radiation field largely do not affect the {\it shape} of the metallicity distribution. \citetalias{wotta16} explored the effect of changing the ionizing background from HM05 to HM12 \citep{haardt12} for the pLLSs and LLSs at $z\la 1$ and found that on average it would increase the metallicity of the pLLSs and LLSs by about $+0.3$ dex, well within the 0.3--0.5 dex uncertainty quoted above. This is, however, a systematic effect, i.e., both low and high metallicity absorbers are affected the same way, and hence the overall shape of the metallicity distribution would be very similar. \citetalias{fumagalli16} also provide a thorough analysis of a large sample of LLSs, where they use several ionization models and Bayesian techniques to derive the physical properties and metallicities of the LLSs. They similarly find that the metallicity estimates are typically not very sensitive to the assumptions behind the ionization corrections.
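Schematically, the search over ($U$, ${[\rm X/H]}$) described above amounts to a grid comparison between observed and model column densities. The sketch below caricatures this with a hypothetical, hand-made stand-in for a Cloudy grid (the slopes and normalizations are invented for illustration; they are not Cloudy output), exploiting the fact that in the optically thin regime the metal columns scale roughly linearly with metallicity at fixed $U$.

```python
import numpy as np

# Hypothetical stand-in for a Cloudy grid: predicted log N(ion) at solar
# metallicity as a function of log U, at fixed N_HI. The slopes and
# normalizations below are invented toy numbers, NOT Cloudy output.
logU_grid = np.linspace(-4.0, -1.0, 31)
grid_SiIII = 14.0 + 0.8 * (logU_grid + 2.5)   # toy trend for Si III
grid_SiIV  = 13.2 + 1.5 * (logU_grid + 2.5)   # toy trend for Si IV

def best_fit(obs_SiIII, obs_SiIV, err=0.1):
    """Brute-force grid search for (log U, [X/H]) minimizing chi^2 against
    two observed ion column densities; metal columns are assumed to scale
    linearly with metallicity at fixed U (optically thin limit)."""
    XH_grid = np.linspace(-3.5, 0.5, 41)
    chi2 = np.empty((logU_grid.size, XH_grid.size))
    for i in range(logU_grid.size):
        for j, xh in enumerate(XH_grid):
            chi2[i, j] = (((grid_SiIII[i] + xh - obs_SiIII) / err) ** 2
                          + ((grid_SiIV[i] + xh - obs_SiIV) / err) ** 2)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return logU_grid[i], XH_grid[j]

logU_best, XH_best = best_fit(obs_SiIII=12.0, obs_SiIV=11.2)
```

The essential point the sketch captures is that ion ratios (e.g., Si$\;${\small\rm IV}\relax/Si$\;${\small\rm III}\relax) pin down $U$, after which the absolute columns relative to H$\;${\small\rm I}\relax\ set ${[\rm X/H]}$; the actual analysis uses full Cloudy grids and propagates the $1\sigma$ column density uncertainties.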
\section{Metallicity of the \lowercase{p}LLS\lowercase{s} and LLS\lowercase{s} at $2.3<\lowercase{z}<3.3$}\label{s-results} \subsection{Metallicity distribution of the pLLSs and LLSs}\label{s-mdf} Figure~\ref{f-zdist} shows the metallicity distribution function (MDF) for the 31 H$\;${\small\rm I}\relax-selected pLLSs and LLSs in our sample at $2.3 < z < 3.3$ summarized in Table~\ref{t-nhi}. Visually, the MDF is unimodal (see below). The MDF extends from $-3.5$ dex ($0.03\% Z_{\sun}$) to $+0.2$ dex ($1.6 Z_{\sun}$), but most of the values are dispersed around $-2$ dex. Using the Kaplan-Meier (KM) product limit estimator from survival analysis (\citealt{feigelson85,isobe86}) to account for the upper limits in the sample, we estimate for the pLLSs {\it and}\ LLSs that $\langle {[\rm X/H]}\rangle = -2.00 \pm 0.17$ (where the quoted error is the KM error on the mean value). Treating the 5 upper limits as values, the median and standard deviation are $-2.05$ and 0.83 dex, respectively (under that assumption the mean of the MDF would be $-1.89$ dex). There is no evidence of a strong dip in the distribution as observed at low redshift (\citetalias{lehner13,wotta16}), and there is a prominent peak near the mean. A Dip test \citep{hartigan85} shows that the significance level with which a unimodal distribution can be rejected is only 26\%.\footnote{See \citealt{muratov10} for the description of the Dip test code.} Treating censored data as actual values, a Kolmogorov-Smirnov (KS) test finds that the metallicity distribution is not inconsistent ($p$-value $p= 0.39$) with a normal distribution with mean\,$= -1.8$ and $\sigma = 0.9$. With future larger KODIAQ Z samples, we will be able to determine more robustly the shape of the MDF of both the pLLSs and LLSs.
With the current sample, the MDF of the pLLSs+LLSs at $2.3 < z < 3.3$ can therefore be described by a unimodal distribution (possibly as a Gaussian distribution) with a large spread in both high and low metallicities. \begin{figure} \epsscale{1.2} \plotone{f5.pdf} \caption{Distribution of the metallicity of the H$\;${\scriptsize\rm I}\relax-selected pLLSs and LLSs at $2.3 < z < 3.3$.} \label{f-zdist} \end{figure} \begin{figure} \epsscale{1.2} \plotone{f6.pdf} \caption{Metallicity as a function of the H$\;${\scriptsize\rm I}\relax\ column density for absorbers at $2.3 \la z \la 3.3$. The grey open circles are for the LYAF absorbers from \citet{simcoe04}. The light blue pLLS data are from \citet{crighton13,crighton15} and LLS data from \citet{lehner14}. The dark blue data are from this work. The grey squares are adapted from \citetalias{fumagalli16} (see text for more details). The light-yellow squares are from the survey and compilation from \citet{quiret16} (see text for more details). The orchid triangles are from \citet{rafelski12}. The grey squares and circle are centered near the most typical $N_{\rm HI}$\ values within the range of values described by the horizontal bar of each data point. The red solid, long-dash, and short-dash lines are the mean of the pLLSs, pLLSs+LLSs, and LLSs, respectively. \label{f-metvsnh1}} \end{figure} \subsection{Variation of the metallicity with $N_{\rm HI}$}\label{s-metvsnhi} In Fig.~\ref{f-metvsnh1}, we show the distribution of the metallicity against $N_{\rm HI}$\ at $2.3 < z < 3.3$, which allows us to separate the pLLSs and LLSs (and other absorbers) and to visualize the unbinned measurements. There is a large spread in the data for both the pLLS and LLS samples. In Table~\ref{t-metavg}, we list the mean, median, standard deviation, and fraction of very metal poor (VMP) absorbers with ${[\rm X/H]} \le -2.4$ (value corresponding to $2\sigma$ below the mean metallicity of the DLAs). 
The LLSs and pLLSs have similar dispersions in their metallicity distributions, but from the KM method, we estimate that the mean metallicity of the LLSs is a factor of 5 (0.7 dex) smaller than that of the pLLSs, $\langle {[\rm X/H]} \rangle_{\rm LLS} = -2.37 \pm 0.24$ vs. $\langle {[\rm X/H]} \rangle_{\rm pLLS} = -1.67 \pm 0.18$ (although they overlap within less than $2\sigma$ KM error). There is also a smaller fraction of VMP (${[\rm X/H]} \le -2.4$) pLLSs than LLSs (6\% vs. 43\%). A Gehan's generalized Wilcoxon test and log-rank tests (which take into account that there are censored data -- upper limits -- in both the pLLS and LLS samples, see \citealt{feigelson85}) indicate a marginal statistical difference between the MDFs of the pLLSs (18 data points including 2 upper limits) and LLSs (13 data points including 4 upper limits) at significance levels $p=6.5$\% and $2.7$\%, respectively. The samples of LLSs and pLLSs are still small, and there is a large overall dispersion in the metallicity distribution of both the pLLSs and LLSs; hence we consider any difference between the pLLS and LLS samples as tentative and marginal. In Fig.~\ref{f-metvsnh1}, we also show the metallicity for lower and higher $N_{\rm HI}$\ absorbers. For the LYAF, we show the mean and standard deviation from \citet{simcoe04}, who determined in the spectra of 7 QSOs the metallicity using O$\;${\small\rm VI}\relax\ and C$\;${\small\rm IV}\relax\ for absorbers with $13.6 \la \log N_{\rm HI} \la 16$ (most between $13.6 \la \log N_{\rm HI} \la 14.4$, which is highlighted by the asymmetric error on the horizontal axis) at $z\sim 2.5$. We also note that the pixel optical depth method leads to similar results at $z\sim 3$ \citep{ellison00,schaye03,aguirre04}. In the LYAF sample, about 60--70\% of the LYAF absorbers are enriched to (observable) levels of ${\rm [O/H]} \ga -3.5$, while the remaining have even lower abundances. 
The LLSs and SLLSs shown with grey squares and associated vertical error bars are from the HD-LLS survey and represent the medians and the 25th/75th percentiles of the composite posterior metallicity PDFs (\citetalias{fumagalli16}; the horizontal error bars show the range of $N_{\rm HI}$\ and are centered on the average $N_{\rm HI}$\ values). For completeness and reference, we also show in this figure (in light-yellow squares) the SLLS metallicities recently compiled from the literature as well as a few new metallicity estimates by \citet{quiret16}. For that sample, we only consider metallicities that were derived using an $\alpha$-element (i.e., O$\;${\small\rm I}\relax, Si$\;${\small\rm II}\relax, Mg$\;${\small\rm II}\relax) and within the redshift interval $2.3<z<3.3$. We have also attempted to remove from that sample any proximate SLLSs or absorbers that may be possibly biased (e.g., a D/H target). In that sample, the 5 metallicities estimated with O$\;${\small\rm I}\relax\ are all for SLLSs with $19.75 \le \log N_{\rm HI} \le 20.05$ and resulted in metallicities within the range $-2.3 \la {[\rm X/H]} \la -1.2$. Note that for several of these metallicities (including those derived with singly ionized species) no ionization correction was applied, which may in part explain some of the observed elevated values ($-0.5 \la {[\rm X/H]} \la +0.1$), especially since 5 of these have comparatively low $N_{\rm HI}$\ values with $19.1\le \log N_{\rm HI} \le 19.3$. Owing to the clean selections of the LLSs and SLLSs and the uniform analysis of the HD-LLS survey (both similar to the KODIAQ Z survey), we favor the HD-LLS survey for comparison with our sample. For the DLAs, we use the measurements and compilation from \citet{rafelski12}.\footnote{We note that \citet{quiret16} also compile all the existing DLA metallicities from the literature. 
Unfortunately, for our purposes, this compilation lacks key information regarding any selection biases (e.g., D/H targets, DLAs pre-selected owing to the absence of metal absorption in SDSS spectra, etc.).} In Table~\ref{t-metavg}, we summarize the mean, median, and dispersion for each of these classes of absorbers. We also estimated the fraction of VMP DLAs with ${[\rm X/H]} \le -2.4$ (see Table~\ref{t-metavg}), which by definition of this threshold value ($2\sigma$ below the mean metallicity of the DLAs) is small. For the HD-LLS survey, owing to the method used to determine the metallicity, we list in Table~\ref{t-metavg} the probability of finding absorbers with ${[\rm X/H]} \le -2.4$. Considering the entire range of $N_{\rm HI}$\ plotted in Fig.~\ref{f-metvsnh1} ($14 \la \log N_{\rm HI} \la 22$) at $2.3 < z < 3.3$, several immediate conclusions can be drawn: 1) there is a gradual decrease in the mean (or median) metallicity from the DLAs to the LYAF (with possibly the exception of the pLLSs, but see above); 2) the dispersion around the mean for the LYAF, pLLSs, LLSs, and SLLSs is large (about 0.8 dex on average), but for the DLAs the dispersion is a factor of 2 smaller ($\sim$0.5 dex); 3) there is a substantial fraction of LYAF absorbers, pLLSs, LLSs, and SLLSs with metallicities below ${[\rm X/H]} \le -2.4$, while $<3\%$ of the DLAs have such low metallicities; 4) only for the LYAF, pLLSs, and LLSs is there evidence of metallicity below ${[\rm X/H]} \simeq -3$ (see Fig.~\ref{f-metvsnh1}): for the pLLSs and LLSs, the fraction with ${[\rm X/H]} \le -3$ is in the range 2.5--17.7\% (68\% confidence interval), while $\sim 30\%$ of the LYAF absorbers have ${[\rm X/H]} \la -3.5$ \citep{simcoe04,simcoe11b}. 
\section{Redshift evolution of the \lowercase{p}LLS\lowercase{s} and LLS\lowercase{s}}\label{s-prop} Our selection of the pLLSs and LLSs at $z<1$ and $2.3<z<3.3$ follows the same criteria: first, they are H$\;${\small\rm I}\relax-selected to have H$\;${\small\rm I}\relax\ column densities between $16 \la \log N_{\rm HI} <19$; second, the H$\;${\small\rm I}\relax\ column density can be estimated reasonably accurately (within $\sim$0.3 dex and often better than 0.1 dex); and third, there is enough information from the metal lines to sensitively derive the metallicities. Therefore, we can directly compare the high and low redshift samples to study the evolution of the metallicity for these systems. However, the overdensities of the structures change as a function of $z$. At $z\sim 0.7$ the critical density of the universe is about a factor of 8 lower than at $z\sim 2.8$. Using, e.g., the empirical relationship for the overdensity derived by \citet{dave99} for absorbers with $12.5 \la \log N_{\rm HI} \la 17.5$, $\delta_{\rm H} \equiv (n_{\rm H} - \bar{n}_{\rm H})/\bar{n}_{\rm H} \sim 20 N_{\rm HI}/(10^{14}\,{\rm cm}^{-2})\, 10^{-0.4 z}$, the change in $\delta_{\rm H}$ is similarly a factor of $\sim$8 between the mean redshifts of the \citetalias{wotta16} ($\langle z \rangle = 0.7$) and this study ($\langle z \rangle = 2.8 $) samples. This implies that absorbers at some given $N_{\rm HI}$\ at high and low redshifts are not necessarily physically analogous \citep[see also][]{dave99}. For the LYAF absorbers, SLLSs, and DLAs, the redshift evolution of the density does not change the fact that LYAF absorbers trace very diffuse gas ($\delta_{\rm H}\ll 100$) and SLLSs/DLAs trace virialized structures ($\delta_{\rm H}\gg 100$) at both high and low $z$. On the other hand, for the LLSs and especially the pLLSs, while at $z<1$ they probe gas well within the CGM of galaxies, at $z\sim 2.8$, $\delta_{\rm H}$ can be $\la 100$, and hence pLLSs could probe more diffuse ionized gas at $z>2$. 
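As a quick check of the quoted factor, the \citet{dave99} scaling at fixed $N_{\rm HI}$ reduces to the ratio $10^{-0.4(0.7-2.8)} = 10^{0.84} \approx 7$, consistent with the factor of $\sim$8 quoted above. A minimal sketch of the relation as written in the text (the column density used is illustrative):

```python
def overdensity(log_NHI, z):
    """Dave et al. (1999) empirical relation, as quoted in the text:
    delta_H ~ 20 (N_HI / 1e14 cm^-2) 10^(-0.4 z)."""
    return 20.0 * 10.0**(log_NHI - 14.0) * 10.0**(-0.4 * z)

# Same (illustrative) column density at the two survey mean redshifts:
ratio = overdensity(16.5, 0.7) / overdensity(16.5, 2.8)  # 10**0.84, i.e. ~7
```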
KBSS shows that only half of the absorbers with $\log N_{\rm HI} >15.5$ are found in the CGM of ({\it massive}) galaxies at $z\sim 2$; the other half may probe more diffuse gas or the CGM of dwarf galaxies \citep{rudie12}. Hence, while high $z$ LLSs and pLLSs are by definition at the interface between the denser and more diffuse gas, they may not necessarily trace the same dense CGM of galaxies as their counterparts at $z<1$. We keep this caveat in mind as we now review the evolution of the properties of the pLLSs and LLSs with $z$. \subsection{Evolution of the physical properties with $z$}\label{s-difference} While the main goals of our study are to determine the shape of the metallicity distribution of the pLLSs/LLSs at high $z$ and how it evolves with $z$, we can also highlight similarities and differences in other properties (densities, $U$, etc.) of the pLLSs and LLSs at low and high $z$. In Table~\ref{t-cloudyavg}, we summarize the mean, median, standard deviation, and minimum and maximum values of $N_{\rm HI}$\ and several physical parameters derived from the Cloudy models for the pLLS/LLS samples at $z<1$ (from \citetalias{lehner13}) and $2.3<z<3.3$ (this paper as well as the results from \citealt{crighton13,crighton15,lehner14}). Note that here we have treated upper or lower limits as actual values, but this has a limited effect on the statistics and comparison.\footnote{We have removed for this analysis the two absorbers where we set by hand $\log U \ge -4$ owing to too few constraints from the observations; including these would, however, not have changed the results.} For example, for the sample of pLLSs and LLSs at $2.3 < z < 3.3$, we find $\langle \log U \rangle = -2.35 \pm 0.12$ using the KM estimator instead of $-2.4$ assuming that the lower limits are actual values. 
As demonstrated by \citetalias{fumagalli16}, we emphasize that while the metallicities derived from the Cloudy simulations are quite reliable, there is a degeneracy between ionization parameter and intensity of the radiation field, which hinders robust estimates of the densities and sizes of the absorbers. Hence the hydrogen density ($n_{\rm H}$) and linear scale ($l \equiv N_{\rm H}/n_{\rm H}$) are not as robustly derived as the metallicities or the total H column density ($N_{\rm H}$). \begin{figure} \epsscale{1.2} \plotone{f7.pdf} \caption{The H$\;${\scriptsize\rm I}\relax\ column density as a function of $\log U$ ({\it top}) and distribution of $\log U$ for the pLLSs and LLSs ({\it bottom}) at $2.3 < z < 3.3$ from our sample and at $z<1$ from \citetalias{lehner13}. Note that lower/upper limits are not shown in the bottom panel for the $z<1$ sample for clarity, but can be identified from the top panel. } \label{f-udist} \end{figure} Unsurprisingly, the statistics for $N_{\rm HI}$\ at low and high $z$ are not too dissimilar owing to a similar initial selection of the pLLSs and LLSs (see Table~\ref{t-cloudyavg}). A two-sided KS test on the $N_{\rm HI}$\ low and high $z$ samples gives a maximum deviation between the cumulative distributions $D= 0.28$ and a $p$-value $p=0.16$, implying no significant difference between the $N_{\rm HI}$\ samples at low and high $z$. On the other hand, the ionization parameter derived from the Cloudy simulations evolves significantly with $z$. In Fig.~\ref{f-udist}, we show the histogram distribution of $U$ and distribution of $U$ against $N_{\rm HI}$\ for the pLLSs and LLSs in our sample at $2.3<z<3.3$ (see Appendix) and the \citetalias{lehner13} sample at $z\la 1$. There is some evidence that strong LLSs with $\log N_{\rm HI} \ga 18$ have smaller $U$-values at any studied $z$, but the sample of these strong LLSs is still small. 
For absorbers with $\log N_{\rm HI} \la 18$, there is no obvious trend between $U$ and $N_{\rm HI}$\ at any $z$. Most of the pLLSs/LLSs at $2.3<z<3.3$ have $-3\la \log U \le -1.5$ (consistent with the early compilation made for the LLSs by \citealt{fumagalli11b} and from the HD-LLS analysis, see \citetalias{fumagalli16}) while at $z<1$, most have $-3.8 \le \log U \le -2.5$. A two-sided KS test on the $U$ samples at low and high $z$ gives $D =0.58$ and $p=4.0\times 10^{-5}$, implying a significant difference in the $U$ distributions at low and high $z$. The mean and median values of $U$ are a factor of $\sim$10 larger at $2.3<z<3.3$ than at $z<1$. The higher $U$-values at high redshift explain why highly ionized species (Si$\;${\small\rm IV}\relax, C$\;${\small\rm IV}\relax) can be modeled by photoionization, while a single-phase photoionization model typically fails to produce the same highly-ionized species (especially C$\;${\small\rm IV}\relax) at $z<1$ for the pLLSs and LLSs (\citetalias{lehner13} and see also \citealt{fox13}). \begin{figure} \epsscale{1.2} \plotone{f8.pdf} \caption{The hydrogen density ({\it top}), hydrogen column density ({\it middle}), and physical scale ({\it bottom}) as a function of the H$\;${\scriptsize\rm I}\relax\ column density for the pLLSs and LLSs at $2.3 < z < 3.3$ from our sample and at $z<1$ from \citetalias{lehner13}.} \label{f-ldist} \end{figure} In Fig.~\ref{f-ldist}, we show the hydrogen density, hydrogen column density, and physical scale as a function of the H$\;${\scriptsize\rm I}\relax\ column density for the pLLSs and LLSs at $2.3 < z < 3.3$ from our sample and at $z<1$ from \citetalias{lehner13} (note that we ignore the very few lower/upper limits in this figure). For the densities, while there are a few more high $n_{\rm H}$ values at $z<1$ for weak pLLSs, overall the $n_{\rm H}$ distributions at high and low redshifts overlap and have the same mean, $\langle \log n_{\rm H} \rangle \simeq -2.3$, with a dispersion of about $0.6$ dex. 
These densities are very similar to the densities estimated by \citetalias{fumagalli16} for stronger LLSs. A two-sided KS test on the $n_{\rm H}$ samples at low and high $z$ gives $D =0.18$ and $p=0.65$, indeed implying no significant difference in the $n_{\rm H}$ distributions at low and high $z$. For the total H column densities, their typical values are higher at high redshift than at low redshift over the entire $N_{\rm HI}$\ range probed by the pLLSs and LLSs. On average, $N_{\rm H}$ is a factor of $\sim$10 larger at high $z$ than at low $z$. A similar trend is also observed for $l$, where large-scale structures ($l>10$--$100$ kpc) for the pLLSs and LLSs are not rare at $z\ga 2.4$ (a result also found by \citetalias{fumagalli16} and \citealt{cooper15} at higher $z$ and for the LLSs at the boundary with the SLLSs). In the pLLS regime, while there is a large fraction of low-$z$ pLLSs with $l \la 1$ kpc, there is also an overlap between high- and low-$z$ pLLSs with $1 \la l < 100$ kpc. A two-sided KS test on the $N_{\rm H}$ and $l$ samples at low and high $z$ gives $p=0.0002$ and $p=0.003$, respectively, implying in both cases significant differences in the distributions of these quantities at low and high $z$. Finally, the last entry of Table~\ref{t-cloudyavg} shows that the temperature of the gas probed by the pLLSs and LLSs is higher at high $z$, but with a similarly large dispersion at both low and high $z$. \citetalias{fumagalli16} found that the probability distribution function of the gas temperature for the photoionized gas peaks strongly at a value similar to the mean of our high redshift sample. A two-sided KS test on the temperature samples at low and high $z$ gives $D =0.61$ and $p=1.1\times 10^{-5}$, implying a significant difference in the $T$ distributions at low and high $z$. 
Hence, based on simple overdensity arguments and the Cloudy results, this strongly suggests that the pLLSs and LLSs have different physical parameters at high and low $z$ (except for the densities), implying that the pLLSs and LLSs at $z>2$ do not evolve directly into their low $z$ analogs. Using the empirical relationship from \citet{dave99}, the pLLSs and LLSs at $z\sim 2.8$ should evolve into strong LYAF absorbers ($\log N_{\rm HI} \ga15$) and pLLSs at $z \sim 0.7$, respectively. \begin{figure} \epsscale{1.2} \plotone{f9.pdf} \caption{Metallicity as a function of the redshift (time since Big Bang is indicated on the top axis). The pLLS+LLS data at $2.3 < z < 3.3$ are from this work and at $z<1$ are from \citetalias{wotta16} and \citetalias{lehner13}. The grey squares are for the LLSs at $2.3 < z < 3.3$ with $17.30 \le \log N_{\rm HI} < 18.3$ ({\it bottom}) and $18.30 \le \log N_{\rm HI} < 19.3$ ({\it top}) from the HD-LLS survey (\citetalias{fumagalli16}; the slight redshift difference between the two data points is only artificial to be able to more easily separate them). The DLA data (open black triangles) are from \citet{rafelski12}. \label{f-metvsz}} \end{figure} \subsection{Evolution of the metallicity with $z$}\label{s-metvsz} The cosmic evolution of the DLAs \citep[e.g.,][]{prochaska03,rafelski12,battisti12,jorgenson13} and SLLSs (e.g., \citealt{som13,som15}; \citetalias{fumagalli16}; \citealt{quiret16}) has been well studied for several years. In Fig.~\ref{f-metvsz}, we show the metallicity evolution of the pLLSs and LLSs as a function of redshift (and look-back time), where the low and high $z$ absorbers were selected and analyzed using the same methodology. At all $z$ the peak-to-peak scatter in the metallicities of the pLLSs and LLSs is large (over 2 dex spread in ${[\rm X/H]} $). 
Owing to this large scatter, there is an overlap in the MDFs of the pLLSs and LLSs at low and high $z$, but the MDF is also changing drastically with $z$: at $2.3 < z < 3.3$, the MDF is unimodal, peaking at ${[\rm X/H]} \la -2$ with a long tail to higher metallicities, while at low $z$, the MDF is bimodal, peaking at ${[\rm X/H]} \simeq -1.8$ and $-0.3$ with about the same number of absorbers in each branch of the distribution (see also \citetalias{lehner13,wotta16}). At low $z$, only one system has a metallicity well below ${[\rm X/H]} \simeq -2$, although there are several upper limits near this lower bound metallicity. The quasi-absence of very low metallicity gas at $z<1$ can be attributed in part to the lower sensitivity of the UV data (typically, S/N\,$\la 20$--$30$ for {\em HST}/COS observations compared to $\ga 30$--$100$ for data obtained with Keck HIRES, see \citetalias{lehner13} and \citealt{omeara15}), but it is also possible that low metallicity gas with ${[\rm X/H]} \la -2$ is rare at low $z$. As noted above, pLLSs and LLSs at low $z$ are probably not always the direct analogs of their high redshift counterparts. Based on the overdensity argument, LLSs at $2.3<z<3.3$ could evolve into the low $z$ pLLSs. Using the results from this work (see Fig.~\ref{f-metvsnh1} and \S\ref{s-metvsnhi}) and \citetalias{fumagalli16}, the MDF of the LLSs at $2.3<z<3.3$ is consistent with a unimodal distribution, significantly different from the bimodal MDF of the pLLSs at $z\la 1$ \citepalias{wotta16}. Therefore, even considering the redshift evolution of the cosmic structures, there is a significant evolution of the MDF of the LLSs with $z$. The change in the MDF of the pLLSs and LLSs between $2.3 < z < 3.3$ and $z<1$ is also quite significant and distinct from DLA and SLLS evolution. The MDF of the pLLSs and LLSs is not simply shifting to higher metallicity as observed for the SLLSs and DLAs; rather, the shape of the MDF evolves significantly toward lower $z$. 
In Fig.~\ref{f-metvsz}, we also show the redshift evolution of DLA metallicities from the \citet{rafelski12} survey for comparison. As noted by \citet{rafelski12} and others, there is an overall increase of the metallicity with decreasing $z$, but the shape of the MDF for the DLAs does not evolve with $z$; it is unimodal with similar scatter about the mean at all redshifts. This scatter in metallicities is also smaller than that observed for the pLLSs and LLSs. The ``lower envelope'' of the metallicity of the DLAs (mean metallicity of the DLAs minus $2\sigma$) changes from ${[\rm X/H]} \simeq -2.4$ at $2.3<z<3.3$ to $-1.4$ dex at $z\la 1$. Below these metallicities at the respective redshifts, there is a large number of pLLSs or LLSs, implying that a large fraction of the pLLSs and LLSs follows a different metal enrichment than the DLAs. At all $z$, however, there is also a large overlap in the metallicities of the DLAs and the more metal-enriched pLLSs and LLSs; these higher-metallicity pLLSs and LLSs may follow a metal enrichment evolution similar to that of the DLAs. \subsection{Relative Abundances of C/$\alpha$}\label{s-calpha} \begin{figure} \epsscale{1.2} \plotone{f10.pdf} \caption{Evolution of [C/$\alpha$] as a function of the metallicity $[{\rm \alpha/H}]$ for various types of absorbers and stars indicated in the legend (see text for more details and references; the green data point is a LLS at $z\simeq 3.5$ from \citealt{crighton15}). The hatched orange region is the ``transition discriminant'' criterion \citep{frebel07}; any gas in this region may have been polluted by Pop III stars (see text). \label{f-calpha}} \end{figure} So far we have only presented the results for the absolute abundances of the gas. Although we have limited information on the relative abundances, at both high and low redshifts (see \citetalias{lehner13}), we have some constraints on the C/$\alpha$ ratio. 
This ratio is a good indicator of the nucleosynthesis history since in low density, diffuse gas, carbon and the $\alpha$ elements used in these works are not expected to be strongly depleted into dust grains (see \S\ref{s-cloudy}), and hence this ratio provides additional information regarding the origin of the gas. For the pLLSs and LLSs, this ratio was principally derived from the photoionization models (see \S\ref{s-cloudy}). In these models, C/Si was set a priori to a solar value, but was allowed to vary in order to determine the best $U$, ${[\rm X/H]}$-values that fit the data. Although this ratio is derived using photoionization models and subtle changes in the radiation field could change its value, we feel it is robustly derived for the following reasons. Firstly, \citetalias{wotta16} show that while modifying the radiation field from HM05 to HM12 can change ${\rm [\alpha/H]}$ in a systematic manner by about $+0.3$ dex, it does not affect as much the C/$\alpha$ ratio. Secondly, and independently from any ionization assumption, we can directly estimate C/$\alpha$ from the observations using the column density ratios $(N_{\rm CII} + N_{\rm CIII} +N_{\rm CIV})/(N_{\rm SiII} + N_{\rm SiIII} +N_{\rm SiIV})$ at $z>2$ and $(N_{\rm CII} + N_{\rm CIII})/(N_{\rm SiII} + N_{\rm SiIII} )$ at $z<1$ (C$\;${\small\rm IV}\relax\ and Si$\;${\small\rm IV}\relax\ are not considered at lower redshift because these are typically produced in a different gas-phase, see \citetalias{lehner13}). We summarize these results in Table~\ref{t-calpha}. There is only a small fraction of the sample where we simultaneously have column densities for all these ions, but it is striking that for all but one, the direct and modeling methods provide consistent results (the only discrepancy, toward J131215$+$423900, could possibly arise from contamination in the C$\;${\small\rm III}\relax\ $\lambda$977 absorption). 
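The direct estimate amounts to summing the ion column densities and normalizing by the solar C/Si ratio. A minimal sketch — the solar abundances adopted here (Asplund et al. 2009 photospheric values) and the input columns are illustrative assumptions, not entries from Table~\ref{t-calpha}:

```python
import math

LOG_C_SI_SUN = 8.43 - 7.51  # assumed solar log(C/H)+12 and log(Si/H)+12

def c_over_alpha(logN_C, logN_Si):
    """[C/Si] from summed ion column densities (log10, cm^-2), e.g.
    (C II + C III + C IV) / (Si II + Si III + Si IV) at z > 2."""
    N_C = sum(10.0**n for n in logN_C)
    N_Si = sum(10.0**n for n in logN_Si)
    return math.log10(N_C / N_Si) - LOG_C_SI_SUN

# Illustrative columns for (C II, C III, C IV) and (Si II, Si III, Si IV):
ratio = c_over_alpha([13.5, 14.2, 13.8], [12.8, 13.4, 13.0])
```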
As a reminder, for the pLLSs and LLSs the $\alpha$-element is mostly Si at high redshift, but at low redshift it can also be O, Mg, and/or S depending on the system (see \citetalias{lehner13}). In Fig.~\ref{f-calpha}, we show [C/$\alpha$] vs. [$\alpha$/H] for the pLLSs and LLSs from both the high- and low-redshift samples from this survey and \citetalias{lehner13} (note that the most metal poor LLS in this figure is from \citealt{crighton15}). We note that in the regions of overlapping metallicities, there is no obvious difference between the low and high redshift samples, and we therefore treat them together in the remainder of this section. For comparison, we also show the results for high redshift DLAs and SLLSs and Milky Way (MW) stars. For the DLAs and SLLSs, we use the results from \citet{pettini08}, \citet{penprase10}, and \citet{cooke11a} (and references therein; see also \citealt{becker12} for $z\ga 5$ measurements). For the MW thin and thick disk stars, we use the results from \citet{bensby06}, and for the MW halo stars, \citet{fabbian09} and \citet{akerman04}. For the stars, $\alpha$ is O, while for the DLAs and SLLSs, $\alpha$ is O or Si (changing O to Si or vice-versa for the DLAs would have little effect on the distribution of these data). As noted by \citet{pettini08}, \citet{penprase10}, and \citet{cooke11a}, the metal-poor SLLSs/DLAs follow well the overall behavior of [C/$\alpha$] with [$\alpha$/H], with a dispersion similar to that observed in the MW metal-poor stars, and confirm the overall increase of [C/$\alpha$] seen in metal-poor stars \citep{akerman04,spite05}. Where DLAs and stars overlap (${\rm [O/H]}\la -1.5$), the overall agreement in the distribution of C/$\alpha$ suggests a universal origin for the production of C relative to $\alpha$-elements \citep{cooke11a}. The overall trend observed in Fig.~\ref{f-calpha} in the stellar and SLLS/DLA samples can be separated into roughly two regions. 
{\it Region 1}: At $-3\la [{\rm \alpha/H}] \la -1$, [C/$\alpha$] decreases with increasing metallicity from super-solar values to about $-0.7$ dex. {\it Region 2}: at $-0.7\la [{\rm \alpha/H}] \la +0.2$, [C/$\alpha$] increases with increasing metallicity from about $-0.6$ dex to super-solar values. The behavior in region 2 has been well known for some time and is thought to occur as a result of the delayed release of carbon from low- and intermediate-mass stars combined with a strong metallicity dependence of the yields of carbon by massive stars with mass-loss \citep[e.g.,][]{akerman04,fabbian09}. The increase of [C/$\alpha$] to lower metallicity at $ [{\rm \alpha/H} ] \la -1$ was somewhat surprising at first, but has now been confirmed independently in both stellar atmospheres and SLLSs/DLAs. One possible interpretation for the high values of C/$\alpha$ at low metallicity could be the leftovers from the enhanced production of C (relative to $\alpha$-elements, and in particular O) in Population III (Pop III) stars. As shown by \citet{frebel07} and \citet{bromm03}, the gas progenitor of Pop II stars must have had a high C abundance to efficiently cool the gas in order to actually form stars and to drive the transition from Pop III to Pop II stars (see also \citealt{cooke11a} for more discussion). We show in Fig.~\ref{f-calpha} that condition (hatched orange region), defined as the ``transition discriminant'' criterion. No Pop II stars should be found in that zone, but any gas in this region will likely have been polluted by Pop III stars (two LLSs are found in that ``forbidden'' zone, see Fig.~\ref{f-calpha} and below). Considering now the pLLSs/LLSs, about half the sample of the pLLSs and LLSs follows a distribution similar to that observed for the DLAs and stars over the entire range of metallicity, i.e., $-2.8\la [{\rm \alpha/H}]\la 0$. 
For these, their chemical enrichment (at least of C and $\alpha$-elements) appears to be similar to that of the MW stars and the bulk of the SLLSs/DLAs. However, the other half --- mostly clustered at $-2.2\la [{\rm \alpha/H}] \la -0.5$ and $-0.2\la [{\rm C/\alpha}] \la +0.2$ --- does not follow the trend observed in MW stars or DLAs, as first pointed out by \citetalias{lehner13}. These gas clouds are carbon-enhanced by a factor $\ga 2$--5 ($\ga 0.3$--$0.7$ dex) compared to stars or most DLAs with similar $[{\rm \alpha/H}]$. This effect is not artificially caused by the ionization modeling since near-solar [C/$\alpha$] values over $-2\la [{\rm \alpha/H}] \la -1$ are confirmed directly by the observations (see Table~\ref{t-calpha}), and hence the carbon-enhancement observed at $-2.2\la [{\rm \alpha/H}] \la -1$ is real. Finally, we highlight the lowest metallicity LLS in our sample, with ${\rm [\alpha/H]} = -3.35 \pm 0.05$ and ${\rm [C/\alpha]} = -0.20 \pm 0.10$ at $z_{\rm abs} = 3.22319$ observed toward J095852+120245, which lies in the Pop III/Pop II transition (orange zone in Fig.~\ref{f-calpha}). The properties of this LLS are reminiscent of another one at $z_{\rm abs}=3.53$ with ${\rm [\alpha/H]} = -3.41 \pm 0.26$ and $ {\rm [C/\alpha]} = -0.26 \pm 0.17$ described by \citet{crighton16} (shown with the green data point in Fig.~\ref{f-calpha}). This implies that there are now two LLSs at $z\sim 3.4$ with [C/$\alpha$] and ${\rm [\alpha/H]}$ that are consistent with gas polluted by Pop III stars. \section{Discussion}\label{s-disc} Our present study explores the properties (in particular the metallicity) of the pLLSs and LLSs at $2.3<z<3.3$, a redshift epoch corresponding to the ascending part of the cosmic star formation rate (SFR) density, near its peak \citep[e.g.,][]{madau14}. Our previous studies \citepalias{lehner13,wotta16} have explored the metallicity of the pLLSs and LLSs with similar $N_{\rm HI}$\ at $z<1$, where the cosmic SFR density has significantly decreased. 
According to cosmological simulations, the exchanges of matter in and out through the CGM play critical roles in the evolution of galaxies and in the evolution of the cosmic star formation \citep[e.g.,][]{keres05,dekel06,faucher-giguere11}. We therefore expect that some of the properties of the pLLSs and LLSs should be intimately coupled to those of star formation in galaxies. This should also be reflected in changes of the properties of the IGM/galaxy interface region as a function of $z$. As we lay out below, there are clear differences but also similarities between the low and high $z$ CGM probed by pLLSs and LLSs. Before going further, we emphasize that in both the high and low redshift studies the samples were H$\;${\small\rm I}\relax-selected absorbers with $16.2 \la \log N_{\rm HI} \la 18.5$ in order to avoid introducing any bias in the metallicity of the gas probed by these absorbers. We also use the same technique to derive the metallicity of the absorbers, so any changes in the MDF of the pLLSs and LLSs as a function of $z$ should be genuine, not some effect from comparing different samples or metallicities derived using different techniques. However, owing to the redshift evolution of the universe, pLLSs and LLSs at high $z$ are not the direct analogs of the low redshift pLLSs and LLSs (see \S\ref{s-difference}). We also note that at low $z$ we make a direct association between the CGM and absorbers with $16.2 \la \log N_{\rm HI} \la 18.5$ since all the $z<1$ pLLSs and LLSs with galaxy information have been found so far well within the virial radius of relatively bright galaxies ($0.2L^*$ to $>L^*$; see, e.g., \citetalias{lehner13}; \citealt{lehner09,cooksey08}). At high $z$, galaxy information is still scant. Observations with the Multi Unit Spectroscopic Explorer (MUSE) found no bright, star-forming galaxy in the vicinity of the most metal-poor LLS in our sample \citep{fumagalli16a}. 
This LLS could probe an IGM structure\footnote{The path length of $\sim$2 Mpc and density $n_{\rm H}\sim 5\times 10^{-4}$ cm$^{-3}$ derived using our Cloudy model for this absorber are consistent with an IGM origin. However, we note this absorber is unique among our sample.} or the CGM of a faint galaxy with an SFR $<0.2$ M$_{\sun}$\,yr$^{-1}$. Furthermore, we note that the KODIAQ O$\;${\small\rm VI}\relax\ survey of H$\;${\small\rm I}\relax-selected absorbers with $\log N_{\rm HI} \ga 16$ shows that a large fraction of the pLLSs and LLSs at high $z$ have strong and broad O$\;${\small\rm VI}\relax\ absorption associated with them, which contrasts remarkably with the O$\;${\small\rm VI}\relax\ properties in the IGM (typically much narrower and weaker). The strength and breadth of the O$\;${\small\rm VI}\relax\ make these absorbers likely probes of the CGM of some very actively star-forming galaxies (\citealt{lehner14} and see \S\ref{s-ovi}). In any case and at all $z$, the pLLSs and LLSs are at the interface between the very diffuse IGM probed by LYAF absorbers and virialized structures probed by SLLSs and DLAs, and it is in this context that we discuss our results below. \subsection{Evolution of the MDF of pLLSs and LLSs with $z$} In the ascending part of the cosmic SFR density at $2.3<z<3.3$, we find that the MDF of the pLLSs/LLSs is heavily weighted to low metallicities, unimodally distributed around ${[\rm X/H]} \simeq -2$. At $z\le 1$, well past the peak SFR density, the overall MDF has shifted to higher metallicity. For the pLLSs at $z<1$, the MDF is bimodal with about the same weight in each of the metallicity branches that peak at ${[\rm X/H]} \simeq -1.8$ and $-0.3$, i.e., the low-metallicity branch has on average a metallicity 20 times lower than that of the high-metallicity branch \citepalias{wotta16,lehner13}. These results for the low-redshift universe show that there are clearly two main populations of gaseous flows through the CGM at $z<1$.
The metal-enriched CGM gas has properties consistent with those expected for matter being ejected by large-scale galaxy outflows, for matter being tidally stripped from satellite galaxies, or for material tracing the remnants of earlier outflows that are recycled by galaxies. The other half has an extremely low metallicity for the $z<1$ universe. For all the cases so far, these metal-poor pLLSs and LLSs have been found well within the virial radius of some $>0.1 L^*$ galaxy and have column densities, temperatures, and metal-enrichment levels broadly consistent with cold accretion gas as observed in cosmological simulations at $z\sim 2$--3 and $z<1$ (see \citetalias{lehner13} and simulations by, e.g., \citealt{fumagalli11a,vandevoort12b,shen13,hafen16}; and see also \S\ref{s-compsim}). On average the metallicity of the gas also increases with increasing $N_{\rm HI}$\ at $z<1$ and $2.3<z<3.3$ (see Fig.~\ref{f-metvsnh1} and \citetalias{lehner13,wotta16}). As noted by \citetalias{wotta16}, the difference in the MDFs of the pLLSs/LLSs compared to the SLLSs and DLAs implies there is a fundamental change in the physical origins with $N_{\rm HI}$. DLAs are likely probing gas that has been enriched recently at a given $z$, while the bulk of the LYAF typically probes the diffuse IGM with little metal content. The pLLSs and LLSs appear to probe both types of gas, recent metal enrichment as well as very ancient metal enrichment. The SLLSs predominantly probe recent enrichment, but a non-negligible fraction may also have more pristine, IGM-like metallicities (see Table~\ref{t-metavg}). Naively, if the interpretation that low-metallicity pLLSs and LLSs mostly probe infalling gaseous streams or clouds is correct, then the gas at the interface between galaxies and the diffuse IGM would be infall-dominated at $2.3<z<3.3$.
However, at these redshifts, the median metallicity of the pLLSs and LLSs is ${[\rm X/H]}=-2.1$, and hence a large proportion of the pLLSs and LLSs have metallicities overlapping with those of the DLAs (Table~\ref{t-metavg} and see Figs.~\ref{f-metvsnh1} and \ref{f-metvsz}). At $z<1$, only the high metallicity branch overlaps with the DLA MDF \citepalias{wotta16}; the mean metallicity of the DLAs at $z<1$ is $\langle{[\rm X/H]}\rangle \simeq -0.5$, very similar to that of the pLLSs/LLSs in the high metallicity branch. The mean metallicities of the DLAs and pLLSs/LLSs at $2.3<z<3.3$ are, however, much closer than at low redshift (a factor of 4 compared to a factor of 20). In view of the overlap of metallicities between pLLSs/LLSs and DLAs at high $z$, a better approach to separating potential metal-poor cold accretion candidates from other processes at all $z$ is to consider the fraction of VMP pLLSs/LLSs, which we define as absorbers with metallicities $2\sigma$ below the mean metallicity of the DLAs in any given redshift interval. At $z<1$, that threshold is ${[\rm X/H]}_{\rm VMP} \le -1.40$; at $2.3<z<3.3$, it is ${[\rm X/H]}_{\rm VMP} \le -2.40$; and at $3.2 \le z\le 4.4$, it is ${[\rm X/H]}_{\rm VMP} \le -2.65$. At $2.3<z<3.3$, the proportion of VMP pLLSs/LLSs is 25--41\% in our sample (see Table~\ref{t-metavg}). Similar numbers in the same redshift interval are found for the HD-LLS survey (31\% for the LLSs, 21\% for the SLLSs; see Table~\ref{t-metavg}). At $z<1$, \citetalias{wotta16} derive that 28--44\% of the pLLSs are VMP. Using the recent sample of very strong LLSs at $3.2 \le z\le 4.4$ from \citet{glidden16} ($\log N_{\rm HI}\ge 17.8$, except for 2 systems), we calculate that the fraction of VMP strong LLSs is 18--34\% (sample size of 31 as we exclude the two SLLSs, similar to the present KODIAQ Z sample).
Since many of these absorbers overlap with the SLLS regime, if we include only systems with $\log N_{\rm HI}\le 19.2$ from the \citeauthor{glidden16} sample, then the fraction of VMP strong LLSs would be 30--51\% (sample size 20).\footnote{At $3.2 \le z\le 4.4$, with a smaller sample probing extremely strong LLSs ($17.8 \la \log N_{\rm HI} \la 19.5$) and an indirect method, \citet{cooper15} also found that 28\%--40\% of the LLS population could trace VMP gas.} All these intervals are at the 68\% confidence level. While in the future we will improve the confidence intervals and refine these fractions over smaller redshift bins, it is striking that the proportion of VMP pLLSs and LLSs does not evolve much with redshift (although we emphasize that the $N_{\rm HI}$\ values sampled in the $3.2 \le z\le 4.4$ interval are considerably higher than in our sample). The average metallicities of the VMP pLLSs/LLSs increase with decreasing redshift, but their fractions remain about the same over 12 billion years.\footnote{We also note that the total hydrogen column densities or scale-lengths of the VMP pLLSs and LLSs evolve in the same way as for the more metal-rich pLLSs and LLSs, i.e., on average $N_{\rm H}$ is 10 times larger at $2.3<z<3.3$ than at $z<1$, and there is no obvious difference between the VMP pLLSs/LLSs and the rest of the sample.} These VMP pLLSs and LLSs have metallicities that are consistent with the IGM metallicities in each redshift interval (although at $z<1$, the metallicity of the IGM is unknown as a result of the limited sensitivity of the space-based UV observations). Hence these VMP pLLSs/LLSs appear to be the reservoirs of metal-poor gas at the interface between galaxies and the IGM, which appear to remain constant over cosmic time and which may feed galaxies with metal-poor gas to continue to form stars over billions of years. These VMP pLLSs/LLSs are also good candidates for cold flow accretion as seen in cosmological simulations (see \S\ref{s-compsim}).
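The quoted ranges (e.g., 25--41\%) are 68\% confidence intervals on binomial fractions estimated from small samples. A minimal sketch of one standard way to compute such an interval (the Wilson score interval with $z=1$, i.e., $\approx 68\%$ coverage); the count of 10 VMP absorbers out of 31 is our illustrative assumption, chosen because it reproduces the quoted range, and is not a number stated in the text:

```python
import math

def wilson_interval(k, n, z=1.0):
    """Wilson score interval for a binomial fraction k/n (z=1 gives ~68% coverage)."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical count: 10 VMP absorbers out of the 31-absorber sample.
lo, hi = wilson_interval(10, 31)
print(f"VMP fraction: {100 * lo:.0f}--{100 * hi:.0f}%")  # roughly 25--41%
```

The interval is asymmetric about $k/n$ and stays within $[0,1]$, which matters for the small counts involved here.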
\subsection{The fraction of pristine gas at $2.3<z<3.3$} We found two pLLSs/LLSs with no detected metals (see Appendix) that might be reminiscent of the pristine LLSs that were discovered at $z=3.4$ and 3.1, down to a limit ${[\rm X/H]}< -4.2$ and $< -3.8$ \citep{fumagalli11b}. Unfortunately, Si$\;${\small\rm III}\relax\ is contaminated for each of these cases, and hence we cannot place a stringent constraint on their metallicities. For example, the conservative limit on the LLS at $z = 3.08204$ toward J025905+001121 is ${[\rm X/H]} <-2.7$ (and $\log U\ge -3.6$); if instead we adopt the mean $\langle \log U\rangle = -2.4$ derived in our sample, then ${[\rm X/H]} < -4.1$ (see Appendix), a limit similar to those found by \citet{fumagalli11b}. To better understand the level of mixing of metals in the gas probed by pLLSs and LLSs in the early universe, we will need a much larger sample to reliably determine the frequency of pristine gas at $2<z<4.5$ in the interface regions between galaxies and the LYAF. With our sample, we determine that the fraction of pLLSs/LLSs with ${[\rm X/H]} \le -3$ is $3$--$18\%$ (2/31) at $2.3<z<3.3$ (68\% confidence interval), consistent with the \citetalias{fumagalli16} results for stronger LLSs. This fraction includes the lowest metallicity absorbers in our sample that have metals detected. If we push down to metallicities ${[\rm X/H]} \le -4$ to exclude any pLLS or LLS with some metals detected, that fraction becomes $\le 3\%$ (68\% confidence interval), implying that pristine pLLSs/LLSs at $2.3<z<3.3$ are rare. As noted by \citet{crighton16} \citep[see also][]{cooke11a}, the extremely metal-poor LLSs (${[\rm X/H]} \sim -3.5$ at $z\sim 3$) with detected metal absorption may provide a new path to study the Pop III/Pop II metal-enrichment transition. The use of both the low metallicity and C/$\alpha$ ratio indeed provides a strong method to find metal pollution at the transition from Pop III to Pop II star formation.
In our sample of 31 pLLSs/LLSs, we have found one such absorber (corresponding to a proportion of $3$--$18\%$) with ${[\rm X/H]} \simeq -3.35$ and ${\rm [C/\alpha]} = -0.2$, both consistent with a Pop III origin. \subsection{Super metal-rich gas at $z\sim 2.5$} On the other end of the metallicity spectrum, we have also discovered a supersolar pLLS ($\log N_{\rm HI} \simeq 16.2$) at $z = 2.48778$ toward J172409+531405. This absorber is extraordinary on several levels. It has a metallicity of $\sim 1.6 Z_{\sun}$ at a redshift $z\sim 2.5$. This is the only pLLS with a detection of O$\;${\small\rm I}\relax, which is remarkable for such a low $N_{\rm HI}$\ absorber. The physical scale ($l\simeq 0.35$ pc), density ($n_{\rm H} \simeq 0.2$ cm$^{-3}$), and temperature ($T\simeq 6000$ K) are all extremely atypical for a pLLS at any $z$. The non-detection of Fe$\;${\small\rm II}\relax\ implies an $\alpha$/Fe enhancement (or possibly some dust depletion of Fe relative to Si). This pLLS is detected in several ions and transitions, so its properties are well-constrained. It is a multiphase absorber since the C$\;${\small\rm IV}\relax\ and singly-ionized species have very different velocity profiles (see Appendix). This is clearly an outlier in our sample (1/31 or $1$--$8\%$ at the 68\% confidence interval). Its properties (in particular its high metallicity and multiphase nature) suggest that it directly probes an active outflow from a proto-galaxy at $z\simeq 2.5$. As our KODIAQ Z survey grows, we will more robustly determine the frequency and properties of both metal-rich and pristine pLLSs and LLSs at $2 < z < 4$. \subsection{C/$\alpha$ in pLLSs and LLSs over cosmic time} The combined sample of pLLSs and LLSs at $2.3<z<3.3$ and $z<1$ shows that the scatter in C/$\alpha$ with metallicity is very large at any $z$ and that C/$\alpha$ does not follow the trend observed in stars or DLAs (see Fig.~\ref{f-calpha} and Table~\ref{t-calpha}).
Stated in another way, about half the sample of pLLSs and LLSs has an enhanced C/$\alpha$ ratio in the metallicity range $-2 \la {[\rm X/H]} \la -0.5$ compared to Galactic halo stars and DLAs, while the other half follows more closely the C/$\alpha$ patterns seen in Galactic metal-poor stars or DLAs. The enhanced C/$\alpha$ ratio in the metallicity range $-2 \la {[\rm X/H]} \la -0.5$ implies that this gas must have been polluted by preferential ejection of C from low metallicity galaxies. A recent study in fact shows that at least some local metal-poor dwarf galaxies also have enhanced C/$\alpha$ over similar metallicities \citep{berg16}. While their C/$\alpha$ ratios are not as high as observed for the pLLSs and LLSs and their sample is small (12 galaxies), the absence of a clear trend between [C/$\alpha]$ and $[\alpha$/H] is similar to that observed in pLLSs and LLSs. On the other hand, in the IGM (probed by the LYAF) at $z\sim 2.1$--$3.6$, using the pixel optical depth analysis of C$\;${\small\rm IV}\relax, O$\;${\small\rm VI}\relax, and Si$\;${\small\rm IV}\relax, low C/$\alpha$ ratios were derived: $[{\rm C/Si}] = -0.77 \pm 0.05 $ and $[{\rm C/O}] = -0.66 \pm 0.06 $ \citep{schaye03,aguirre04,aguirre08}. As discussed in \citet{aguirre04}, they only use the C$\;${\small\rm IV}\relax/Si$\;${\small\rm IV}\relax\ and O$\;${\small\rm VI}\relax/Si$\;${\small\rm IV}\relax\ ratios to determine C/Si and O/Si, respectively, which is dependent on the assumed ionizing background (and on whether collisional ionization processes take place). While such low values are found for some of the pLLSs and LLSs (see Fig.~\ref{f-calpha}), our results imply a very large scatter in C/$\alpha$ that does not depend on the redshift or the metallicity. It would seem likely that this should also happen in the IGM.
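Since all abundance ratios here are quoted as logarithmic (dex) offsets relative to solar, linear enhancement factors and dex offsets are interchangeable via powers of ten; e.g., the factor of $\ga 2$--5 carbon enhancement quoted earlier corresponds to $\ga 0.3$--$0.7$ dex. A minimal sketch of this bookkeeping (function names are ours):

```python
import math

def dex_to_factor(dex):
    """Convert a logarithmic [X/Y] offset in dex to a linear abundance factor."""
    return 10.0 ** dex

def factor_to_dex(factor):
    """Convert a linear abundance factor to a dex offset."""
    return math.log10(factor)

# The 0.3--0.7 dex carbon enhancement corresponds to linear factors of ~2--5.
factors = [dex_to_factor(d) for d in (0.3, 0.7)]
```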
\subsection{O$\;${\small\rm VI}\relax\ associated with pLLSs and LLSs} \label{s-ovi} Although we focus throughout on the metallicity of the cool gas of the pLLSs and LLSs, some of the surveys described above have also revealed that O$\;${\small\rm VI}\relax\ absorption with overlapping velocities with H$\;${\small\rm I}\relax\ is found at any $z$ \citep{lehner13,lehner14,fox13}. When O$\;${\small\rm VI}\relax\ is detected, these pLLSs and LLSs typically have multiple gas phases, as evidenced by the presence of low ions (e.g., C$\;${\small\rm II}\relax, Si$\;${\small\rm II}\relax, Si$\;${\small\rm III}\relax) and O$\;${\small\rm VI}\relax\ (or other high ions) that often have very different kinematics and cannot be explained by a single photoionization model (e.g., \citealt{lehner09,lehner13,crighton13,fox13}). At $z<1$, among the 23 pLLSs/LLSs with O$\;${\small\rm VI}\relax\ coverage, only 6 have no O$\;${\small\rm VI}\relax\ absorption, and hence the detection rate of O$\;${\small\rm VI}\relax\ absorption associated with pLLSs/LLSs is about 70\% and even higher (75--91\%) if only sensitive limits on $N_{\rm OVI}$\ are considered \citep{fox13}. At $2.3<z<3.6$, a similar number is found with the KODIAQ survey \citep{lehner14}. While there is a high frequency of O$\;${\small\rm VI}\relax\ absorption associated with pLLSs/LLSs at both high and low $z$, the similarities in the highly ionized gas properties between the high and low $z$ pLLS/LLS samples end there. The KODIAQ survey shows that for H$\;${\small\rm I}\relax-selected absorbers at $z\sim 2$--3.5 with $\log N_{\rm HI} \ga 16$, the O$\;${\small\rm VI}\relax\ absorption typically has total column densities $14.2 \la \log N_{\rm OVI} \la 15.5$ and full-widths $150 \la \Delta v_{\rm OVI} \la 500$ ${\rm km\,s}^{-1}$\ (\citealt{lehner14,burns14,lehner16a}; N. Lehner, J.C. Howk, J. O'Meara et al. 2016, in prep., and see also Fig.~\ref{f-example2} and Appendix).
More than half of the KODIAQ sample has $\log N_{\rm OVI} \ga 14.4$ and $\Delta v_{\rm OVI} \ga 300$ ${\rm km\,s}^{-1}$. The breadth and strength of the O$\;${\small\rm VI}\relax\ absorption in strong H$\;${\small\rm I}\relax\ absorbers at $z\sim 2$--3.5 are quite similar to those observed in starburst galaxies at low redshift \citep[see, e.g.,][]{grimes09,tripp11,muzahid15} and remarkably different from those of the O$\;${\small\rm VI}\relax\ absorption in the IGM at similar redshifts (typically $13.2 \la \log N_{\rm OVI} \la 14.4$ and $20 \la \Delta v_{\rm OVI} \la 100$ ${\rm km\,s}^{-1}$, see \citealt{simcoe02,muzahid12}). This strongly suggests that the bulk of the strong and broad O$\;${\small\rm VI}\relax\ associated with pLLSs and LLSs traces large-scale outflows from high-redshift star-forming galaxies. In contrast, at $z<1$, O$\;${\small\rm VI}\relax\ absorption in the pLLS sample typically has $50 \la \Delta v_{\rm OVI} \la 150$ ${\rm km\,s}^{-1}$\ and $13.8 \la \log N_{\rm OVI} \la 15$ \citep{fox13}. There is overlap between the low and high $z$ surveys, but broad and strong O$\;${\small\rm VI}\relax\ absorption associated with LLSs and pLLSs at $z<1$ is the exception, not the norm. Only two strong H$\;${\small\rm I}\relax\ absorbers with broad ($\Delta v \ga 300$ ${\rm km\,s}^{-1}$) and strong O$\;${\small\rm VI}\relax\ absorption at $z<1$ have been reported so far, both associated with a large-scale outflow from a massive star-forming galaxy \citep{tripp11,fox13,muzahid15}. Therefore, randomly H$\;${\small\rm I}\relax-selected pLLSs and LLSs at $z<1$ and $2.3<z<3.3$ show a dramatic change not only in the MDF of their cool gas but also in the properties of the associated highly ionized gas. It is likely that the difference in frequency of strong and broad O$\;${\small\rm VI}\relax\ between the low and high $z$ pLLS/LLS surveys reflects the fact that low-$z$ galaxies are much more quiescent than their high-redshift counterparts.
The weaker O$\;${\small\rm VI}\relax\ absorbers associated with pLLSs/LLSs at both low and high $z$ likely have, however, a wider range of origins; according to simulations, these may include outflows, inflows, and the ambient CGM \citep[e.g.,][]{shen13,ford14}. \subsection{pLLSs and LLSs in cosmological simulations} \label{s-compsim} With the first study that extends into the pLLS and low column density LLS regime with $16.2\la \log N_{\rm HI} \la 17.5$ at high $z$, we provide new stringent empirical results to test cosmological hydrodynamical simulations. In particular, we demonstrate that there is a strong evolution of the metallicity of the pLLSs/LLSs with $z$, but also a remarkably constant fraction of VMP pLLSs/LLSs over cosmic time. For a large proportion of the pLLSs/LLSs at $z<1$ and $2.3<z<3.3$, C/$\alpha$ also does not follow the typical trend observed in metal-poor Galactic stars or high-redshift DLAs (see Fig.~\ref{f-calpha} and Table~\ref{t-calpha}). As shown by \citet{bird14}, the simultaneous knowledge of the DLA MDF and column density function can provide strong constraints on the feedback model in cosmological simulations. The same applies to the pLLSs and LLSs, for which the evolution of the MDF with $z$ starts to be constrained (and more refinement and improvement will come in the near future) and their column density function is also constrained over cosmic time \citep[e.g.,][]{lehner07,omeara07,prochaska10,ribaudo11,fumagalli13}. Simulations have already shown that pLLSs/LLSs may be used to trace cold flows \citep{faucher-giguere11,faucher-giguere15, fumagalli11a, fumagalli14,vandevoort12a, vandevoort12b, hafen16}. Simulated pLLSs and LLSs at $z\sim 2$--3 and $z<1$ appear, however, to have too many metals (see also discussion in \citetalias{fumagalli16}).
Only in simulations with very mild stellar feedback \citep{fumagalli11b} is there some agreement between the observed and simulated metallicity distributions; in this simulation, cold streams are traced mostly by LLSs within 1 or 2 virial radii of galaxies where the gas has only been enriched to ${[\rm X/H]}\simeq -1.8$ with similar scatter to that observed at high or low $z$. However, while mild feedback produces better agreement with the observed MDF at $z\sim 2$--3, the disagreement with the baryon fraction in stars worsens \citep{fumagalli11b}. The zoom-in Eris2 simulations by \citet{shen13} include much stronger galactic outflows (but possibly more realistic at these redshifts, see \citealt{lehner14}) and show that cold flows are metal-poor, but with a median value $-1.2$ dex, much higher than observed. \citet{vandevoort12a} similarly show that cold mode accretion is generally metal-poor with ${[\rm X/H]}\sim -1.5$ for any halo mass at $0.8\,R_{\rm vir}$, and only for $R>R_{\rm vir}$ does the metallicity of the cold mode accretion go below $-2$ dex. The FIRE zoom-in simulations have also recently been used to study the physical nature of the pLLSs and LLSs at $z<1$ \citep{hafen16}. These simulations confirm the general interpretation of the bimodal metallicity distribution observed at $z<1$: very low metallicity LLSs are predominantly associated with inflows at $z<1$, but higher metallicity LLSs trace gas with roughly equal probability of having a recycled-outflow (inflow) or outflow origin. However, the simulated metallicity distribution is not bimodal and has a metallicity plateau between about $-1.3$ and $-0.5$ dex at $z<1$. Furthermore, while very low metallicity pLLSs and LLSs are prevalent in the observations, they are not in the FIRE simulations, showing again that the gas is typically too metal-rich in simulations.
Nevertheless, despite some disagreements between the simulations and the observations, there is a consensus in the simulations that a large fraction of the metal-poor LLSs and pLLSs should probe cold flow accretion onto galaxies. Future simulations with the goal of studying absorbers such as the pLLSs and LLSs (such as in \citealt{hafen16}) that include advanced radiative transfer techniques (crucial for correctly predicting the pLLS/LLS properties) and varying feedback prescriptions will help guide the interpretation of these observational results, and in turn these observational results should help refine the sub-grid simulation physics and feedback prescriptions. \section{Summary}\label{s-sum} We have undertaken a study of the properties of the gas probed by pLLSs and LLSs at $2.3<z<3.3$ and the evolution of their properties over cosmic time. Here we present the first results from our KODIAQ Z survey with which we have assembled the first sizable sample of H$\;${\small\rm I}\relax-selected pLLSs and LLSs at $2.3<z<3.3$ with $16.2 \le \log N_{\rm HI} \le 18.4$ (most with $16.2 \le \log N_{\rm HI} \le 17.8$) and for which we have determined the metallicity of each absorber. This sample of 31 absorbers therefore probes gas at the transition in $N_{\rm HI}$\ between the LYAF ($\log N_{\rm HI} \la 16$) and stronger LLSs ($\log N_{\rm HI} \ga 18.5$). It provides a direct comparison sample with the $z<1$ sample of \citetalias{lehner13} and \citetalias{wotta16} and complements other samples of typically stronger LLSs at similar and higher redshifts (\citetalias{fumagalli16}; \citealt{cooper15,glidden16}). To derive the metallicity we have used Cloudy simulations assuming a single gas-phase model following the methodology of our earlier work at low redshift \citepalias{lehner13}. In particular, we have used the same ionizing background (HM05) to avoid introducing additional systematics in our comparison between low and high redshift absorbers.
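In this approach the metallicity follows from the ratio of an observed metal-ion column density to $N_{\rm HI}$, corrected for ionization with the Cloudy model; schematically (a standard relation, spelled out here for completeness rather than quoted from the text),
$$
{[\rm X/H]} = \log \frac{N_{{\rm X}_i}}{N_{\rm HI}} - \log \left(\frac{\rm X}{\rm H}\right)_{\sun} + \log \frac{x_{\rm HI}}{x_{{\rm X}_i}},
$$
where $N_{{\rm X}_i}$ is the column density of the observed ion and $x_{\rm HI}$ and $x_{{\rm X}_i}$ are the ionization fractions of H$\;${\small\rm I}\relax\ and of that ion predicted by the Cloudy model at the adopted $\log U$.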
As in \citetalias{lehner13}, we only model the absorption seen in the metals that is associated with the pLLS or LLS H$\;${\small\rm I}\relax\ absorption, i.e., the metallicity is determined by comparing estimated column densities of metal ions and H$\;${\small\rm I}\relax\ in the strongest H$\;${\small\rm I}\relax\ component (not over the entire velocity profile where metal-line absorption may be observed). Our main results are as follows. \begin{enumerate} \item Typically, the ions Si$\;${\small\rm II}\relax, Si$\;${\small\rm III}\relax, Si$\;${\small\rm IV}\relax, C$\;${\small\rm II}\relax, C$\;${\small\rm III}\relax, C$\;${\small\rm IV}\relax\ associated with the pLLSs or LLSs at $2.3<z<3.3$ are satisfactorily modeled with ionization models with $\langle \log U \rangle \simeq -2.4$ (with a dispersion of 0.6 dex), which imply temperatures of $(1$--$4) \times 10^4$ K. Based on these Cloudy models, about half of the sample has a physical scale $l<10$ kpc and the other half $17<l<200$ kpc (see Table~\ref{t-cloudyavg}). \item We empirically establish that the metallicity distribution of the pLLSs and LLSs at $2.3<z<3.3$ is unimodal, peaking at $\langle {[\rm X/H]} \rangle = -2.00 \pm 0.17$ (error on the mean from the survival analysis) with a standard deviation of $\pm 0.84$ dex. The mean and distribution are quite similar to those derived for the stronger LLSs ($17.5 \le \log N_{\rm HI} \le 18.5$) from the HD-LLS survey over the same redshifts. On the other hand, the mean metallicities of the SLLSs ($19 \le \log N_{\rm HI} < 20.3$) and DLAs ($\log N_{\rm HI} \ge 20.3$) at $2.3<z<3.3$ are higher, $-1.71$ and $-1.39$ dex, respectively (the dispersion of the metallicities for the DLAs is also a factor of 2 smaller). For the LYAF ($\log N_{\rm HI} \la 15.5$), the mean metallicity is significantly lower at similar redshifts, $\langle {[\rm X/H]} \rangle = -2.85$ (with a similar dispersion).
The mean metallicity of the gas at $2.3<z<3.3$ therefore increases with increasing $N_{\rm HI}$\ (with a possible exception for the pLLSs, although a larger sample will be needed to robustly determine this). \item There is a substantial fraction ($25$--$41\%$) of VMP pLLSs and LLSs with metallicities $2\sigma$ below the mean metallicity of the DLAs (i.e., ${[\rm X/H]} \la -2.4$ at $2.3<z<3.3$). These VMP pLLSs and LLSs are good candidates for metal-poor cold gas feeding galaxies as seen in cosmological simulations. \item At $2.3<z<3.3$, we determine that the fraction of pLLSs and LLSs with ${[\rm X/H]} \le -3$, i.e., at the Pop III remnant level, is $3$--$18\%$ (68\% confidence interval). The lowest metallicity LLS in our sample (${[\rm X/H]} \simeq -3.35$) has some metals detected with ${\rm [C/\alpha]\simeq -0.2}$, consistent with Pop III enrichment. There is no strong evidence ($\la 3\%$ at the 68\% confidence interval) in this sample for a pristine pLLS or LLS (i.e., with no metal absorption) with ${[\rm X/H]} \le -4$. \item About half the sample of the pLLSs and LLSs at $2.3<z<3.3$ and $z<1$ has C/$\alpha$ ratios similar to those derived for MW stars and SLLSs/DLAs with similar metallicities over the entire probed metallicity interval ($-3 \la {[\rm X/H]} \la +0.5$). The other half has enhanced C/$\alpha$ ratios (near-solar values) in the metallicity range $-2 \la {[\rm X/H]} \la -0.5$, implying that this gas must have been polluted by preferential ejection of C from low metallicity galaxies. \item The comparison of the pLLSs and LLSs at $2.3<z<3.3$ and $z\la 1$, which were selected using the same criteria and analyzed using the same procedures, shows that some of their properties have not evolved strongly with $z$. The absence of a trend between C/$\alpha$ and metallicity for the pLLSs and LLSs is observed at both high and low $z$.
At overlapping metallicities, similar scatter and range of values are observed in C/$\alpha$ at high and low $z$. We show that the fraction of VMP pLLSs/LLSs is 20--47\% (68\% confidence interval) over the redshift interval from $z<1$ to $z\sim 4$; i.e., over the last 12 billion years the fraction of VMP pLLSs and LLSs appears to remain relatively constant. The hydrogen densities of the pLLSs and LLSs are also similar at both low and high $z$. \item On the other hand, several properties of the pLLSs and LLSs have evolved strongly with $z$. The MDF of the pLLSs and LLSs evolves markedly with $z$, changing from a unimodal distribution at $2.3<z<3.3$ that peaks at ${[\rm X/H]} \simeq -2.0$ to a bimodal distribution at $z\la 1$ with peaks at ${[\rm X/H]} \simeq -1.8$ and $-0.3$. In contrast, the MDF of the DLAs over the same redshift intervals stays unimodal with only an increase of the mean metallicity with decreasing $z$. The ionization parameters, linear scales, and total hydrogen column densities are a factor of $\sim 10$ larger on average at $2.3<z<3.3$ than at $z<1$. \end{enumerate} These first results from the KODIAQ Z survey already put some strong empirical constraints on the dense ionized gas probed by absorbers with $16 \la \log N_{\rm HI} \la 18.5$ and their evolution over 12 billion years of cosmic time, before and after the peak of cosmic star formation. However, our sample is still too small to robustly determine whether the pLLS and LLS populations at $z>2$ probe similar or widely different physical structures. At $z\la 1$, by doubling the initial sample of pLLSs and LLSs in \citetalias{lehner13}, \citetalias{wotta16} have demonstrated that the MDF of the pLLSs is bimodal, but likely transitions to a unimodal distribution in the LLS regime.
Our ongoing KODIAQ Z survey at $z\ga 2$ and COS Legacy survey at $z<1$ will yield much larger samples of pLLSs, LLSs, as well as absorbers with $15 \la \log N_{\rm HI} \la 16$ at both high and low $z$, which will provide new stringent constraints on the properties of the diffuse and dense ionized gas at $0 \la z \la 4$. \section*{Acknowledgements} Support for this research was provided in part by NASA through the Astrophysics Data Analysis Program (ADAP) grant NNX16AF52G. MF acknowledges support by the Science and Technology Facilities Council through grant ST/L00075X/1. Part of this manuscript was written at the winter 2016 retreat hosted by IMPS of the UC Santa Cruz Department of Astronomy. We thank the Esalen Institute for its great setting and wonderful hospitality during that retreat. All the data presented in this work were obtained from the Keck Observatory Database of Ionized Absorbers toward QSOs (KODIAQ), which was funded through NASA ADAP grant NNX10AE84G. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. \input{ms.bbl} \input{tab1.tex} \input{tab2.tex} \input{tab3.tex} \input{tab4.tex} \input{tab5.tex}
\section{Introduction} \label{sect:intro} Quantum gauge field theories are most successfully described perturbatively, expanding around the free quantum field theory. In fact, at present a non-perturbative formulation seems to be far beyond reach, so the perturbative expansion is the only approach available. On the one hand, many rigorous results can be obtained \cite{BBH94,BBH95} using cohomological arguments within the context of the BRST-formalism \cite{BRS74,BRS75,BRS76,Tyu75}. On the other hand, renormalization of perturbative quantum field theories has been carefully structured using Hopf algebras \cite{Kre98,CK99,CK00}. The presence of a gauge symmetry induces a rich additional structure on these Hopf algebras, as has been explored in \cite{Kre05,KY06,BKUY08} and in the author's own work \cite{Sui07,Sui07c,Sui08}. All of this work is based on the algebraic transparency of BPHZ-renormalization, with the Hopf algebra reflecting the recursive nature of this procedure. In this article we study more closely the relation between the renormalization Hopf algebras and the BRST-symmetries for gauge theories. We work in the explicit case of quantum chromodynamics (QCD), a Yang--Mills gauge theory with gauge group $\SU(3)$ that describes the strong interaction between quarks and gluons. We will shortly describe this in a little more detail, as well as the appearance of BRST-symmetries. After describing the renormalization Hopf algebra for QCD, we study its structure in Section \ref{sect:hopf}. The link between this Hopf algebra and the BRST-symmetries acting on the fields is established in Section \ref{sect:coaction}. \section*{Acknowledgements} The author would like to thank the participants of the workshop ``DIAMANT meets GQT'' at the Lorentz Center in Leiden. \section{Quantum chromodynamics} \label{sect:ym} In order to keep the discussion in this article as explicit as possible, we will work in the setting of quantum chromodynamics (QCD).
This is an example of a Yang--Mills gauge theory, as introduced originally in \cite{YM54}. It is the physical theory that successfully describes the so-called strong interaction between quarks and gluons. Let us make more precise how these particles can be described mathematically, at least in a perturbative approach. One of the basic principles in the dictionary between the (elementary particle) physicists' and mathematicians' terminology is that \begin{center} ``particles are representations of a Lie group.'' \end{center} In the case of quantum chromodynamics, this Lie group -- generally called the gauge group -- is $\SU(3)$. In fact, the {\it quark} is a $\C^3$-valued function $\psi = (\psi_i)$ on spacetime $M$. This `fiber' $\C^3$ at each point of spacetime is the defining representation of $\SU(3)$. Thus, there is an action on $\psi$ of an $\SU(3)$-valued function on $M$; let us write this function as $U$, so that $U(x) \in \SU(3)$. In physics, the three components of $\psi$ correspond to the so-called {\it color} of the quark, typically indicated by red, green and blue. The {\it gluon}, on the other hand, is described by an $\su(3)$-valued one-form on $M$, that is, a section of $\Lambda^1 (\su(3)) \equiv \Lambda^1 \otimes (M \times \su(3))$. We have in components $$ A=A_\mu dx^\mu = A_\mu^a dx^\mu T^a $$ where the $\{ T^a \}_{a=1}^8$ form a basis for $\su(3)$. The structure constants $\{ f^{ab}_c \}$ of $\g$ are defined by $[T^a, T^b]=f^{ab}_c T^c$ and the normalization is such that $\tr(T^a T^b) = \delta^{ab}$. It is useful to think of $A$ as a connection one-form (albeit on the trivial bundle $M \times \SU(3)$). The group $\SU(3)$ acts on the second component $\su(3)$ in the adjoint representation. Again, this is pointwise on $M$, leading to an action of $U=U(x)$ on $A$. 
In both cases, that is, for quarks and gluons, the transformations \begin{equation} \psi_i \mapsto U_{ij} \psi_j, \qquad A_\mu \mapsto g^{-1} U^{-1} \partial_\mu U + U^{-1} A_\mu U \end{equation} are called {\it gauge transformations}. The constant $g$ is the so-called {\it strong coupling constant}. As in mathematics, also in physics one is after {\it invariants}; in this case, one looks for functions -- or, rather, functionals -- of the quark and gluon fields that are invariant under a local ({i.e.} $x$-dependent) action of $\SU(3)$. We are interested in the following action functional: \begin{equation} \label{eq:ym} S(A,\psi) = \frac{1}{8 \pi} \int_M F_{\mu\nu}^a F^{\mu\nu}_a + \bar\psi_i (i \gamma^\mu \partial_\mu + \gamma^\mu A_{\mu}^a T^a_{ij} + m ) \psi_j \end{equation} with $F \equiv F(A):= d A + g A^2$ the curvature of $A$; it is an $\su(3)$-valued 2-form on $M$. Before checking that this is indeed invariant under $\SU(3)$, let us explain the notation in the last term. The $\gamma^\mu$ are the Dirac matrices, and satisfy $$ \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu = -2 \delta^{\mu\nu}. $$ Clearly, this relation cannot be satisfied by complex numbers (which are never anti-commuting). In fact, the representation theory of the algebra with the above relation ({i.e.} the Clifford algebra) is quite rich. The idea is that the fields $\psi$ are not only $\C^3$-valued, but that actually each of the components $\psi_i$ is itself a 4-vector, called a {\it spinor}. This is so as to accommodate a representation of the Clifford algebra: in 4 spacetime dimensions the Dirac matrices are 4-dimensional (although in general this dimension is $2^{[n/2]}$ for $n$ spacetime dimensions). Besides this matrix multiplication, the partial derivative $\partial_\mu$ acts componentwise, as does the {\it mass} $m$, which is really just a real number. Finally, for our purposes it is sufficient to think of $\bar\psi$ as the (componentwise) complex conjugate of $\psi$.
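The Clifford relation can be exhibited concretely in four dimensions. The following sketch builds one possible choice of Dirac matrices out of Pauli matrices; the construction (a Euclidean, anti-hermitian representation obtained by multiplying a hermitian set by $i$) is only one of many equivalent representations, chosen to match the sign convention above.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# A hermitian set Gamma^mu with Gamma^mu Gamma^nu + Gamma^nu Gamma^mu
# = +2 delta^{mu nu}; multiplying by i flips the sign, giving gamma^mu with
# gamma^mu gamma^nu + gamma^nu gamma^mu = -2 delta^{mu nu}, as in the text.
Gammas = [np.kron(s1, s1), np.kron(s1, s2), np.kron(s1, s3), np.kron(s3, id2)]
gammas = [1j * G for G in Gammas]

for mu, gm in enumerate(gammas):
    for nu, gn in enumerate(gammas):
        anti = gm @ gn + gn @ gm
        assert np.allclose(anti, -2 * (mu == nu) * np.eye(4))
```

The checks confirm that the matrices are $4$-dimensional, consistent with the dimension count $2^{[n/2]}$ for $n=4$.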
The typical Grassmannian nature of these fermionic fields is only present in the current setup through the {\it Grassmann degrees} of $+1$ and $-1$ that are assigned to $\psi$ and $\bar\psi$, respectively. Introducing the notation $\dirac = \gamma^\mu\partial_\mu$ and $\Aslash = \gamma^\mu A_\mu$, we can write $$ S(A,\psi) = - \big\langle F(A) , F(A) \big\rangle + \big\langle \psi,( i \dirac + \Aslash +m )\psi \big\rangle, $$ in terms of appropriate inner products. Essentially, these are combinations of spinorial and Lie algebra traces and the inner product on differential forms. For more details, refer to the lectures by Faddeev in \cite{DelEA99}. The key observation is that the $\SU(3)$-valued functions $U(x)$ act by unitaries with respect to this inner product. \subsection{Ghost fields and BRST-quantization} In a path integral quantization of the field theory defined by the above action, one faces the following problem. Since gauge transformations are supposed to act as symmetries on the theory, the gauge degrees of freedom are irrelevant to the final physical outcome. Thus, in one way or another, one has to quotient by the group of gauge transformations. However, gauge transformations are $\SU(3)$-valued functions on $M$, yielding an infinite-dimensional group. In order to deal with this infinite redundancy, Faddeev and Popov used the following trick. They introduced so-called {\it ghost fields}, denoted by $\omega$ and $\bar\omega$. In the case of quantum chromodynamics, these are $\su(3)[-1]$ and $\su(3)[1]$-valued functions on $M$, respectively. The shift $[-1]$ and $[+1]$ is to denote that $\omega$ and $\bar\omega$ have {\it ghost degree} $1$ and $-1$, respectively. Consequently, they have {\it Grassmann degree} $1$ and $-1$, respectively. In components, we write $$ \omega = \omega^a T^a; \qquad \bar\omega = \bar\omega^a T^a.
$$ Finally, an auxiliary field $h$ -- also known as the Nakanishi--Lautrup field -- is introduced; it is an $\su(3)$-valued function (in ghost degree 0) and we write $h = h^a T^a$. The dynamics of the ghost fields and their interaction with the gauge field are described by the rather complicated additional term: $$ S_{\gh}(A,\omega,\bar\omega,h) = - \big\langle A, dh \big\rangle + \big \langle d \bar\omega, d \omega \big\rangle + \frac{1}{2} \xi \big \langle h ,h \big\rangle + g \big \langle d \bar\omega, [A,\omega] \big\rangle, $$ where $\xi \in \R$ is the so-called {\it gauge parameter}. The essential point about the ghost fields is that, in a path integral formulation of quantum gauge field theories, their introduction miraculously takes care of the {\it fixing of the gauge}, {i.e.} picking a point in each orbit in the space of fields under the action of the group of gauge transformations. The ghost fields are the ingredients in the BRST-formulation that was developed later by Becchi, Rouet, Stora and independently by Tyutin in \cite{BRS74,BRS75,BRS76,Tyu75}. Let us briefly describe this formalism. Because the gauge has been fixed by adding the term $S_{\gh}$, the combination $S + S_{\gh}$ is no longer invariant under the gauge transformations. This is of course precisely the point. Nevertheless, $S+ S_{\gh}$ possesses another symmetry, which is the BRST-symmetry. It acts on function(al)s in the fields as a ghost degree $1$ derivation $s$, which is defined on the generators by \begin{gather} \label{brst} s A = d \omega +g [A,\omega],\qquad s \omega = \frac{1}{2} g [\omega,\omega],\qquad s \bar\omega = h\\ \nn s h =0, \qquad s \psi = g \omega \psi , \qquad s \bar\psi = g \bar\psi \omega. \end{gather} Indeed, one can check (e.g., see \cite[Sect. 15.7]{Wei96} for details) that $s(S + S_{\gh})=0$.
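To illustrate how these rules combine, one can verify nilpotency of $s$ on the gauge field directly, using that $s$ is a graded derivation which anticommutes with $d$, together with the graded Jacobi identity:
\begin{align*}
s^2 A &= s\big(d\omega + g[A,\omega]\big) = -d(s\omega) + g[sA,\omega] - g[A,s\omega]\\
&= -\tfrac{1}{2}g\, d[\omega,\omega] + g[d\omega,\omega] + g^2[[A,\omega],\omega] - \tfrac{1}{2}g^2[A,[\omega,\omega]] = 0,
\end{align*}
since $d[\omega,\omega] = 2[d\omega,\omega]$ and, by the graded Jacobi identity, $[[A,\omega],\omega] = \tfrac{1}{2}[A,[\omega,\omega]]$. Similar computations give $s^2=0$ on the remaining generators.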
The form degree and Grassmann degree of the fields are combined in the {\it total degree} and summarized in the following table: \begin{center} \begin{tabular}{|l|r|r|r|r|r|r|} \hline & $A$ & $\omega$ & $\bar\omega$ & $h$& $\psi$ &$\bar\psi$ \\ \hline Grassmann degree &0 &$+1$ &$-1$ &0 &$+1$ &$-1$ \\ \hline form degree &$+1$ &$0$ &$0$ &0 &0 &0 \\ \hline total degree &$+1$ &$+1$ &$-1$ &0 &$+1$ &$-1$ \\ \hline \end{tabular} \end{center} The fields generate an algebra, the algebra of local forms $\Loc(\Phi)$. With respect to the above degrees, it decomposes as before into $\Loc^{(p,q)}(\Phi)$ with $p$ the form degree and $q$ the Grassmann degree. The total degree is then $p+q$ and $\Loc(\Phi)$ is a graded Lie algebra by setting $$ [ X, Y ] = X Y - (-1)^{\deg(X)\deg(Y)} Y X, $$ with the grading given by this total degree. Note that the present graded Lie bracket is of degree $0$ with respect to the total degree, that is, $\deg([X,Y]) = \deg(X) + \deg(Y)$. It satisfies graded skew-symmetry, the graded Leibniz identity and the graded Jacobi identity: \begin{align*} &[X,Y] = - (-1)^{\deg(X)\deg(Y)} [Y,X], \\ & [XY,Z] = X [ Y,Z] + (-1)^{\deg(Y)\deg(Z)} [X,Z]Y.\\ &(-1)^{\deg(X)\deg(Z)} [ [ X,Y],Z] + (\hbox{cyclic perm.}) = 0, \end{align*} \begin{lma} The BRST-differential, together with the above bracket, gives $\Loc(\Phi)$ the structure of a graded differential Lie algebra. 
\end{lma} Moreover, the BRST-differential $s$ and the exterior derivative $d$ form a double complex, that is, $d \circ s + s \circ d=0$ and $$ \xymatrix{ &\vdots & \vdots & \vdots &\\ &\Loc^{(0,1)} \ar[u]_s \ar[r]_d &\Loc^{(1,1)} \ar[u]_s \ar[r]_d &\Loc^{(2,1)}\ar[u]_s \ar[r]_d & \cdots\\ &\Loc^{(0,0)} \ar[u]_s \ar[r]_d &\Loc^{(1,0)} \ar[u]_s \ar[r]_d &\Loc^{(2,0)}\ar[u]_s \ar[r]_d & \cdots\\ &\Loc^{(0,-1)} \ar[u]_s \ar[r]_d &\Loc^{(1,-1)} \ar[u]_s \ar[r]_d &\Loc^{(2,-1)}\ar[u]_s \ar[r]_d & \cdots\\ & \vdots \ar[u]_s & \vdots \ar[u]_s & \vdots \ar[u]_s & } $$ This double complex has a quite interesting structure in itself, and was the subject of study in \cite{BBH94,BBH95}, with further applications to renormalization and to the description of anomalies. \section{Renormalization Hopf algebra for QCD} \label{sect:hopf} As we discussed previously, quantum chromodynamics describes the interaction between quarks and gluons. In order to do this successfully at a quantum level, it was necessary to introduce ghost fields. We will now describe how the dynamics of these fields and the interactions between them naturally give rise to Feynman graphs. These constitute a Hopf algebra which encodes the procedure of renormalization in QCD. We will describe this Hopf algebra, and study its structure in terms of the so-called Green's functions. \subsection{Hopf algebra of Feynman graphs} First of all, the quark, ghost and gluon fields are supposed to {\it propagate}; this we will denote by straight, dotted and curly lines, or {\it edges}, as follows: \begin{align*} e_1 =~ \parbox{25pt}{ \begin{fmfgraph}(20,10) \fmfleft{l} \fmflabel{}{l} \fmfright{r} \fmf{plain}{l,r} \end{fmfgraph} } \qquad e_2 = \parbox{25pt}{ \begin{fmfgraph}(20,10) \fmfleft{l} \fmflabel{}{l} \fmfright{r} \fmf{dots}{l,r} \end{fmfgraph} } \qquad e_3 =~ \parbox{25pt}{ \begin{fmfgraph}(20,10) \fmfleft{l} \fmflabel{}{l} \fmfright{r} \fmf{gluon}{l,r} \end{fmfgraph} }.
\end{align*} The interactions between the fields then naturally appear as {\it vertices}, connecting the edges corresponding to the interacting fields. The allowed interactions in QCD are the following four: \begin{align*} v_1 =~\parbox{35pt}{ \begin{fmfchar}(25,25) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{r1,v} \fmf{plain}{v,r2} \fmfdot{v} \end{fmfchar}}, \qquad v_2 =~\parbox{35pt}{ \begin{fmfgraph}(25,25) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{dots}{r1,v} \fmf{dots}{v,r2} \fmfdot{v} \end{fmfgraph}}, \qquad v_3 =~\parbox{35pt}{ \begin{fmfgraph}(25,25) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{gluon}{r1,v} \fmf{gluon}{v,r2} \fmfdot{v} \end{fmfgraph}}, \qquad v_4 =~ \parbox{35pt}{ \begin{fmfchar}(25,25) \fmfleft{l1,l2} \fmfright{r1,r2} \fmf{gluon}{l1,v} \fmf{gluon}{l2,v} \fmf{gluon}{r1,v} \fmf{gluon}{v,r2} \fmfdot{v} \end{fmfchar}}. \end{align*} In addition, since the quark is supposed to have a mass, there is a {\it mass term}, which we depict as a vertex of valence two: $$ v_5 =~\parbox{25pt}{ \begin{fmfgraph}(25,10) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v} \fmf{plain}{v,r} \fmfdot{v} \end{fmfgraph}}~. $$ We can make the relation between these edges (vertices) and the propagation (interaction) more precise through the definition of a map $\iota$ that assigns to each of the above edges and vertices a (monomial) functional in the fields. In fact, the assignment $e_i \mapsto \iota(e_i)$ and $v_j \mapsto \iota(v_j)$ is \begin{center} \begin{figure}[h!] 
\label{fig:setR} \begin{tabular}{|l|ccc|ccccc|} \hline & $e_1$ & $e_2$ & $e_3$ & $v_1$ & $v_2$ & $v_3$ & $v_4$ & $v_5$\\[2mm] edge/vertex &\parbox{15pt}{ \begin{fmfchar}(15,10) \fmfleft{l} \fmflabel{}{l} \fmfright{r} \fmf{plain}{l,r} \end{fmfchar}} & \parbox{20pt}{ \begin{fmfchar}(15,10) \fmfleft{l} \fmflabel{}{l} \fmfright{r} \fmf{dots}{l,r} \end{fmfchar} } & \parbox{20pt}{ \begin{fmfchar}(15,10) \fmfleft{l} \fmflabel{}{l} \fmfright{r} \fmf{gluon}{l,r} \end{fmfchar} } & \parbox{15pt}{ \begin{fmfchar}(15,10) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{r1,v} \fmf{plain}{v,r2} \end{fmfchar}} & \parbox{15pt}{ \begin{fmfgraph}(15,10) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{dots}{r1,v} \fmf{dots}{v,r2} \end{fmfgraph}} &\parbox{15pt}{ \begin{fmfgraph}(15,10) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{gluon}{r1,v} \fmf{gluon}{v,r2} \end{fmfgraph}} & \parbox{15pt}{ \begin{fmfchar}(15,10) \fmfleft{l1,l2} \fmfright{r1,r2} \fmf{gluon}{l1,v} \fmf{gluon}{l2,v} \fmf{gluon}{r1,v} \fmf{gluon}{v,r2} \end{fmfchar}} & \parbox{15pt}{ \begin{fmfgraph}(15,15) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v} \fmf{plain}{v,r} \fmfdot{v} \end{fmfgraph}}\\[3mm] \hline \hline monomial $\iota$ &$i \bar\psi \dirac \psi$&$d \bar\omega d \omega$&$dA dA $&$\psi \Aslash \psi$ & $\bar\omega [A,\omega]$ &$2 dA A^2$ & $A^4$ & $m \bar\psi \psi$\\ \hline \end{tabular} \caption{QCD edges and vertices, and (schematically) the corresponding monomials in the fields.} \end{figure} \end{center} \begin{rem} We have not assigned an edge to the field $h$; this is because it does not interact with any of the other fields. Its only -- still crucial -- effect is on the propagator of the gluon, through the terms $-\langle A, dh \rangle$ and $\half \xi\langle h,h\rangle$. \end{rem} A {\it Feynman graph} is a graph built from these vertices and edges. Naturally, we demand edges to be connected to vertices in a compatible way, respecting their straight, dotted or curly character. 
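The compatibility requirement can be phrased very concretely: each vertex of a Feynman graph must carry one of the five allowed multisets of line types from the table above. A minimal sketch (the encoding and names are ours, purely for illustration; quark-line orientation is ignored):

```python
from collections import Counter

# Allowed QCD vertices, as multisets of the line types attached to them
# (labels v1..v5 follow the table in the text):
VERTEX_TYPES = {
    'v1': Counter({'gluon': 1, 'quark': 2}),   # quark-gluon vertex
    'v2': Counter({'gluon': 1, 'ghost': 2}),   # ghost-gluon vertex
    'v3': Counter({'gluon': 3}),               # triple gluon vertex
    'v4': Counter({'gluon': 4}),               # quartic gluon vertex
    'v5': Counter({'quark': 2}),               # quark mass vertex (valence two)
}

def classify_vertex(attached):
    """Return the allowed vertex type for a multiset of attached line types,
    or raise ValueError if the combination is not an allowed interaction."""
    sig = Counter(attached)
    for name, allowed in VERTEX_TYPES.items():
        if sig == allowed:
            return name
    raise ValueError(f"not an allowed QCD vertex: {sorted(attached)}")

print(classify_vertex(['quark', 'gluon', 'quark']))  # v1
```

In this encoding, a graph built from such vertices, with edges joining half-lines of matching type, is exactly a Feynman graph in the sense just defined.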
As opposed to the usual definition in graph theory, Feynman graphs have no external vertices. However, they do have {\it external lines} which come from vertices in $\Gamma$ for which some of the attached lines remain vacant ({i.e.} no edge attached). If a Feynman graph $\Gamma$ has two external quark (straight) lines, we would like to distinguish between the propagator and the mass term. Mathematically, this is due to the presence of the vertex of valence two: we have to indicate whether a graph with two external lines corresponds to such a vertex, or to an edge. A graph $\Gamma$ with two external lines is dressed by a bullet when it corresponds to a vertex, {i.e.} we write $\Gamma_\bullet$. The above correspondence between Feynman graphs and vertices/edges is given by the {\it residue} $\res(\Gamma)$. It is defined for a general graph as the vertex or edge it corresponds to after collapsing all its internal points. For example, we have: \begin{gather*} \res\left( \parbox{40pt}{ \begin{fmfgraph*}(40,30) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{v,v1,r1} \fmf{plain}{v,v2,r2} \fmffreeze \fmf{gluon}{v1,loop,v2} \fmffreeze \fmfv{decor.shape=circle, decor.filled=0, decor.size=2thick}{loop} \end{fmfgraph*} }\right) = \parbox{20pt}{ \begin{fmfgraph*}(20,20) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{v,r1} \fmf{plain}{v,r2} \end{fmfgraph*} } \qquad \text{ and }\qquad \res\left( \parbox{40pt}{ \begin{fmfgraph*}(40,30) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v2,v5,v6,r} \fmf{gluon,right,tension=0}{v5,v1} \fmf{gluon,right,tension=0}{v2,v6} \end{fmfgraph*} }\right) = \parbox{20pt}{ \begin{fmfgraph*}(20,20) \fmfleft{l} \fmfright{r} \fmf{plain}{l,r} \end{fmfgraph*}} \intertext{ but } \res\left( \parbox{40pt}{ \begin{fmfgraph*}(40,30) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v2,v5,v6,r} \fmf{gluon,right,tension=0}{v5,v1} \fmf{gluon,right,tension=0}{v2,v6}
\end{fmfgraph*}}{}_\bullet \right) = \parbox{20pt}{ \begin{fmfgraph*}(20,20) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v,r} \fmfdot{v} \end{fmfgraph*}}~. \end{gather*} For the definition of the Hopf algebra of Feynman graphs, we restrict to {\it one-particle irreducible} (1PI) Feynman graphs. These are graphs that are not trees and cannot be disconnected by cutting a single internal edge. \begin{defn}[Connes--Kreimer \cite{CK99}] The Hopf algebra $H_{\CK}$ of Feynman graphs is the free commutative algebra over $\C$ generated by all 1PI Feynman graphs with residue in $R= R_V \cup R_E$, with counit $\epsilon(\Gamma)=0$ unless $\Gamma=\emptyset$, in which case $\epsilon(\emptyset)=1$, and coproduct \begin{align*} \Delta (\Gamma) = \Gamma \otimes 1 + 1 \otimes \Gamma + \sum_{\gamma \subsetneq \Gamma} \gamma \otimes \Gamma/\gamma, \end{align*} where the sum is over disjoint unions of 1PI subgraphs with residue in $R$. The quotient $\Gamma/\gamma$ is defined to be the graph $\Gamma$ with the connected components of the subgraph contracted to the corresponding vertex/edge. If a connected component $\gamma'$ of $\gamma$ has two external lines, then there are possibly two contributions corresponding to the valence two vertex and the edge; the sum involves the two terms $\gamma'_\bullet \otimes \Gamma/(\gamma' \to \bullet)$ and $\gamma' \otimes \Gamma/\gamma'$. The antipode is given recursively by \begin{equation} \label{antipode} S(\Gamma) = - \Gamma - \sum_{\gamma \subsetneq \Gamma} S(\gamma) \Gamma/\gamma. 
\end{equation} \end{defn} Two examples of this coproduct are:\\[2mm] \begin{align*} \Delta( \parbox{35pt}{\begin{fmfgraph*}(35,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v2,v3,v4,r} \fmf{gluon,right,tension=0}{v4,v1} \fmf{gluon,right,tension=0}{v3,v2} \end{fmfgraph*}} ) &= \parbox{35pt}{\begin{fmfgraph*}(35,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v2,v3,v4,r} \fmf{gluon,right,tension=0}{v4,v1} \fmf{gluon,right,tension=0}{v3,v2} \end{fmfgraph*}} \otimes 1 + 1 \otimes \parbox{35pt}{\begin{fmfgraph*}(35,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v2,v3,v4,r} \fmf{gluon,right,tension=0}{v4,v1} \fmf{gluon,right,tension=0}{v3,v2} \end{fmfgraph*}} + \parbox{30pt}{ \begin{fmfgraph*}(30,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v2,v3,r} \fmf{gluon,right,tension=0}{v3,v2} \end{fmfgraph*}} \otimes \parbox{30pt}{ \begin{fmfgraph*}(30,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v4,r} \fmf{gluon,right,tension=0}{v4,v1} \end{fmfgraph*}} + \parbox{30pt}{ \begin{fmfgraph*}(30,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v2,v3,r} \fmf{gluon,right,tension=0}{v3,v2} \end{fmfgraph*}} { }_\bullet \otimes \parbox{30pt}{ \begin{fmfgraph*}(30,11) \fmfleft{l} \fmfright{r} \fmf{plain}{l,v1,v3,v4,r} \fmfdot{v3} \fmf{gluon,right,tension=0}{v4,v1} \end{fmfgraph*}}~, \\[3mm] \Delta( \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r} \fmf{phantom}{l,v1,v2,r} \fmf{gluon}{l,v1} \fmf{gluon}{v2,r} \fmf{phantom,left,tension=0,tag=1}{v1,v2} \fmf{phantom,right,tension=0,tag=2}{v1,v2} \fmffreeze \fmfi{plain}{subpath (0,.8) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0,.8) of vpath2(__v1,__v2)} \fmfi{plain}{subpath (0.8,1.2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0.8,1.2) of vpath2(__v1,__v2)} \fmfi{gluon}{point .8 of vpath1(__v1,__v2) .. point .8 of vpath2(__v1,__v2)} \fmfi{gluon}{point 1.2 of vpath1(__v1,__v2) .. 
point 1.2 of vpath2(__v1,__v2)} \fmfi{plain}{subpath (1.2,2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (1.2,2) of vpath2(__v1,__v2)} \end{fmfgraph*}} ) &= \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r} \fmf{phantom}{l,v1,v2,r} \fmf{gluon}{l,v1} \fmf{gluon}{v2,r} \fmf{phantom,left,tension=0,tag=1}{v1,v2} \fmf{phantom,right,tension=0,tag=2}{v1,v2} \fmffreeze \fmfi{plain}{subpath (0,.8) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0,.8) of vpath2(__v1,__v2)} \fmfi{plain}{subpath (0.8,1.2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0.8,1.2) of vpath2(__v1,__v2)} \fmfi{gluon}{point .8 of vpath1(__v1,__v2) .. point .8 of vpath2(__v1,__v2)} \fmfi{gluon}{point 1.2 of vpath1(__v1,__v2) .. point 1.2 of vpath2(__v1,__v2)} \fmfi{plain}{subpath (1.2,2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (1.2,2) of vpath2(__v1,__v2)} \end{fmfgraph*}} \otimes 1 + 1 \otimes \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r} \fmf{phantom}{l,v1,v2,r} \fmf{gluon}{l,v1} \fmf{gluon}{v2,r} \fmf{phantom,left,tension=0,tag=1}{v1,v2} \fmf{phantom,right,tension=0,tag=2}{v1,v2} \fmffreeze \fmfi{plain}{subpath (0,.8) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0,.8) of vpath2(__v1,__v2)} \fmfi{plain}{subpath (0.8,1.2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0.8,1.2) of vpath2(__v1,__v2)} \fmfi{gluon}{point .8 of vpath1(__v1,__v2) .. point .8 of vpath2(__v1,__v2)} \fmfi{gluon}{point 1.2 of vpath1(__v1,__v2) .. 
point 1.2 of vpath2(__v1,__v2)} \fmfi{plain}{subpath (1.2,2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (1.2,2) of vpath2(__v1,__v2)} \end{fmfgraph*}} +2~ \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{v,v1,v3,r1} \fmf{plain}{v,v2,v4,r2} \fmffreeze \fmf{gluon}{v1,v2} \fmf{gluon}{v3,v4} \end{fmfgraph*}} \otimes \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r} \fmf{phantom}{l,v1,v2,r} \fmf{gluon}{l,v1} \fmf{gluon}{v2,r} \fmf{plain,left,tension=0}{v1,v2} \fmf{plain,left,tension=0}{v2,v1} \end{fmfgraph*}}\\ & \qquad +2~ \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{v,v1,r1} \fmf{plain}{v,v2,r2} \fmffreeze \fmf{gluon}{v1,v2} \end{fmfgraph*}} \otimes \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r} \fmf{phantom}{l,v1,v2,r} \fmf{gluon}{l,v1} \fmf{gluon}{v2,r} \fmf{phantom,left,tension=0,tag=1}{v1,v2} \fmf{phantom,right,tension=0,tag=2}{v1,v2} \fmffreeze \fmfi{plain}{subpath (0,1) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (0,1) of vpath2(__v1,__v2)} \fmfi{gluon}{point 1 of vpath1(__v1,__v2) .. point 1 of vpath2(__v1,__v2)} \fmfi{plain}{subpath (1,2) of vpath1(__v1,__v2)} \fmfi{plain}{subpath (1,2) of vpath2(__v1,__v2)} \end{fmfgraph*} } + \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{v,v1,r1} \fmf{plain}{v,v2,r2} \fmffreeze \fmf{gluon}{v1,v2} \end{fmfgraph*}} \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r1,r2} \fmf{gluon}{l,v} \fmf{plain}{v,v1,r1} \fmf{plain}{v,v2,r2} \fmffreeze \fmf{gluon}{v1,v2} \end{fmfgraph*}} \otimes \parbox{35pt}{ \begin{fmfgraph*}(35,35) \fmfleft{l} \fmfright{r} \fmf{phantom}{l,v1,v2,r} \fmf{gluon}{l,v1} \fmf{gluon}{v2,r} \fmf{plain,left,tension=0}{v1,v2} \fmf{plain,left,tension=0}{v2,v1} \end{fmfgraph*}}~. 
\end{align*} \bigskip The above Hopf algebra is an example of a connected graded Hopf algebra: it is graded by the {\it loop number $L(\Gamma)$} of a graph $\Gamma$. Indeed, one checks that the coproduct (and obviously also the product) is compatible with the grading by loop number, and $H^0_{\CK}$ consists of complex multiples of the empty graph, which is the unit in $H_{\CK}$, so that $H^0_{\CK}=\C 1$. We denote by $q_l$ the projection of $H_{\CK}$ onto $H^l_{\CK}$. In addition, there is another grading on this Hopf algebra. It is given by the number of vertices and already appeared in \cite{CK99}. However, since we consider vertices and edges of different types (straight, dotted and curly), we extend it to a multigrading as follows. For each vertex $v_j$ ($j=1,\ldots,5$) we define a degree $d_j$ as $$ d_j ( \Gamma) = \# \text{vertices } v_j \text{ in } \Gamma - \delta_{v_j, \res(\Gamma)}. $$ The multidegree indexed by $j=1,\ldots,5$ is compatible with the Hopf algebra structure, since contracting a subgraph $\Gamma \mapsto \Gamma/\gamma$ creates a new vertex. With this one easily arrives at the following relation: \begin{align*} d_j (\Gamma/\gamma) = d_j (\Gamma) - d_j (\gamma). \end{align*} Moreover, $d_j (\Gamma \Gamma') = d_j (\Gamma)+ d_j (\Gamma')$, giving a decomposition as vector spaces: $$ H_{\CK}= \bigoplus_{(n_1,\ldots,n_5) \in \Z^5} H^{n_1,\ldots,n_5}_{\CK}. $$ We denote by $p_{n_1,\ldots,n_5}$ the projection onto $H^{n_1,\ldots,n_5}_{\CK}$. Note that also $H^{0,\ldots, 0}_{\CK}=\C 1$. \begin{lma} \label{lma:rel-degrees} There is the following relation between the grading by loop number and the multigrading by number of vertices: $$ \sum_{j=1}^5 (N(v_j)-2)d_j = 2 L, $$ where $N(v_j)$ is the valence of the vertex $v_j$. \end{lma} \begin{proof} This can easily be proved by induction on the number of internal edges, using invariance of the quantity $\sum_{j} (N(v_j)-2)d_j - 2 L$ under adjoining an edge.
\end{proof} The group $\Hom_\C(H_{\CK},\C)$ dual to $H_{\CK}$ is called the {\it group of diffeographisms} (for QCD). This name was coined in general in \cite{CK00}, motivated by its relation with the group of (formal) diffeomorphisms of $\C$ (see Section \ref{sect:coaction} below). Stated more precisely, they constructed a map from the group of diffeographisms to the group of formal diffeomorphisms. We have established this result in general ({i.e.} for any quantum field theory) in \cite{Sui08}. Below, we will make a similar statement for Yang--Mills gauge theories. \subsection{Birkhoff decomposition} We now briefly recall how renormalization is an instance of a Birkhoff decomposition in the group of characters of $H$, as established in \cite{CK99}. Let us first recall the definition of a Birkhoff decomposition. We let $\gamma: C \to G$ be a loop with values in an arbitrary complex Lie group $G$, defined on a smooth simple curve $C \subset \P_1(\C)$. Let $C_\pm$ denote the two connected components of the complement of $C$ in $\P_1(\C)$, with $\infty \in C_-$. A {\it Birkhoff decomposition} of $\gamma$ is a factorization of the form $$ \gamma(z) = \gamma_-(z)^{-1} \gamma_+(z); \qquad (z \in C), $$ where $\gamma_\pm$ are (boundary values of) two holomorphic maps on $C_\pm$, respectively, with values in $G$. This decomposition gives a natural way to extract finite values from a divergent expression. Indeed, although $\gamma(z)$ might not extend holomorphically to $C_+$, the map $\gamma_+(z)$ does, and is in particular finite as $z\to 0$. Now consider a Feynman graph $\Gamma$ in the Hopf algebra $H_{\CK}$. Via the so-called Feynman rules -- which are dictated by the Lagrangian of the theory -- one associates to $\Gamma$ the Feynman amplitude $U(\Gamma)(z)$. It depends on some regularization parameter, which in the present case is a complex number $z$ (dimensional regularization). The famous divergences of quantum field theory are now `under control' and appear as poles in the Laurent series expansion of $U(\Gamma)(z)$.
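These pole parts can be removed recursively by minimal subtraction, which in simple cases realizes the factorization $\gamma = \gamma_-^{-1}\gamma_+$ quite explicitly. The toy sketch below (invented Laurent coefficients, not actual QCD amplitudes) implements the standard Bogoliubov preparation underlying the Connes--Kreimer decomposition, $C(\Gamma) = -T\big[U(\Gamma) + \sum_\gamma C(\gamma)\, U(\Gamma/\gamma)\big]$ with $T$ the projection onto the pole part, for a two-loop graph $\Gamma$ with a single one-loop subdivergence $\gamma$:

```python
# Laurent polynomials in the regularization parameter z, as {power: coeff}.
def lmul(a, b):
    out = {}
    for m, ca in a.items():
        for n, cb in b.items():
            out[m + n] = out.get(m + n, 0) + ca * cb
    return {k: v for k, v in out.items() if v}

def ladd(a, b):
    out = dict(a)
    for n, c in b.items():
        out[n] = out.get(n, 0) + c
    return {k: v for k, v in out.items() if v}

def T(a):
    """Minimal subtraction: project onto the pole part."""
    return {n: c for n, c in a.items() if n < 0}

def neg(a):
    return {n: -c for n, c in a.items()}

# Toy amplitudes (coefficients are invented, for illustration only):
U_gamma = {-1: 1.0}                      # one-loop subgraph gamma
U_quot  = {-1: 1.0, 0: 0.5}              # the quotient Gamma/gamma
U_Gamma = {-2: 1.0, -1: 0.25, 0: 2.0}    # the full two-loop graph Gamma

# Counterterm of the (primitive) subgraph, then Bogoliubov's preparation:
C_gamma = neg(T(U_gamma))
R_bar   = ladd(U_Gamma, lmul(C_gamma, U_quot))
C_Gamma = neg(T(R_bar))                  # overall counterterm: gamma_-
R_Gamma = ladd(R_bar, C_Gamma)           # renormalized value:  gamma_+

assert T(R_Gamma) == {}                  # all poles cancel
```

Here `C_Gamma` plays the role of the counterterm $\gamma_-(\Gamma)$ and `R_Gamma` that of the renormalized amplitude $\gamma_+(\Gamma)$, finite as $z\to 0$.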
On a curve around $0 \in \P^1(\C)$ we can define a loop $\gamma$ by $\gamma(z)(\Gamma):=U(\Gamma)(z)$ which takes values in the group of diffeographisms $G=\Hom_\C(H_{\CK},\C)$. Connes and Kreimer proved the following general result in \cite{CK99}. \begin{thm} Let $H$ be a graded connected commutative Hopf algebra with character group $G$. Then any loop $\gamma: C \to G$ admits a Birkhoff decomposition. \end{thm} In fact, an explicit decomposition can be given in terms of the group $G(K)= \Hom_\C(H,K)$ of $K$-valued characters of $H$, where $K$ is the field of convergent Laurent series in $z$.\footnote{In the language of algebraic geometry, there is an affine group scheme $G$ represented by $H$ in the category of commutative algebras. In other words, $G=\Hom_\C(H,~ \cdot ~)$ and $G(K)$ are the $K$-points of the group scheme. } If one applies this to the above loop associated to the Feynman rules, the decomposition gives exactly renormalization of the Feynman amplitude $U(\Gamma)$: the map $\gamma_+$ gives the renormalized Feynman amplitude and the $\gamma_-$ provides the counterterm. \bigskip Although the above construction gives a very nice geometrical description of the process of renormalization, it is a bit unphysical in that it relies on individual graphs that generate the Hopf algebra. Rather, in physics the probability amplitudes are computed from the full expansion of Green's functions. Individual graphs do not correspond to physical processes and therefore a natural question to pose is how the Hopf algebra structure behaves at the level of the Green's functions. We will see in the next section that they generate Hopf subalgebras, {i.e.} the coproduct closes on Green's functions. Here the so-called Slavnov--Taylor identities for the couplings will play a prominent role. \subsection{Structure of the Hopf algebra} In this subsection, we study the structure of the above Hopf algebra of QCD Feynman graphs. 
In fact, from a dual point of view, the group of diffeographisms turns out to be related to the group of formal diffeomorphisms of $\C^5$. Moreover, we will establish the existence of Hopf ideals, which correspond on the group level to subgroups. We define the {\it 1PI Green's functions} by \begin{equation} \label{green} G^{e_i} = 1 - \sum_{\res(\Gamma)=e_i} \frac{\Gamma}{\Sym(\Gamma)},\qquad G^{v_j} = 1 + \sum_{\res(\Gamma)=v_j} \frac{\Gamma}{\Sym(\Gamma)} \end{equation} with $i=1,2,3$ and $j=1,\ldots,5$. The restriction of the sum to graphs $\Gamma$ at loop order $L(\Gamma)=l$ is denoted by $G^r_l$, with $r \in \{ e_i, v_j\}_{i,j}$. \begin{rem} Let us explain the meaning of the inverse of Green's functions in our Hopf algebra. Since any Green's function $G^r$ starts with the identity, we can surely write its inverse formally as a geometric series. Recall that the Hopf algebra is graded by loop number. Hence, the inverse of a Green's function at a fixed loop order is in fact well-defined; it is given by restricting the above formal series expansion to this loop order. More generally, we understand any real power of a Green's function in this manner. \end{rem} We state without proof the following result of \cite{Sui08}. \begin{prop} \label{prop:cop-green} The coproduct takes the following form on (real powers of) the Green's functions: \begin{align*} \Delta \big( (G^{e_i} )^\alpha \big) &= \sum_{n_1, \ldots, n_5 } (G^{e_i})^\alpha Y_{v_1}^{n_1} \cdots Y_{v_5}^{n_5} \otimes p_n ((G^{e_i} )^\alpha),\\ \nonumber \Delta ( (G^{v_j} )^\alpha ) &= \sum_{n_1, \ldots, n_5 } (G^{v_j})^\alpha Y_{v_1}^{n_1 } \cdots Y_{v_5}^{n_5} \otimes p_n ((G^{v_j})^\alpha), \end{align*} with $\alpha \in \R$. Consequently, the algebra $H$ generated by the Green's functions (in each vertex multidegree) $G^{e_i}$ ($i=1,2,3$) and $G^{v_j}$ $(j=1,\ldots,5$) is a Hopf subalgebra of $H_{\CK}$. 
\end{prop} Denote by $N_k(r)$ the number of edges $e_k$ attached to $r \in \{ e_i,v_j \}_{i,j}$; clearly, the total number of lines attached to $r$ can be written as $N(r)=\sum_{i=1,2,3} N_i(r)$. With this notation, define for each vertex $v_j$ an element in $H$ by the formal expansion: $$ \label{Yv} Y_{v_j} := \frac{G^{v_j}}{\prod_{i=1,2,3} \left(G^{e_i}\right)^{N_i(v_j)/2} }. $$ We remark that alternative generators for the Hopf algebra $H$ are $G^{e_i}$ and $Y_{v_j}$, a fact that we will need later on. \begin{corl} \label{corl:cop-Yv} The coproduct on the elements $Y_v$ is given by $$ \Delta(Y_{v_j}) = \sum_{n_1, \ldots, n_5} Y_{v_j} Y_{v_1}^{n_1} \cdots Y_{v_5}^{n_5} \otimes p_{n_1 \cdots n_5} (Y_{v_j}), $$ where $p_{n_1 \cdots n_5}$ is the projection onto graphs containing $n_k$ vertices $v_k$ ($k=1,\ldots,5$). \end{corl} \begin{proof} This follows directly by an application of the formulas in Proposition \ref{prop:cop-green} to $\Delta(Y_{v_j}) = \Delta(G^{v_j}) \prod_{i=1,2,3} \Delta((G^{e_i})^{-N_i(v_j)/2})$. \end{proof} Quite remarkably, this formula coincides with the coproduct in the Hopf algebra dual to the group $\Diff(\C^5,0)$ of formal diffeomorphisms tangent to the identity in $5$ variables, closely related to the Fa\`a di Bruno Hopf algebra (cf. for instance the short review \cite{FGV05}). In other words, the Hopf subalgebra generated by $p_{n_1,\ldots,n_5} (Y_{v_j})$ is dual to (a subgroup of) the group $\Diff(\C^5,0)$. This will be further explored in Section \ref{sect:coaction} below. \begin{corl}{\cite{Sui07c}} \label{thm:hopfideal} The ideal $J$ in $H$ generated by $q_l\left(Y_{v_k}^{N(v_j)-2} - Y_{v_j}^{N(v_k)-2} \right)$ for any $l\geq0$ and $j,k =1,\ldots,4$ is a Hopf ideal, {i.e.} $$ \Delta(J) \subset J \otimes H + H \otimes J. $$ \end{corl} \begin{proof} Fix two integers $j$ and $k$ between 1 and 4.
Applying the formulas in Proposition \ref{prop:cop-green} to the coproduct on $Y_{v_k}^{N(v_j)-2}$ yields $$ \Delta\left( Y_{v_k}^{N(v_j)-2} \right) = \sum_{n_1, \ldots, n_5} Y_{v_k}^{N(v_j)-2} Y_{v_1}^{n_1} \cdots Y_{v_5}^{n_5} \otimes p_{n_1 \cdots n_5} (Y_{v_k}^{N(v_j)-2}). $$ Now, modulo elements in $J$, we can write $$ Y_{v_2}^{n_2} = Y_{v_1}^{n_2 \frac{N(v_2)-2}{N(v_1)-2}}, $$ and similarly for $v_3$ and $v_4$, so that $$ Y_{v_1}^{n_1} \cdots Y_{v_5}^{n_5} = \left(Y_{v_1}^{1/(N(v_1)-2)} \right)^{\sum_k n_k (N(v_k)-2)} = \left(Y_{v_1}^{1/(N(v_1)-2)}\right)^{2l} $$ by an application of Lemma \ref{lma:rel-degrees}. Note that this is independent of the $n_i$ but only depends on the total loop number $l$. For the coproduct, this yields $$ \Delta\left( Y_{v_k}^{N(v_j)-2} \right) = \sum_{l=0}^\infty Y_{v_k}^{N(v_j)-2} ~Y_{v_1}^{\frac{2l}{N(v_1)-2}} \otimes q_l (Y_{v_k}^{N(v_j)-2}). $$ Of course, a similar formula holds for the other term defining $J$, upon interchanging $j$ and $k$. For their difference we then obtain \begin{multline*} \Delta\left( Y_{v_k}^{N(v_j)-2} - Y_{v_j}^{N(v_k)-2} \right) = \sum_{l=0}^\infty \left( Y_{v_k}^{N(v_j)-2} - Y_{v_j}^{N(v_k)-2} \right) ~Y_{v_1}^{\frac{2l}{N(v_1)-2}} \otimes q_l (Y_{v_k}^{N(v_j)-2}) \\ + \sum_{l=0}^\infty Y_{v_k}^{N(v_j)-2} ~Y_{v_1}^{\frac{2l}{N(v_1)-2}} \otimes q_l \left( Y_{v_k}^{N(v_j)-2} - Y_{v_j}^{N(v_k)-2} \right). \end{multline*} This is an element in $J\otimes H + H\otimes J$, which completes the proof. \end{proof} \begin{rem} \label{rem:generatorsJ} An equivalent set of generators for $J$ is given by $Y_{v_i} - Y_{v_1}^{N(v_i)-2}$ with $i=2,3,4$. \end{rem} In this Hopf ideal, the reader might have already recognized the Slavnov--Taylor identities for the couplings. Indeed, in the quotient Hopf algebra $H/J$ these identities hold.
Moreover, the character $U : H \to\C$ given by the regularized Feynman rules vanishes on $J$ (this is exactly the statement of the Slavnov--Taylor identities) and thus factorizes over this quotient, provided we work with dimensional regularization or another regularization scheme that preserves gauge symmetry. Now, the Birkhoff decomposition for the group $\Hom_\C(H/J,\C)$ gives the counterterm map $C$ and the renormalized map $R$ as characters on $H/J$. Thus, they also satisfy the Slavnov--Taylor identities and this provides a purely algebraic proof of the compatibility of the Slavnov--Taylor identities for the couplings with renormalization, an essential step in proving renormalizability of gauge theories. Below, we shall give a more conceptual (rather than combinatorial) explanation for the existence of these Hopf ideals, after establishing a connection between $H$ and the fields and coupling constants. \section{Coaction and BRST-symmetries} \label{sect:coaction} The fact that we encountered diffeomorphism groups starting with Feynman graphs is not very surprising from a physical point of view. Indeed, Feynman graphs are closely involved in the running of the coupling constants described by the renormalization group. In the next subsection, we will clarify this point by defining a coaction of the Hopf algebra $H$ on the coupling constants and the fields. Dually, this will lead to an action of the diffeomorphism group. It contains a subgroup that respects the BRST-invariance of the action, which will be related to the Hopf ideal of the previous section. Finally, its relation with the renormalization group is further described. \subsection{Coaction on the coupling constants and fields} In this section, we will establish a connection between the Hopf algebra of Feynman graphs defined above and the fields, coupling constants and masses that characterize the field theory.
This allows for a derivation of the Hopf ideals encountered in the previous section from the so-called master equation satisfied by the Lagrangian. Let us first introduce formal variables $\lambda_1, \lambda_2, \ldots,\lambda_5$, corresponding to the vertices describing the five possible interactions in QCD. Also, we write $\phi_1=A,~ \phi_2=\psi, ~ \phi_3 = \omega$ and $\phi_4=h$ for the fields, in accordance with the labelling of the edges (see Figure \ref{fig:setR} above). We denote by $\F = \Loc(\phi_1, \phi_2, \phi_3, \phi_4)\otimes \C[[\lambda_1,\ldots,\lambda_5]]$ the algebra of local functionals in the fields $\phi_i$ (and their conjugates), extended linearly by formal power series in the $\lambda_j$. Recall that a local functional is an integral of a polynomial in the fields and their derivatives, and the algebra structure is given by multiplication of these integrals. \begin{thm} \label{thm:coaction} The algebra $\F$ is a comodule algebra over the Hopf algebra $H$. The coaction $\rho: \F \to \F \otimes H$ is given on the generators by \begin{align*} \rho : \lambda_j &\longmapsto \sum_{n_1, \ldots, n_5} \lambda_j \lambda_{1}^{n_1 } \cdots \lambda_{5}^{n_5 } \otimes p_{n_1 \cdots n_5} (Y_{v_j}),\qquad (j=1,\ldots, 5);\\ \rho: \phi_i & \longmapsto \sum_{n_1, \ldots, n_5} \phi_i~ \lambda_{1}^{n_1 } \cdots \lambda_{5}^{n_5} \otimes p_{n_1 \cdots n_5} (\sqrt{G^{e_i}}), \qquad (i=1,2,3), \end{align*} while it commutes with partial derivatives on the fields $\phi_i$. \end{thm} \begin{proof} Since we work with graded Hopf algebras, it suffices to establish that $ (\rho \otimes 1)\circ \rho= (1 \otimes \Delta) \circ \rho$. We claim that this follows from coassociativity ({i.e.} $(\Delta \otimes 1) \circ \Delta = (1\otimes \Delta)\circ \Delta$) of the coproduct $\Delta$ of $H$.
Indeed, the first expression very much resembles the form of the coproduct on $Y_j$ as derived in Corollary \ref{corl:cop-Yv}: replacing therein each $Y_{v_k}$ ($k=1, \ldots, 5$) on the first leg of the tensor product by $\lambda_{k}$ and one $\Delta$ by $\rho$ gives the desired result. A similar argument applies to the second expression, using Proposition \ref{prop:cop-green} above. \end{proof} \begin{corl} The Green's functions $G^{v_j} \in H$ can be obtained when coacting on the interaction monomial $\int \lambda_j \iota(v)(x) d\mu(x)= \int \lambda_j \partial_{\vec\mu_1} \phi_{i_1}(x) \cdots \partial_{\vec\mu_N} \phi_{i_N}(x)d \mu(x)$ for some index set $\{ i_1, \ldots, i_N \}$. \end{corl} For example, \begin{align*} \rho \bigg( \lambda_2 \langle d \bar\omega, [A,\omega] \rangle \bigg) &= \sum_{n_1 \cdots n_5} \lambda_2 \lambda_{1}^{n_1} \cdots \lambda_{5}^{n_5}\langle d \bar\omega, [A,\omega] \rangle \otimes p_{n_1 \cdots n_5} \left(Y_\ghoglu \sqrt{G^\glu} G^\gho \right) \\ &=\sum_{n_1, \ldots, n_5} \lambda_2 \lambda_{1}^{n_1} \cdots \lambda_{5}^{n_5}\langle d \bar\omega, [A,\omega] \rangle \otimes p_{n_1 \cdots n_5} \left(G^\ghoglu\right) \end{align*} Actually, the first equation in Theorem \ref{thm:coaction} can be interpreted as an action of a subgroup of formal diffeomorphisms in 5 variables on $\C[[\lambda_{1}, \ldots, \lambda_{5}]]$. Let us make this more precise. Consider the group $\Diff(\C^5, 0)$ of formal diffeomorphisms in 5 dimensions (coordinates $x_1, \ldots, x_5$) that leave the five axis-hyperplanes invariant. In other words, we consider maps $$ f(x) = \big( f_1(x ),\ldots, f_5(x) \big) $$ where each $f_i$ is a formal power series of the form $f_i(x ) = x_i(\sum a^{(i)}_{n_1\cdots n_5}(f) x_1^{n_1} \cdots x_5^{n_5})$ with $a^{(i)}_{0,\ldots,0}=1$ and $x=(x_1, \ldots ,x_5)$. The group multiplication is given by composition, and is conveniently written in a dual manner, in terms of the coordinates. 
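For orientation, it may help to first record the one-variable case: for $f(x) = x(1 + a_1 x + a_2 x^2 + \cdots)$, duality with the composition of such series gives, in lowest orders, $$ \Delta(a_1) = a_1 \otimes 1 + 1 \otimes a_1, \qquad \Delta(a_2) = a_2 \otimes 1 + 2\, a_1 \otimes a_1 + 1 \otimes a_2, $$ which is the Fa\`a di Bruno coproduct; the five-variable formula that follows is its direct analogue.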
In fact, the $a^{(i)}_{n_1 \cdots n_5}$ generate a Hopf algebra with the coproduct expressed as follows. On the formal generating element $A_i(x) = x_i(\sum a^{(i)}_{n_1\cdots n_5} x_1^{n_1} \cdots x_5^{n_5})$: $$ \Delta(A_i(x)) = \sum_{n_1, \ldots, n_5} A_i(x) \left( A_1(x) \right)^{n_1} \cdots \left( A_5(x) \right)^{n_5} \otimes a^{(i)}_{n_1\cdots n_5}. $$ Thus, by mapping the $a^{(j)}_{n_1, \ldots, n_5}$ to $p_{n_1,\ldots,n_5}(Y_{v_j})$ in $H$, we obtain a Hopf algebra map from the Hopf algebra dual to $\Diff(\C^5,0)$ to $H$. Dually, this gives a group homomorphism from $\Hom(H,\C)$ to $\Diff(\C^5,0)$, whose image is a subgroup of $\Diff(\C^5,0)$. In fact, substituting $a^{(j)}_{n_1, \ldots, n_5}$ for $p_{n_1,\ldots,n_5}(Y_{v_j})$ in the first equation of Theorem \ref{thm:coaction} yields (dually) a group action of $\Diff(\C^5,0)$ on $ \C[[\lambda_{1}, \ldots, \lambda_{5}]]$ by $f(a) := (1 \otimes f) \rho(a)$ for $f\in \Diff(\C^5,0)$ and $a \in \C[[\lambda_{1}, \ldots, \lambda_{5}]]$. More precisely, we have the following \begin{prop} \label{prop:action} Let $G'$ be the group consisting of algebra maps $f: \F \to \F$ given on the generators by \begin{align*} f( \lambda_j)&= \sum_{n_1 \cdots n_5} f^{v_j}_{n_1, \ldots, n_5} \lambda_j \lambda_{1}^{n_1 } \cdots \lambda_{5}^{n_5 }; \qquad (j=1,\ldots, 5) ,\\ f ( \phi_i)&= \sum_{n_1 \cdots n_5} f^{e_i}_{n_1, \ldots, n_5} \phi_i \lambda_{1}^{n_1 } \cdots \lambda_{5}^{n_5 }; \qquad (i =1,2,3) , \\ \end{align*} where $f^{v_j}_{n_1 \cdots n_5},f^{e_i}_{n_1 \cdots n_5}\in \C$ are such that $f^{v_j}_{0 \cdots 0} = f^{e_i}_{0\cdots 0} = 1$. Then the following hold: \begin{enumerate} \item The character group $G$ of the Hopf algebra $H$, generated by $p_{n_1\cdots n_5} (Y_{v_j})$ and $p_{n_1\cdots n_5} (\sqrt{G^{e_i}})$ with coproduct given in Proposition \ref{prop:cop-green}, is a subgroup of $G'$. \item The subgroup $N:= \{ f: f(\lambda_j) =\lambda_j, j=1,\ldots, 5 \}$ of $G'$ is normal and isomorphic to $(\C[[\lambda_{1},\ldots,\lambda_{5}]]^\times)^{3}$.
\item $G' \simeq (\C[[\lambda_{1},\ldots,\lambda_{5}]]^\times)^{3} \rtimes \Diff(\C^5,0)$. \end{enumerate} \end{prop} \begin{proof} From Theorem \ref{thm:coaction}, it follows that a character $\chi$ acts on $\F$ as in the above formula upon writing $f^{v_j}_{n_1 \cdots n_5} = \chi( p_{n_1\cdots n_5} (Y_{v_j}) )$ for $j=1,\ldots,5$ and $f^{e_i}_{n_1 \cdots n_5} = \chi( p_{n_1\cdots n_5} (\sqrt{G^{e_i}}))$ for $i=1,2,3$. For (2) one checks by explicit computation that $N$ is indeed normal and that each series $f^{e_i}$ defines an invertible formal power series, that is, an element in $\C[[\lambda_{1}, \ldots, \lambda_{5}]]^\times$. Then (3) follows from the existence of a homomorphism from $G'$ to $\Diff(\C^5,0)$: it is given by restricting an element $f$ to $\C[[\lambda_{1}, \ldots, \lambda_{5}]]$. This is clearly the identity map on $\Diff(\C^5,0)$ when considered as a subgroup of $G'$, and its kernel is precisely $N$. \end{proof} The action of (the subgroup of) $(\C[[\lambda_{1},\ldots,\lambda_{5}]]^\times)^{3} \rtimes \Diff(\C^5,0)$ on $\F$ has a natural physical interpretation: the invertible formal power series act on a field as wave function renormalization whereas the diffeomorphisms act on the coupling constants $\lambda_1,\ldots,\lambda_5$. \subsection{BRST-symmetries} We will now show how the previous coaction of the Hopf algebra $H$ on the algebra $\F$ gives rise to the Hopf ideal $J$ encountered before. For this, we choose a distinguished element in $\F$, namely the action $S$. It is given by \begin{multline} \label{eq:action} S[\phi_i,\lambda_j] = -\big\langle dA , dA \big\rangle - 2 \lambda_3 \big\langle dA , A^2 \big\rangle - \lambda_4\big\langle A^2 , A^2 \big\rangle + \big\langle \psi,( \dirac + \lambda_1 \Aslash + \lambda_5 )\psi \big\rangle \\ - \big\langle A, dh \big\rangle + \big \langle d \bar\omega, d \omega \big\rangle + \frac{1}{2} \xi \big \langle h ,h \big\rangle + \lambda_2 \big \langle d \bar\omega, [A,\omega] \big\rangle,
\end{multline} in terms of the appropriate inner products. Note that the action has finitely many terms, that is, it is a (local) polynomial functional in the fields and coupling constants rather than a formal power series. With the BRST-differential given in Equation \eqref{brst} (involving the `fundamental' coupling constant $g$), we will now impose the BRST-invariance of $S$ by setting $$ s(S) = 0. $$ Actually, we will define an ideal $I = \big\langle s (S) \big\rangle$ in $\F$ that implements the relations between the $\lambda_j$'s. Strictly speaking, the fundamental coupling $g$ is not an element in $\F$; we will instead set $g \equiv \lambda_1$. The remaining `coupling' constant $\lambda_5$ is interpreted as the quark mass $m$. \begin{prop} The ideal $I$ is generated by the following elements: $$ \lambda_1 - \lambda_2; \qquad \lambda_2 - \lambda_3; \qquad \lambda_3^2 - \lambda_4. $$ \end{prop} \begin{proof} This follows directly by applying $s$ (involving $g$) to the action $S$. \end{proof} A convenient set of (equivalent) generators for the ideal $I$ is $\lambda_i - g^{N(v_i)-2}$ for $i=1,\ldots,4$. Thus, the image of $S$ in the quotient $\F/I$ is BRST-invariant, that is, $s(S)$ is identically zero. Let us return to the group $G \subset (\C[[\lambda_{1},\ldots,\lambda_{5}]]^\times)^{3} \rtimes \Diff(\C^5,0)$, acting on $\F$. Consider the subgroup $G^I$ of $G$ consisting of elements $f$ that leave the ideal $I$ invariant, i.e., such that $f(I) \subseteq I$. It is clear from the above generators of $I$ that this will involve a diffeomorphism group in 2 variables, instead of 5. More precisely, we have the following \begin{thm}[\cite{Sui08}] \label{thm:groupGI} Let $J$ be the ideal from Corollary \ref{thm:hopfideal}. \begin{enumerate} \item The group $G^I$ acts on the quotient algebra $\F/I$. \item The image of $G^I$ in $\Aut( \F/I)$ is isomorphic to $\Hom_\C(H/J, \C)$ and $H/J$ coacts on $\F/I$.
\end{enumerate} Consequently, (the image in $\Aut(\F/I)$ of) $G^I$ is a subgroup of the semidirect product $(\C[[g,\lambda_{5}]]^\times)^{3} \rtimes \Diff(\C^2,0)$. \end{thm} \begin{proof} The first claim is direct. For the second, note that an element $f \in G$ acts on the generators of $I$ as \begin{multline*} f\left( \lambda_{i} -g^{N(v_i)-2} \right) \\ = \sum_{n_1, \ldots, n_5} \lambda_{1}^{n_1} \cdots\lambda_{5}^{n_5} \left[ \lambda_i f\left( p_{n_1\cdots n_5}(Y_{v_i}) \right) - g^{N(v_i)-2} f\left( p_{n_1\cdots n_5}(Y_{v_1}^{N(v_i)-2})\right)\right], \end{multline*} since $g\equiv \lambda_1$. We will reduce this expression by replacing $\lambda_{i}$ by $g^{N(v_i)-2}$, modulo terms in $I$. Together with Lemma \ref{lma:rel-degrees} this yields $$ f\left( \lambda_{i} -g^{N(v_i)-2} \right) = \sum_{l=0}^\infty g^{2l + N(v_i)-2} ~ f \left(q_l\left( Y_{v_i} - Y_{v_1}^{N(v_i)-2}\right) \right) \mod I. $$ The requirement that this is an element in $I$ is equivalent to the requirement that $f$ vanishes on $q_l ( Y_{v_i} - Y_{v_1}^{N(v_i)-2})$, {i.e.} on the generators of $J$, establishing the desired isomorphism. One then easily computes $$ \rho(I) \subset I \otimes H + \F \otimes J $$ so that $H/J$ coacts on $\F/I$ by projecting onto the two quotient algebras. \end{proof} In fact, the last claim of the above Theorem can be strengthened. Focusing on the subgroup $\Diff (\C^5, 0)^I$ of the formal diffeomorphism group that leaves the ideal $I$ invariant, we have: $$ 1 \to (1+I)^5 \to \Diff (\C^5, 0)^I \to \Diff(\C^2,0) \to 1. $$ Here, an element $(1+B_i)_{i=1,\ldots,5}$ in $(1+I)^5$ acts on the generators $\lambda_1, \ldots, \lambda_5$ by right multiplication. This sequence actually splits, leading to a full description of the group $\Diff (\C^5, 0)^I$. Indeed, by the simple structure of the ideal $I$, a one-sided inverse of the map $\Diff (\C^5, 0)^I \to \Diff(\C^2,0)$ can be easily constructed.
A similar statement holds for the above subgroup $G^I$ of the semidirect product $G\simeq (\C[[\lambda_{1},\ldots,\lambda_{5}]]^\times)^{3} \rtimes \Diff(\C^5,0)$. In any case, the contents of Theorem \ref{thm:groupGI} have a very nice physical interpretation: the invertible formal power series act on the three fields as wave function renormalization whereas the diffeomorphisms act on one fundamental coupling constant $g$. We will appreciate this even more in the next section where we discuss the renormalization group flow. \subsection{Renormalization group} We will now establish a connection between the group of diffeographisms and the renormalization group \`a la Gell-Mann and Low \cite{GL54}. This group describes the dependence of the renormalized amplitudes $\gamma_{\mu,+}(z)$ on a mass scale that is implicit in the renormalization procedure. In fact, in dimensional regularization, in order to keep the loop integrals $\int d^{4-z} k$ dimensionless for complex $z$, one introduces a factor of $\mu^z$ in front of them, where $\mu$ has dimension of mass and is called the {\it unit of mass}. For a Feynman graph $\Gamma$, Lemma \ref{lma:rel-degrees} shows that this factor equals $\mu^{z \sum_{i} (N(v_i)-2) \delta_{v_i}(\Gamma)/2}$, reflecting the fact that the coupling constants appearing in the action get replaced by $$ \lambda_{i} \mapsto \mu^{z (N(v_i)-2)/2}\lambda_{i} $$ for every vertex $v_i$ ($i=1,\ldots, 5$). As before, the Feynman rules define a loop $\gamma_\mu: C \to G\equiv G(\C)$, which now depends on the mass scale $\mu$. Consequently, there is a Birkhoff decomposition for each $\mu$: $$ \gamma_\mu(z) = \gamma_{\mu,-}(z)^{-1} \gamma_{\mu,+}(z); \qquad (z \in C). $$ As was shown in \cite{CK00}, the negative part $\gamma_{\mu,-}(z)$ of this Birkhoff decomposition is independent of the mass scale, that is, $$ \frac{\partial}{\partial \mu} \gamma_{\mu,-}(z) = 0. $$ Hence, we can drop the index $\mu$ and write $\gamma_{-}(z):=\gamma_{\mu,-}(z)$.
In terms of the one-parameter group of automorphisms $\theta_t$ of $G(K)$ corresponding to the grading $l$ on $H$, we can write $$ \gamma_{e^t \mu}(z) = \theta_{tz} \left(\gamma_\mu(z) \right), \qquad (t \in \R). $$ A proof of this and the following result can be found in \cite{CK00}. \begin{prop} The limit $$ F_t := \lim_{z \to 0} \gamma_-(z) \theta_{tz} \left( \gamma_-(z)^{-1} \right) $$ exists and defines a $1$-parameter subgroup of $G$ which depends polynomially on $t$ when evaluated on an element $X \in H$. \end{prop} In physics, this 1-parameter subgroup goes under the name of {\it renormalization group}. In fact, using the Birkhoff decomposition, we can also write $$ \gamma_{e^t \mu, +}(0) = F_t ~ \gamma_{\mu,+}(0), \qquad (t \in \R). $$ This can be formulated in terms of the generator $\beta := \frac{d}{dt} F_t |_{t=0}$ of this 1-parameter group as \begin{equation} \label{eq:beta} \mu \frac{\partial}{\partial \mu} \gamma_{\mu,+}(0) = \beta \gamma_{\mu,+}(0). \end{equation} Let us now establish that this is indeed the beta-function familiar from physics by exploring how it acts on the coupling constants $\lambda_{i}$. First of all, although the name might suggest otherwise, the coupling constants depend on the energy or mass scale $\mu$. Recall the action of $G$ on $\C[[\lambda_{1}, \ldots, \lambda_{5}]]$ defined in the previous section. In the case of $\gamma_{\mu,+}(0) \in G$, we define the (renormalized) {\it coupling constant at scale $\mu$} to be $$ \lambda_{i}(\mu) = \gamma_{\mu,+}(0)(\lambda_{i}). $$ This function of $\mu$ (with coefficients in $\C[[\lambda_1,\ldots, \lambda_5]]$) satisfies the following differential equation: \begin{equation*} \beta \left( \lambda_{i}(\mu) \right) = \mu \frac{\partial}{\partial \mu} \left(\lambda_{i}(\mu) \right), \end{equation*} which follows easily from Eq. \eqref{eq:beta}.
This is exactly the renormalization group equation expressing the flow of the coupling constants $\lambda_{i}$ as a function of the energy scale $\mu$. Moreover, if we extend $\beta$ by linearity to the action $S$ of Eq. \eqref{eq:action}, we obtain Wilson's continuous renormalization equation \cite{Wil75}: $$ \beta(S(\mu)) = \mu \frac{\partial}{\partial \mu} \left( S(\mu) \right). $$ This equation has been explored in the context of renormalization Hopf algebras in \cite{GKM04, KM08}. Equation \eqref{eq:beta} expresses $\beta$ completely in terms of $\gamma_{\mu,+}$; as we will now demonstrate, this allows us to derive that for QCD all $\beta$-functions coincide. First, recall that the maps $\gamma_{\mu}$ are the Feynman rules dictated by the action $S$, which we suppose to be BRST-invariant ($s(S)=0$), in the presence of the mass scale $\mu$. In other words, we are in the quotient of $\F$ by $I = \langle s(S) \rangle$. If the regularization procedure respects gauge invariance, it is well-known that the Feynman amplitudes satisfy the Slavnov--Taylor identities for the couplings. In terms of the ideal $J$ defined in the previous section, this means that $\gamma_{\mu} (J)=0$. Since $J$ is a Hopf ideal (Corollary \ref{thm:hopfideal}), it follows that both $\gamma_{\mu,-}$ and $\gamma_{\mu,+}$ vanish on $J$. Indeed, the character $\gamma$ given by the Feynman rules factorizes through $H/J$, for which the Birkhoff decomposition gives two characters $\gamma_+$ and $\gamma_-$ of $H/J$. In other words, if the unrenormalized Feynman amplitudes given by $\gamma_\mu$ satisfy the Slavnov--Taylor identities, so do the counterterms and the renormalized Feynman amplitudes. In particular, from Eq. \eqref{eq:beta} we conclude that $\beta$ vanishes on the ideal $I$ in $\C[[\lambda_{1}, \ldots, \lambda_{5}]]$.
This implies the following result, which is well-known in the physics literature: \begin{prop} All (QCD) $\beta$-functions (for $i=1,\ldots,4$) are expressed in terms of $\beta(g)$ for the fundamental coupling constant $g$: $$\beta(\lambda_{i}) = \beta(g^{N(v_i)-2}). $$ \end{prop}
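As a purely numerical aside, one can see what the proposition buys in practice: once every $\beta(\lambda_i)$ is expressed through $\beta(g)$, the entire flow is governed by a single ordinary differential equation for $g$. The sketch below integrates it with the standard one-loop QCD beta function $\beta(g) = -b_0\, g^3/(16\pi^2)$, $b_0 = 11 - \tfrac{2}{3} n_f$ (a textbook input, not derived in the text above), and checks the result against the closed-form one-loop solution $1/g^2(\mu) = 1/g^2(\mu_0) + \tfrac{b_0}{8\pi^2}\ln(\mu/\mu_0)$.

```python
import math

def beta_one_loop(g, n_f=6):
    """One-loop QCD beta function: mu dg/dmu = -b0 g^3 / (16 pi^2)."""
    b0 = 11.0 - 2.0 * n_f / 3.0
    return -b0 * g ** 3 / (16.0 * math.pi ** 2)

def run_coupling(g0, mu0, mu, n_f=6, steps=10_000):
    """Integrate the renormalization group flow in ln(mu) with fourth-order Runge-Kutta."""
    h = math.log(mu / mu0) / steps
    g = g0
    for _ in range(steps):
        k1 = beta_one_loop(g, n_f)
        k2 = beta_one_loop(g + 0.5 * h * k1, n_f)
        k3 = beta_one_loop(g + 0.5 * h * k2, n_f)
        k4 = beta_one_loop(g + h * k3, n_f)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return g

def run_coupling_exact(g0, mu0, mu, n_f=6):
    """Closed-form one-loop solution: 1/g^2(mu) = 1/g0^2 + b0/(8 pi^2) ln(mu/mu0)."""
    b0 = 11.0 - 2.0 * n_f / 3.0
    return 1.0 / math.sqrt(1.0 / g0 ** 2 + b0 / (8.0 * math.pi ** 2) * math.log(mu / mu0))
```

Since $b_0>0$ for $n_f\le 16$, the coupling decreases with increasing $\mu$ (asymptotic freedom).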
\section{Introduction} The exchange of gluons between hadrons, known as Pomeron exchange~\cite{Donnachie:2002}, is a fundamental process that is expected to dominate hadron-hadron total cross sections at high energies. In general, multi-gluon exchange is harder to study at lower energy since diagrams including quark exchange play a more important role. The $\phi$ meson is unique in that it is nearly pure $s\bar{s}$ and hence multi-gluon exchange is expected to dominate $\phi$-N scattering at all energies. Since gluon exchange is flavor blind, information on multi-gluon exchange, isolated from the $\phi$-N interaction, would be universal and useful in models of hadron-hadron interactions. For example, information on the $\phi$-N interaction at very low energies, known as the QCD van der Waals interaction, is essential for the reliable prediction of the possible formation of a bound state in the $\phi$-N system~\cite{Gao:2000az}. The total $\phi$-N cross section ($\sigma_{\phi N}$) is estimated by using vector meson dominance (VMD) applied to exclusive $\phi$ photoproduction on the proton in the photon energy range $E_{\gamma}< 10$~GeV, resulting in $\sigma_{\phi N}$ $\simeq$ 10--12~mb~\cite{Behrend:1978ik,Sibirtsev:2006yk}, which is in agreement with the estimate from the additive quark model~\cite{PhysRevLett.16.1015} applied to $KN$ and $\pi N$ scattering data~\cite{vmreview}. More recently, the inelastic $\phi$-N cross section $\sigma_{\phi N}^{inel}$ was extracted from the attenuation of $\phi$-mesons in photoproduction from Li, C, Al, and Cu nuclei~\cite{Ishikawa:2004id}. The attenuation for large $A$ is significantly larger than that calculated from VMD. More sophisticated models~\cite{Cabrera:2003wb,Muhlich:2005kf,Sibirtsev:2006yk} are consistent with the experiment if $\sigma_{\phi N}^{inel}$ is significantly larger ($\sim$30~mb) than the $\sigma_{\phi N}$ obtained from the VMD model.
The reason for the discrepancy between these two estimates of $\sigma_{\phi N}$ is not well understood. Here we will show that information on the $t$ dependence and spin structure of the $\phi$-N interaction provides essential clues to solve this problem. In this Letter, the $\phi$-N interaction is investigated in coherent photoproduction on deuterium. The diagrams of the dominant processes contributing to the reaction $\gamma d \rightarrow \phi d$ are shown in Fig.~\ref{fig:single-double}. In the first diagram, Fig.~\ref{fig:single-double}(a), the $\phi$ is produced in a single scattering off a nucleon, which is dominant at small $-t$ and strongly suppressed at larger $-t$ due to the deuteron form factor. The second diagram, Fig.~\ref{fig:single-double}(b), shows double scattering, where the $\phi$ is produced at the first vertex and scatters from the other nucleon at the second vertex. The strength of the second interaction is gauged by $\sigma_{\phi N}$. The probability to undergo double scattering increases at larger $-t$ because both nucleons receive momentum transfer and may recombine into a final-state deuteron with a smaller relative momentum between the two nucleons~\cite{Frankfurt:1997ss}. The $\phi$ meson is a spin-1 particle which decays to a $K\bar{K}$ pair, {\it i.e.} two spinless particles. The decay angular distribution of the $\phi$ carries information on the spin structure of the reaction amplitude, which is the sum of single- and double-scattering processes~\cite{Schilling:1970um}. The measurement of the differential cross sections of coherent $\phi$ photoproduction and the decay angular distributions in a wide $t$ range allows one to study the $\phi$-N interaction in both single and double scattering, as well as the transition from one to the other.
\begin{figure}[tb] \includegraphics[width=4.2cm]{diagrams_single.eps} \includegraphics[width=4.2cm]{diagrams_double.eps} \caption{(a) Single-scattering and (b) double-scattering contributions to the coherent $\phi$-meson photoproduction on the deuteron.} \label{fig:single-double} \end{figure} The data were collected with the CLAS detector and the Hall B tagged-photon beam at the Thomas Jefferson National Accelerator Facility~\cite{Mecking:2003zu}. The incident electron beam energy was 3.8~GeV, producing tagged photons in the range from 0.8 to 3.6~GeV. The photon beam was directed onto a 24-cm long liquid-deuterium target. The data acquisition trigger required two charged particles detected in coincidence with a tagged photon. Charged particles were momentum analyzed by the CLAS torus magnet and three sets of drift chambers. The torus magnet was run at two settings, low field (2250~A) and high field (3375~A), each for about half of the run period. The reaction $\gamma d \rightarrow \phi d$ was identified by detecting a deuteron and a $K^+$ from $\phi \rightarrow K^+K^-$ decay. The $K^+$ and deuteron were selected based on time-of-flight, path length, and momentum measurements. Figure~\ref{fig.mm}(a) shows the missing mass distribution, $M_X$, for the reaction $\gamma d \rightarrow dK^+X$ when events near the $\phi$-meson peak ($0.98<M(K^+K^-)<1.12$~GeV/c$^2$) were selected in the $K^+K^-$ invariant mass, assuming a $K^-$ was the missing particle. A missing $K^-$ peak is seen on top of a smooth background from non-$d$~$K^+K^-$ final states. The missing mass resolution, ranging from 8 to 30~MeV/c$^2$, depends on photon energy and the deuteron momentum. A three-$\sigma$ cut was applied to select the missing $K^-$ for the exclusive $\gamma d \rightarrow K^+K^- d$ reaction. 
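The missing-mass selection described above is plain four-vector arithmetic, $M_X^2 = (p_\gamma + p_d - p_{d'} - p_{K^+})^2$; a minimal sketch (illustrative helper functions, not the CLAS reconstruction code):

```python
import math

M_K = 0.493677  # charged-kaon mass [GeV/c^2]
M_D = 1.875613  # deuteron mass [GeV/c^2]

def four_momentum(m, px, py, pz):
    """On-shell four-vector (E, px, py, pz) for a particle of mass m."""
    return (math.sqrt(m * m + px * px + py * py + pz * pz), px, py, pz)

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def invariant_mass(p):
    """sqrt(E^2 - |p|^2); assumes a timelike four-vector."""
    return math.sqrt(p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2)

def missing_mass(p_beam, p_target, detected):
    """M_X for beam + target -> (detected particles) + X."""
    p_x = add(p_beam, p_target)
    for p in detected:
        p_x = sub(p_x, p)
    return invariant_mass(p_x)
```

In the analysis, events within three standard deviations of the kaon mass in $M_X$ are kept as exclusive $\gamma d \rightarrow K^+K^- d$ candidates.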
\begin{figure}[t] \includegraphics[width=8.5cm]{mmdk_mkk_rev1.eps} \caption{ (a) Missing mass distribution of the reaction $\gamma d \rightarrow d K^+ X$ for events near the $\phi$-meson mass ($0.98<M(K^+K^-)<1.12$~GeV/c$^2$). (b) Invariant mass distribution for the $K^+K^-$ pair after the selection of the missing $K^-$. The solid curve is a fit to the data. The dashed curve shows the contribution from background.} \label{fig.mm} \end{figure} Figure~\ref{fig.mm}(b) shows the invariant mass distribution for the $K^+K^-$ pair after the selection of the missing $K^-$. The $\phi$-meson peak appears above a smooth background. The $\phi$-meson yield was obtained from a fit to the $M(K^+K^-)$ distribution with a Gaussian-convoluted Breit-Wigner function plus a background function. The width and the pole position for the Breit-Wigner function were fixed to 4.3~MeV/c$^2$ and 1019.5~MeV/c$^2$, respectively~\cite{Yao:2006px}. The standard deviation of the Gaussian distribution was fixed to the value obtained from simulation. The background function was chosen as $a \sqrt{x^2-(2m_K)^2} + b (x^2-(2m_K)^2)$~\cite{Lukashin:2001sh}, where $x$ is $M(K^+K^-)$, $m_K$ is the charged kaon mass, and $a$ and $b$ are the fit parameters. Three alternative background functions were studied: a linear background, a background from non-resonant $K^+K^-d$ production, and one from $f_0$ photoproduction. The background models for the non-resonant $K^+K^-d$ and $f_0$ photoproduction were parameterized by the differential cross section and photon-energy distribution of events in the sidebands of the $\phi$-meson peak. The dependence of the yield on the background function, fit range, and parameterization of the Breit-Wigner function was studied. The extracted yield changed by between 3\% and 9\%, depending on the yield-extraction procedure.
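A sketch of the line-shape model just described; the Gaussian resolution convolution used in the actual fit is omitted for brevity, a simple non-relativistic Breit--Wigner stands in for the parameterization employed, and the parameters `s`, `a`, `b` are illustrative:

```python
import math

M_K = 0.493677      # charged-kaon mass [GeV/c^2]
M_PHI = 1.0195      # phi pole mass fixed in the fit [GeV/c^2]
GAMMA_PHI = 0.0043  # phi width fixed in the fit [GeV/c^2]

def breit_wigner(x):
    """Non-relativistic Breit-Wigner line shape, equal to 1 at the pole."""
    half_width = GAMMA_PHI / 2.0
    return half_width ** 2 / ((x - M_PHI) ** 2 + half_width ** 2)

def background(x, a, b):
    """Background shape a*sqrt(x^2 - (2 m_K)^2) + b*(x^2 - (2 m_K)^2),
    vanishing at the K+ K- threshold x = 2 m_K."""
    u = max(x * x - (2.0 * M_K) ** 2, 0.0)  # clip below threshold
    return a * math.sqrt(u) + b * u

def model(x, s, a, b):
    """Signal strength s times the line shape, plus background."""
    return s * breit_wigner(x) + background(x, a, b)
```

In a real fit, `s`, `a` and `b` would be floated against the $M(K^+K^-)$ histogram while the pole and width stay fixed, as in the text.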
\begin{figure}[t] \includegraphics[width=9.0cm]{cs_alltopo_eg1.6-3.6_v5.eps} \caption{Comparison of differential cross sections for $\gamma d \rightarrow \phi d$ from various topologies in the range $1.6<E_{\gamma}<3.6$~GeV. Only statistical uncertainties are shown. } \label{fig.yieldalltopo2} \end{figure} The CLAS acceptance was determined by using a GEANT-based Monte Carlo simulation~\cite{Brun:1978fy}. The simulation was iterated to reproduce measured $t$, photon energy, and decay angular distributions. The acceptance was between 10\% and 20\% in the kinematic region covered by the present measurements. The accuracy of the calculation of the acceptance was estimated from a comparison of results from other event-reconstruction topologies ($d$~$K^+K^-$, $K^+K^-$, and $d$~$K^0_s$ topologies) for which the acceptances were different from that for the $d$~$K^+$ topology. The differential cross sections for these topologies are shown in Fig.~\ref{fig.yieldalltopo2}. They agree with each other within statistical uncertainties, indicating that the acceptance is understood with respect to the number of reconstructed tracks, charge combinations, and decay modes. Supplemental simulations were performed to understand the systematic uncertainties due to the event generator (1-11\%) and event reconstruction (1-5\%). Systematic uncertainties in the yield extraction and acceptance were estimated as a function of photon energy and $t$; they were between 4\% and 13\%. The combined systematic uncertainty for the luminosity and trigger efficiency was less than 10\%. Systematic uncertainties from accidental tracks, target windows, and particle misidentification were less than a few percent. The total systematic uncertainty was estimated as 11-17\% by adding these uncertainties in quadrature. \begin{table}[t] \caption{Differential cross sections for the reaction $\gamma d \rightarrow \phi d$.
The second and third numbers in each field are the statistical and systematic uncertainties, respectively.} \label{tab.dsdt} \begin{center} \begin{tabular}{|c|c|r|r|} \hline \multicolumn{2}{|c|}{$t$ range (GeV$^2$/c$^2$)}& \multicolumn{2}{|c|}{$d\sigma/dt$~[nb/(GeV$^2$/c$^2$)]}\\ \hline $t_{min}$ &$t_{max}$ & $1.6<E_{\gamma}<2.6$~GeV & $2.6<E_{\gamma}<3.6$~GeV\\ \hline \hline -0.375 & -0.350 &10.21 $\pm$ 0.82 (1.70) & 8.63 $\pm$ 0.80 (1.04) \\ -0.400 & -0.375 & 8.85 $\pm$ 0.75 (1.11) & 6.80 $\pm$ 0.69 (1.07) \\ -0.425 & -0.400 & 7.32 $\pm$ 0.59 (0.94) & 4.57 $\pm$ 0.53 (0.74) \\ -0.450 & -0.425 & 6.16 $\pm$ 0.55 (0.81) & 5.76 $\pm$ 0.56 (0.65) \\ -0.500 & -0.450 & 4.73 $\pm$ 0.34 (0.60) & 3.99 $\pm$ 0.33 (0.55) \\ -0.550 & -0.500 & 3.52 $\pm$ 0.28 (0.51) & 3.59 $\pm$ 0.29 (0.55) \\ -0.600 & -0.550 & 2.66 $\pm$ 0.24 (0.38) & 2.11 $\pm$ 0.22 (0.28) \\ -0.700 & -0.600 & 2.17 $\pm$ 0.15 (0.26) & 1.83 $\pm$ 0.14 (0.24) \\ -0.800 & -0.700 & 1.40 $\pm$ 0.12 (0.16) & 1.32 $\pm$ 0.12 (0.20) \\ -1.000 & -0.800 & 0.94 $\pm$ 0.07 (0.11) & 0.96 $\pm$ 0.07 (0.11) \\ -1.200 & -1.000 & 0.57 $\pm$ 0.06 (0.07) & 0.57 $\pm$ 0.05 (0.06) \\ -1.400 & -1.200 & 0.28 $\pm$ 0.05 (0.04) & 0.36 $\pm$ 0.04 (0.05) \\ -2.000 & -1.400 & 0.19 $\pm$ 0.02 (0.03) & 0.15 $\pm$ 0.02 (0.02) \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[t] \includegraphics[width=9.0cm]{figcs4_v5.eps} \caption{Differential cross sections for the reaction $\gamma d \rightarrow \phi d$. The inner error bars shown are statistical uncertainty only, while the outer error bars are the sum of statistical and systematic uncertainties in quadrature. The curves~A, B and C are calculations from the re-scattering model~\cite{Frankfurt:1997ss,Rogers:2005bt}, see text for details. The uncertainties on curves~B and C are comparable to that of curve~A, but are not shown. The dot-dashed curve is a contribution from the single scattering diagram. 
} \label{fig.cseg1a} \end{figure} The differential cross sections were measured in the ranges $1.6<E_{\gamma}<2.6$~GeV and $2.6<E_{\gamma}<3.6$~GeV. They are given in Table~\ref{tab.dsdt}. Figure~\ref{fig.cseg1a} shows the experimental data in the range $2.6<E_{\gamma}<3.6$~GeV. The data are compared with theoretical calculations using a rescattering model~\cite{Frankfurt:1997ss,Rogers:2005bt}. In this model, the $\gamma N\rightarrow \phi N$ amplitude was parameterized by using published data on the $\gamma p\rightarrow \phi p$ reaction~\cite{Anciant:2000az} and data from the proton target run during this experiment. This amplitude was convoluted with the deuteron wave function with a correction for the relativistic-recoil effect~\cite{Frankfurt:1997ss}. The double scattering process (Fig.~\ref{fig:single-double}(b)) is modeled by the Generalized Eikonal Approximation \cite{Frankfurt:1996xx}. The value of $\sigma_{\phi N}$ and the $t$ dependence of the re-scattering process are the inputs for the calculation. The model successfully reproduces the differential cross sections for coherent $\rho$ photoproduction~\cite{Frankfurt:1997ss} using inputs from VMD. The total model uncertainty is estimated to be about 20\%. A 10\% uncertainty was assigned to the parametrization of the $\gamma N\rightarrow \phi N$ amplitude based on the $\gamma p\rightarrow \phi p$ data. The effect of spin-flip in the process $\gamma N\rightarrow \phi N$ was ignored in the parametrization of the single scattering amplitude since the spin-flip amplitude is more suppressed in the coherent process than in the incoherent process. A 15\% systematic uncertainty was assigned due to this effect~\cite{Titov:spinflip}. An isospin dependence of the process $\gamma N\rightarrow \phi N$ was not taken into account in the model, but Ref.~\cite{Titov:1998tx} suggests such an effect is small.
In Fig.~\ref{fig.cseg1a}, curve~A shows the $t$ distribution calculated using the VMD prediction for the $\phi$-N cross section, {\it i.e.} $\sigma_{\phi N}$=10~mb, and the same $t$ distribution for the reactions $\gamma N\rightarrow \phi N$ and $\phi N\rightarrow \phi N$. Curve~B corresponds to $\sigma_{\phi N}$=30~mb, inspired by Ref.~\cite{Ishikawa:2004id}, with the VMD assumption for the $t$ distribution. It overestimates the data at large $-t$, where the contribution from double scattering dominates. This implies that if the $t$ distribution follows the VMD prediction, $\sigma_{\phi N}$ should also be consistent with the VMD prediction. In this case, the inconsistency with the larger $\sigma_{\phi N}$ from the $A$-dependence experiment~\cite{Ishikawa:2004id} remains. However, the VMD picture may not be a good approximation in this photon energy range. The larger $\sigma_{\phi N}$ from the $A$-dependence experiment~\cite{Ishikawa:2004id} can be explained if the $t$ distribution of the reaction $\phi N \rightarrow \phi N$ differs from the VMD prediction. For example, it is possible for the virtual $\phi$ to fluctuate to a $K \bar{K}$ pair and have a larger cross section for the second interaction~\cite{mark}. In this case, the $t$-slope for the second interaction would be larger than that for the $\gamma N\rightarrow \phi N$ reaction, based on a general geometric relation between the $t$-slope and the total cross section~\cite{Povh:1987ju}. Following this hypothesis, cross sections were calculated with $\sigma_{\phi N}$=30~mb using a larger exponential $t$-slope, $b_{\phi N}=10$~(GeV/c)$^{-2}$, for the second interaction (curve~C). The data are equally well described by curve~C, suggesting that a larger $t$-slope parameter is necessary if $\sigma_{\phi N}$ is larger than the VMD prediction.
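The geometric relation invoked here~\cite{Povh:1987ju} can be sketched as follows (a schematic statement in our notation). For an exponential differential cross section $d\sigma/dt\propto e^{bt}$, the slope parameter measures the square of the interaction radius, which in turn sets the total cross section, so that
\begin{equation}
b \;\propto\; R^{2} \;\propto\; \sigma_{\mathrm{tot}}.
\end{equation}
Tripling $\sigma_{\phi N}$ from the VMD value of 10~mb to 30~mb therefore motivates a correspondingly larger elastic slope, consistent with the value $b_{\phi N}=10$~(GeV/c)$^{-2}$ used for curve~C.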
Although the current data do not allow one to extract $\sigma_{\phi N}$ and the $t$-slope independently, because of the strong correlation between them, they can accommodate the larger $\sigma_{\phi N}$ suggested by the $A$-dependence. \begin{figure*}[t] \includegraphics[width=15.0cm]{w_angle1_rev1.eps} \caption{Decay angular distributions of the $\phi$-meson in the helicity frame. The inner error bars shown are statistical uncertainty only, while the outer error bars are the sum of statistical and systematic uncertainties in quadrature. Solid curves are the predictions from helicity conservation. } \label{fig.w} \end{figure*} In addition to the differential cross sections, the decay angular distributions of the $\phi$ meson were also measured in the helicity frame~\cite{Schilling:1970um}. The direction of the $\phi$-meson momentum in the CM system was chosen as the $z$-axis, and the polar and azimuthal angles of the $K^+$ momentum with respect to this axis and to the $\phi$-meson production plane were defined as $\theta_H$ and $\phi_H$ in the $\phi$-meson rest frame. Figure~\ref{fig.w} shows the projections of the decay angular distributions onto $\cos\theta_H$ and $\phi_H$ in the ranges $-0.8<t<-0.35$~GeV$^2$/c$^2$ and $-2.0<t<-0.8$~GeV$^2$/c$^2$ in each photon energy region. The data are consistent with the prediction from helicity conservation (solid curves), {\it i.e.} the spin of the $\phi$-meson is aligned with the momentum of the $\phi$-meson. This is similar to what was observed in $\phi$ photoproduction on the proton~\cite{McCormick:2003bb,Mibe:2005er}. In the larger $-t$ region, the double scattering contribution becomes more important. No drastic change is observed from the smaller $-t$ to the larger $-t$ region, implying that the spin structures of the single- and double-scattering processes are similar.
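For $s$-channel helicity conservation with an unpolarized photon beam, the one-dimensional projections used for the solid curves take the standard form of Ref.~\cite{Schilling:1970um},
\begin{equation}
W(\cos\theta_{H})=\frac{3}{4}\left[(1-\rho^{0}_{00})+(3\rho^{0}_{00}-1)\cos^{2}\theta_{H}\right]
\;\xrightarrow{\;\rho^{0}_{00}=0\;}\;\frac{3}{4}\sin^{2}\theta_{H},
\qquad
W(\phi_{H})=\frac{1}{2\pi},
\end{equation}
i.e., a $\sin^{2}\theta_{H}$ shape and a flat $\phi_{H}$ distribution, corresponding to a $\phi$-meson spin fully aligned with its momentum.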
In summary, we have presented the first measurement of the differential cross sections and decay angular distributions for coherent $\phi$ photoproduction on the deuteron up to $t=-2.0$~GeV$^2$/c$^2$. The differential cross sections at large $-t$ exhibit a contribution from double scattering. The data are consistent with $\sigma_{\phi N}$ =10~mb in the framework of VMD. The data also allow a larger $\sigma_{\phi N}$ if the $t$-slope for $\phi N \rightarrow \phi N$ is larger than the VMD value from $\gamma p \rightarrow \phi p$. The decay angular distributions follow the prediction from helicity conservation. This measurement demonstrates a new approach to the study of the $\phi$-N interaction in the low energy region, where VMD is not necessarily a good approximation. Further measurements at higher photon energies~\cite{eg3}, at very small $-t$~\cite{lepscphi}, as well as an $A$-dependence study in the $e^+e^-$ decay channel~\cite{g7phi} will make it possible to map out details of the energy and $t$ dependences of the $\phi$-N interaction. We would like to thank the staff of the Accelerator and Physics Divisions at Jefferson Lab who made this experiment possible. We acknowledge useful discussions with T.~Rogers, M.~Sargsian, M.~Strikman and A.~Titov. This work was supported in part by the Italian Istituto Nazionale di Fisica Nucleare, the French Centre National de la Recherche Scientifique and Commissariat \`a l'Energie Atomique, the Korea Research Foundation, the U.S. Department of Energy and the National Science Foundation, and the U.K. Engineering and Physical Sciences Research Council. Jefferson Science Associates (JSA) operates the Thomas Jefferson National Accelerator Facility for the United States Department of Energy under contract DE-AC05-06OR23177. \bibliographystyle{apsrev}
\section{Introduction} Topological quantum and classical systems with extraordinary properties, such as robust edge states, have attracted growing interest \cite{lu2014topological,bansil2016colloquium, shun2018topological, ozawa2019topological, wang2020topological,kim2020recent} and are a driving force of technological innovation\cite{wu2017applications}. Recently, a new class of topological systems, namely higher-order topological insulators (HOTIs) \cite{frank2018higher, zhang2019second, zhang2020low, xie2021higher, liu2021bulk}, has been demonstrated by theory and experiment. The most studied feature of HOTIs is the zero-dimensional (0D) corner state of two- or three-dimensional systems. In the study of topological systems, symmetries generally play a critical role \cite{chiu2016classification}. For example, non-zero Berry curvatures generally exist in systems without time-reversal symmetry or spatial inversion symmetry, namely Chern insulators\cite{wang2008reflection, wang2020universal} or valley insulators\cite{ma2016all, kim2021multiband,xi2020topological}, which admit the emergence of edge states. Moreover, for a two-dimensional (2D) HOTI with a square lattice, the topological invariant $Q_{c}$, also known as the quadrupole moment or fractional corner charge, is quantized to 0 or 0.5 when the system has mirror symmetries $M_x:=x \to -x$ and $M_y := y \to -y$ or fourfold rotation ($C_4$) symmetry, leading to topologically non-trivial corner states protected by a non-zero $Q_c$ \cite{benalcazar2017electric,benalcazar2017quantized,wheeler2019many}. Regarding higher-order topology, corner states can be divided into two types. Type-\uppercase\expandafter{\romannumeral1} corner states are protected by non-zero multipole moments.
In particular, in HOTIs with vanishing dipole densities but non-zero quadrupole moments, namely quadrupole topological insulators (QTIs), certain crystalline symmetries, e.g., reflection and rotation \cite{benalcazar2017quantized, benalcazar2019quantization}, quantize the non-zero quadrupole moment to 0.5, which guarantees the existence of type-\uppercase\expandafter{\romannumeral1} corner states. Very recently, photonic QTIs have been realized in a square lattice without time-reversal symmetry \cite{he2020quadrupole}. Type-\uppercase\expandafter{\romannumeral2} corner states are caused by the long-range coupling between two edge states. For instance, in a photonic kagome lattice with a non-trivial 2D Zak phase\cite{liu2018topological, liu2017novel}, by increasing the long-range interactions, i.e., coupling beyond next-nearest-neighbor (NNN) hopping, two sets of type-\uppercase\expandafter{\romannumeral2} corner states with different spatial symmetry can be distinguished from the spectrum of edge states\cite{ni2019observation, li2020higher, shen2021investigation, wang2021higher, xu2020general}. However, in contrast to HOTIs in systems with ``perfect lattice symmetries'', e.g. the $C_3$ symmetry for kagome lattices and the $C_4$ symmetry for square lattices, the phenomena of HOTIs under breaking of these perfect lattice symmetries have, to the best of our knowledge, not been intensively investigated. The reason for this lack of research may be that the topological invariant, the multipole moment, cannot be quantized in such systems, so that they are widely regarded as topologically trivial. It is therefore remarkable that, even with the breaking of perfect lattice symmetries, such as in a photonic square lattice without both $C_4$ and $M_{x(y)}$ symmetries, rich HOTI phenomena and topological edge states can still be observed.
Furthermore, if we gradually make the system more and more asymmetric, starting from the original system with perfect lattice symmetries, the continuous evolution of the HOTI and the topological edge states in this process is also very instructive, since the topological origin of the HOTI and the trend of $Q_c$ and of the Berry curvature from non-zero to zero can be carefully investigated. In this work, we systematically investigate the topological properties of a 2D photonic square lattice without both $C_4$ and $M_{x(y)}$ symmetries. We first construct a photonic square lattice with perfect $C_4$ lattice symmetry, in which the four rods in one cell are identical. Then, by gradually changing the dielectric constant of two diagonal rods, the $C_4$ and $M_{x(y)}$ symmetries are broken and a topological phase transition appears, such as the annihilation of topology-degenerate singularities carrying non-zero Berry curvatures, leading to two sets of edge states. Surprisingly, two types of corner states appear in this process without the perfect lattice symmetries. Because of the lattice asymmetries, the quadrupole moment protecting the type-\uppercase\expandafter{\romannumeral1} corner states is no longer quantized to 0.5 but takes values below it, and the type-\uppercase\expandafter{\romannumeral2} corner states caused by long-range interactions can easily be distinguished from the above edge states. We also find that an entire gap range can host only type-\uppercase\expandafter{\romannumeral1} corner states and no edge states, and that larger long-range interactions can produce additional type-\uppercase\expandafter{\romannumeral2} corner states. The above results, obtained with rigorous numerical methods, are also confirmed by a tight-binding model, which gives a clearer physical origin of the two types of corner states. This work will prove valuable in expanding the understanding of topological phases beyond ``perfect lattice symmetries''.
Furthermore, these findings are favorable for applications of the edge states and corner states thanks to the all-dielectric structure. \section{Edge states} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figure/model2.pdf} \caption{(a) A schematic of the 2D photonic unit cell with square lattice; the radius of the all-dielectric rods is fixed as $r=0.12a$, where $a$ is the lattice constant. (b) The first Brillouin zone of the square lattice. (c-e) Band structures of PhCs with parameter $\varepsilon_A=12$ (c), $11$ (d), and $1$ (e), while $\varepsilon_B$ is fixed as 12. (f-g) Distributions of Berry curvatures for the first and second bands with parameters $\varepsilon_A=11$, $\varepsilon_B=12$, $r=0.12a$.}\label{model} \end{figure} We start by considering the 2D photonic unit cell with square lattice shown in Fig. \ref{model}(a). The four corners of the unit cell contain four dielectric rods in air with radius $r=0.12a$, where $a$ is the lattice constant. The top-left and bottom-right rods are marked as type-\textit{A}, with relative permittivity $\varepsilon_A$ and permeability $\mu_A$, while the others are marked as type-\textit{B} rods, with $\varepsilon_B$ and $\mu_B$. The first Brillouin zone of this PhC is illustrated in Fig. \ref{model}(b). As a first step, we set $\varepsilon_A=\varepsilon_B=12$, $\mu_A = \mu_B = 1$, and Fig. \ref{model}(c) shows the band structure of the transverse electric (TE, $E_z$ polarization) modes. Because of spatial symmetry and time-reversal symmetry, the second and third bands are degenerate at the $\Gamma$ and $M(N)$ points. Our goal is to investigate the topological phase transition when the square lattice has neither $C_4$ nor $M_{x(y)}$ symmetry. Hence, we tune $\varepsilon_A$ of the type-\textit{A} rods slightly from 12 to 11; the resulting band structure is shown in Fig. \ref{model}(d).
Since the $C_4$ and $M_{x(y)}$ symmetries are broken, the degeneracies at the $\Gamma$ and $M(N)$ points are lifted, and those degenerate points, namely topological singularities, move along the $\Gamma - N$ direction. To characterize the topological phase of the second gap, we calculate the Berry curvatures, normalized to $[-\pi, \pi]$, for the perturbed model; the details of the calculations are given in supplementary information S1. The Berry curvatures of the first and second bands are shown in Fig. \ref{model}(f) and Fig. \ref{model}(g). We find that two topological singularities with opposite non-zero Berry curvatures emerge from the $\Gamma$ point, while two other opposite topological singularities emerge from the $N$ point in Fig. \ref{model}(g). The sum of the Berry curvatures, i.e., the Chern number, remains zero because of time-reversal symmetry. However, the sum of the local Berry curvatures, namely the valley Chern number, is non-zero, and we denote it the general valley Chern number $C_{gv}^{(n)}$ because there are degenerate points in the bands concerned. We calculate the general Chern number of the second band near the $\Gamma$ point, $C_{\Gamma}^{(2)}=\pm 0.5$, and near the $N$ point, $C_{N}^{(2)}=\pm 0.5$. The general valley Chern number of a single gap can be calculated by summing $C_{gv}^{(n)}$ over all band(s) below the gap, which means the second gap has non-trivial $C_{gv}^{(2)}=\pm 1$. Because of the antisymmetry of the Berry-curvature distributions shown in Fig. \ref{model}(g), we can conclude that the difference in the valley Chern number of the second gap between this model and its counterpart, i.e., $\varepsilon_A=12, \varepsilon_B=1$, is $\Delta C_{gv}^{(2)}=\pm 2$, which means two different topological edge states are supported according to the bulk-edge correspondence.
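Berry-curvature and Chern-number calculations of the kind referenced above (supplementary S1) are commonly carried out with the Fukui-Hatsugai-Suzuki lattice method. As an illustration only, the following sketch applies that method to a generic two-band Bloch Hamiltonian, using the Qi-Wu-Zhang model as a stand-in (it is not the photonic crystal of this work, and the helper names are ours):

```python
import numpy as np

def chern_number(h_of_k, n=40):
    """Chern number of the lowest band via the Fukui-Hatsugai-Suzuki
    field-strength method on an n x n discretized Brillouin zone."""
    ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Lowest-band eigenvectors on the k-grid (eigh sorts eigenvalues ascending).
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_of_k(kx, ky))
            u[i, j] = v[:, 0]

    def link(a, b):
        # U(1) link variable: normalized overlap between neighboring states.
        z = np.vdot(a, b)
        return z / abs(z)

    total = 0.0
    for i in range(n):
        for j in range(n):
            u1 = u[i, j]
            u2 = u[(i + 1) % n, j]
            u3 = u[(i + 1) % n, (j + 1) % n]
            u4 = u[i, (j + 1) % n]
            # Berry flux through one plaquette, gauge invariant by construction.
            total += np.angle(link(u1, u2) * link(u2, u3) * link(u3, u4) * link(u4, u1))
    return total / (2.0 * np.pi)

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(kx, ky, m):
    """Two-band Qi-Wu-Zhang model, a minimal Chern insulator."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ
```

For gapped photonic bands one would replace `qwz` by Bloch eigenvectors from a plane-wave or FEM solver, and a valley (local) Chern number follows by restricting the plaquette sum to the half of the Brillouin zone surrounding each valley.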
\begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figure/EdgeState2.pdf} \caption{(a) Band structure of a supercell; the bulk states, symmetric edge states, and antisymmetric edge states are marked as black, red, and blue dots, respectively. The parameters are set as 10 unit cells above (below) with $\varepsilon_{A(B)}=11$ and $\varepsilon_{B(A)}=12$. (b) The same as (a) but with $\varepsilon_{A(B)}=1$. (c) Spatial distributions of the $E_z$ field for $k_x=0$ (A-B) and $k_x=\pi/a$ (C-D) with $\varepsilon_{A(B)}=11$; dashed circles denote dielectric rods with $\varepsilon_{A(B)}=11$, and solid circles denote dielectric rods with $\varepsilon_{B(A)}=12$. (d) The same as (c) but with $\varepsilon_{A(B)}=1$; solid circles denote dielectric rods with $\varepsilon_{B(A)}=12$.}\label{EdgeState} \end{figure} To confirm the bulk-edge correspondence, we combine 10 unit cells with parameters $\varepsilon_A=11$, $\varepsilon_B=12$ above the boundary with their counterpart, $\varepsilon_A=12$, $\varepsilon_B=11$, below the boundary. Using the finite-element method (FEM) software COMSOL Multiphysics with the wave optics module, the band structure of a supercell with a Bloch boundary condition along the $x$-direction and a periodic (continuity) boundary condition along the $y$-direction is shown in Fig. \ref{EdgeState}(a). Two sets of edge states in the second gap are marked as red dots (symmetric modes) and blue dots (antisymmetric modes), and the $E_z$ fields at $k_x=0$ and $k_x=\pi /a$ are drawn in Fig. \ref{EdgeState}(c). However, the second gap with these parameters is not omnidirectional, meaning that the edge states would be hard to observe in the laboratory. Fortunately, the width of the second gap keeps broadening if we decrease $\varepsilon_A$ continuously and fix $\varepsilon_B=12$. For example, when $\varepsilon_A=1$ and $\varepsilon_B=12$, the omnidirectional second gap is clearly visible, as shown in Fig. \ref{model}(e).
In the process of decreasing $\varepsilon_A$ from $11$ to $1$, some interesting phenomena are observed: the two topological singularities with non-zero Berry curvature that emerged from the $\Gamma$ point and the two that emerged from the $N$ point merge and annihilate each other, so the Berry curvatures of the second band change from non-zero to zero. According to the bulk-edge correspondence, the non-zero Berry curvatures admit the existence of edge states. Surprisingly, even though the Berry curvatures go to zero because of the annihilation of the singularities, the two sets of edge states with spatial symmetry and antisymmetry always persist in this process; they evolve from the non-trivial edge states and are also protected by chiral symmetry \cite{orazbayev2018chiral}. For example, Fig. \ref{EdgeState}(b) shows the band spectrum of a supercell containing upper PhCs with $\varepsilon_A=1$, $\varepsilon_B=12$ and lower PhCs with $\varepsilon_A=12$, $\varepsilon_B=1$. The above results for the evolution of the Berry curvatures and edge states as $\varepsilon_A$ changes are shown in supplementary information S2. It is instructive to analyze the two sets of edge states from the perspectives of physics and engineering. First, the band spectrum of the antisymmetric edge states is almost flat, which can be understood from two observations: one is the strongly antisymmetric fields concentrated on the dielectric rods, as modes B and D in Fig. \ref{EdgeState}(d) show; the other is the limited intra-cell interaction (see supplementary information S5 for details). The flat edge band can be used to produce topological slow light \cite{arregui2021quantifying} and high-Q cavities. We mark these edge states as dielectric-edge-states (DESs). Second, the symmetric edge states, whose energy mainly concentrates in the air, as modes A and C in Fig. \ref{EdgeState}(d) show, would be valuable in topological waveguides, reducing the effect of impurities in the medium on high-energy light.
Such edge states are marked as air-edge-states (AESs). More importantly, the coupling between the two sets of edge states can realize a topological Fano resonator \cite{zangeneh2019topo}. \section{Corner states and fractional charge} \begin{figure*}[htbp] \centering \includegraphics[width=0.9\textwidth]{figure/CornerState.pdf} \caption{(a) Band structure of a combined supercell; the bulk, edge, and corner states are marked as black, blue, and red dots, respectively. Parameters are set as: the inner $10 \times 10$ unit cells with $\varepsilon_A=1$, $\varepsilon_B=12$ and the outer 5 layers of unit cells with $\varepsilon_A=12$, $\varepsilon_B=1$. (b) The spatial distributions of the $E_z$ field of corner states (i-iii and v-vii) and edge states (iv and viii).}\label{CornerState} \end{figure*} In this section, we focus on the HOTI with corner states, namely 0D localized modes in the 2D system. The corner states can be divided into two types: type-\uppercase\expandafter{\romannumeral1} corner states, from the eigenmodes of the unit cell, namely the ``zero-energy'' modes of the tight-binding model, and type-\uppercase\expandafter{\romannumeral2} corner states, caused by the long-range interactions of edge states. So far, the two types of corner states have not been realized simultaneously in a PhC with $C_4$ symmetry and time-reversal symmetry, because of the degeneracies of the bands \cite{SI} and the limited NNN hopping. However, if we construct a PhC without both $C_4$ and $M_{x(y)}$ symmetries, can we realize higher-order topological states and novel topological phases? Next, we demonstrate that such symmetry-broken systems host interesting phenomena and special topology beyond systems with perfect lattice symmetry. As a first step, we combine two types of PhCs to investigate the corner states and edge states. As shown in Fig.
\ref{CornerState}, a supercell of $10\times 10$ unit cells with $\varepsilon_A=1$ and $\varepsilon_B=12$ is surrounded by $5$ layers of unit cells with $\varepsilon_A=12$ and $\varepsilon_B=1$, and an absorbing boundary condition is used on the outside to avoid spurious states in the gap of interest. The spectrum around the 2nd gap is obtained by FEM and shown in Fig. \ref{CornerState}(a), in which bulk, edge, and corner states are marked by black, blue, and red dots, respectively. For convenience, we select eight typical states from low frequency to high frequency, marked as i-viii, and the $E_z$ field distributions of those states are also shown in Fig. \ref{CornerState}(b). We start from the two distinctive edge states DES-iv and AES-viii, shown in Fig. \ref{CornerState}(b)-iv and -viii, respectively, as mentioned in Section 2: DES-iv is antisymmetric along the boundary of the two types of supercell and its field concentrates on the dielectric rods, while AES-viii is symmetric and its field concentrates in the air. More specifically, the corner states near DES-iv in Fig. \ref{CornerState}(a) have similar symmetry and distribution features, namely states i, ii, iii, v, and vi, whereas the corner states near AES-viii have similar features, namely state vii. From the viewpoint of long-range coupling between different edges, states i, ii, iii, and v can be identified as typical type-\uppercase\expandafter{\romannumeral2} corner states. States vi and vii are type-\uppercase\expandafter{\romannumeral1} corner states, which will be clearly demonstrated in the next model. \begin{figure*}[htbp] \centering \includegraphics[width=1\textwidth]{figure/CornerStatePEC.pdf} \caption{(a-d) Band spectra of a supercell with $\varepsilon_A=1$, $\varepsilon_B=12$. PEC boundaries are used, and the distances between the PEC and the PhC are $d=0a$ (a), $d=0.05a$ (b), $d=0.5a$ (c), and $d=0.75a$ (d), respectively.
The insets of (b) and (d) show the spatial distributions of the corner states. (e) The spatial distributions of the corner and edge states marked i-vi in (a); (f) The spatial distributions of the corner and edge states marked i-ii in (d); (g) The spatial distribution of the LDOS with $d=0.5a$; (h) The corner charges versus $\varepsilon_A$ from $1$ to $7$; the dashed line is the extrapolation from the corner charges.}\label{CornerStatePEC} \end{figure*} In a second step, we construct the model with a perfect electric conductor (PEC) as the boundary of the PhCs, shown in Fig. \ref{CornerStatePEC}, and the local density of states (LDOS) \cite{liu2021bulk,xie2021higher} is used to strictly verify the topological origins of the two types of corner states. In Fig. \ref{CornerStatePEC}, we calculate the eigenfrequencies and eigenstates of a supercell containing $N \times N$ unit cells with $\varepsilon_A=1$ and $\varepsilon_B=12$, where the distance between the PhC and the PEC boundary is $d$. We choose $N=10$, and the band structures with $d=0a$, $0.05a$, $0.5a$, and $0.75a$ are shown in Fig. \ref{CornerStatePEC}(a-d), respectively. The typical localized states of Fig. \ref{CornerStatePEC}(a) and (d) are shown in Fig. \ref{CornerStatePEC}(e) and (f), respectively, also sorted by frequency from low to high. We analyze the PhC with PEC boundary in the order of edge states, type-\uppercase\expandafter{\romannumeral2} corner states, and type-\uppercase\expandafter{\romannumeral1} corner states. First, very interestingly, the two sets of edge states do not appear at the same time when the PEC boundary is used. Specifically, DESs appear only at $d=0a$, as Fig. \ref{CornerStatePEC}(e)-iii shows, and AESs appear only at $d=0.75a$, as Fig. \ref{CornerStatePEC}(f)-ii shows. The coupling between AESs or DESs generates type-\uppercase\expandafter{\romannumeral2} corner states, including all corner states in Fig.
\ref{CornerStatePEC}(e) and (f). Besides, some novel phenomena concerning the type-\uppercase\expandafter{\romannumeral2} corner states in a square photonic lattice without both $C_4$ and $M_{x(y)}$ symmetries are observed for the first time. For example, type-\uppercase\expandafter{\romannumeral2} corner states usually appear in pairs above and below the edge states, with opposite symmetry, such as state-i vs. state-v and state-ii vs. state-iv, as shown in Fig. \ref{CornerStatePEC}(e). Nevertheless, the corner states of Fig. \ref{CornerStatePEC}(e)-vi and Fig. \ref{CornerStatePEC}(f)-i do not appear in pairs. We believe that they should also have counterparts by the symmetry argument, but their counterparts fall within the range of the bulk states \cite{SI}. Moreover, from the field distributions of corner states ii and iv in Fig. \ref{CornerStatePEC}(e), we find that such type-\uppercase\expandafter{\romannumeral2} corner states are not very localized; physically, they arise from the larger long-range interactions between the rods in one cell. Such type-\uppercase\expandafter{\romannumeral2} corner states have not been found previously in the kagome lattice \cite{li2020higher} and can also be observed in the tight-binding model once long-range interactions are introduced (see supplementary S5 for details). Second, we investigate the corner states shown in Fig. \ref{CornerStatePEC}(b) and (c) with $d=0.05a$ and $d=0.5a$, which will be proved to be topologically non-trivial type-\uppercase\expandafter{\romannumeral1} corner states. Counterintuitively, apart from the corner states, there are no edge states inside the gap, as shown in Fig. \ref{CornerStatePEC}(b) and (c). We can use the theory of surface impedance\cite{huang2016geometric, xiao2014surface,xiong2021resonance, li2019two, zhang2021fractal} to explain the absence of edge states (see supplementary information S5 for details). Two degenerate corner states in Fig.
\ref{CornerStatePEC}(b) are concentrated on the dielectric rod, while the other two in Fig. \ref{CornerStatePEC}(c) are concentrated in the air. This phenomenon, that only corner states exist in the whole gap range, is reported for the first time to the best of our knowledge. Next, we demonstrate the topological origin of these corner states in detail. From previous works on QTIs\cite{he2020quadrupole,benalcazar2019quantization}, there are several methods to judge the topological non-triviality of corner states. The first is the filling anomaly: the non-trivial corner states are ``contributed'' by both the top and bottom bands. In particular, for a QTI with $N \times N$ cells, the non-trivial corner states of the second gap appear as solutions number $2N^2-1$ to $2N^2+2$. The second is the non-zero fractional corner charge carried by the corner states, which usually equals $0.5$ due to certain crystalline symmetries. For the corner states of our system in Fig. \ref{CornerStatePEC}(b) and (c), we have counted the solution numbers of these corner states and find that they satisfy the filling anomaly. Furthermore, we calculate the spatial distribution of the sum of the lowest $2N^2$ energy states to obtain the LDOS at $d=0.5a$ in Fig. \ref{CornerStatePEC}(g); non-zero fractional corner charges $Q_c$ are found at the four corners, while the edge charges remain zero, the same as in traditional QTIs with $C_2$ symmetry (similar results are obtained with $d=0.05a$). Hence, these two criteria ensure that the corner states in Fig. \ref{CornerStatePEC}(b) and (c) are topologically non-trivial type-\uppercase\expandafter{\romannumeral1} corner states. However, in contrast to traditional QTIs with $C_4$ and $M_{x(y)}$ symmetries, the fractional corner charges of our system are not equal to 0.5. This abnormal behavior can be explained by the absence of crystalline symmetries, e.g. the $C_4$ and $M_{x(y)}$ symmetries of our square lattice.
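The corner charge extracted from the LDOS can be written schematically as (our notation; see Refs.~\cite{benalcazar2019quantization,liu2021bulk} for the precise definition)
\begin{equation}
\rho(\mathbf{r})=\sum_{n=1}^{2N^{2}}\left|E_{z}^{(n)}(\mathbf{r})\right|^{2},
\qquad
Q_{c}=\int_{\mathrm{corner}}\rho(\mathbf{r})\,d^{2}r \;\;(\mathrm{mod}\ 1),
\end{equation}
where each eigenstate is normalized to unit charge and the integral runs over a quadrant containing one corner, with the integer bulk filling removed by the modulo operation. Crystalline symmetry pins $Q_{c}$ to $0.5$ only in the $C_4$- or $M_{x(y)}$-symmetric limit, which is why our asymmetric lattice yields fractional values below $0.5$.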
We confirm this by calculating the corner charges for $\varepsilon_A$ from $1$ to $7$; the results are shown in Fig. \ref{CornerStatePEC}(h). The dashed line in Fig. \ref{CornerStatePEC}(h) is the extrapolation from our results. The extrapolation shows that the fractional corner charge increases as the difference between $\varepsilon_A$ and $\varepsilon_B$ decreases, with $Q_c=0.5$ in a system with ``perfect lattice symmetry'' when $\varepsilon_A = \varepsilon_B$. It should be noted that we actually can NOT observe type-\uppercase\expandafter{\romannumeral1} corner states in the perfectly symmetric square lattice, because of the degeneracy of the 2nd and 3rd bands; such ideal $0.5$ corner charges therefore exist only theoretically, on the extrapolation line. Surprisingly, our symmetry-broken model provides a new method to realize type-\uppercase\expandafter{\romannumeral1} corner states despite a quadrupole moment less than $0.5$, and we will show that such type-\uppercase\expandafter{\romannumeral1} corner states also have special properties compared with common type-\uppercase\expandafter{\romannumeral1} corner states. Different from common cases, the first special property, seen in Fig. \ref{CornerStatePEC}(b) and (c), is that there are no edge states inside the gap and the topologically non-trivial type-\uppercase\expandafter{\romannumeral1} corner states lie near the gap center, far away from the bands. This means that the type-\uppercase\expandafter{\romannumeral1} corner states of our system are strongly localized, as confirmed by the field distributions in Fig. \ref{CornerStatePEC}(b) and (c). Hence, the type-\uppercase\expandafter{\romannumeral1} corner states of such systems could be widely used as high-Q cavities at the sub-wavelength scale. Moreover, the corner states in Fig. \ref{CornerStatePEC}(b) and (c) concentrate in the dielectric and air regions, respectively.
This property is also a remarkable consequence of the symmetry breaking. The air-concentrated type-\uppercase\expandafter{\romannumeral1} corner state shown in Fig. \ref{CornerStatePEC}(c) is observed for the first time; it has advantages in the design of cavities with very strong fields and of detectors for liquids, airborne molecules, etc. Finally, we note that those type-\uppercase\expandafter{\romannumeral2} corner states and non-trivial type-\uppercase\expandafter{\romannumeral1} corner states can also be realized in a simple tight-binding model (TBM) without $C_4$ and $M_{x(y)}$ symmetries, confirming the universality of these corner states. The detailed derivations and results of the TBM are given in supplementary information S5. \section{conclusion} In summary, we have constructed a general valley TI and a HOTI in PhCs without both $C_4$ and $M_{x(y)}$ symmetries but with time-reversal symmetry, and the physical origins of the edge states and of the two types of corner states are demonstrated in detail. Our results reveal rich topological physics beyond the ``perfect lattice symmetries'', which contributes to our understanding of new topological physics and extends the methods for realizing topologically non-trivial phases and states. The proposed states of such all-dielectric PhCs, with excellent properties of low group velocity, sub-wavelength localization, and air-concentrated field distribution, can be realized at almost all frequencies of interest and can be further utilized to design optoelectronic devices with enhanced robustness, such as topological slow light, Fano resonators, detectors, switches, and lasers. Furthermore, the related topics in higher-dimensional systems and for other waves, e.g., phonons and electrons, are also very attractive.
\begin{acknowledgments} This work is supported by National High Technology Research and Development Program of China (17-H863-04-ZT-001-035-01); National Key Research and Development Program of China (2016YFA0301103, 2018YFA0306201). We thank Professor Wei E.I. Sha and Dr. Samuel J Palmer for their open source codes \cite{zhao2020first, palmer2020peacock}. \end{acknowledgments} \nocite{*} \bibliographystyle{apsrev4-2} \input{manuscript.bbl} \end{document}
\section{Introduction} Despite the great success of the Standard Model (SM) as a theory of fundamental interactions, it leaves several questions unanswered: the origin of the SM flavor structure, in particular the observed pattern of fermion masses and mixings; the nature of Dark Matter (DM); the source of parity violation in electroweak (EW) interactions; the lepton and baryon asymmetries of the Universe; and the anomalous magnetic moments of the muon and electron. Addressing these issues requires a more general theory valid at higher energies. In this sense, left-right symmetric electroweak extensions of the Weinberg-Salam theory have many appealing features, foremost of which is explaining the origin of parity violation as a low energy effect, a remnant of its breaking at some high energy scale. We therefore propose, as a possible explanation of the problems listed above, a minimal renormalizable left-right symmetric theory~\cite{Pati:1974yy,Mohapatra:1974gc} based on the gauge symmetry $SU(3)_{C}\times SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L}$, supplemented by the $Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$ discrete group, where the $Z_{4}^{\left( 1\right) }$ symmetry is completely broken, whereas the $Z_{4}^{\left( 2\right) }$ symmetry is broken down to a preserved $Z_{2}$, thus allowing the implementation of a radiative inverse seesaw mechanism that generates the tiny masses of the light active neutrinos. In the proposed model, the top and exotic quarks obtain masses at tree level from the Yukawa interactions, whereas the masses of the bottom, charm and strange quarks and of the tau and muon leptons arise from a tree level Universal Seesaw mechanism \cite{Davidson:1987mh,Davidson:1987mi}. 
The masses of the first generation SM charged fermions are generated by a one loop level radiative seesaw mechanism mediated by charged vector-like fermions and electrically neutral scalars. Unlike \cite{Davidson:1987mh}, where the tree level Universal Seesaw mechanism was first implemented to generate the masses of all SM charged fermions and light active neutrinos, our model employs the tree level Universal Seesaw mechanism only for the charm, bottom and strange quarks and the tau and muon leptons. Furthermore, whereas in the model of \cite{Davidson:1987mh} the light active neutrino masses are generated by a type I seesaw mechanism, in our model we implement a one loop level inverse seesaw mechanism, mediated by electrically neutral scalar singlets and right handed Majorana neutrinos, to produce the tiny masses of the light active neutrinos. Some recent left-right symmetric models have been considered in Refs. \cite{CarcamoHernandez:2018hst,Dekens:2014ina,Nomura:2016run,Brdar:2018sbk,Ma:2020lnm,Babu:2020bgz}. Unlike the model of Ref. \cite{CarcamoHernandez:2018hst}, where non-renormalizable Yukawa interactions are employed to implement a Froggatt-Nielsen mechanism reproducing the observed SM fermion mass and mixing pattern, our proposed model is a fully renormalizable theory, with minimal particle content and symmetries, in which a tree level Universal Seesaw mechanism is combined with one-loop level radiative seesaw and inverse seesaw mechanisms to explain the observed hierarchy of SM fermion masses and mixing parameters. Furthermore, unlike Ref. \cite{CarcamoHernandez:2018hst}, our model successfully explains the electron and muon anomalous magnetic moments and includes a discussion of leptogenesis and of the collider signatures of the heavy scalars and the $Z^{\prime }$ gauge boson, none of which is presented in \cite{CarcamoHernandez:2018hst}. 
In our model, the charged vector-like leptons responsible for the tree level Universal and one-loop level radiative seesaw mechanisms that produce the SM charged fermion mass hierarchy also allow us to reproduce the measured values of the muon and electron anomalous magnetic moments, thus linking the fermion mass generation mechanism to the $g-2$ anomalies, a connection absent in the left-right symmetric model of Ref. \cite{CarcamoHernandez:2018hst}. Moreover, unlike the left-right symmetric theory of Ref. \cite{Ma:2020lnm}, our model does not rely on scalar leptoquarks to generate one loop level masses for the SM charged fermions and light active neutrinos. Besides that, whereas in the left-right symmetric model of \cite{Babu:2020bgz} the light active neutrino masses are generated from a combination of type I and type II seesaw mechanisms, in our model the tiny masses of the light active neutrinos are produced by an inverse seesaw mechanism at one loop level. Another difference between our model and the one proposed in \cite{Babu:2020bgz} is that the former presents a mechanism explaining the SM charged fermion mass hierarchy, whereas the latter does not. Furthermore, whereas in the models of Refs. \cite{Brdar:2018sbk} and \cite{Nomura:2016run} the masses of the light active neutrinos are generated by a tree level inverse seesaw and a radiative type I seesaw mechanism, respectively, in our model we use the inverse seesaw mechanism at one loop level to produce the tiny masses of the light active neutrinos. In addition, our model includes a dynamical mechanism generating the SM charged fermion mass pattern, which is not presented in the model of Ref. \cite{Brdar:2018sbk}. On the other hand, the renormalizable left-right symmetric theory proposed in this paper has a similar amount of particle content to the left-right symmetric model considered in \cite{Dekens:2014ina}. 
For instance, whereas the scalar sector of the left-right symmetric model of Ref. \cite{Dekens:2014ina} contains one scalar bidoublet (8 degrees of freedom), one $SU(2)_{L}$ scalar triplet transforming as an $SU(2)_{R}$ singlet (6 degrees of freedom) and one $SU(2)_{R}$ scalar triplet transforming as an $SU(2)_{L}$ singlet (6 degrees of freedom), amounting to 14 physical scalar degrees of freedom after subtracting the Goldstone bosons, our left-right model contains one scalar bidoublet (8 degrees of freedom), two $SU(2)_{L}$ scalar doublets (8 degrees of freedom), two $SU(2)_{R}$ scalar doublets (8 degrees of freedom), two electrically neutral gauge singlet real scalars (2 degrees of freedom) and two electrically neutral gauge singlet complex scalars (4 degrees of freedom), corresponding to 24 physical scalar degrees of freedom. Although our model has more scalar degrees of freedom than the one proposed in \cite{Dekens:2014ina}, the advantage of our proposal over those of \cite{Dekens:2014ina,Brdar:2018sbk} is that it presents a mechanism that naturally explains the SM fermion mass hierarchy, whereas the latter do not. The paper is organized as follows. In section \ref{model} we outline the proposed model. The implications of our model for the SM fermion mass hierarchy are discussed in section \ref{fermionmasses}. The implications for charged lepton flavor violation are described in section \ref{LFV}. The consequences for leptogenesis are described in section \ref{leptogenesis}, while the scalar potential of the model is analyzed in section \ref{scalarpotential}. The implications for the Higgs diphoton decay are discussed in section \ref{sec.Higgsdiphoton}, and in section \ref{sec.gminus2} we analyze the muon and electron anomalous magnetic moments. 
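For concreteness, and assuming (as is standard in these models) six Goldstone bosons eaten by the $W^{\pm }$, $Z$, $W_{R}^{\pm }$ and $Z^{\prime }$ gauge bosons, the two countings above read explicitly (we denote the scalar triplets of Ref. \cite{Dekens:2014ina} generically by $\Delta _{L,R}$):
\begin{equation*}
\underbrace{8}_{\Phi }+\underbrace{6+6}_{\Delta _{L},\Delta _{R}}-6=14,\hspace{1.5cm}\underbrace{8}_{\Phi }+\underbrace{8}_{\chi _{L},\phi _{L}}+\underbrace{8}_{\chi _{R},\phi _{R}}+\underbrace{2+4}_{\text{singlets}}-6=24.
\end{equation*}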
Heavy scalar and $Z^{\prime }$ production at a proton-proton collider are discussed in sections \ref{HeavyScalar} and \ref{Zprime}, respectively. The implications of our model for meson oscillations are discussed in section \ref{FCNC}. We conclude in section \ref{conclusions}. An analytical argument concerning the minimal number of fermionic seesaw mediators required to generate the masses of the SM fermions via a seesaw-like mechanism is presented in Appendix \ref{M}. \section{An extended Left-Right symmetric model} \label{model} Before providing a detailed explanation of our left-right symmetric model, we explain the reasoning behind the extra scalars, fermions and symmetries needed to implement the interplay of tree level Universal and radiative seesaw mechanisms explaining the SM charged fermion mass hierarchy, together with the one loop level inverse seesaw mechanism generating the tiny neutrino masses. It is worth mentioning that in our proposed model the mass of the top quark is generated by a renormalizable Yukawa operator with an order one Yukawa coupling, i.e. \begin{equation} \overline{Q}_{3L}\Phi Q_{iR},\hspace{1.5cm}i=1,2,3 \end{equation}% where $Q_{3L}$ and $Q_{iR}$ are $SU\left( 2\right) _{L}$ and $SU\left( 2\right) _{R}$ quark doublets, respectively: \begin{equation} Q_{iL}=\left( \begin{array}{c} u_{iL} \\ d_{iL} \end{array} \right) ,\hspace{1.5cm}Q_{iR}=\left( \begin{array}{c} u_{iR} \\ d_{iR} \end{array} \right) ,\hspace{1.5cm}i=1,2,3, \end{equation}% whereas $\Phi $ is a scalar bidoublet, with the VEV pattern \begin{equation} \left\langle \Phi \right\rangle =\left( \begin{array}{cc} v_{1} & 0 \\ 0 & v_{2} \end{array} \right) , \end{equation}% where we have set $v_{2}=0$ to prevent a bottom quark mass arising from the above Yukawa interaction. 
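To see explicitly how this VEV alignment works, note that on the vacuum the bidoublet operator reduces (schematically, omitting the $1/\sqrt{2}$ normalization of the neutral components) to
\begin{equation*}
\alpha _{i}\overline{Q}_{3L}\left\langle \Phi \right\rangle Q_{iR}=\alpha _{i}\left( v_{1}\overline{u}_{3L}u_{iR}+v_{2}\overline{d}_{3L}d_{iR}\right) \overset{v_{2}=0}{\longrightarrow }\alpha _{i}v_{1}\overline{u}_{3L}u_{iR},
\end{equation*}
so only the up-type sector receives a tree level entry from the bidoublet, of order $m_{t}\sim \alpha v_{1}$ after rotation to the mass basis, while the bottom quark mass must instead come from the seesaw mechanisms discussed below.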
Now, to generate tree level masses via a Universal Seesaw mechanism for the bottom, strange and charm quarks, as well as for the tau and muon leptons, one loop level masses for the first generation SM charged fermions and the tiny masses for the light active neutrinos via a one loop level inverse seesaw mechanism, we need to forbid the operators: \begin{eqnarray} &&\overline{Q}_{nL}\Phi Q_{iR},\hspace{1.5cm}\overline{Q}_{nL}\widetilde{% \Phi }Q_{iR},\hspace{1.5cm}n=1,2,\hspace{1.5cm}i=1,2,3, \notag \\ &&\overline{L}_{iL}\widetilde{\Phi }L_{jR},\hspace{1.5cm}\overline{L}_{iL}% \widetilde{\chi }_{L}N_{jR},\hspace{1.5cm}\left( m_{N}\right) _{ij}\overline{% N}_{iR}N_{jR}^{C},\hspace{1.5cm}i,j=1,2,3. \end{eqnarray}% where $\chi_L$ ($\chi_R$) is a $SU\left ( 2\right ) _{L}$ ($SU\left ( 2\right ) _{R}$) scalar doublet. Furthermore, $L_{iL}$ and $L_{iR}$ are $SU\left( 2\right) _{L}$ and $SU\left( 2\right) _{R}$ lepton doublets, respectively:% \begin{equation} L_{iL}=\left( \begin{array}{c} \nu _{iL} \\ e_{iL}% \end{array}% \right) ,\hspace{1.5cm}L_{iR}=\left( \begin{array}{c} \nu _{iR} \\ e_{iR}% \end{array}% \right) ,\hspace{1.5cm}i=1,2,3, \end{equation}% while $N_{iR}$ ($i=1,2,3$) are gauge singlet neutral leptons. As it will be shown in the following, the aforementioned gauge singlet neutral leptons are necessary for the implementation of the one loop level inverse seesaw mechanism that produces the tiny masses of the light active neutrinos. 
Furthermore, the successful implementation of the tree level Universal and radiative seesaw mechanisms explaining the SM charged fermion mass hierarchy, and of the one loop level inverse seesaw mechanism generating the tiny neutrino masses, requires the following operators: \begin{eqnarray} &&\overline{Q}_{3L}\chi _{L}B_{1R},\hspace{1.5cm}\overline{Q}_{nL}\chi _{L}B_{2R},\hspace{1.5cm}\overline{B}_{nL}\chi _{R}^{\dagger }Q_{iR},\hspace{1.5cm}\overline{B}_{1L}\rho B_{1R},\hspace{1.5cm}\overline{B}_{2L}\sigma B_{2R}, \notag \\ &&\overline{Q}_{nL}\widetilde{\chi }_{L}T_{R},\hspace{1.5cm}\overline{T}_{L}\widetilde{\chi }_{R}^{\dagger }Q_{iR},\hspace{1.5cm}\overline{T}_{L}\sigma T_{R},\hspace{1.5cm}n=1,2,\hspace{1.5cm}i=1,2,3, \notag \\ &&\overline{Q}_{nL}\phi _{L}B_{R}^{\prime },\hspace{1cm}\bar{B}_{L}^{\prime }\phi _{R}^{\dagger }Q_{iR},\hspace{1cm}\overline{Q}_{nL}\widetilde{\phi }_{L}T_{R}^{\prime },\hspace{1cm}\bar{T}_{L}^{\prime }\widetilde{\phi }_{R}^{\dagger }Q_{iR},\hspace{1cm}\bar{B}_{L}^{\prime }\sigma B_{R}^{\prime },\hspace{1cm}\bar{T}_{L}^{\prime }\sigma T_{R}^{\prime }, \notag \\ &&\overline{L}_{iL}\chi _{L}E_{nR},\hspace{1cm}\overline{E}_{nL}\chi _{R}^{\dagger }L_{jR},\hspace{1cm}\overline{L}_{iL}\phi _{L}E_{R}^{\prime },\hspace{1cm}\bar{E}_{L}^{\prime }\phi _{R}^{\dagger }L_{iR},\hspace{1cm}\overline{E}_{nL}\rho E_{nR},\hspace{1cm}\bar{E}_{L}^{\prime }\rho E_{R}^{\prime }, \notag \\ &&\overline{L}_{iL}\Phi L_{jR},\hspace{1cm}\overline{N_{iR}^{C}}\widetilde{\chi }_{R}^{\dagger }L_{jR},\hspace{1cm}\overline{\Omega }_{nR}\Omega _{nR}^{C}\eta ,\hspace{1cm}\overline{N}_{nR}\Omega _{kR}^{C}\varphi ,\hspace{1cm}n,k=1,2. \end{eqnarray}% This requires adding the $Z_{4}^{\left( 1\right) }$ and $Z_{4}^{\left( 2\right) }$ discrete symmetries, both spontaneously broken: the former is completely broken, while the latter is broken down to a preserved $Z_{2}$ symmetry. 
This remaining conserved $Z_{2}$ symmetry allows the implementation of an inverse seesaw mechanism at one loop level to produce the tiny neutrino masses. Let us note that the gauge singlet neutral leptons $\Omega _{nR}$ ($n=1,2$) are crucial for generating the term $\left( m_{N}\right) _{ij}\overline{N}_{iR}N_{jR}^{C}$ ($i,j=1,2,3$) at one loop level, thus allowing the implementation of the one-loop level inverse seesaw mechanism. Additionally, the above mentioned exotic neutral lepton content is the minimal one required to generate masses for two light active neutrinos, as required by the neutrino oscillation experimental data. Besides that, the SM charged fermion sector has to be extended to include the following heavy fermions: up type quarks $T$, $T^{\prime }$, down type quarks $B_{n}$, $B^{\prime }$ and charged leptons $E_{n}$ ($n=1,2$), $E^{\prime }$, in singlet representations of $SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}$. As a consequence of this exotic charged fermion spectrum, the rows and columns of the tree level SM charged fermion mass matrices are linearly dependent, implying that the first generation SM charged fermions are massless at tree level. The one loop level corrections to these matrices, mediated by the $T^{\prime }$, $B^{\prime }$ and $E^{\prime }$ fermionic fields, render these rows and columns linearly independent, thus yielding one-loop level masses for the up and down quarks as well as for the electron. Consequently, the aforementioned exotic charged fermion spectrum is the minimal one necessary to ensure that no massless SM charged fermions appear in the model once one loop level corrections are taken into account. For a detailed analytical argument concerning the minimal number of fermionic seesaw mediators required to generate the masses of the SM fermions via a seesaw-like mechanism, the reader is referred to Appendix \ref{M}. 
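The rank counting behind this argument can be sketched schematically (this is an illustration of the structure only, not the exact expressions of the model): integrating out a heavy mediator $F$ of mass $M_{F}$, with mixings $x_{i}v_{L}$ and $z_{j}v_{R}$ to the left and right handed SM fermions, generates a tree level mass matrix of outer-product form,
\begin{equation*}
\left( M_{f}\right) _{ij}\sim \frac{\left( x_{i}v_{L}\right) \left( z_{j}v_{R}\right) }{M_{F}},
\end{equation*}
which has rank one. With two mediators per charged fermion sector, the $3\times 3$ tree level mass matrix has rank at most two, so one eigenstate, identified with the first generation, remains massless until the one loop contributions of the primed fields lift it.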
On the other hand, regarding the scalar sector, $\chi _{L}$, $\phi _{L}$ and $\chi _{R}$, $\phi _{R}$ are $SU\left( 2\right) _{L}$ and $SU\left( 2\right) _{R}$ scalar doublets, respectively, whereas $\eta $, $\sigma $, $\rho $ and $\varphi $ are gauge singlet scalars. The $\chi _{L}$ ($\chi _{R}$) scalar is crucial for generating mass mixing terms between left (right) handed SM charged fermions and right (left) handed exotic charged fermions, and the $SU\left( 2\right) _{R}$ scalar doublet $\chi _{R}$ also triggers the spontaneous breaking of the $SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L}$ symmetry down to the SM electroweak gauge group. The gauge singlet scalars $\sigma $ and $\rho $ are needed to generate the masses of the charged exotic fermions, while the gauge singlet scalars $\eta $ and $\varphi $ are required to generate tree and one loop level masses for the Majorana neutrinos $\Omega _{nR}$ ($n=1,2$) and $N_{iR}$ ($i=1,2,3$), which is crucial for the radiative generation of the $\mu $ parameter of the inverse seesaw mechanism. Moreover, the scalar bidoublet $\Phi $ is crucial to generate the tree level top quark mass, as well as the Dirac neutrino submatrix, as will be shown below. This scalar content is the minimal one required for a successful implementation of the tree level Universal and one loop level radiative seesaw mechanisms that explain the SM charged fermion mass hierarchy, as well as of the one loop level inverse seesaw mechanism that produces the tiny neutrino masses. With the charge assignments specified below, these seesaw mechanisms can be implemented, explaining the SM fermion mass hierarchy. 
Our proposed model is based on the gauge symmetry $SU(3)_{C}\times SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L}$, supplemented by the $Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$ discrete group, where the full symmetry $\mathcal{G}$ exhibits the following breaking scheme: \begin{eqnarray} &&\mathcal{G}=SU(3)_{C}\times SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L}\times Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) } \notag \\ &&\hspace{35mm}\Downarrow v_{\sigma },v_{\eta },v_{\rho } \notag \\[0.12in] &&\hspace{15mm}SU(3)_{C}\times SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L} \notag \\ &&\hspace{35mm}\Downarrow v_{R} \notag \\[0.12in] &&\hspace{15mm}SU(3)_{C}\times SU\left( 2\right) _{L}\times U\left( 1\right) _{Y}\times Z_{2} \notag \\[0.12in] &&\hspace{35mm}\Downarrow v_{1},v_{L} \notag \\[0.12in] &&\hspace{15mm}SU(3)_{C}\times U\left( 1\right) _{Q}\times Z_{2} \end{eqnarray}% Both the $Z_{4}^{\left( 1\right) }$ and $Z_{4}^{\left( 2\right) }$ discrete groups are spontaneously broken and are crucial for avoiding a tree level inverse seesaw mechanism. The $Z_{4}^{\left( 1\right) }$ symmetry is completely broken, whereas the $Z_{4}^{\left( 2\right) }$ symmetry is broken down to a preserved $Z_{2}$ symmetry. We assume that these discrete symmetries are broken at a scale much larger than the left-right symmetry breaking scale, which we further take to be $v_{R}\sim \mathcal{O}(10)$ TeV. In addition, the $Z_{4}^{\left( 2\right) }$ symmetry, spontaneously broken down to the preserved $Z_{2}$, is crucial to forbid the appearance of the term $\left( m_{N}\right) _{ij}\overline{N}_{iR}N_{jR}^{C}$ at tree level, thus allowing the implementation of the one loop level inverse seesaw mechanism that generates the light active neutrino masses. 
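For orientation, we recall the generic inverse seesaw structure that this breaking pattern realizes radiatively (shown schematically; in our model the $\mu $ block arises only at one loop). In the basis $\left( \nu _{L},\nu _{R}^{C},N_{R}^{C}\right) $ the neutral lepton mass matrix has the texture
\begin{equation*}
M_{\nu }=\left(
\begin{array}{ccc}
0 & m_{D} & 0 \\
m_{D}^{T} & 0 & M \\
0 & M^{T} & \mu
\end{array}
\right) ,\hspace{1.5cm}m_{\text{light}}\simeq m_{D}\left( M^{T}\right) ^{-1}\mu M^{-1}m_{D}^{T},
\end{equation*}
so the smallness of the light active neutrino masses is controlled by the loop suppressed $\mu $ term rather than by a very large seesaw scale.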
Besides that, the spontaneously broken $Z_{4}^{\left( 1\right) }$ symmetry is crucial to prevent tree level Yukawa mass terms involving the scalar bidoublet and SM charged fermions lighter than the top quark. As we will see in the following, in the SM fermion sector only the top quark acquires its mass from a renormalizable Yukawa interaction with the scalar bidoublet, whereas the SM charged fermions lighter than the top quark get their masses from tree level Universal seesaw and radiative seesaw mechanisms. The fermion assignments under the $SU(3)_{C}\times SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L}$ group are: \begin{eqnarray} Q_{iL} &=&\left( \begin{array}{c} u_{iL} \\ d_{iL} \end{array} \right) \sim \left( \mathbf{3},\mathbf{2,1},\frac{1}{3}\right) ,\hspace{1.5cm}Q_{iR}=\left( \begin{array}{c} u_{iR} \\ d_{iR} \end{array} \right) \sim \left( \mathbf{3},\mathbf{1,2},\frac{1}{3}\right) ,\hspace{1.5cm}i=1,2,3, \notag \\ L_{iL} &=&\left( \begin{array}{c} \nu _{iL} \\ e_{iL} \end{array} \right) \sim \left( \mathbf{1},\mathbf{2,1},-1\right) ,\hspace{1.5cm}L_{iR}=\left( \begin{array}{c} \nu _{iR} \\ e_{iR} \end{array} \right) \sim \left( \mathbf{1},\mathbf{1,2},-1\right) ,\hspace{1.5cm}i=1,2,3, \notag \\ T_{R} &\sim &\left( \mathbf{3},\mathbf{1,1},\frac{4}{3}\right) ,\hspace{1.5cm}T_{L}\sim \left( \mathbf{3},\mathbf{1,1},\frac{4}{3}\right) ,\hspace{1.5cm}T_{R}^{\prime }\sim \left( \mathbf{3},\mathbf{1,1},\frac{4}{3}\right) ,\hspace{1.5cm}T_{L}^{\prime }\sim \left( \mathbf{3},\mathbf{1,1},\frac{4}{3}\right) , \notag \\ B_{nR} &\sim &\left( \mathbf{3},\mathbf{1,1},-\frac{2}{3}\right) ,\hspace{1.5cm}B_{nL}\sim \left( \mathbf{3},\mathbf{1,1},-\frac{2}{3}\right) ,\hspace{1.5cm}B_{R}^{\prime }\sim \left( \mathbf{3},\mathbf{1,1},-\frac{2}{3}\right) ,\hspace{1.5cm}B_{L}^{\prime }\sim \left( \mathbf{3},\mathbf{1,1},-\frac{2}{3}\right) , \notag \\ E_{nR} &\sim &\left( 
\mathbf{1},\mathbf{1,1},-2\right) ,\hspace{1.5cm}E_{nL}\sim \left( \mathbf{1},\mathbf{1,1},-2\right) ,\hspace{1.5cm}E_{R}^{\prime }\sim \left( \mathbf{1},\mathbf{1,1},-2\right) ,\hspace{1.5cm}E_{L}^{\prime }\sim \left( \mathbf{1},\mathbf{1,1},-2\right) , \notag \\ N_{iR} &\sim &\left( \mathbf{1},\mathbf{1,1},0\right) ,\hspace{1.5cm}\Omega _{nR}\sim \left( \mathbf{1},\mathbf{1,1},0\right) ,\hspace{1.5cm}n=1,2. \end{eqnarray}% Let us note that we have extended the fermion sector of the original left-right symmetric model by introducing two exotic up type quarks $T$, $T^{\prime }$, three exotic down type quarks $B_{n}$ ($n=1,2$), $B^{\prime }$, three exotic charged leptons $E_{n}$, $E^{\prime }$ and five Majorana neutrinos, i.e., $N_{iR}$ ($i=1,2,3$) and $\Omega _{nR}$ ($n=1,2$). Such exotic fermions are assigned to singlet representations of the $SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}$ group. This exotic fermion content is the minimal one required to generate tree level masses via a Universal Seesaw mechanism for the bottom, charm and strange quarks, as well as for the tau and muon leptons, and one loop level masses for the first generation SM charged fermions, i.e., the up and down quarks and the electron. 
The scalar assignments under the $SU(3)_{C}\times SU\left( 2\right) _{L}\times SU\left( 2\right) _{R}\times U\left( 1\right) _{B-L}$ group are: \begin{eqnarray} \Phi &=&\left( \begin{array}{cc} \frac{1}{\sqrt{2}}\left( v_{1}+\phi _{1R}^{0}+i\phi _{1I}^{0}\right) & \phi _{2}^{+} \\ \phi _{1}^{-} & \frac{1}{\sqrt{2}}\left( v_{2}+\phi _{2R}^{0}+i\phi _{2I}^{0}\right)% \end{array}% \right) \sim \left( \mathbf{1},\mathbf{2,2},0\right) , \notag \\ \chi _{L} &=&\left( \begin{array}{c} \chi _{L}^{+} \\ \frac{1}{\sqrt{2}}\left( v_{L}+\func{Re}\chi _{L}^{0}+i\func{Im}\chi _{L}^{0}\right)% \end{array}% \right) \sim \left( \mathbf{1},\mathbf{2,1},1\right) ,\hspace{1cm}\chi _{R}=\left( \begin{array}{c} \chi _{R}^{+} \\ \frac{1}{\sqrt{2}}\left( v_{R}+\func{Re}\chi _{R}^{0}+i\func{Im}\chi _{R}^{0}\right)% \end{array}% \right) \sim \left( \mathbf{1},\mathbf{1,2},1\right) , \notag \\ \phi _{L} &=&\left( \begin{array}{c} \phi _{L}^{+} \\ \frac{1}{\sqrt{2}}\left( \func{Re}\phi _{L}^{0}+i\func{Im}\phi _{L}^{0}\right)% \end{array}% \right) \sim \left( \mathbf{1},\mathbf{2,1},1\right) ,\hspace{1cm}\phi _{R}=\left( \begin{array}{c} \phi _{R}^{+} \\ \frac{1}{\sqrt{2}}\left( \func{Re}\phi _{R}^{0}+i\func{Im}\phi _{R}^{0}\right)% \end{array}% \right) \sim \left( \mathbf{1},\mathbf{1,2},1\right) , \notag \\ \sigma &\sim &\left( \mathbf{1},\mathbf{1,1},0\right) ,\hspace{1cm}\varphi \sim \left( \mathbf{1},\mathbf{1,1},0\right) ,\hspace{1cm}\eta \sim \left( \mathbf{1},\mathbf{1,1},0\right) ,\hspace{1cm}\rho \sim \left( \mathbf{1},% \mathbf{1,1},0\right) . \end{eqnarray}% To implement the tree level Universal mechanism we have introduced the scalars $\chi _{L}$, $\chi _{R}$ which are responsible for generating tree level mixings between the exotic and SM fermions. Besides that, the scalar fields $\phi _{L}$, $\phi _{R}$ are required for the implementation of the radiative seesaw mechanism that produces the masses for the first generation SM charged fermions. 
We have further introduced the gauge singlet scalars $\eta $ and $\varphi $ which are crucial for the implementation of the radiative inverse seesaw mechanism necessary to produce the light active neutrino masses. Furthermore, the gauge singlet scalar $\sigma $ provides tree level masses for the exotic $T$ , $T^{\prime }$, $B_{2}$ and $B^{\prime }$ quarks. Besides that, the gauge singlet scalars $\rho $ and $\eta $ are included in the scalar spectrum in order to provide tree level masses for the exotic down type quark $B_{1}$, for the exotic leptons $E_{n}$, $E^{\prime }$\ and $% \Omega _{nR}$ ($n=1,2$), without the need of invoking soft-breaking mass terms. Furthermore, we have also included the scalar bidoublet $\Phi $, which is responsible for generating the top quark mass from the renormalizable Yukawa operator $\overline{Q}_{3L}\Phi Q_{iR}$ ($i=1,2,3$).% The vacuum expectation values (VEVs) of the scalars $\Phi $, $\chi _{L}$ and $\chi _{R}$ are: \begin{equation} \left\langle \Phi \right\rangle =\left( \begin{array}{cc} v_{1} & 0 \\ 0 & v_{2}% \end{array}% \right) ,\hspace{1.5cm}\left\langle \chi _{L}\right\rangle =\left( \begin{array}{c} 0 \\ v_{L}% \end{array}% \right) ,\hspace{1.5cm}\left\langle \chi _{R}\right\rangle =\left( \begin{array}{c} 0 \\ v_{R}% \end{array}% \right) , \end{equation} where for the sake of simplicity we will set $v_{2}=0$. 
The fermion assignments under $Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$ are: \begin{eqnarray*} Q_{nL} &\sim &\left( -1,-1\right) ,\hspace{1cm}Q_{3L}\sim \left( i,-1\right) ,\hspace{1cm}Q_{jR}\sim \left( 1,1\right) ,\hspace{1cm} \\ T_{L} &\sim &\left( 1,1\right) ,\hspace{1cm}T_{R}\sim \left( 1,-1\right) ,% \hspace{1cm}T_{L}^{\prime }\sim \left( 1,-i\right) ,\hspace{1cm}% T_{R}^{\prime }\sim \left( 1,i\right) , \\ B_{nL} &\sim &\left( 1,1\right) ,\hspace{1cm}B_{1R}\sim \left( -i,-1\right) ,% \hspace{1cm}B_{2R}\sim \left( 1,-1\right) ,\hspace{1cm}B_{L}^{\prime }\sim \left( 1,i\right) ,\hspace{1cm}B_{R}^{\prime }\sim \left( 1,-i\right) , \\ L_{jL} &\sim &\left( 1,i\right) ,\hspace{1cm}L_{jR}\sim \left( -i,-i\right) ,% \hspace{1cm}N_{jR}\sim \left( i,i\right) ,\hspace{1cm}\Omega _{nR}\sim \left( -i,1\right) ,\hspace{1cm}j=1,2,3, \\ E_{nL} &\sim &\left( -i,-i\right) ,\hspace{1cm}E_{nR}\sim \left( -1,i\right) ,\hspace{1cm}E_{L}^{\prime }\sim \left( -i,1\right) ,\hspace{1cm}% E_{R}^{\prime }\sim \left( -1,-1\right) ,\hspace{1cm}n=1,2. \end{eqnarray*} The scalar fields have the following $Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$ assignments: \begin{eqnarray} \Phi &\sim &\left( i,-1\right) ,\hspace{1cm}\chi _{L}\sim \left( -1,1\right) ,\hspace{1cm}\chi _{R}\sim \left( 1,1\right) ,\hspace{1cm}\phi _{L}\sim \left( -1,-i\right) ,\hspace{1cm}\phi _{R}\sim \left( 1,-i\right) \notag \\ \varphi &\sim &\left( 1,i\right) ,\hspace{1cm}\sigma \sim \left( 1,-1\right) ,\hspace{1cm}\eta \sim \left( -1,1\right) ,\hspace{1cm}\rho \sim \left( i,-1\right) . \end{eqnarray}% The fermion and scalar assignments under the $SU(3)_{C}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\times Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$ symmetry are shown in Tables \ref{fermions} and \ref{scalars}, respectively. 
\begin{table}[tbp] \begin{equation*} \begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & Q_{nL} & Q_{3L} & Q_{iR} & L_{iL} & L_{iR} & T_{L} & T_{R} & T_{L}^{\prime } & T_{R}^{\prime } & B_{nL} & B_{1R} & B_{2R} & B_{L}^{\prime } & B_{R}^{\prime } & E_{nL} & E_{nR} & E_{L}^{\prime } & E_{R}^{\prime } & N_{iR} & \Omega _{nR} \\ \hline SU(3)_{C} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{1} & \mathbf{1} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{3} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} \\ \hline SU\left( 2\right) _{L} & \mathbf{2} & \mathbf{2} & \mathbf{1} & \mathbf{2} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} \\ \hline SU\left( 2\right) _{R} & \mathbf{1} & \mathbf{1} & \mathbf{2} & \mathbf{1} & \mathbf{2} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} \\ \hline U\left( 1\right) _{B-L} & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & -1 & -1 & \frac{4}{3} & \frac{4}{3} & \frac{4}{3} & \frac{4}{3} & -\frac{2}{3} & -% \frac{2}{3} & -\frac{2}{3} & -\frac{2}{3} & -\frac{2}{3} & -2 & -2 & -2 & -2 & 0 & 0 \\ \hline Z_{4}^{\left( 1\right) } & -1 & i & 1 & 1 & -i & 1 & 1 & 1 & 1 & 1 & -i & 1 & 1 & 1 & -i & -1 & -i & -1 & i & -i \\ \hline Z_{4}^{\left( 2\right) } & -1 & -1 & 1 & i & -i & 1 & -1 & -i & i & 1 & -1 & -1 & i & -i & -i & i & 1 & -1 & i & 1 \\ \hline \end{array}% \end{equation*}% \caption{Fermion assignments under $SU(3)_{C}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\times Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$. 
Here $i=1,2,3$ and $n=1,2$.} \label{fermions} \end{table} \begin{table}[tbp] \begin{equation*} \begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline & \Phi & \chi _{L} & \chi _{R} & \phi _{L} & \phi _{R} & \varphi & \sigma & \eta & \rho \\ \hline SU(3)_{C} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} \\ \hline SU\left( 2\right) _{L} & \mathbf{2} & \mathbf{2} & \mathbf{1} & \mathbf{2} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} \\ \hline SU\left( 2\right) _{R} & \mathbf{2} & \mathbf{1} & \mathbf{2} & \mathbf{1} & \mathbf{2} & \mathbf{1} & \mathbf{1} & \mathbf{1} & \mathbf{1} \\ \hline U\left( 1\right) _{B-L} & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ \hline Z_{4}^{\left( 1\right) } & i & -1 & 1 & -1 & 1 & 1 & 1 & -1 & i \\ \hline Z_{4}^{\left( 2\right) } & -1 & 1 & 1 & -i & -i & i & -1 & 1 & -1 \\ \hline \end{array}% \end{equation*}% \caption{Scalar assignments under $SU(3)_{C}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\times Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$.} \label{scalars} \end{table} Let us note that all scalar fields acquire nonvanishing vacuum expectation values, except for the scalar singlet $\varphi $ and the $\phi _{L}$ and $\phi _{R}$ fields, whose $Z_{4}^{\left( 2\right) }$ charges correspond to nontrivial charges under the preserved remnant $Z_{2}$ symmetry. Furthermore, due to this remnant $Z_{2}$ symmetry, the real and imaginary parts of the scalar singlet $\varphi $ and of the neutral components of the $\phi _{L}$ and $\phi _{R}$ fields do not mix with the remaining CP even and CP odd neutral scalar fields of the model. It is worth mentioning that the preserved $Z_{2}$ symmetry allows for stable scalar and fermionic dark matter candidates. 
The scalar dark matter candidate is the lightest among the $\func{Re}\varphi $, $\func{Im}\varphi $, $\func{Re}\phi _{L}^{0}$, $\func{Re}\phi _{R}^{0}$, $\func{Im}\phi _{L}^{0}$ and $\func{Im}\phi _{R}^{0}$ fields, whereas the fermionic dark matter candidate is the lightest among the right handed Majorana neutrinos $N_{iR}$ ($i=1,2,3$). A scalar DM candidate annihilates mainly into $WW$, $ZZ$, $t\overline{t}$, $b\overline{b}$ and $h_{SM}h_{SM}$ via a Higgs portal scalar interaction. These annihilation channels contribute to the DM relic density, which can be accommodated for appropriate values of the scalar DM mass and of the Higgs portal coupling. Studies of the dark matter constraints in the scenario of a scalar singlet dark matter candidate are provided in~\cite{Escudero:2016gzx,Bernal:2017xat,CarcamoHernandez:2020ehn}. Regarding direct detection prospects, the scalar DM candidate would scatter off a nuclear target in a detector via Higgs boson exchange in the $t$-channel, giving rise to a constraint on the Higgs portal coupling. In the scenario of a fermionic DM candidate, the dark matter relic abundance can be obtained through freeze-in, as shown in \cite{Bernal:2017xat}. The resulting constraints can be fulfilled in an appropriate region of parameter space, along similar lines to Refs.~\cite{Bernal:2017xat,Han:2019lux,Cabrera:2020lmg,CarcamoHernandez:2021iat,Abada:2021yot}. A detailed study of the implications of our model for dark matter is beyond the scope of this work and will be done elsewhere. 
With the above particle content, the following relevant Yukawa terms arise: \begin{eqnarray} -\mathcal{L}_{Y} &=&\dsum\limits_{i=1}^{3}\alpha _{i}\overline{Q}_{3L}\Phi Q_{iR}+\dsum\limits_{n=1}^{2}x_{n}^{\left( T\right) }\overline{Q}_{nL}\widetilde{\chi }_{L}T_{R}+\dsum\limits_{i=1}^{3}z_{i}^{\left( T\right) }\overline{T}_{L}\widetilde{\chi }_{R}^{\dagger }Q_{iR}+\dsum\limits_{n=1}^{2}w_{n}^{\left( T^{\prime }\right) }\overline{Q}_{nL}\widetilde{\phi }_{L}T_{R}^{\prime }+\dsum\limits_{i=1}^{3}r_{i}^{\left( T^{\prime }\right) }\bar{T}_{L}^{\prime }\widetilde{\phi }_{R}^{\dagger }Q_{iR} \notag \\ &&+x_{3}^{\left( B\right) }\overline{Q}_{3L}\chi _{L}B_{1R}+\dsum\limits_{n=1}^{2}x_{n2}^{\left( B\right) }\overline{Q}_{nL}\chi _{L}B_{2R}+\dsum\limits_{n=1}^{2}\dsum\limits_{i=1}^{3}z_{ni}^{\left( B\right) }\overline{B}_{nL}\chi _{R}^{\dagger }Q_{iR}+\dsum\limits_{n=1}^{2}w_{n}^{\left( B^{\prime }\right) }\overline{Q}_{nL}\phi _{L}B_{R}^{\prime }+\dsum\limits_{i=1}^{3}r_{i}^{\left( B^{\prime }\right) }\bar{B}_{L}^{\prime }\phi _{R}^{\dagger }Q_{iR} \notag \\ &&+y_{T}\overline{T}_{L}\sigma T_{R}+y_{T^{\prime }}\bar{T}_{L}^{\prime }\sigma T_{R}^{\prime }+y_{B_{1}}\overline{B}_{1L}\rho B_{1R}+y_{B_{2}}\overline{B}_{2L}\sigma B_{2R}+y_{B^{\prime }}\bar{B}_{L}^{\prime }\sigma B_{R}^{\prime }+\dsum\limits_{n=1}^{2}y_{E_{n}}\overline{E}_{nL}\rho E_{nR}+y_{E^{\prime }}\bar{E}_{L}^{\prime }\rho E_{R}^{\prime } \notag \\ &&+\dsum\limits_{i=1}^{3}\dsum\limits_{n=1}^{2}x_{in}^{\left( E\right) }\overline{L}_{iL}\chi _{L}E_{nR}+\dsum\limits_{n=1}^{2}\dsum\limits_{j=1}^{3}z_{nj}^{\left( E\right) }\overline{E}_{nL}\chi _{R}^{\dagger }L_{jR}+\dsum\limits_{i=1}^{3}w_{i}^{\left( E^{\prime }\right) }\overline{L}_{iL}\phi _{L}E_{R}^{\prime }+\dsum\limits_{i=1}^{3}r_{i}^{\left( E^{\prime }\right) }\bar{E}_{L}^{\prime }\phi _{R}^{\dagger }L_{iR} \notag \\ &&+\dsum\limits_{i=1}^{3}\dsum\limits_{j=1}^{3}y_{ij}^{\left( L\right) }\overline{L}_{iL}\Phi
L_{jR}+\dsum\limits_{i=1}^{3}\dsum\limits_{j=1}^{3}x_{ij}^{\left( N\right) }\overline{N_{iR}^{C}}\widetilde{\chi }_{R}^{\dagger }L_{jR}+\dsum\limits_{n=1}^{2}\left( y_{\Omega }\right) _{n}\overline{\Omega }_{nR}\Omega _{nR}^{C}\eta +\dsum\limits_{i=1}^{3}\dsum\limits_{k=1}^{2}x_{ik}^{\left( S\right) }\overline{N}_{iR}\Omega _{kR}^{C}\varphi +H.c. \label{Ly} \end{eqnarray} To close this section, in the following we discuss the implications of our model for flavor changing neutral currents (FCNC). The FCNC in the down type quark sector are expected to be very suppressed, since at energies below the scale $v_{R}$ of breaking of the left-right symmetry, only the $SU\left( 2\right) _{L}$ scalar doublet $\chi _{L}$ appears in the down type quark Yukawa terms. Regarding the up type quark sector, there would be FCNC at tree level, since at low energies (below $v_{R}$) the bidoublet scalar $\Phi $ and the $SU\left( 2\right) _{L}$ scalar doublet $\chi _{L}$ participate in the up type quark Yukawa interactions. However, such FCNC, which can give rise to meson oscillations, can be suppressed by appropriate values of the Yukawa couplings and of the heavy non-SM neutral scalar masses. Furthermore, concerning the charged lepton sector, the corresponding FCNC can be suppressed by making the matrix $y_{ij}^{\left( L\right) }$ diagonal.
\newpage \section{Fermion mass matrices} \label{fermionmasses} From the Yukawa interactions, we find that the mass matrices for SM charged fermions are given by: \begin{eqnarray} M_{U} &=&\left( \begin{array}{ccc} \Delta _{U} & 0_{2\times 1} & A_{U} \\ 0_{1\times 2} & m_{t} & 0 \\ B_{U} & 0 & m_{T}% \end{array}% \right) ,\hspace{1cm}\hspace{1cm}A_{U}=\left( \begin{array}{c} x_{1}^{\left( T\right) } \\ x_{2}^{\left( T\right) }% \end{array}% \right) \frac{v_{L}}{\sqrt{2}}, \notag \\ B_{U} &=&\left( \begin{array}{cc} z_{1}^{\left( T\right) }, & z_{2}^{\left( T\right) }% \end{array}% \right) \frac{v_{R}}{\sqrt{2}},\hspace{1cm}m_{t}=\alpha _{3}\frac{v_{1}}{\sqrt{2}}, \label{MU} \end{eqnarray} \begin{eqnarray} M_{D} &=&\left( \begin{array}{cc} \Delta _{D} & A_{D} \\ B_{D} & M_{B}% \end{array}% \right) ,\hspace{1cm}\hspace{1cm}A_{D}=\left( \begin{array}{cc} 0 & x_{12}^{\left( B\right) } \\ 0 & x_{22}^{\left( B\right) } \\ x_{3}^{\left( B\right) } & 0% \end{array}% \right) \frac{v_{L}}{\sqrt{2}}, \notag \\ B_{D} &=&\left( \begin{array}{ccc} z_{11}^{\left( B\right) }, & z_{12}^{\left( B\right) }, & z_{13}^{\left( B\right) } \\ z_{21}^{\left( B\right) }, & z_{22}^{\left( B\right) }, & z_{23}^{\left( B\right) }% \end{array}% \right) \frac{v_{R}}{\sqrt{2}},\hspace{1cm}\hspace{1cm}M_{B}=\left( \begin{array}{cc} m_{B_{1}} & 0 \\ 0 & m_{B_{2}}% \end{array}% \right) , \label{MD} \end{eqnarray} \begin{eqnarray} M_{E} &=&\left( \begin{array}{cc} \Delta _{E} & A_{E} \\ B_{E} & C_{E}% \end{array}% \right) ,\hspace{1cm}\hspace{1cm}A_{E}=\left( \begin{array}{cc} x_{11}^{\left( E\right) } & x_{12}^{\left( E\right) } \\ x_{21}^{\left( E\right) } & x_{22}^{\left( E\right) } \\ x_{31}^{\left( E\right) } & x_{32}^{\left( E\right) }% \end{array}% \right) \frac{v_{L}}{\sqrt{2}}, \notag \\ B_{E} &=&\left( \begin{array}{ccc} z_{11}^{\left( E\right) }, & z_{12}^{\left( E\right) }, & z_{13}^{\left( E\right) } \\ z_{21}^{\left( E\right) }, & z_{22}^{\left( E\right) }, & z_{23}^{\left(
E\right) }% \end{array}% \right) \frac{v_{R}}{\sqrt{2}},\hspace{1cm}\hspace{1cm}C_{E}=\left( \begin{array}{cc} m_{E_{1}} & 0 \\ 0 & m_{E_{2}}% \end{array}% \right) , \label{ME} \end{eqnarray} \begin{figure}[tbp] \centering \includegraphics[width = 0.9\textwidth]{Diagramschargedfermions.pdf} \caption{Feynman diagrams contributing to the entries of the SM charged fermion mass matrices. Here, $n=1,2$ and $i,j=1,2,3$.} \label{Diagramschargedfermions} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width = 0.9\textwidth]{Diagramneutrinos.pdf}\vspace{-15cm} \caption{One-loop Feynman diagram contributing to the Majorana neutrino mass submatrix $\protect\mu $. Here, $n,k=1,2,3$ and $r=1,2$.} \label{Loopdiagrammu} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width = 0.9\textwidth]{Diagramsmutilde.pdf}\vspace{-15cm} \caption{Feynman diagram contributing to the Majorana neutrino mass submatrix $\tilde{\protect\mu}$. Here, $i,j,r,s=1,2,3$ and the cross mark $\otimes$ in the internal lines corresponds to the one loop level induced Majorana mass term.} \label{Loopdiagrammutilde} \end{figure} where we have set $\alpha _{1}=\alpha _{2}=0$ to strongly suppress the tree level FCNC in the quark sector. As seen from Eqs. (\ref{MU}), (\ref{MD}) and (\ref{ME}), the heavy exotic vector-like fermions mix with the SM fermions lighter than the top quark. The masses of these vector-like fermions are much larger than the scale of breaking of the left-right symmetry, $v_{R}\sim \mathcal{O}(10)$ TeV, since the gauge singlet scalars $\eta $, $\sigma $ and $\rho $ are assumed to acquire vacuum expectation values much larger than this scale. Therefore, the charm, bottom and strange quarks, as well as the tau and muon leptons, acquire their masses from the tree-level Universal seesaw mechanism, whereas the first generation SM charged fermions, i.e., the up and down quarks and the electron, get one-loop level masses from a radiative seesaw mechanism.
Thus, the SM charged fermion mass matrices take the form: \begin{eqnarray} \widetilde{M}_{U} &=&\left( \begin{array}{cc} \Delta _{U}-A_{U}m_{T}^{-1}B_{U} & 0_{2\times 1} \\ 0_{1\times 2} & m_{t}% \end{array}% \right) , \\ \widetilde{M}_{D} &=&\Delta _{D}-A_{D}M_{B}^{-1}B_{D}, \\ \widetilde{M}_{E} &=&\Delta _{E}-A_{E}C_{E}^{-1}B_{E}, \end{eqnarray}% where $\Delta _{U}$, $\Delta _{D}$ and $\Delta _{E}$ are the one loop level contributions to the SM charged fermion mass matrices, arising from the one-loop Feynman diagrams of Figure \ref{Diagramschargedfermions}. It is worth mentioning that the first and second Feynman diagrams of the first row of Figure \ref{Diagramschargedfermions} contribute to the $\left( 3,i\right) $ and $\left( n,i\right) $ ($i=1,2,3$ and $n=1,2$) entries of the SM up type quark mass matrix, respectively. The first and the second diagrams of the second row of Figure \ref{Diagramschargedfermions} contribute to the $\left( i,j\right) $ ($i,j=1,2,3$) entries of the SM down type quark and SM charged lepton mass matrices, respectively. Furthermore, the one loop level contributions to the $\left( n,i\right) $ entries of the SM up type quark mass matrix arise from the first diagram of the third row of Figure \ref{Diagramschargedfermions}. On the other hand, the second diagram of the third row of Figure \ref{Diagramschargedfermions} generates the one loop level contribution to the $\left( i,j\right) $ entries of the SM down type quark mass matrix. Finally, the last diagram of Figure \ref{Diagramschargedfermions} yields the one loop level contribution to the $\left( i,j\right) $ entries of the SM charged lepton mass matrix.
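For the reader's convenience, we recall the block diagonalization step behind these seesaw formulas (a generic sketch, with $\Delta $, $A$, $B$ and a heavy block $M_{H}$ standing for the corresponding model matrices):
\begin{equation*}
\mathcal{M}=\left(
\begin{array}{cc}
\Delta & A \\
B & M_{H}%
\end{array}%
\right) \ \Longrightarrow \ \mathcal{M}_{\mathrm{light}}\simeq \Delta
-A\,M_{H}^{-1}B,\hspace{1cm}\mathcal{M}_{\mathrm{heavy}}\simeq M_{H},
\end{equation*}%
valid to leading order when the entries of $A$ and $B$ are much smaller than the eigenvalues of $M_{H}$, which is the case here since $v_{L},v_{R}\ll m_{T},m_{B_{1}},m_{B_{2}},m_{E_{1}},m_{E_{2}}$.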
The one loop level contributions to the SM charged fermion mass matrices are given by: \begin{eqnarray} \Delta _{U} &=&\frac{m_{T^{\prime }}}{16\pi ^{2}}\left( \begin{array}{ccc} w_{1}^{\left( T^{\prime }\right) }r_{1}^{\left( T^{\prime }\right) } & w_{1}^{\left( T^{\prime }\right) }r_{2}^{\left( T^{\prime }\right) } & w_{1}^{\left( T^{\prime }\right) }r_{3}^{\left( T^{\prime }\right) } \\ w_{2}^{\left( T^{\prime }\right) }r_{1}^{\left( T^{\prime }\right) } & w_{2}^{\left( T^{\prime }\right) }r_{2}^{\left( T^{\prime }\right) } & w_{2}^{\left( T^{\prime }\right) }r_{3}^{\left( T^{\prime }\right) } \\ 0 & 0 & 0% \end{array}% \right) \label{DeltaU} \\ &&\times \left\{ \left[ f\left( m_{S_{1}}^{2},m_{T^{\prime }}^{2}\right) -f\left( m_{S_{2}}^{2},m_{T^{\prime }}^{2}\right) \right] \sin 2\theta _{S}-% \left[ f\left( m_{P_{1}}^{2},m_{T^{\prime }}^{2}\right) -f\left( m_{P_{2}}^{2},m_{T^{\prime }}^{2}\right) \right] \sin 2\theta _{P}\right\} , \notag \\ \Delta _{D} &=&\frac{2m_{B^{\prime }}}{16\pi ^{2}}\left( \begin{array}{ccc} w_{1}^{\left( B^{\prime }\right) }r_{1}^{\left( B^{\prime }\right) } & w_{1}^{\left( B^{\prime }\right) }r_{2}^{\left( B^{\prime }\right) } & w_{1}^{\left( B^{\prime }\right) }r_{3}^{\left( B^{\prime }\right) } \\ w_{2}^{\left( B^{\prime }\right) }r_{1}^{\left( B^{\prime }\right) } & w_{2}^{\left( B^{\prime }\right) }r_{2}^{\left( B^{\prime }\right) } & w_{2}^{\left( B^{\prime }\right) }r_{3}^{\left( B^{\prime }\right) } \\ 0 & 0 & 0% \end{array}% \right) \label{DeltaD} \\ &&\times \left\{ \left[ f\left( m_{S_{1}}^{2},m_{B^{\prime }}^{2}\right) -f\left( m_{S_{2}}^{2},m_{B^{\prime }}^{2}\right) \right] \sin 2\theta _{S}-% \left[ f\left( m_{P_{1}}^{2},m_{B^{\prime }}^{2}\right) -f\left( m_{P_{2}}^{2},m_{B^{\prime }}^{2}\right) \right] \sin 2\theta _{P}\right\} , \notag \\ \Delta _{E} &=&\frac{2m_{E^{\prime }}}{16\pi ^{2}}\left( \begin{array}{ccc} w_{1}^{\left( E^{\prime }\right) }r_{1}^{\left( E^{\prime }\right) } & w_{1}^{\left( E^{\prime }\right) 
}r_{2}^{\left( E^{\prime }\right) } & w_{1}^{\left( E^{\prime }\right) }r_{3}^{\left( E^{\prime }\right) } \\ w_{2}^{\left( E^{\prime }\right) }r_{1}^{\left( E^{\prime }\right) } & w_{2}^{\left( E^{\prime }\right) }r_{2}^{\left( E^{\prime }\right) } & w_{2}^{\left( E^{\prime }\right) }r_{3}^{\left( E^{\prime }\right) } \\ w_{3}^{\left( E^{\prime }\right) }r_{1}^{\left( E^{\prime }\right) } & w_{3}^{\left( E^{\prime }\right) }r_{2}^{\left( E^{\prime }\right) } & w_{3}^{\left( E^{\prime }\right) }r_{3}^{\left( E^{\prime }\right) }% \end{array}% \right) \label{DeltaE} \\ &&\times \left\{ \left[ f\left( m_{S_{1}}^{2},m_{E^{\prime }}^{2}\right) -f\left( m_{S_{2}}^{2},m_{E^{\prime }}^{2}\right) \right] \sin 2\theta _{S}-\left[ f\left( m_{P_{1}}^{2},m_{E^{\prime }}^{2}\right) -f\left( m_{P_{2}}^{2},m_{E^{\prime }}^{2}\right) \right] \sin 2\theta _{P}\right\} , \notag \end{eqnarray}% where $f\left( m_{1}^{2},m_{2}^{2}\right) $ is given by: \begin{equation} f\left( m_{1}^{2},m_{2}^{2}\right) =\frac{m_{1}^{2}}{m_{1}^{2}-m_{2}^{2}}\ln \left( \frac{m_{1}^{2}}{m_{2}^{2}}\right) , \end{equation}% and the physical scalars $S_{1}$, $S_{2}$ and pseudoscalars $P_{1}$ and $P_{2}$ are given by: \begin{equation} \left( \begin{array}{c} S_{1} \\ S_{2}% \end{array}% \right) =\left( \begin{array}{cc} \cos \theta _{S} & \sin \theta _{S} \\ -\sin \theta _{S} & \cos \theta _{S}% \end{array}% \right) \left( \begin{array}{c} \func{Re}\phi _{L}^{0} \\ \func{Re}\phi _{R}^{0}% \end{array}% \right) ,\hspace{1cm}\left( \begin{array}{c} \ P_{1} \\ P_{2}% \end{array}% \right) =\left( \begin{array}{cc} \cos \theta _{P} & \sin \theta _{P} \\ -\sin \theta _{P} & \cos \theta _{P}% \end{array}% \right) \left( \begin{array}{c} \func{Im}\phi _{L}^{0} \\ \func{Im}\phi _{R}^{0}% \end{array}% \right) . \end{equation}% It is worth mentioning that the SM charged fermion mass hierarchy can be successfully reproduced by having appropriate values for the exotic fermion masses.
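Let us note two limits of the loop function entering the expressions above (a simple consistency observation on our part):
\begin{equation*}
f\left( m_{1}^{2},m_{2}^{2}\right) \xrightarrow{\;m_{1}\gg m_{2}\;}\ln \left( \frac{m_{1}^{2}}{m_{2}^{2}}\right) ,\hspace{1cm}f\left( m_{1}^{2},m_{2}^{2}\right) \xrightarrow{\;m_{1}\rightarrow m_{2}\;}1.
\end{equation*}%
In particular, the differences $f\left( m_{S_{1}}^{2},m^{2}\right) -f\left( m_{S_{2}}^{2},m^{2}\right) $ vanish in the degenerate limit $m_{S_{1}}\rightarrow m_{S_{2}}$, so nonvanishing one loop masses require non-degenerate scalars and pseudoscalars, as well as nonvanishing mixing angles $\theta _{S}$ and $\theta _{P}$.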
For instance, to successfully explain the GeV scale values of the bottom quark and tau lepton masses, such masses can be estimated as: \begin{equation} m_{b}\sim m_{\tau }\sim \frac{y^{2}v_{L}v_{R}}{m_{F}}, \label{estimate} \end{equation}% where $m_{F}$ is the mass scale of the exotic fermions and $y$ is the Yukawa coupling between the SM and exotic fermions. Taking $v_{L}\sim \mathcal{O}\left( 100\right) $ GeV, $v_{R}\sim \mathcal{O}\left( 10\right) $ TeV, $m_{F}\sim \mathcal{O}\left( 100\right) $ TeV and $y\sim \mathcal{O}\left( 0.4\right) $, Eq. (\ref{estimate}) takes the form $m_{b}\sim m_{\tau }\sim \mathcal{O}\left( 1\right) $ GeV, thus showing that our model naturally explains the smallness of the bottom and tau masses with respect to the top quark mass. Furthermore, the hierarchy between the masses of the remaining SM charged fermions lighter than the top quark can be accommodated by some deviation from the scenario of universality of the Yukawa couplings in both the quark and lepton sectors. This implies some moderate tuning among the Yukawa couplings. However, such a situation is considerably better than that of the minimal Left-Right symmetric model, where a significant tuning of the Yukawa couplings is required.
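Substituting the quoted benchmark values into Eq. (\ref{estimate}) makes this explicit:
\begin{equation*}
m_{b}\sim m_{\tau }\sim \frac{\left( 0.4\right) ^{2}\times 10^{2}\ \mathrm{GeV}\times 10^{4}\ \mathrm{GeV}}{10^{5}\ \mathrm{GeV}}\simeq 1.6\ \mathrm{GeV},
\end{equation*}%
indeed of the order of the measured bottom quark and tau lepton masses.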
In order to find the best fit point that successfully reproduces the SM quark masses and CKM parameters, we proceed to minimize the following $\chi ^{2}$ function: \begin{equation} \chi ^{2}=\sum_{f}\frac{(m_{f}^{\text{th}}-m_{f}^{\text{exp}})^{2}}{\sigma _{f}^{2}}+\frac{(|\mathbf{V}_{12}^{\text{th}}|-|\mathbf{V}_{12}^{\text{exp}}|)^{2}}{\sigma _{12}^{2}}+\frac{(|\mathbf{V}_{23}^{\text{th}}|-|\mathbf{V}_{23}^{\text{exp}}|)^{2}}{\sigma _{23}^{2}}+\frac{(|\mathbf{V}_{13}^{\text{th}}|-|\mathbf{V}_{13}^{\text{exp}}|)^{2}}{\sigma _{13}^{2}}+\frac{(J_{q}^{\text{th}}-J_{q}^{\text{exp}})^{2}}{\sigma _{J}^{2}}\,, \end{equation}% where $f=u,c,t,d,s,b$ and $J_{q}$ is the Jarlskog parameter.
The experimental values for the quark masses are given by~\cite{Xing:2020ijf}, \begin{equation*} \begin{split} m_{u}^{\text{exp}}(M_{Z})& =1.24\pm 0.22\ \text{MeV}\;, \\ m_{c}^{\text{exp}}(M_{Z})& =0.626\pm 0.020\ \text{GeV}\;, \\ m_{t}^{\text{exp}}(M_{Z})& =172.9\pm 0.04\ \text{GeV}\;, \\ m_{d}^{\text{exp}}(M_{Z})& =2.69\pm 0.19\ \text{MeV}\;, \\ m_{s}^{\text{exp}}(M_{Z})& =53.5\pm 4.6\ \text{MeV}\;, \\ m_{b}^{\text{exp}}(M_{Z})& =2.86\pm 0.03\ \text{GeV}\;, \end{split}% \end{equation*}% and the CKM parameters are~\cite{Zyla:2020zbs} \begin{equation*} \begin{split} |\mathbf{V}_{12}^{\text{exp}}|& =0.22452\pm 0.00044\;, \\ |\mathbf{V}_{23}^{\text{exp}}|& =0.04214\pm 0.00076\;, \\ |\mathbf{V}_{13}^{\text{exp}}|& =0.00365\pm 0.00012\;, \\ J_{q}^{\text{exp}}& =(3.18\pm 0.15)\times 10^{-5}\;. \end{split}% \end{equation*}% The magnitudes of the quark Yukawa couplings are randomly varied in the range $[0.1,1.5]$, whereas their complex phases are varied between $0$ and $2\pi $. Furthermore, we have fixed $v_{L}=100$~GeV and $v_{R}=10$~TeV and randomly varied $\theta =\theta _{S}=-\theta _{P}$ in a small range around $\frac{\pi }{3}$.
The masses of the vector-like quarks and inert scalar mediators are varied in the ranges: \begin{eqnarray} 0.5\mbox{ TeV} &\leq &m_{S_{1}}=m_{P_{1}}\leq 10\mbox{ TeV},\hspace{1cm}1.01m_{S_{1}}\leq m_{S_{2}}=m_{P_{2}}\leq 1.03m_{S_{1}},\hspace{1cm}1\mbox{ TeV}\leq m_{T^{\prime }},m_{B^{\prime }}\leq 10^{3}\mbox{ TeV}, \notag \\ 10^{2}\mbox{ TeV} &\leq &m_{B_{1}}\leq 2\times 10^{2}\mbox{ TeV},\hspace{0.9cm}10^{2}\frac{m_{b}}{m_{c}}\mbox{ TeV}\leq m_{T}\leq 2\times 10^{2}\frac{m_{b}}{m_{c}}\mbox{ TeV},\hspace{0.9cm}10^{2}\frac{m_{b}}{m_{s}}\mbox{ TeV}\leq m_{B_{2}}\leq 2\times 10^{2}\frac{m_{b}}{m_{s}}\mbox{ TeV}. \notag \end{eqnarray}% In the range of parameters described above, we find that the minimization of the $\chi ^{2}$ function yields the following benchmark point, consistent with the experimental values of the SM quark masses and CKM parameters: \begin{eqnarray} \theta &\simeq &85.9^{\circ },\hspace{1cm}m_{S_{1}}=m_{P_{1}}\simeq 1.9\mbox{ TeV},\hspace{1cm}m_{S_{2}}=m_{P_{2}}\simeq 2.1\mbox{ TeV},\hspace{1cm}v_{L}\simeq 100\mbox{ GeV},\hspace{1cm}v_{R}\simeq 10\mbox{ TeV}, \notag \\ m_{T} &\simeq &583\mbox{ TeV},\hspace{0.9cm}m_{T^{\prime }}\simeq 1.1\times 10^{3}\mbox{ TeV},\hspace{0.9cm}m_{B_{1}}\simeq 216\mbox{ TeV},\hspace{0.9cm}m_{B_{2}}\simeq 9.3\times 10^{3}\mbox{ TeV},\hspace{0.9cm}m_{B^{\prime }}\simeq 396\mbox{ TeV}, \notag \\ x_{1}^{\left( T\right) } &\simeq &0.24-0.02i,\hspace{1cm}x_{2}^{\left( T\right) }\simeq 0.96-0.06i,\hspace{1cm}z_{1}^{\left( T\right) }=z_{2}^{\left( T\right) }\simeq -0.16+0.08i,\hspace{1cm}x_{12}^{\left( B\right) }\simeq -0.05-0.03i, \notag \\ x_{22}^{\left( B\right) } &\simeq &-0.62-0.06i,\hspace{1cm}x_{3}^{\left( B\right) }\simeq 0.07-0.62i,\hspace{1cm}z_{11}^{\left( B\right) }\simeq 0.25,\hspace{1cm}z_{12}^{\left( B\right) }\simeq 0.49,\hspace{1cm}z_{13}^{\left( B\right) }\simeq -0.44, \notag \\ z_{21}^{\left( B\right) } &\simeq &-1.17i,\hspace{1cm}z_{22}^{\left( B\right) }\simeq
0.95i,\hspace{1cm}z_{23}^{\left( B\right) }\simeq 0.80i,\hspace{1cm}w_{1}^{\left( T^{\prime }\right) }\simeq -0.39+0.197i,\hspace{1cm}w_{2}^{\left( T^{\prime }\right) }\simeq -0.58+0.29i, \notag \\ r_{1}^{\left( T^{\prime }\right) } &\simeq &-0.154+0.084i,\hspace{1cm}r_{2}^{\left( T^{\prime }\right) }\simeq -0.875+0.48i,\hspace{1cm}r_{3}^{\left( T^{\prime }\right) }\simeq 0.30-0.16i,\hspace{1cm}w_{1}^{\left( B^{\prime }\right) }\simeq 0.12-0.087i, \notag \\ w_{2}^{\left( B^{\prime }\right) } &\simeq &0.44+0.74i,\hspace{1cm}r_{1}^{\left( B^{\prime }\right) }\simeq -0.17-0.98i,\hspace{1cm}r_{2}^{\left( B^{\prime }\right) }\simeq -1.22,\hspace{1cm}r_{3}^{\left( B^{\prime }\right) }\simeq 1.299+0.698i. \label{bfpquarks} \end{eqnarray} As we can see, the dimensionless quark Yukawa couplings are of order unity, with moderate deviations from universality. This shows that the proposed model is able to reproduce the observed pattern of the quark spectrum. The resulting correlations among the heavy exotic quark masses, and between the exotic quark masses and the masses $m_{S_1}$ and $m_{S_2}$ of the inert scalars $S_1$ and $S_2$, are shown in Figures \ref{nonSMquarkmasses} and \ref{heavyquarkvsscalars}, respectively. These figures display the allowed region of parameter space for the seesaw mediator masses consistent with a successful description of the observed pattern of SM quark masses and CKM parameters.
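As a rough cross-check of the scan windows for $m_{T}$ and $m_{B_{2}}$ (a numerical substitution on our part, using the central quark mass values quoted earlier):
\begin{equation*}
\frac{m_{b}}{m_{c}}\simeq \frac{2.86}{0.626}\simeq 4.6\ \Rightarrow \ 4.6\times 10^{2}\ \mathrm{TeV}\lesssim m_{T}\lesssim 9.1\times 10^{2}\ \mathrm{TeV},\hspace{1cm}\frac{m_{b}}{m_{s}}\simeq 53\ \Rightarrow \ 5.3\times 10^{3}\ \mathrm{TeV}\lesssim m_{B_{2}}\lesssim 1.1\times 10^{4}\ \mathrm{TeV},
\end{equation*}%
and the benchmark values $m_{T}\simeq 583$ TeV and $m_{B_{2}}\simeq 9.3\times 10^{3}$ TeV indeed fall inside these windows.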
As shown in Figures \ref{nonSMquarkmasses} and \ref{heavyquarkvsscalars}, the observed SM quark mass and mixing hierarchy can be successfully accounted for, provided that the heavy vector-like quarks have masses in the ranges $450$ TeV $\lesssim m_T\lesssim 700$ TeV, $900$ TeV $\lesssim m_{T^{\prime}}\lesssim 1400$ TeV, $7.5\times 10^3$ TeV $\lesssim m_{B_2}\lesssim 11 \times 10^3$ TeV and $300$ TeV $\lesssim m_{B^{\prime}}\lesssim 500$ TeV, whereas the masses of the inert scalars are constrained to be in the ranges $1.4$ TeV $\lesssim m_{S_1}\lesssim 2.2$ TeV and $1.7$ TeV $\lesssim m_{S_2}\lesssim 2.5$ TeV for $m_{S_1}=m_{P_1}$ and $m_{S_2}=m_{P_2}$. \begin{figure}[tbp] \centering \includegraphics[width=8.3cm, height=7.5cm]{plotquark2.jpg}% \includegraphics[width=8.3cm, height=7.5cm]{plotquark3.jpg}\newline \includegraphics[width=8.3cm, height=7.5cm]{plotquark4.jpg}% \includegraphics[width=8.3cm, height=7.5cm]{plotquark5.jpg}\newline \includegraphics[width=8.3cm, height=7.5cm]{plotquark9.jpg}% \includegraphics[width=8.3cm, height=7.5cm]{plotquark10.jpg}\newline \caption{Correlations between the heavy exotic quark masses.} \label{nonSMquarkmasses} \end{figure} It may seem that the problem of the SM fermion mass hierarchies is not solved but simply reparameterized in terms of unknown vector-like fermion masses. However, there are six advantages to this approach. Firstly, the approach is dynamical, since the vector-like masses, which are dynamically generated from Yukawa interactions involving the gauge singlet scalars neutral under the remnant $Z_{2}$ symmetry, are new physical quantities, which could in principle be determined by a future theory.
Secondly, it has experimental consequences, since the new vector-like charged exotic fermions and right handed neutrinos can be discovered directly at proton-proton colliders, via their production by gluon fusion (for the exotic quarks only) and Drell-Yan mechanisms, or indirectly through their loop contributions to certain observables. For instance, the charged exotic vector-like leptons, which mediate the Universal seesaw mechanism that produces the SM charged lepton masses, are also crucial for accommodating the experimental values of the muon and electron anomalous magnetic moments, whose magnitudes do not find an explanation within the context of the Standard Model. Thirdly, this approach can also account for the small quark mixing angles, as well as for the large lepton mixing angles arising from the neutrino sector. Fourthly, the effective Yukawa couplings are proportional to a product of two other dimensionless couplings, so a small hierarchy in those couplings can yield a quadratically larger hierarchy in the effective couplings. Fifthly, the masses of the light active neutrinos are dynamically generated via a radiative inverse seesaw mechanism at one loop level, thanks to the remnant $Z_{2}$ symmetry arising from the spontaneous breaking of the $Z_{4}$ symmetry. Sixthly, the remnant $Z_{2}$ symmetry allows for stable scalar and fermionic dark matter candidates. For all these reasons, the approach we follow in this paper is both well motivated and interesting. \begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{plotquark1.jpg}% \includegraphics[width=0.5\textwidth]{plotquark6.jpg}\newline \includegraphics[width=0.5\textwidth]{plotquark7.jpg}% \includegraphics[width=0.5\textwidth]{plotquark8.jpg}\newline \caption{Correlations between the exotic quark masses and the masses $m_{S_1}$ and $m_{S_2}$ of the inert scalars $S_1$ and $S_2$, respectively.
} \label{heavyquarkvsscalars} \end{figure} Concerning the neutrino sector, we find that the neutrino Yukawa interactions give rise to the following neutrino mass terms: \begin{equation} -\mathcal{L}_{mass}^{\left( \nu \right) }=\frac{1}{2}\left( \begin{array}{ccc} \overline{\nu _{L}^{C}} & \overline{\nu _{R}} & \overline{N_{R}}% \end{array}% \right) M_{\nu }\left( \begin{array}{c} \nu _{L} \\ \nu _{R}^{C} \\ N_{R}^{C}% \end{array}% \right) +\dsum\limits_{n=1}^{2}\left( m_{\Omega }\right) _{n}\overline{\Omega }_{nR}\Omega _{nR}^{C}+H.c., \label{Lnu} \end{equation}% where the neutrino mass matrix reads: \begin{equation} M_{\nu }=\left( \begin{array}{ccc} 0_{3\times 3} & m_{\nu D} & 0_{3\times 3} \\ m_{\nu D}^{T} & \widetilde{\mu } & M \\ 0_{3\times 3} & M^{T} & \mu% \end{array}% \right) , \label{Mnu} \end{equation}% and the submatrices are given by: \begin{eqnarray} \left( m_{\nu D}\right) _{ij} &=&y_{ij}^{\left( L\right) }\frac{v_{1}}{\sqrt{2}},\hspace{0.7cm}\hspace{0.7cm}M_{ij}=x_{ij}^{\left( N\right) }\frac{v_{R}}{\sqrt{2}},\hspace{0.7cm}\hspace{0.7cm}i,j,n,k=1,2,3,\hspace{0.7cm}\hspace{0.7cm}r=1,2, \notag \\ \mu _{nk} &=&\dsum\limits_{r=1}^{2}\frac{x_{nr}^{\left( S\right) }x_{kr}^{\left( S\right) }m_{\Omega _{r}}}{16\pi ^{2}}\left[ \frac{m_{\varphi _{R}}^{2}}{m_{\varphi _{R}}^{2}-m_{\Omega _{r}}^{2}}\ln \left( \frac{m_{\varphi _{R}}^{2}}{m_{\Omega _{r}}^{2}}\right) -\frac{m_{\varphi _{I}}^{2}}{m_{\varphi _{I}}^{2}-m_{\Omega _{r}}^{2}}\ln \left( \frac{m_{\varphi _{I}}^{2}}{m_{\Omega _{r}}^{2}}\right) \right] . \end{eqnarray}% The $\mu $ block is generated at one loop level due to the exchange of $\Omega _{rR}$ ($r=1,2$) and $\varphi $ in the internal lines, as shown in Figure \ref{Loopdiagrammu}.
To close the corresponding one loop diagram, the following trilinear scalar interaction is needed: \begin{equation} V_{\mu }=A\left( \varphi ^{\ast }\right) ^{2}\sigma . \end{equation} Furthermore, the $\widetilde{\mu }$ submatrix is generated from the Feynman diagram of Figure \ref{Loopdiagrammutilde}, which involves the virtual exchange of $\func{Re}\chi _{R}^{0}$, $\func{Im}\chi _{R}^{0}$ and $Z^{\prime }$, as well as the one loop level induced Majorana mass term, in the internal lines of the loop, in analogy with \cite{Pilaftsis:1991ug}. The entries of the submatrix $\widetilde{\mu }$ are given by: \begin{eqnarray} \widetilde{\mu }_{ij} &=&\frac{g_{R}^{2}}{16\pi ^{2}}\mu _{ij}\frac{m_{Z^{\prime }}^{2}}{m_{Z^{\prime }}^{2}-\mu _{ij}^{2}}\ln \left( \frac{m_{Z^{\prime }}^{2}}{\mu _{ij}^{2}}\right) \label{mutilde} \\ &&+\dsum\limits_{r=1}^{3}\dsum\limits_{s=1}^{3}\frac{x_{ri}^{\left( N\right) }x_{sj}^{\left( N\right) }}{16\pi ^{2}}\mu _{rs}\left[ \frac{m_{\chi _{R}^{0}}^{2}}{m_{\chi _{R}^{0}}^{2}-\left\vert \mu _{rs}\right\vert ^{2}}\ln \left( \frac{m_{\chi _{R}^{0}}^{2}}{\left\vert \mu _{rs}\right\vert ^{2}}\right) -\frac{m_{\chi _{I}^{0}}^{2}}{m_{\chi _{I}^{0}}^{2}-\left\vert \mu _{rs}\right\vert ^{2}}\ln \left( \frac{m_{\chi _{I}^{0}}^{2}}{\left\vert \mu _{rs}\right\vert ^{2}}\right) \right] . \notag \end{eqnarray} As follows from Eq. (\ref{mutilde}), the entries of the $\widetilde{\mu }$ submatrix are loop suppressed with respect to those of the $\mu $ submatrix, so that $\left\vert \widetilde{\mu }_{ij}\right\vert \ll \left\vert \mu _{ij}\right\vert $ ($i,j=1,2,3$) by at least two orders of magnitude.
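To get a feel for the size of this suppression (an illustrative estimate on our part, taking $g_{R}$ of the order of the SM $SU(2)$ gauge coupling, $g_{R}\simeq 0.65$):
\begin{equation*}
\frac{g_{R}^{2}}{16\pi ^{2}}\simeq \frac{\left( 0.65\right) ^{2}}{158}\simeq 2.7\times 10^{-3},
\end{equation*}%
so, up to logarithms, each term of Eq. (\ref{mutilde}) is suppressed by a factor of order $10^{-3}$ to $10^{-2}$ relative to the corresponding entries of $\mu $, consistent with the quoted two orders of magnitude.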
The light active neutrino masses arise from an inverse seesaw mechanism, and the physical neutrino mass matrices are: \begin{eqnarray} \widetilde{\mathbf{M}}_{\nu } &=&m_{\nu D}\left( M^{T}\right) ^{-1}\mu M^{-1}m_{\nu D}^{T}, \label{M1nu} \\ \mathbf{M}_{\nu }^{\left( 1\right) } &=&-\frac{1}{2}\left( M+M^{T}\right) +\frac{1}{2}\left( \mu +\widetilde{\mu }\right) , \\ \mathbf{M}_{\nu }^{\left( 2\right) } &=&\frac{1}{2}\left( M+M^{T}\right) +\frac{1}{2}\left( \mu +\widetilde{\mu }\right) , \end{eqnarray}% where $\widetilde{\mathbf{M}}_{\nu }$ corresponds to the mass matrix for the light active neutrinos $\nu _{a}$ ($a=1,2,3$), whereas $\mathbf{M}_{\nu }^{\left( 1\right) }$ and $\mathbf{M}_{\nu }^{\left( 2\right) }$ are the mass matrices for the sterile neutrinos ($N_{a}^{-},N_{a}^{+}$), which are superpositions of mostly $\nu _{aR}$ and $N_{aR}$, i.e., $N_{a}^{\pm }\sim \frac{1}{\sqrt{2}}\left( \nu _{aR}\mp N_{aR}\right) $. In the limit $\mu \rightarrow 0$, which corresponds to unbroken lepton number, the light active neutrinos become massless. For instance, as an order-of-magnitude illustration, for $m_{\nu D}\sim 10^{2}$ GeV and $M\sim 10^{4}$ GeV, Eq. (\ref{M1nu}) yields light active neutrino masses of order $0.1$ eV for $\mu \sim 1$ keV, a scale that is naturally small since $\mu $ is generated at one loop level. The smallness of the $\mu $ and $\widetilde{\mu }$ parameters is responsible for a small mass splitting between the three pairs of sterile neutrinos, thus implying that the sterile neutrinos form pseudo-Dirac pairs. The full neutrino mass matrix given by Eq.
(\ref{Mnu}) can be diagonalized by the following rotation matrix \cite{Catano:2012kw}: \begin{equation} \mathbb{R}=% \begin{pmatrix} \mathbf{R}_{\nu } & \mathbf{R}_{1}\mathbf{R}_{M}^{\left( 1\right) } & \mathbf{R}_{2}\mathbf{R}_{M}^{\left( 2\right) } \\ -\frac{(\mathbf{R}_{1}^{\dagger }+\mathbf{R}_{2}^{\dagger })}{\sqrt{2}}% \mathbf{R}_{\nu } & \frac{(1-\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 1\right) } & \frac{(1+\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 2\right) } \\ -\frac{(\mathbf{R}_{1}^{\dagger }-\mathbf{R}_{2}^{\dagger })}{\sqrt{2}}% \mathbf{R}_{\nu } & \frac{(-1-\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 1\right) } & \frac{(1-\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 2\right) }% \end{pmatrix}% , \label{U} \end{equation}% where \begin{equation} \mathbf{S}=-\frac{1}{4}M^{-1}\mu ,\hspace{1cm}\hspace{1cm}\mathbf{R}% _{1}\simeq \mathbf{R}_{2}\simeq \frac{1}{\sqrt{2}}m_{\nu D}^{\ast }M^{-1}. \end{equation}% Notice that the physical neutrino spectrum is composed of three light active neutrinos and six exotic neutrinos. The exotic neutrinos are pseudo-Dirac, with masses $\sim \pm \frac{1}{2}\left( M+M^{T}\right) $ and a small splitting $\mu $. Furthermore, $\mathbf{R}_{\nu }$, $\mathbf{R}_{M}^{\left( 1\right) }$ and $\mathbf{R}_{M}^{\left( 2\right) }$ are the rotation matrices which diagonalize $\widetilde{\mathbf{M}}_{\nu }$, $\mathbf{M}_{\nu }^{\left( 1\right) }$ and $\mathbf{M}_{\nu }^{\left( 2\right) }$, respectively. On the other hand, using Eq. 
(\ref{U}) we find that the neutrino fields $\nu _{L}=\left( \nu _{1L},\nu _{2L},\nu _{3L}\right) ^{T}$, $\nu _{R}^{C}=\left( \nu _{1R}^{C},\nu _{2R}^{C},\nu _{3R}^{C}\right) $ and $N_{R}^{C}=\left( N_{1R}^{C},N_{2R}^{C},N_{3R}^{C}\right) $ are related to the physical neutrino fields by the following relations: \begin{equation} \left( \begin{array}{c} \nu _{L} \\ \nu _{R}^{C} \\ N_{R}^{C}% \end{array}% \right) =\mathbb{R}\Psi _{L}\simeq \begin{pmatrix} \mathbf{R}_{\nu } & \mathbf{R}_{1}\mathbf{R}_{M}^{\left( 1\right) } & \mathbf{R}_{2}\mathbf{R}_{M}^{\left( 2\right) } \\ -\frac{(\mathbf{R}_{1}^{\dagger }+\mathbf{R}_{2}^{\dagger })}{\sqrt{2}}\mathbf{R}_{\nu } & \frac{(1-\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 1\right) } & \frac{(1+\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 2\right) } \\ -\frac{(\mathbf{R}_{1}^{\dagger }-\mathbf{R}_{2}^{\dagger })}{\sqrt{2}}\mathbf{R}_{\nu } & \frac{(-1-\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 1\right) } & \frac{(1-\mathbf{S})}{\sqrt{2}}\mathbf{R}_{M}^{\left( 2\right) }% \end{pmatrix}% \left( \begin{array}{c} \Psi _{L}^{\left( 1\right) } \\ \Psi _{L}^{\left( 2\right) } \\ \Psi _{L}^{\left( 3\right) }% \end{array}% \right) ,\hspace{0.5cm}\hspace{0.5cm}\Psi _{L}=\left( \begin{array}{c} \Psi _{L}^{\left( 1\right) } \\ \Psi _{L}^{\left( 2\right) } \\ \Psi _{L}^{\left( 3\right) }% \end{array}% \right) , \end{equation}% where $\Psi _{jL}^{\left( 1\right) }$, $\Psi _{jL}^{\left( 2\right) }=N_{j}^{+}$ and $\Psi _{jL}^{\left( 3\right) }=N_{j}^{-}$ ($j=1,2,3$) are the three light active neutrinos and the six exotic neutrinos, respectively. Finally, to close this section, we discuss the collider signatures of the exotic fermions of our model. From the Yukawa interactions it follows that the charged exotic fermions have mixing mass terms with the SM charged fermions, which allows the former to decay into any of the scalars of the model together with a SM charged fermion.
These heavy charged exotic fermions can be produced in association with the SM charged fermions and can also be pair produced at the LHC via gluon fusion (for the exotic quarks only) and the Drell-Yan mechanism. Consequently, observing an excess of events in the multijet and multilepton final states would be a signal in support of this model at the LHC. Regarding the sterile neutrino sector, it is worth mentioning that the sterile neutrinos can be produced at the LHC in association with a SM charged lepton, via quark-antiquark annihilation mediated by a $W^{\prime }$ gauge boson. The corresponding total cross section for the process $pp\rightarrow W^{\prime }\rightarrow lN_{a}^{\pm }$ $(a=1,2,3)$ will be sizeable provided that $m_{W^{\prime }}>m_{N_{a}^{\pm }}$, which implies that in the $s$-channel the $W^{\prime }$ gauge boson is on its mass shell. Furthermore, in our model the sterile neutrinos have the following two body decay modes: $N_{a}^{\pm }\rightarrow l_{i}^{\pm }W^{\mp }$, $N_{a}^{\pm }\rightarrow \nu _{i}Z$ and $N_{a}^{\pm }\rightarrow \nu _{i}S$ (where $i=1,2,3$ is a flavor index and $S$ corresponds to any of the scalars of our model lighter than the sterile neutrinos), which are suppressed by the small active-sterile neutrino mixing angle, taken to fulfill $\theta \sim \mathcal{O}(10^{-3})$ in order to keep the charged lepton flavor violating decays well below their current experimental upper limits and, at the same time, to successfully comply with the constraints arising from unitarity \cite{Abada:2018nio,Fernandez-Martinez:2016lgt}. Furthermore, the heavy sterile neutrinos $N_{a}^{\pm }$ can decay via off-shell gauge bosons through the following modes: $N_{a}^{\pm }\rightarrow l_{i}^{+}l_{j}^{-}\nu _{k}$, $N_{a}^{\pm }\rightarrow l_{i}^{-}u_{j}\bar{d}_{k}$, $N_{a}^{\pm }\rightarrow b\bar{b}\nu _{k}$ (where $i,j,k=1,2,3$ are flavor indices).
Consequently, the heavy sterile neutrino can be detected at the LHC via the observation of an excess of events with respect to the SM background in a final state composed of a pair of opposite sign charged leptons plus two jets. This signal of a pair of opposite sign charged leptons plus two jets, arising from the decay of sterile neutrinos via an off-shell $W^{\prime }$ gauge boson, features a much lower SM background than the ones arising from the pair production and decays of sterile neutrinos, thus making the sterile neutrino much easier to detect at the LHC in left-right symmetric models than in models having only an extra $U(1)^{\prime }$ symmetry \cite{AguilarSaavedra:2012fu,Das:2012ii}. Studies of inverse seesaw neutrino signatures at colliders, as well as of the production of heavy neutrinos at the LHC, are carried out in Refs. \cite{Dev:2009aw,BhupalDev:2012zg,Das:2012ze,AguilarSaavedra:2012fu,Das:2012ii,Dev:2013oxa,Das:2014jxa,Das:2016hof,Das:2017gke,Das:2017nvm,Das:2017zjc,Das:2017rsu,Das:2018usr,Das:2018hph,Bhardwaj:2018lma,Helo:2018rll,Pascoli:2018heg}. A comprehensive study of the exotic fermion production at the LHC and of the exotic fermion decay modes is beyond the scope of this work and is left for future studies. \section{Charged lepton flavor violation} \label{LFV}\ac{In this section we will discuss the implications of the model for charged lepton flavor violation. As mentioned in the previous section, the sterile neutrino spectrum of the model is composed of six nearly degenerate heavy neutrinos.
These sterile neutrinos, together with the heavy $W^{\prime }$ gauge boson, induce the $l_{i}\rightarrow l_{j}\gamma $ decay at the one-loop level, whose branching ratio is given by \cite{Ilakovac:1994kj,Deppisch:2004fa,Lindner:2016bgg}: \begin{eqnarray} Br\left( l_{i}\rightarrow l_{j}\gamma \right) &=&\frac{\alpha _{W}^{3}s_{W}^{2}m_{l_{i}}^{5}\kappa ^{2}}{256\pi ^{2}m_{W^{\prime }}^{4}\Gamma _{i}}\dsum\limits_{r=1}^{3}\left\vert G\left( \frac{m_{N_{r}}^{2}}{m_{W^{\prime }}^{2}}\right) \right\vert ^{2},\hspace{0.5cm}\hspace{0.5cm}\hspace{0.5cm}G\left( x\right) =-\frac{2x^{3}+5x^{2}-x}{4\left( 1-x\right) ^{2}}-\frac{3x^{3}}{2\left( 1-x\right) ^{4}}\ln x, \notag \\ \kappa &=&\left\vert \dsum\limits_{k=1}^{3}\left( V_{lL}^{\dagger }\right) _{ik}\left( V_{lL}^{\dagger }\right) _{jk}\right\vert , \end{eqnarray} where $\Gamma _{i}$ is the total decay width of the charged lepton $l_{i}$ and the one-loop contribution arising from the $W$ gauge boson exchange has been neglected, because it is suppressed by the quartic power of the active-sterile neutrino mixing angle $\theta $, assumed to be of the order of $10^{-3}$, for sterile neutrino masses of about $1$ TeV. It has been shown in Ref. \cite{Deppisch:2013cya} that for such a mixing angle the contribution of the $W$ gauge boson to the branching ratio of the $\mu \rightarrow e\gamma $ decay takes values of the order of $10^{-16}$, which is three orders of magnitude below the experimental upper limit of $4.2\times 10^{-13}$. Thus, in this work we only consider the dominant $W^{\prime }$ contribution to the $\mu \rightarrow e\gamma $ decay rate. Furthermore, there will be additional contributions to the $\mu \rightarrow e\gamma $ decay, arising from the virtual exchange of electrically neutral scalars and charged exotic leptons. We have numerically checked that this contribution is close to $10^{-13}$ for charged exotic leptons with masses $m_{E}\sim \mathcal{O}\left( 100\right) $ TeV and flavor violating Yukawa couplings around $10^{-3}$.
In order to simplify our analysis, we will consider a benchmark scenario where the couplings of the lepton flavor violating scalar interactions are much lower than $10^{-3}$, thus allowing us to consider the $\mu \rightarrow e\gamma $ decay as mainly arising from the $W^{\prime }$ and heavy neutrino virtual exchange. Furthermore, in our analysis we consider the simplified scenario of degenerate heavy neutrinos with a common mass $m_{N}$ and we also set $g_{R}=g$ and $\kappa =10^{-2}$, which corresponds to off-diagonal elements of the left-handed leptonic rotation matrix $V_{lL}$ of the order of $0.1$. \begin{figure}[tbp] \includegraphics[width=0.6\textwidth]{mutoegamma.pdf} \caption{Allowed parameter space in the $m_{W^{\prime }}-m_{N}$ plane consistent with the LFV constraints.} \label{LFVplot} \end{figure} Figure \ref{LFVplot} shows the allowed parameter space in the $m_{W^{\prime }}-m_{N}$ plane, consistent with the constraints arising from charged lepton flavor violating decays. The $W^{\prime }$ gauge boson and sterile neutrino masses have been taken to be in the ranges $7$ TeV$\lesssim m_{W^{\prime }}\lesssim 10$ TeV and $1$ TeV$\lesssim m_{N}\lesssim 10$ TeV, respectively. As seen from Figure \ref{LFVplot}, in the allowed model parameter space the $\mu \rightarrow e\gamma $ decay branching ratio reaches values of the order of $10^{-14}$ and lower, which are below the experimental upper limit of $4.2\times 10^{-13}$ and within the reach of future experimental sensitivity. In the region of parameter space consistent with the $\mu \rightarrow e\gamma $ decay rate constraints, the maximum obtained branching ratios for the $\tau \rightarrow \mu \gamma $ and $\tau \rightarrow e\gamma $ decays reach values below their corresponding experimental upper bounds of $4.4\times 10^{-8}$ and $3.3\times 10^{-8}$, respectively. Consequently, our model is compatible with the current charged lepton flavor violating decay constraints.
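For orientation, the branching ratio formula above is straightforward to evaluate at a benchmark point. The sketch below assumes standard numerical values for $\alpha_W$, $s_W^2$, $m_\mu$ and $\Gamma_\mu$, together with the benchmark $\kappa=10^{-2}$, $m_{W^{\prime}}=7$ TeV and a common (assumed) sterile neutrino mass $m_N=5$ TeV; it lands in the $10^{-14}$ ballpark quoted in the text:

```python
import math

def G(x):
    # Loop function entering Br(l_i -> l_j gamma)
    return (-(2*x**3 + 5*x**2 - x) / (4*(1 - x)**2)
            - 3*x**3 / (2*(1 - x)**4) * math.log(x))

# Standard inputs (assumed values): alpha_W = g^2/4pi, s_W^2, muon mass/width
alpha_W, sW2 = 0.0338, 0.231
m_mu, Gamma_mu = 0.10566, 2.996e-19     # GeV
kappa = 1e-2                            # benchmark of the text
m_Wp, m_N = 7000.0, 5000.0              # GeV; m_N is an illustrative choice

x = (m_N / m_Wp)**2                     # degenerate heavy neutrinos: 3 equal terms
Br = (alpha_W**3 * sW2 * m_mu**5 * kappa**2 /
      (256 * math.pi**2 * m_Wp**4 * Gamma_mu)) * 3 * abs(G(x))**2
print(Br)   # O(1e-14), below the MEG bound 4.2e-13
```

The same loop can be scanned over the $m_{W^{\prime}}-m_{N}$ ranges quoted above to reproduce the shape of the allowed region.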
On the other hand, the effective Lagrangian approach for describing LFV processes used in \cite{Kuno:1999jp}, in the low momentum limit, where the off-shell contributions from photon exchange are negligible with respect to the contributions arising from real photon emission, implies that the dipole operators shown in Ref.~\cite{Kuno:1999jp} will dominate the lepton flavor violating (LFV) transitions $\mu \rightarrow 3e$, $\mu \,\text{Al}\rightarrow e\,\text{Al}$ and $\mu \,\text{Ti}\rightarrow e\,\text{Ti}$, yielding the following relations \cite{Kuno:1999jp,Lindner:2016bgg}: \begin{equation} \text{Br}\left( \mu \rightarrow 3e\right) \simeq \frac{1}{160}\text{Br}\left( \mu \rightarrow e\gamma \right) ,\hspace{1cm}\text{CR}\left( \mu \,\text{Ti}\rightarrow e\,\text{Ti}\right) \simeq \frac{1}{200}\text{Br}\left( \mu \rightarrow e\gamma \right) ,\hspace{1cm}\text{CR}\left( \mu \,\text{Al}\rightarrow e\,\text{Al}\right) \simeq \frac{1}{350}\text{Br}\left( \mu \rightarrow e\gamma \right) \label{eq:CR-BR} \end{equation} where the $\mu ^{-}-e^{-}$ conversion ratio is defined~\cite{Lindner:2016bgg} as follows: \begin{equation} \label{eq:Conversion-Rate} \text{CR}\left( \mu -e\right) =\frac{\Gamma \left( \mu ^{-}+\text{Nucleus}\left( A,Z\right) \rightarrow e^{-}+\text{Nucleus}\left( A,Z\right) \right) }{\Gamma \left( \mu ^{-}+\text{Nucleus}\left( A,Z\right) \rightarrow \nu _{\mu }+\text{Nucleus}\left( A,Z-1\right) \right) } \end{equation} Consequently, for our model we expect the resulting rates for the LFV transitions $\mu \rightarrow 3e$, $\mu \,\text{Al}\rightarrow e\,\text{Al}$ and $\mu \,\text{Ti}\rightarrow e\,\text{Ti}$ to be of the order of $10^{-16}$, i.e., two orders of magnitude lower than the obtained rate for the $\mu \rightarrow e\gamma $ decay, implying that in our model the corresponding values are below the current experimental bounds of about $10^{-12}$ for these LFV transitions. \section{Leptogenesis} \label{leptogenesis} In this section we will analyze the implications of our model for leptogenesis. In our analysis we follow the approach of Ref. \cite{Blanchet:2009kk}. To simplify the analysis, we work in the basis where the SM charged lepton mass matrix is diagonal, assume that $y^{(L)}$ and $x^{(N)}$ are diagonal matrices and consider the scenario where $\left\vert y_{11}^{(L)}\right\vert \ll \left\vert y_{22}^{(L)}\right\vert ,\left\vert y_{33}^{(L)}\right\vert $ and $\left\vert x_{11}^{(N)}\right\vert \ll \left\vert x_{22}^{(N)}\right\vert ,\left\vert x_{33}^{(N)}\right\vert $. In that scenario, only the first generation of pseudo-Dirac fermions $N_{a}^{\pm }$, i.e., $N_{1}^{\pm }$, will be much lighter than the second and third generation ones.
This implies that the decay of $N_{1}^{\pm }$ provides the dominant contribution to the baryon asymmetry of the Universe (BAU), whereas the decays of the heavier pseudo-Dirac fermions $N_{2}^{\pm }$ and $N_{3}^{\pm }$ give subleading contributions to the $B-L$ asymmetry. This is due to the fact that the lepton asymmetry generated by the decays of the heavier pseudo-Dirac pairs $N_{2}^{\pm }$ and $N_{3}^{\pm }$ gets washed out very quickly, yielding a very small impact on the lepton asymmetry produced by the decay of the lightest pair $N_{1}^{\pm }$, as discussed in Ref. \cite{Blanchet:2009kk}. We consider the scenario of a diagonal $y^{(L)}$ matrix in order to suppress tree level FCNCs in the charged lepton sector. We also take the initial temperature to be larger than the mass $m_{{N^{\pm }}}$ of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$. Within this minimal scenario, the Boltzmann equations take the form \cite{Buchmuller:2004nz}: \begin{eqnarray} \frac{dN_{{N_{1}^{\pm }}}\left( z\right) }{dz} &=&-\left[ D\left( z\right) +S\left( z\right) \right] \left[ N_{{N_{1}^{\pm }}}\left( z\right) -N_{{N_{1}^{\pm }}}^{eq}\left( z\right) \right] , \notag \\ \frac{dN_{N_{B-L}}\left( z\right) }{dz} &=&-\varepsilon _{\pm }D\left( z\right) \left[ N_{{N_{1}^{\pm }}}\left( z\right) -N_{{N_{1}^{\pm }}}^{eq}\left( z\right) \right] -W\left( z\right) N_{N_{B-L}}\left( z\right) , \end{eqnarray} where $z=\frac{m_{{N_{1}^{\pm }}}}{T}$, whereas $N_{{N_{1}^{\pm }}}$ and $N_{N_{B-L}}$ are the number density and the amount of $B-L$ asymmetry, respectively.
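A minimal numerical sketch of the Boltzmann system above can be obtained by keeping only decays and inverse decays (i.e. setting $S(z)=0$) and using the standard rate functions $D(z)=Kz\,\mathcal{K}_{1}(z)/\mathcal{K}_{2}(z)$ and $W(z)=\frac{1}{4}K\mathcal{K}_{1}(z)z^{3}$ quoted later in this section; the values of the washout parameter $K$ and of $\varepsilon_{\pm}$ below are illustrative assumptions, not a fit:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn, kve

K, eps = 10.0, 1e-6            # assumed washout strength and CP asymmetry

def Neq(z):                    # N^eq(z) = (3/8) z^2 K_2(z)
    return 0.375 * z**2 * kn(2, z)

def D(z):                      # decays only; scatterings S(z) dropped here
    return K * z * kve(1, z) / kve(2, z)   # scaled Bessels avoid underflow

def W(z):                      # inverse-decay washout, (1/4) K K_1(z) z^3
    return 0.25 * K * kn(1, z) * z**3

def rhs(z, y):
    NN, NBL = y
    src = NN - Neq(z)
    return [-D(z) * src, -eps * D(z) * src - W(z) * NBL]

# Start in equilibrium (N^eq -> 3/4 for z << 1) with zero asymmetry
sol = solve_ivp(rhs, (0.1, 50.0), [0.75, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12)
NN_final, NBL_final = sol.y[0, -1], sol.y[1, -1]
print(NN_final, NBL_final)
```

The frozen-out $|N_{B-L}|$ comes out as $\varepsilon$ times an efficiency factor well below one, as expected for strong washout.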
Here $\varepsilon _{\pm }$ are the lepton asymmetry parameters, which are induced by the $N^{\pm }$ decay processes and have the following form \cite{Covi:1996wh,Rangarajan:1999kt,Gu:2010xc,Pilaftsis:1997jf}: \begin{eqnarray} \varepsilon _{\pm } &=&\dsum\limits_{i=1}^{3}\dsum\limits_{r=1}^{2}\frac{\left[ \Gamma \left( N_{\pm }\rightarrow l_{i}H_{r}^{+}\right) -\Gamma \left( N_{\pm }\rightarrow \bar{l}_{i}H_{r}^{-}\right) \right] }{\left[ \Gamma \left( N_{\pm }\rightarrow l_{i}H_{r}^{+}\right) +\Gamma \left( N_{\pm }\rightarrow \bar{l}_{i}H_{r}^{-}\right) \right] }+\dsum\limits_{i=1}^{3}\frac{\left[ \Gamma \left( N_{\pm }\rightarrow h\nu _{i}\right) -\Gamma \left( N_{\pm }\rightarrow h\bar{\nu}_{i}\right) \right] }{\left[ \Gamma \left( N_{\pm }\rightarrow h\nu _{i}\right) +\Gamma \left( N_{\pm }\rightarrow h\bar{\nu}_{i}\right) \right] } \notag \\ &\simeq &\frac{\func{Im}\left\{ \left( \left[ \left( y_{N_{+}}\right) ^{\dagger }\left( y_{N_{-}}\right) \right] ^{2}\right) _{11}\right\} }{8\pi A_{\pm }}\frac{r}{r^{2}+\frac{\Gamma _{\pm }^{2}}{m_{N_{\pm }}^{2}}}, \label{ep} \end{eqnarray} with: \begin{eqnarray} r &=&\frac{m_{N_{+}}^{2}-m_{N_{-}}^{2}}{m_{N_{+}}m_{N_{-}}},\hspace{0.7cm}\hspace{0.7cm}A_{\pm }=\left[ \left( y_{N_{\pm }}\right) ^{\dagger }y_{N_{\pm }}\right] _{11},\hspace{0.7cm}\hspace{0.7cm}\Gamma _{\pm }=\frac{A_{\pm }m_{N_{\pm }}}{8\pi }, \notag \\ y_{N_{\pm }} &=&\frac{y^{\left( L\right) }}{\sqrt{2}}\left( 1\mp S\right) =\frac{y^{\left( L\right) }}{\sqrt{2}}\left[ 1\pm \frac{1}{4}M^{-1}\left( \mu +\widetilde{\mu }\right) \right] , \label{yN} \end{eqnarray} where we have assumed that the exotic leptonic fields $E_{nR}$, $E^{\prime }$ and $\Omega _{nR}$ ($n=1,2$) are heavier than the lightest pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$.} \ac{On the other hand, it is worth mentioning that $N_{N_{1}^{\pm }}$ and $N_{N_{B-L}}$ are computed in a portion of comoving volume that contains one photon at temperatures much larger than $m_{{N_{1}^{\pm }}}$,
thus implying that $N_{{N_{1}^{\pm }}}^{eq}\left( z\ll 1\right) =\frac{3}{4}$ \cite{Buchmuller:2004nz}. Besides that, $D\left( z\right) $, $S\left( z\right) $ and $W\left( z\right) $ are the thermally averaged rates corresponding to the decays of ${N_{1}^{\pm }}$, to the scattering processes and to the inverse decays, respectively. These thermally averaged rates are given by: \begin{eqnarray} D\left( z\right) &=&D\left( z\right) _{N_{1}}+D_{N_{1}}^{\left( W^{\prime }\right) }\left( z\right) ,\hspace{0.7cm}\hspace{0.7cm}S\left( z\right) =S_{Z^{\prime }}\left( z\right) +S_{W^{\prime }}\left( z\right) , \\ W\left( z\right) &=&W_{N_{1}}^{ID}\left( z\right) +W_{N_{1}}^{ID\left( W^{\prime }\right) }\left( z\right) , \end{eqnarray} where $D\left( z\right) _{N_{1}}$ is the thermally averaged rate associated with the two body decays $N_{1}^{\pm }\rightarrow l_{i}H{_{r}^{+}}$ ($r=1,2$) and $N_{1}^{\pm }\rightarrow \nu _{i}h$ ($i=1,2,3$), whereas $D_{N_{1}}^{\left( W^{\prime }\right) }\left( z\right) $ corresponds to the thermally averaged rate arising from the $W^{\prime }$ mediated three body decay $N_{1}^{\pm }\rightarrow l_{i}^{-}u_{j}\bar{d}_{k}$ ($i,j,k=1,2,3$). Furthermore, $S_{Z^{\prime }}\left( z\right) $ is the thermally averaged rate arising from the $Z^{\prime }$ mediated scattering processes $N_{1}^{\pm }N_{1}^{\pm }\longleftrightarrow l_{i}\overline{l}_{j}$ ($i,j=1,2,3$), $N_{1}^{\pm }N_{1}^{\pm }\longleftrightarrow u_{i}\overline{u}_{j}$ and $N_{1}^{\pm }N_{1}^{\pm }\longleftrightarrow d_{i}\overline{d}_{j}$, whereas the thermally averaged rate $S_{W^{\prime }}\left( z\right) $ is caused by the $W^{\prime }$ mediated processes $N_{1}^{\pm }l_{iR}\longleftrightarrow \overline{u}_{jR}d_{kR}$, $N_{1}^{\pm }\overline{u}_{iR}\longleftrightarrow l_{jR}\overline{d}_{kR}$ and $N_{1}^{\pm }d_{iR}\longleftrightarrow l_{jR}u_{kR}$ ($i,j,k=1,2,3$).
In addition, $W_{N_{1}}^{ID}\left( z\right) $ and $W_{N_{1}}^{ID\left( W^{\prime }\right) }\left( z\right) $ are the thermally averaged rates arising from the inverse two and three body decays of $N_{1}^{\pm }$, respectively. The above mentioned thermally averaged rates are given by \cite{Plumacher:1996kc,Buchmuller:2004nz,Cosme:2004xs,Frere:2008ct,Blanchet:2010kw,Dolan:2018qpy}: \begin{eqnarray} D\left( z\right) _{N_{1}} &=&\frac{\Gamma _{D}}{H\left( z=1\right) z}=Kz% \frac{\mathcal{K}_{1}\left( z\right) }{\mathcal{K}_{2}\left( z\right) },% \hspace{0.7cm}\hspace{0.7cm}W_{N_{1}}^{ID}\left( z\right) =\frac{1}{2}\frac{% \Gamma _{ID}\left( z\right) }{H\left( z\right) z}=\frac{1}{2}\frac{\Gamma _{D}\left( z\right) }{H\left( z\right) z}\frac{N^{eq}\left( z\right) }{N_{l}}% =\frac{1}{4}K\mathcal{K}_{1}\left( z\right) z^{3}, \notag \\ \Gamma _{ID}\left( z\right) &=&\Gamma _{D}\left( z\right) \frac{N^{eq}\left( z\right) }{N_{l}^{eq}},\hspace{0.7cm}N_{{N_{1}^{\pm }}}^{eq}\left( z\right) =% \frac{3}{8}z^{2}\mathcal{K}_{2}\left( z\right) ,\hspace{0.7cm}N_{l}^{eq}=% \frac{3}{4},\hspace{0.7cm}K=\frac{\left[ y^{\left( L\right) }\left( y^{\left( L\right) }\right) ^{\dagger }\right] _{11}v_{1}^{2}}{2m_{\ast }m_{{% {N_{1}^{\pm }}}}}, \notag \\ D_{N_{1}}^{\left( W^{\prime }\right) }\left( z\right) &=&\frac{\gamma _{{% N_{1}^{\pm }}}^{\left( W_{R}\right) }}{n_{{N_{1}^{\pm }}}^{eq}\left( z\right) H\left( z=1\right) z},\hspace{0.7cm}n_{{N_{1}^{\pm }}}^{eq}\left( z\right) =\frac{3}{4}n_{\gamma }\left( z\right) N_{{N_{1}^{\pm }}% }^{eq}\left( z\right) =\frac{9}{64}z^{2}\mathcal{K}_{2}\left( z\right) n_{\gamma }\left( z\right) ,\hspace{0.7cm}n_{\gamma }=\frac{2\zeta \left( 3\right) }{\pi ^{2}}T^{3}, \\ W_{N_{1}}^{ID\left( W^{\prime }\right) }\left( z\right) &=&\frac{1}{2}% D_{W^{\prime }}\left( z\right) \frac{N^{eq}\left( z\right) }{N_{l}},\hspace{% 0.7cm}\hspace{0.7cm}S\left( z\right) _{Z^{\prime },W^{\prime }}=\frac{\Gamma _{S}^{\left( Z^{\prime },W^{\prime }\right) }}{H\left( 
z=1\right) z},\hspace{0.7cm}\hspace{0.7cm}\Gamma _{S}^{\left( Z^{\prime },W^{\prime }\right) }=\frac{\gamma _{{S}}^{\left( Z^{\prime },W^{\prime }\right) }}{n_{{N_{1}^{\pm }}}^{eq}\left( z\right) }, \notag \\ \gamma _{{N_{1}^{\pm }}}^{\left( W^{\prime }\right) } &=&n_{{N_{1}^{\pm }}}^{eq}\left( z\right) \frac{\mathcal{K}_{1}\left( z\right) }{\mathcal{K}_{2}\left( z\right) }\Gamma _{{N_{1}^{\pm }}}^{\left( W^{\prime }\right) },\hspace{0.7cm}\Gamma _{{N_{1}^{\pm }}}^{\left( W^{\prime }\right) }=\frac{3g_{R}^{4}}{2^{9}\pi ^{3}m_{{{N_{1}^{\pm }}}}^{3}}\int_{0}^{m_{{{N_{1}^{\pm }}}}^{2}}ds\frac{m_{{{N_{1}^{\pm }}}}^{6}-3m_{{{N_{1}^{\pm }}}}^{2}s^{2}+2s^{3}}{\left( s-m_{W^{\prime }}^{2}\right) ^{2}+m_{W^{\prime }}^{2}\Gamma _{W^{\prime }}^{2}},\hspace{0.7cm}\Gamma _{W^{\prime }}=\frac{g_{R}^{2}}{4\pi }m_{W^{\prime }}, \notag \end{eqnarray} where $\mathcal{K}_{r}\left( z\right) $ ($r=1,2$) is the modified Bessel function of the second kind of order $r$, $\Gamma _{{N_{1}^{\pm }}}^{\left( W^{\prime }\right) }$ is the total three body decay width of ${N_{1}^{\pm }}$, $\Gamma _{W^{\prime }}$ is the $W^{\prime }$ total decay width, $n_{{N_{1}^{\pm }}}^{eq}\left( z\right) $ is the equilibrium number density of ${N_{1}^{\pm }}$, $n_{\gamma }\left( z\right) $ is the number density of photons, $K$ is the washout parameter, $m_{\ast }$ is the equilibrium neutrino mass, $H\left( z\right) $ is the Hubble expansion rate, whereas $\gamma _{S}^{\left( Z^{\prime },W^{\prime }\right) }$ and $\gamma _{N_{1}^{\pm }}^{\left( W^{\prime }\right) }$ are the scattering and decay reaction densities, respectively.
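The $W^{\prime}$-mediated three body decay width defined above is a one-dimensional integral that can be evaluated directly. The sketch below uses illustrative masses (assumptions, not fit values) and takes $\Gamma_{W^{\prime}}\propto m_{W^{\prime}}$ on dimensional grounds:

```python
import math
from scipy.integrate import quad

gR, mN, mWp = 0.65, 1000.0, 7000.0     # GeV; illustrative benchmark
GWp = gR**2 / (4 * math.pi) * mWp      # W' width, taken proportional to m_W'

def integrand(s):
    # (m_N^6 - 3 m_N^2 s^2 + 2 s^3) over the off-shell W' propagator squared
    return ((mN**6 - 3 * mN**2 * s**2 + 2 * s**3) /
            ((s - mWp**2)**2 + mWp**2 * GWp**2))

val, _ = quad(integrand, 0.0, mN**2)
Gamma3 = 3 * gR**4 / (2**9 * math.pi**3 * mN**3) * val
print(Gamma3)   # GeV; strongly suppressed for m_W' >> m_N
```

The numerator vanishes at the endpoint $s=m_{N}^{2}$ and is positive below it, so the width is positive and, for $m_{W^{\prime}}\gg m_{N}$, suppressed by the off-shell propagator.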
The equilibrium neutrino mass $m_{\ast }$ and the Hubble expansion rate $H$ are given by \cite{Buchmuller:2004nz}: \begin{equation} m_{\ast }=\frac{16\pi ^{\frac{5}{2}}\sqrt{g_{\ast }}v_{1}^{2}}{3\sqrt{5}M_{P}}=1.08\times 10^{-3}\,eV,\hspace{0.7cm}\hspace{0.7cm}H=\sqrt{\frac{4\pi ^{3}g_{\ast }}{45}}\frac{T^{2}}{M_{P}}=\sqrt{\frac{4\pi ^{3}g_{\ast }}{45}}\frac{m_{{{N_{1}^{\pm }}}}^{2}}{z^{2}M_{P}}\simeq 1.66\sqrt{g_{\ast }}\frac{m_{{{N_{1}^{\pm }}}}^{2}}{z^{2}M_{P}}, \notag \end{equation} where $g_{\ast }=118$ is the number of effective relativistic degrees of freedom and $M_{P}=1.2\times 10^{19}$ GeV is the Planck mass. Furthermore, the scattering reaction densities $\gamma _{{S}}^{\left( W^{\prime }\right) }$ and $\gamma _{{S}}^{\left( Z^{\prime }\right) }$ are given by: \begin{eqnarray} \gamma _{{S}}^{\left( W^{\prime }\right) } &=&\gamma _{{{N_{1}^{\pm }}l_{iR}\longleftrightarrow \overline{u}_{jR}d_{kR}}}+\gamma _{{{N_{1}^{\pm }}\overline{u}_{iR}\longleftrightarrow l_{jR}\overline{d}_{kR}}}+\gamma _{{{N_{1}^{\pm }d}_{iR}\longleftrightarrow l_{jR}u_{kR}}}, \notag \\ \gamma _{{S}}^{\left( Z^{\prime }\right) } &=&\gamma _{{{N_{1}^{\pm }N_{1}^{\pm }}\longleftrightarrow l}_{i}{\overline{l}}_{j}}+\gamma _{{{N_{1}^{\pm }N_{1}^{\pm }}\longleftrightarrow u}_{i}{\overline{u}}_{j}}+\gamma _{{{N_{1}^{\pm }N_{1}^{\pm }}\longleftrightarrow d}_{i}{\overline{d}}_{j}} \end{eqnarray} where the scattering reaction density for the process $ab\longleftrightarrow cd$ is defined as follows \cite{Luty:1992un,Plumacher:1996kc,Buchmuller:2004nz,Cosme:2004xs,Frere:2008ct,Blanchet:2010kw,Dolan:2018qpy}: \begin{eqnarray} \gamma _{ab\longleftrightarrow cd} &=&\frac{T}{64\pi ^{4}}\int_{s_{\min }}^{\infty }ds\sqrt{s}\widehat{\sigma }_{ab\longleftrightarrow cd}\left( s\right) \mathcal{K}_{1}\left( \frac{\sqrt{s}}{T}\right) =\frac{m_{{{N_{1}^{\pm }}}}^{4}}{64\pi ^{4}z}\int_{x_{0}}^{\infty }dx\sqrt{x}\widehat{\sigma }\left( x\right) \mathcal{K}_{1}\left( z\sqrt{x}\right)
,\hspace{0.7cm}\hspace{0.7cm} \notag \\ x &=&\frac{s}{m_{{{N_{1}^{\pm }}}}^{2}},\hspace{0.7cm}\hspace{0.7cm}x_{0}=\frac{1}{m_{{{N_{1}^{\pm }}}}^{2}}\max \left[ \left( m_{a}+m_{b}\right) ^{2},\left( m_{c}+m_{d}\right) ^{2}\right] ,\hspace{0.7cm}\hspace{0.7cm}z=\frac{m_{{{N_{1}^{\pm }}}}}{T}, \end{eqnarray} and $\widehat{\sigma }_{ab\longleftrightarrow cd}\left( s\right) $ is the reduced cross section corresponding to the scattering process $ab\longleftrightarrow cd$. In the left-right model under consideration, the relevant reduced cross sections are given by \cite{Plumacher:1996kc,Cosme:2004xs,Frere:2008ct,Blanchet:2010kw,Dolan:2018qpy}: \begin{eqnarray} \widehat{\sigma }_{{{N_{1}^{\pm }}l_{iR}\longleftrightarrow \overline{u}_{jR}d_{kR}}}\left( x\right) &=&\frac{9g_{R}^{4}}{48\pi x}\frac{1-3x^{2}+2x^{3}}{\left[ \left( x-\frac{m_{W^{\prime }}^{2}}{m_{{{N_{1}^{\pm }}}}^{2}}\right) ^{2}+\frac{m_{W^{\prime }}^{2}\Gamma _{W^{\prime }}^{2}}{m_{{{N_{1}^{\pm }}}}^{4}}\right] }, \\ \widehat{\sigma }_{{{N_{1}^{\pm }}\overline{u}_{iR}\longleftrightarrow l_{jR}\overline{d}_{kR}}}\left( x\right) &=&\frac{9g_{R}^{4}}{8\pi x}\int_{1-x}^{0}du\frac{\left( x+u\right) \left( x+u-1\right) }{\left( u-\frac{m_{W^{\prime }}^{2}}{m_{{{N_{1}^{\pm }}}}^{2}}\right) ^{2}}, \\ \widehat{\sigma }_{{{N_{1}^{\pm }d}_{iR}\longleftrightarrow l_{jR}u_{kR}}}\left( x\right) &=&\frac{9g_{R}^{4}}{8\pi }\frac{m_{{{N_{1}^{\pm }}}}^{2}}{m_{W^{\prime }}^{2}}\frac{\left( 1-x\right) ^{2}}{x+\frac{m_{W^{\prime }}^{2}}{m_{{{N_{1}^{\pm }}}}^{2}}-1}, \\ \widehat{\sigma }_{{{N_{1}^{\pm }N_{1}^{\pm }}\longleftrightarrow l}_{i}{\overline{l}}_{j}}\left( x\right) +\widehat{\sigma }_{{{N_{1}^{\pm }N_{1}^{\pm }}\longleftrightarrow u}_{i}{\overline{u}}_{j}}\left( x\right) +\widehat{\sigma }_{{{N_{1}^{\pm }N_{1}^{\pm }}\longleftrightarrow d}_{i}{\overline{d}}_{j}}\left( x\right) &=&\frac{13g_{B-L}^{2}}{6\pi }\frac{\sqrt{x\left( x-4\right) ^{3}}}{\left( x-\frac{m_{Z^{\prime }}^{2}}{m_{{{N_{1}^{\pm }}}}^{2}}\right)
^{2}+\frac{m_{Z^{\prime }}^{2}\Gamma _{Z^{\prime }}^{2}}{m_{{{N_{1}^{\pm }}}}^{4}}}, \end{eqnarray} where $\Gamma _{Z^{\prime }}$ is the total $Z^{\prime }$ decay width, given by: \begin{equation} \Gamma _{Z^{\prime }}=\frac{g_{B-L}^{2}}{24\pi }m_{Z^{\prime }}\left[ 13+3\left( 1-\frac{4m_{{{N_{1}^{\pm }}}}^{2}}{m_{Z^{\prime }}^{2}}\right) ^{\frac{3}{2}}\right] . \end{equation} It is worth mentioning that we are not considering the contributions arising from the $t$-channel scattering processes $N_{1}^{\pm }N_{1}^{\pm }\longleftrightarrow l_{i}\overline{l}_{j}$ ($i,j=1,2,3$), since the corresponding rates decrease very rapidly for $z=\frac{m_{{N_{1}^{\pm }}}}{T}>1$, as discussed in Ref. \cite{Blanchet:2009kk}. Furthermore, we are also not considering the contributions arising from $\Delta L=1$ scatterings involving scalars, since they are subleading, as discussed in Ref. \cite{Blanchet:2009kk}. Moreover, we are not considering scattering processes involving the heavy charged exotic fermions, since they are very heavy, with masses larger than $100$ TeV (see Eq. (\ref{bfpquarks})), in order to naturally reproduce the SM fermion mass hierarchy. The numerical solution of the Boltzmann equations allows us to determine the amount of $B-L$ asymmetry $N_{B-L}$, and then the baryon-to-photon ratio, by using the following relation \cite{Buchmuller:2004nz,Frere:2008ct}: \begin{equation} \eta _{B}=\frac{n_{B}}{n_{\gamma }}=\frac{3}{4}a_{sph}N_{B-L},\hspace{0.7cm}\hspace{0.7cm}a_{sph}=\frac{8n_{f}+4n_{H}}{22n_{f}+13n_{H}}, \end{equation} where $a_{sph}$ is the $L$ to $B$ sphaleron conversion rate, $n_{f}$ is the number of fermion families and $n_{H}$ is the number of Higgs doublets. As shown in Ref.
\cite{Blanchet:2010kw}, the contributions arising from the aforementioned scattering processes, as well as from the inverse decays, are subdominant for temperatures sufficiently lower than the mass $m_{N}$, i.e., $z\gg 1$, thus implying that the lepton asymmetry mainly arises from the decay of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }$. This is confirmed in Figure \ref{scattrate}, which shows the thermally averaged rates corresponding to the decays, scatterings and washouts, as functions of $z=\frac{m_{N}}{T}$, where $m_{N}$ is the mass of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$ and $T$ the temperature. Here we have set $v_{R}=14$ TeV, $m_{W^{\prime }}=7$ TeV and $m_{Z^{\prime }}=7.2$ TeV. As shown in Figure \ref{scattrate}, for $z>\mathcal{O}(10)$ the thermally averaged rate corresponding to the decays is larger by several orders of magnitude than the ones associated with the scatterings and inverse decays (washouts). Furthermore, it has been shown in Ref. \cite{Blanchet:2010kw} that the contribution arising from the $W^{\prime }$ mediated three body decay $N_{1}^{\pm }\rightarrow l_{i}^{-}u_{j}\bar{d}_{k}$ is much smaller than the ones arising from the $N_{1}^{\pm }\rightarrow l_{i}H{_{r}^{+}}$ and $N_{1}^{\pm }\rightarrow \nu _{i}h$ decays. On the other hand, if the temperature of the Universe drops below the scale of breaking of the left-right symmetry, the inverse decays producing ${N_{1}^{\pm }}$ fall out of thermal equilibrium, and thermal leptogenesis can take place. \begin{figure}[tbp] \centering \includegraphics[width=14cm,height=10cm]{Lepto4.jpg} \caption{Thermally averaged rates $D\left(z\right)$, $S\left(z\right)$ and $W\left(z\right)$ as functions of $z=\frac{m_{N}}{T}$, with $m_{N}$ the mass of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$ and $T$ the temperature.
Here we have set $v_{R}=14$ TeV, $m_{W^{\prime }}=7$ TeV and $m_{Z^{\prime }}=7.2$ TeV.} \label{scattrate} \end{figure} It is worth mentioning that the CP violation in the lepton sector, necessary to generate the lepton asymmetry parameter, can arise from complex entries in $y^{\left( L\right) }$, $M$ or $\mu $, as indicated by Eqs. (\ref{ep}) and (\ref{yN}). Furthermore, in order to successfully reproduce the neutrino oscillation experimental data, the submatrix $\mu $, in the basis where the SM charged lepton mass matrix is diagonal, should have the following form: \begin{equation} \mu =M^{T}m_{\nu D}^{-1}\widetilde{\mathbf{M}}_{\nu }\left( m_{\nu D}^{T}\right) ^{-1}M=M^{T}m_{\nu D}^{-1}U_{PMNS}\left( \widetilde{\mathbf{M}}_{\nu }\right) _{diag}U_{PMNS}^{T}\left( m_{\nu D}^{T}\right) ^{-1}M, \end{equation} where: \begin{equation} \left( \widetilde{\mathbf{M}}_{\nu }\right) _{diag}=diag\left( m_{1},m_{2},m_{3}\right) \end{equation} with $m_{1}$, $m_{2}$ and $m_{3}$ the masses of the light active neutrinos and $U_{PMNS}$ the PMNS leptonic mixing matrix. The correlations of the baryon asymmetry with the magnitudes of the Dirac neutrino Yukawa couplings $y_{11}^{(L)}$ and $y_{22}^{(L)}$ are shown in Figures \ref{etaB1} and \ref{etaB2}, respectively. Here we have set $v_{R}=14$ TeV, $m_{W^{\prime }}=7$ TeV, $m_{Z^{\prime }}=7.2$ TeV, $m_{N_{2}^{\pm }}=14$ TeV, $m_{N_{3}^{\pm }}=28$ TeV and $\left\vert y_{22}^{(L)}\right\vert =\left\vert y_{33}^{(L)}\right\vert =\left\vert y_{2}^{(L)}\right\vert $. As shown in Figures \ref{etaB1} and \ref{etaB2}, the measured value of the baryon asymmetry of the Universe \cite{Zyla:2020zbs}: \begin{equation} \eta _{B}=\left( 6.12\pm 0.04\right) \times 10^{-10} \end{equation} can be successfully reproduced in the simplified scenario considered in our model, provided that $\left\vert y_{1}^{(L)}\right\vert \sim \mathcal{O}\left( 10^{-4}\right) $ and $\left\vert y_{2}^{(L)}\right\vert \sim \mathcal{O}\left( 1\right) $.
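The inverse-seesaw algebra behind the expression for $\mu$ can be illustrated with a toy numerical example. All matrices below are assumptions chosen only to exhibit the structure (a trivial stand-in replaces $U_{PMNS}$, and $M$ is taken proportional to the identity so that the inversion is unambiguous):

```python
import numpy as np

mD = np.diag([20.0, 40.0, 60.0])            # GeV; toy Dirac mass matrix
M = 1000.0 * np.eye(3)                      # GeV; toy lepton-number scale
U = np.eye(3)                               # stand-in for U_PMNS
m_light = np.diag([1e-11, 8.6e-12, 5e-11])  # GeV (roughly 0.01-0.05 eV)

# mu = M^T mD^{-1} (U m_diag U^T) (mD^T)^{-1} M
m_nu = U @ m_light @ U.T
mu = M.T @ np.linalg.inv(mD) @ m_nu @ np.linalg.inv(mD.T) @ M

# Round trip: the light mass matrix is recovered as mD M^{-1} mu (M^T)^{-1} mD^T
m_back = mD @ np.linalg.inv(M) @ mu @ np.linalg.inv(M.T) @ mD.T
print(np.allclose(m_back, m_nu))
```

The resulting $\mu$ is symmetric and parametrically small, which is the usual inverse-seesaw hierarchy $\mu \ll m_{\nu D} \ll M$.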
In our numerical analysis we have found that the baryon asymmetry of the Universe is generated for $z\sim\mathcal{O}\left(10\right)$, which corresponds to temperatures one order of magnitude lower than the mass $m_{N}$ of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$. This result is consistent with the one obtained in Ref. \cite{Blanchet:2010kw}. The correlation of the baryon asymmetry and the mass $m_{N}$ of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$ is shown in Figure \ref{etaB3}. As shown in Figures \ref{etaB1}, \ref{etaB2} and \ref{etaB3}, our model successfully accommodates the experimental value of the baryon asymmetry parameter $\eta_{B}$. \begin{figure}[tbp] \centering \includegraphics[width=14cm, height=10cm]{Lepto1.jpg} \caption{Correlation of the baryon asymmetry and the magnitude of the Dirac neutrino Yukawa coupling $y_{11}^{(L)}$. Here we have set $v_R=14$ TeV, $m_{W^{\prime }}=7$ TeV, $m_{Z^{\prime }}=7.2$ TeV, $m_{N_{2}^{\pm }}=14$ TeV, $m_{N_{3}^{\pm }}=28$ TeV and $\left\vert y_{22}^{(L)}\right\vert=\left\vert y_{33}^{(L)}\right\vert=\left\vert y_{2}^{(L)}\right\vert$.} \label{etaB1} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=14cm, height=10cm]{Lepto2.jpg} \caption{Correlation of the baryon asymmetry and the magnitude of the Dirac neutrino Yukawa coupling $y_{22}^{(L)}$. Here we have set $v_R=14$ TeV, $m_{W^{\prime }}=7$ TeV, $m_{Z^{\prime }}=7.2$ TeV, $m_{N_{2}^{\pm }}=14$ TeV, $m_{N_{3}^{\pm }}=28$ TeV and $\left\vert y_{22}^{(L)}\right\vert=\left\vert y_{33}^{(L)}\right\vert=\left\vert y_{2}^{(L)}\right\vert$.} \label{etaB2} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=14cm, height=10cm]{Lepto3.jpg}% \caption{Correlation of the baryon asymmetry and the mass $m_{N}$ of the lightest pair of pseudo-Dirac fermions $N_{1}^{\pm }=N^{\pm }$. 
Here we have set $v_{R}=14$ TeV, $m_{W^{\prime }}=7$ TeV, $m_{Z^{\prime }}=7.2$ TeV, $m_{N_{2}^{\pm }}=14$ TeV, $m_{N_{3}^{\pm }}=28$ TeV and $\left\vert y_{22}^{(L)}\right\vert =\left\vert y_{33}^{(L)}\right\vert =\left\vert y_{2}^{(L)}\right\vert $.} \label{etaB3} \end{figure} \section{The simplified scalar potential} \label{scalarpotential} In order to simplify our analysis, we will consider a benchmark scenario where the singlet real scalar fields $\sigma $, $\eta $ and $\rho $ do not feature mixings with the neutral components of the $\Phi $, $\chi _{L}$ and $\chi _{R}$ scalars. Furthermore, for the sake of simplicity, in our benchmark scenario we do not consider the trilinear terms $A_{1}(\chi _{R}^{\dagger }\phi _{R})\varphi $ and $A_{2}(\chi _{L}^{\dagger }\phi _{L})\varphi $ that would give rise to mixings of the gauge singlet scalar field $\varphi $ with the $\phi _{L}$ and $\phi _{R}$ scalars. The justification of the benchmark scenario under consideration arises from the fact that the gauge singlet scalars $\sigma $, $\eta $ and $\rho $ are assumed to acquire vacuum expectation values much larger than the scale of breaking of the left-right symmetry, thus allowing us to neglect the mixings of these fields with the $\Phi $, $\chi _{L}$ and $\chi _{R}$ scalars and to treat their scalar potentials independently. Let us note that the mixing angles between those fields are suppressed by the ratios of their VEVs, as follows from the method of recursive expansion of Ref. \cite{Grimus:2000vj}.
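The VEV-ratio suppression of the mixing angles invoked above can be seen in a two-field toy example (the mass-matrix entries and couplings below are illustrative assumptions, not model parameters):

```python
import numpy as np

# Toy 2x2 scalar mass-squared matrix: a light field (VEV v) mixing with a
# heavy singlet (VEV V >> v) through an off-diagonal term of order lam*v*V.
lam, v, V = 0.1, 0.246, 100.0     # TeV; illustrative values
M2 = np.array([[0.5 * v**2, lam * v * V],
               [lam * v * V, 2.0 * V**2]])

# Exact mixing angle from diagonalizing the symmetric 2x2 matrix
theta = 0.5 * np.arctan2(2 * M2[0, 1], M2[1, 1] - M2[0, 0])
print(theta, lam * v / (2.0 * V))   # angle ~ lam*v/(2V): (v/V)-suppressed
```

For $V\gg v$ the angle reduces to $\theta \simeq \lambda v/(2V)$, i.e. it is suppressed by the ratio of the VEVs, in line with the recursive-expansion argument.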
The scalar potential for the $\Phi $, $\chi _{L}$, $\phi _{L}$, $\chi _{R}$ and $\phi _{R}$ scalars takes the form:% \begin{eqnarray} V &=&\mu _{1}^{2}(\chi _{L}^{\dagger }\chi _{L})+\mu _{2}^{2}(\chi _{R}^{\dagger }\chi _{R})+\mu _{3}^{2}Tr(\Phi ^{\dagger }\Phi )+\mu _{4}^{2}(\phi _{L}^{\dagger }\phi _{L})+\mu _{5}^{2}(\phi _{R}^{\dagger }\phi _{R})-\mu ^{2}Tr\left[ \Phi ^{2}+\left( \Phi ^{\ast }\right) ^{2}% \right] +\lambda _{1}(\chi _{L}^{\dagger }\chi _{L})^{2}+\lambda _{2}(\chi _{R}^{\dagger }\chi _{R})^{2} \notag \\ &&+\lambda _{3}(\chi _{L}^{\dagger }\chi _{L})(\chi _{R}^{\dagger }\chi _{R})+\lambda _{4}\left[ Tr(\Phi ^{\dagger }\Phi )\right] ^{2}+\lambda _{5}Tr% \left[ (\Phi ^{\dagger }\Phi )^{2}\right] +\lambda _{6}\left[ Tr(\widetilde{% \Phi }\widetilde{\Phi }^{\dagger })\right] ^{2}+\lambda _{7}Tr\left[ (% \widetilde{\Phi }\widetilde{\Phi }^{\dagger })^{2}\right] +\lambda _{8}(\chi _{L}^{\dagger }\chi _{L})Tr(\Phi ^{\dagger }\Phi ) \notag \\ &&+\lambda _{9}(\chi _{R}^{\dagger }\chi _{R})Tr(\Phi ^{\dagger }\Phi )+\lambda _{10}(\chi _{L}^{\dagger }\chi _{L})Tr(\widetilde{\Phi }\widetilde{% \Phi }^{\dagger })+\lambda _{11}(\chi _{R}^{\dagger }\chi _{R})Tr(\widetilde{% \Phi }\widetilde{\Phi }^{\dagger })+\lambda _{12}(\phi _{L}^{\dagger }\phi _{L})^{2}+\lambda _{13}(\phi _{R}^{\dagger }\phi _{R})^{2} \notag \\ &&+\lambda _{14}(\phi _{L}^{\dagger }\phi _{L})(\phi _{R}^{\dagger }\phi _{R})+\lambda _{15}(\phi _{L}^{\dagger }\phi _{L})Tr(\Phi ^{\dagger }\Phi )+\lambda _{16}(\phi _{R}^{\dagger }\phi _{R})Tr(\Phi ^{\dagger }\Phi )+\lambda _{17}(\phi _{L}^{\dagger }\phi _{L})Tr(\widetilde{\Phi }\widetilde{% \Phi }^{\dagger })+\lambda _{18}(\phi _{R}^{\dagger }\phi _{R})Tr(\widetilde{% \Phi }\widetilde{\Phi }^{\dagger }) \notag \\ &&+\lambda _{19}\left[ (\phi _{L}^{\dagger }\chi _{L})(\phi _{R}^{\dagger }\chi _{R})+(\chi _{L}^{\dagger }\phi _{L})(\chi _{R}^{\dagger }\phi _{R})% \right] \end{eqnarray}% where the term $-\mu ^{2}Tr\left[ \Phi ^{2}+\left( \Phi ^{\ast 
}\right) ^{2}\right] $ softly breaks the $Z_{4}^{\left( 1\right) }$ symmetry. This term arises from the trilinear scalar interaction $ATr(\widetilde{\Phi }\Phi ^{\dagger }+\widetilde{\Phi }^{\dagger }\Phi )\eta $ after the $\eta $ singlet scalar field acquires a VEV. The minimization conditions of the scalar potential yield the following relations: \begin{eqnarray} \mu _{1}^{2} &=&\frac{1}{2}\left( -2\lambda _{1}v_{L}^{2}-\lambda _{3}v_{R}^{2}-\left( \lambda _{8}+\lambda _{10}\right) v_{1}^{2}\right) , \\ \mu _{2}^{2} &=&\frac{1}{2}\left( -\lambda _{3}v_{L}^{2}-2\lambda _{2}v_{R}^{2}-\left( \lambda _{9}+\lambda _{11}\right) v_{1}^{2}\right) , \\ \mu _{3}^{2} &=&2\mu ^{2}+\frac{1}{2}\left( -\left( \lambda _{8}+\lambda _{10}\right) v_{L}^{2}-\left( \lambda _{9}+\lambda _{11}\right) v_{R}^{2}-2\left( \lambda _{4}+\lambda _{5}+\lambda _{6}+\lambda _{7}\right) v_{1}^{2}\right) . \end{eqnarray} The squared mass matrix for the electrically charged scalars even under the remnant $Z_{2}$ symmetry, in the basis $\left( \chi _{L}^{+},\chi _{R}^{+},\phi _{1I}^{+},\phi _{2I}^{+}\right) -\left( \chi _{L}^{-},\chi _{R}^{-},\phi _{1I}^{-},\phi _{2I}^{-}\right) $, takes the form: \begin{equation} \mathbf{M}_{\text{charged}}^{2}=\left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 2\mu ^{2}-\lambda _{7}v_{1}^{2} & -2\mu ^{2} \\ 0 & 0 & -2\mu ^{2} & 2\mu ^{2}-\lambda _{5}v_{1}^{2} \end{array} \right) \end{equation} where the massless scalar eigenstates $\chi _{L}^{\pm }$ and $\chi _{R}^{\pm }$ correspond to the Goldstone bosons associated with the longitudinal components of the $W^{\pm }$ and $W^{\prime \pm }$ gauge bosons.
In addition, there are physical electrically charged scalars $H_{1}^{\pm }$ and $H_{2}^{\pm }$, whose squared masses are given by: \begin{eqnarray} m_{H_{1}^{\pm }}^{2} &=&\frac{1}{2}\left[ 4\mu ^{2}-\left( \lambda _{5}+\lambda _{7}\right) v_{1}^{2}-\sqrt{\left( \lambda _{5}-\lambda _{7}\right) ^{2}v_{1}^{4}+16\mu ^{4}}\right] , \\ m_{H_{2}^{\pm }}^{2} &=&\frac{1}{2}\left[ 4\mu ^{2}-\left( \lambda _{5}+\lambda _{7}\right) v_{1}^{2}+\sqrt{\left( \lambda _{5}-\lambda _{7}\right) ^{2}v_{1}^{4}+16\mu ^{4}}\right] . \end{eqnarray} Furthermore, the electrically charged scalar fields $S_{1}^{\pm }=\phi _{L}^{\pm }$ and $S_{2}^{\pm }=\phi _{R}^{\pm }$, which carry non-trivial charges under the remnant $Z_{2}$ symmetry, have squared masses given by: \begin{eqnarray} m_{S_{1}^{\pm }}^{2} &=&\mu _{4}^{2}+\left( \lambda _{15}+\lambda _{17}\right) v_{1}^{2}, \\ m_{S_{2}^{\pm }}^{2} &=&\mu _{5}^{2}+\left( \lambda _{16}+\lambda _{18}\right) v_{1}^{2}. \end{eqnarray} The squared mass matrix for the CP-odd neutral scalar sector, even under the remnant $Z_{2}$ symmetry, in the basis $\left( \func{Im}\chi _{L}^{0},\func{Im}\chi _{R}^{0},\phi _{1I}^{0},\phi _{2I}^{0}\right) $ has the form: \begin{equation} \mathbf{M}_{CP-\text{odd}}^{2}=\left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 4\mu ^{2} & 0 \\ 0 & 0 & 0 & 4\mu ^{2}-\left( \lambda _{5}+\lambda _{7}\right) v_{1}^{2} \end{array} \right) \end{equation} The massless scalar eigenstates $\func{Im}\chi _{L}^{0}$ and $\func{Im}\chi _{R}^{0}$ correspond to the Goldstone bosons associated with the longitudinal components of the $Z$ and $Z^{\prime }$ gauge bosons. Furthermore, the $Z_{2}$-even CP-odd neutral scalar sector contains two massive CP-odd scalars whose squared masses are given by: \begin{eqnarray} m_{A_{1}^{0}}^{2} &=&4\mu ^{2}, \\ m_{A_{2}^{0}}^{2} &=&4\mu ^{2}-\left( \lambda _{5}+\lambda _{7}\right) v_{1}^{2}.
\end{eqnarray} Moreover, the squared mass matrix for the CP-odd neutral scalar sector, odd under the remnant $Z_{2}$ symmetry, in the basis $\left( \func{Im}\phi _{L}^{0},\func{Im}\phi _{R}^{0}\right) $ has the form: \begin{equation} \widetilde{\mathbf{M}}_{CP-\text{odd}}^{2}=\left( \begin{array}{cc} \frac{1}{2}\left[ \mu _{4}^{2}+\left( \lambda _{15}+\lambda _{17}\right) v_{1}^{2}\right] & -\lambda _{19}v_{L}v_{R} \\ -\lambda _{19}v_{L}v_{R} & \frac{1}{2}\left[ \mu _{5}^{2}+\left( \lambda _{16}+\lambda _{18}\right) v_{1}^{2}\right] \end{array} \right) \end{equation} This matrix can be diagonalized as follows: \begin{eqnarray} R_{P}^{T}\widetilde{\mathbf{M}}_{CP-\text{odd}}^{2}R_{P} &=&\left( \begin{array}{cc} \frac{A_{P}+B_{P}}{2}+\frac{1}{2}\sqrt{\left( A_{P}-B_{P}\right) ^{2}+4C_{P}^{2}} & 0 \\ 0 & \frac{A_{P}+B_{P}}{2}-\frac{1}{2}\sqrt{\left( A_{P}-B_{P}\right) ^{2}+4C_{P}^{2}} \end{array} \right) , \notag \label{eq:Theta-P} \\ R_{P} &=&\left( \begin{array}{cc} \cos \theta _{P} & -\sin \theta _{P} \\ \sin \theta _{P} & \cos \theta _{P} \end{array} \right) , \notag \\ A_{P} &=&\frac{1}{2}\left[ \mu _{4}^{2}+\left( \lambda _{15}+\lambda _{17}\right) v_{1}^{2}\right] ,\hspace{0.5cm}\hspace{0.7cm}B_{P}=\frac{1}{2}\left[ \mu _{5}^{2}+\left( \lambda _{16}+\lambda _{18}\right) v_{1}^{2}\right] , \notag \\ C_{P} &=&-\lambda _{19}v_{L}v_{R},\hspace{0.7cm}\hspace{0.7cm}\tan 2\theta _{P}=\frac{2C_{P}}{A_{P}-B_{P}}. \end{eqnarray} Consequently, the physical scalar mass eigenstates $P_{1,2}$ are given by: \begin{equation} \left( \begin{array}{c} P_{1} \\ P_{2} \end{array} \right) =\left( \begin{array}{cc} \cos \theta _{P} & \sin \theta _{P} \\ -\sin \theta _{P} & \cos \theta _{P} \end{array} \right) \left( \begin{array}{c} \func{Im}\phi _{L}^{0} \\ \func{Im}\phi _{R}^{0} \end{array} \right) .
\end{equation} Their squared masses are: \begin{equation} m_{P_{1}}^{2}=\frac{A_{P}+B_{P}}{2}+\frac{1}{2}\sqrt{\left( A_{P}-B_{P}\right) ^{2}+4C_{P}^{2}},\hspace{0.7cm}\hspace{0.7cm}m_{P_{2}}^{2}=\frac{A_{P}+B_{P}}{2}-\frac{1}{2}\sqrt{\left( A_{P}-B_{P}\right) ^{2}+4C_{P}^{2}}. \end{equation} The squared mass matrix for the CP-even neutral scalar sector in the basis $\left( \phi _{1R}^{0},\func{Re}\chi _{L}^{0},\phi _{2R}^{0},\func{Re}\chi _{R}^{0}\right) $ takes the form: \begin{equation} \mathbf{M}_{CP-\text{even}}^{2}=\left( \begin{array}{cccc} 2\left( \lambda _{4}+\lambda _{5}+\lambda _{6}+\lambda _{7}\right) v_{1}^{2} & \left( \lambda _{8}+\lambda _{10}\right) v_{1}v_{L} & 0 & \left( \lambda _{9}+\lambda _{11}\right) v_{1}v_{R} \\ \left( \lambda _{8}+\lambda _{10}\right) v_{1}v_{L} & 2\lambda _{1}v_{L}^{2} & 0 & \lambda _{3}v_{L}v_{R} \\ 0 & 0 & -\left( \lambda _{5}+\lambda _{7}\right) v_{1}^{2} & 0 \\ \left( \lambda _{9}+\lambda _{11}\right) v_{1}v_{R} & \lambda _{3}v_{L}v_{R} & 0 & 2\lambda _{2}v_{R}^{2} \end{array} \right) \end{equation} On the other hand, the squared mass matrix for the CP-even neutral scalar sector, odd under the remnant $Z_{2}$ symmetry, in the basis $\left( \func{Re}\phi _{L}^{0},\func{Re}\phi _{R}^{0}\right) $ has the form: \begin{equation} \widetilde{\mathbf{M}}_{CP-\text{even}}^{2}=\left( \begin{array}{cc} \frac{1}{2}\left[ \mu _{4}^{2}+\left( \lambda _{15}+\lambda _{17}\right) v_{1}^{2}\right] & \lambda _{19}v_{L}v_{R} \\ \lambda _{19}v_{L}v_{R} & \frac{1}{2}\left[ \mu _{5}^{2}+\left( \lambda _{16}+\lambda _{18}\right) v_{1}^{2}\right] \end{array} \right) \end{equation} This matrix can be diagonalized as follows: \begin{eqnarray} R_{S}^{T}\widetilde{\mathbf{M}}_{CP-\text{even}}^{2}R_{S} &=&\left( \begin{array}{cc} \frac{A_{S}+B_{S}}{2}-\frac{1}{2}\sqrt{\left( A_{S}-B_{S}\right) ^{2}+4C_{S}^{2}}
& 0 \\ 0 & \frac{A_{S}+B_{S}}{2}+\frac{1}{2}\sqrt{\left( A_{S}-B_{S}\right) ^{2}+4C_{S}^{2}} \end{array} \right) , \notag \label{eq:Theta-S} \\ R_{S} &=&\left( \begin{array}{cc} \cos \theta _{S} & -\sin \theta _{S} \\ \sin \theta _{S} & \cos \theta _{S} \end{array} \right) , \notag \\ A_{S} &=&\frac{1}{2}\left[ \mu _{4}^{2}+\left( \lambda _{15}+\lambda _{17}\right) v_{1}^{2}\right] ,\hspace{0.5cm}\hspace{0.7cm}B_{S}=\frac{1}{2}\left[ \mu _{5}^{2}+\left( \lambda _{16}+\lambda _{18}\right) v_{1}^{2}\right] , \notag \\ C_{S} &=&\lambda _{19}v_{L}v_{R},\hspace{0.7cm}\hspace{0.7cm}\tan 2\theta _{S}=\frac{2C_{S}}{A_{S}-B_{S}}. \end{eqnarray} Consequently, the physical scalar mass eigenstates of the matrix $\widetilde{\mathbf{M}}_{CP-\text{even}}^{2}$ are given by: \begin{equation} \left( \begin{array}{c} S_{1} \\ S_{2} \end{array} \right) =\left( \begin{array}{cc} \cos \theta _{S} & \sin \theta _{S} \\ -\sin \theta _{S} & \cos \theta _{S} \end{array} \right) \left( \begin{array}{c} \func{Re}\phi _{L}^{0} \\ \func{Re}\phi _{R}^{0} \end{array} \right) . \end{equation} Their squared masses are: \begin{equation} m_{S_{1/2}}^{2}=\frac{A_{S}+B_{S}}{2}\pm \frac{1}{2}\sqrt{\left( A_{S}-B_{S}\right) ^{2}+4C_{S}^{2}}\;. \end{equation} Correlations between the masses of the non-SM scalars are shown in Figure \ref{scalarcorrelations} and indicate that there are a large number of solutions for the scalar masses consistent with experimental bounds.
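The $2\times 2$ diagonalization above can be cross-checked numerically. The sketch below uses purely illustrative benchmark inputs (not a fit point of the model) and verifies that the rotation $R_{S}$ built from $\tan 2\theta _{S}=2C_{S}/(A_{S}-B_{S})$ reproduces the closed-form squared masses:

```python
import numpy as np

# Hypothetical benchmark inputs (illustration only, not a fit point)
mu4_sq, mu5_sq = 2.0e3**2, 3.0e3**2          # mu_4^2, mu_5^2 in GeV^2
lam15, lam16, lam17, lam18, lam19 = 0.10, 0.15, 0.20, 0.05, 0.30
v1, vL, vR = 246.0, 246.0, 14000.0           # VEVs in GeV

A_S = 0.5 * (mu4_sq + (lam15 + lam17) * v1**2)
B_S = 0.5 * (mu5_sq + (lam16 + lam18) * v1**2)
C_S = lam19 * vL * vR
M2 = np.array([[A_S, C_S], [C_S, B_S]])

# Closed forms: m^2_{S_{1/2}} = (A+B)/2 +/- sqrt((A-B)^2 + 4 C^2)/2
disc = np.sqrt((A_S - B_S)**2 + 4.0 * C_S**2)
m1_sq = 0.5 * (A_S + B_S) + 0.5 * disc
m2_sq = 0.5 * (A_S + B_S) - 0.5 * disc

# Mixing angle from tan(2 theta_S) = 2 C_S / (A_S - B_S)
theta = 0.5 * np.arctan2(2.0 * C_S, A_S - B_S)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = R.T @ M2 @ R

assert abs(D[0, 1]) < 1e-6 * disc            # the rotation diagonalizes M2
assert np.allclose(sorted(np.linalg.eigvalsh(M2)), sorted([m2_sq, m1_sq]))
```

The same check applies verbatim to the CP-odd pair $(A_{P},B_{P},C_{P})$, since only the sign of the off-diagonal entry changes.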
\begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{plotmH2vsmH1.jpg} \includegraphics[width=0.5\textwidth]{plotmA2vsmA1.jpg}\newline \includegraphics[width=0.5\textwidth]{plotmH1cvsmA2.jpg} \includegraphics[width=0.5\textwidth]{plotmH2cvsmA1.jpg}\newline \includegraphics[width=0.5\textwidth]{plotmH2cvsmA2.jpg} \includegraphics[width=0.5\textwidth]{plotmH2cvsmH1c.jpg}\newline \includegraphics[width=0.5\textwidth]{plotmH3vsvR.jpg} \caption{Correlations between the non-SM scalar masses (top plots). Correlation between the mass of the CP-even neutral scalar $H_{3}^{0}$ and the scale $v_R$ of breaking of the left-right symmetry (bottom plot).} \label{scalarcorrelations} \end{figure} \section{Higgs diphoton decay rate} \label{sec.Higgsdiphoton} The decay rate for the $h\rightarrow \gamma \gamma $ process takes the form: \begin{equation} \Gamma (h\rightarrow \gamma \gamma )=\dfrac{\alpha _{em}^{2}m_{h}^{3}}{256\pi ^{3}v^{2}}\left\vert \sum_{f}a_{hff}N_{C}Q_{f}^{2}F_{1/2}(\rho _{f})+a_{hWW}F_{1}(\rho _{W})+\sum_{k=1,2}\frac{C_{hH_{k}^{\pm }H_{k}^{\mp }}v}{2m_{H_{k}^{\pm }}^{2}}F_{0}(\rho _{H_{k}^{\pm }})\right\vert ^{2}, \end{equation} where $\rho _{i}$ are the mass ratios $\rho _{i}=\frac{m_{h}^{2}}{4M_{i}^{2}}$ with $M_{i}=m_{f},M_{W},m_{H_{k}^{\pm }}$; $\alpha _{em}$ is the fine structure constant; $N_{C}$ is the color factor ($N_{C}=1$ for leptons and $N_{C}=3$ for quarks) and $Q_{f}$ is the electric charge of the fermion in the loop. From the fermion-loop contributions we only consider the dominant top quark term. Furthermore, $C_{hH_{k}^{\pm }H_{k}^{\mp }}$ is the trilinear coupling between the SM-like Higgs boson and a pair of charged Higgs bosons, whereas $a_{htt}$ and $a_{hWW}$ are the deviation factors of the Higgs-top quark coupling and the Higgs-$W$ gauge boson coupling from their SM values, respectively (in the SM these factors are unity). Such deviation factors are close to unity in our model, as follows from the numerical analysis of its scalar, Yukawa and gauge sectors.
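The loop factors entering the decay rate above can be evaluated with a few lines of code. The sketch below implements the standard closed forms of $f(z)$, $F_{1/2}$, $F_{1}$ and $F_{0}$ and checks them against the well-known heavy-mass limits ($F_{1/2}\rightarrow 4/3$, $F_{1}\rightarrow -7$, $F_{0}\rightarrow 1/3$); it is intended only as an illustrative cross-check, not as the full rate computation:

```python
import cmath
import math

def f(z):
    """Scalar one-loop function f(z) with z = m_h^2 / (4 M^2)."""
    if z <= 1:
        return cmath.asin(math.sqrt(z))**2
    w = math.sqrt(1 - 1/z)
    return -0.25 * (cmath.log((1 + w) / (1 - w)) - 1j * math.pi)**2

def F_half(z):
    """Spin-1/2 loop factor."""
    return 2 * (z + (z - 1) * f(z)) / z**2

def F_one(z):
    """Spin-1 loop factor."""
    return -(2 * z**2 + 3 * z + 3 * (2 * z - 1) * f(z)) / z**2

def F_zero(z):
    """Spin-0 loop factor."""
    return -(z - f(z)) / z**2

# Heavy-mass (z -> 0) limits used as a sanity check
z = 1e-4
assert abs(F_half(z) - 4/3) < 1e-3
assert abs(F_one(z) + 7) < 1e-2
assert abs(F_zero(z) - 1/3) < 1e-3

# f(z) is continuous across the z = 1 threshold
assert abs(f(1 + 1e-6) - f(1 - 1e-6)) < 1e-2
```

For $z>1$ the factors develop an imaginary part, which is why the implementation works with complex arithmetic throughout.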
Furthermore, $F_{1/2}(z)$, $F_{1}(z)$ and $F_{0}(z)$ are the dimensionless loop factors for spin-$1/2$, spin-$1$ and spin-$0$ particles running in the internal lines of the loops. They are given by: \begin{align} F_{1/2}(z)& =2\left[ z+(z-1)f(z)\right] z^{-2}, \\ F_{1}(z)& =-\left[ 2z^{2}+3z+3(2z-1)f(z)\right] z^{-2}, \\ F_{0}(z)& =-\left[ z-f(z)\right] z^{-2}, \end{align} with \begin{equation} f(z)=\left\{ \begin{array}{lcc} \arcsin ^{2}\sqrt{z} & \text{for} & z\leq 1 \\ -\frac{1}{4}\left[ \ln \left( \frac{1+\sqrt{1-z^{-1}}}{1-\sqrt{1-z^{-1}}}\right) -i\pi \right] ^{2} & \text{for} & z>1 \end{array} \right. \end{equation} In order to study the implications of our model for the decay of the $126$ GeV Higgs boson into a photon pair, one introduces the Higgs diphoton signal strength $R_{\gamma \gamma }$, defined as: \begin{equation} R_{\gamma \gamma }=\frac{\sigma (pp\rightarrow h)\Gamma (h\rightarrow \gamma \gamma )}{\sigma (pp\rightarrow h)_{SM}\Gamma (h\rightarrow \gamma \gamma )_{SM}}\simeq a_{htt}^{2}\frac{\Gamma (h\rightarrow \gamma \gamma )}{\Gamma (h\rightarrow \gamma \gamma )_{SM}}. \label{eqn:hgg} \end{equation} The Higgs diphoton signal strength normalizes the $\gamma \gamma $ signal predicted by our model to the SM expectation. Here we have used the fact that in our model, as in the Standard Model, single Higgs production is dominated by gluon fusion. The ratio $R_{\gamma \gamma }$ has been measured by the CMS and ATLAS collaborations, with best-fit signal strengths \cite{Sirunyan:2018ouh,Aad:2019mbh}: \begin{equation} R_{\gamma \gamma }^{CMS}=1.18_{-0.14}^{+0.17}\quad \text{and}\quad R_{\gamma \gamma }^{ATLAS}=0.96\pm 0.14.
\label{eqn:rgg} \end{equation} The correlation of the Higgs diphoton signal strength with the charged scalar mass $m_{H_{1}^{\pm }}$ is shown in Figure \ref{Higgsdiphoton}, which indicates that our model successfully accommodates the current Higgs diphoton decay rate constraints. Furthermore, as indicated by Figure \ref{Higgsdiphoton}, our model favours a Higgs diphoton decay rate lower than the SM expectation but inside the $3\sigma$ experimentally allowed range. \begin{figure}[tbp] \centering \includegraphics[width=9.0cm, height=6.5cm]{CorrelationRdiphoton1.jpg} \includegraphics[width=9.0cm, height=6.5cm]{CorrelationRdiphoton2.jpg} \caption{Correlation of the Higgs diphoton signal strength with the $a_{hWW}$ deviation factor from the SM Higgs-$W$ gauge boson coupling.} \label{Higgsdiphoton} \end{figure} \newpage \section{Muon and electron anomalous magnetic moments} \label{sec.gminus2} In this section we analyze the implications of our model for the muon and electron anomalous magnetic moments. The muon and electron anomalous magnetic moments receive contributions arising from vertex diagrams involving the exchange of neutral scalars and charged leptons running in the internal lines of the loop. The Feynman diagrams corresponding to these contributions are shown in Figure \ref{Diagramsgminus2}. \begin{figure}[tbp] \centering \includegraphics[width = 0.9\textwidth]{Diagramsgminus2mu.pdf}\vspace{-11cm} \caption{One-loop Feynman diagrams contributing to the muon and electron anomalous magnetic moments.
Here $i=1,2,3$, $k=1,2$.} \label{Diagramsgminus2} \end{figure} Then, in our model the contributions to the muon and electron anomalous magnetic moments take the form: \begin{eqnarray} \Delta a_{\mu } &=&\dsum\limits_{k=1}^{2}\frac{\func{Re}\left( \beta _{2k}\gamma _{k2}^{\ast }\right) m_{\mu }^{2}}{8\pi ^{2}}\left( R_{CP-\text{even}}^{T}\right) _{21}\left( R_{CP-\text{even}}^{T}\right) _{41}I_{S}^{\left( \mu \right) }\left( m_{E_{k}},m_{h^{0}}\right) \notag \\ &&+\dsum\limits_{k=1}^{2}\frac{\func{Re}\left( \beta _{2k}\gamma _{k2}^{\ast }\right) m_{\mu }^{2}}{8\pi ^{2}}\dsum\limits_{i=1}^{3}\left( R_{CP-\text{even}}^{T}\right) _{2,i+1}\left( R_{CP-\text{even}}^{T}\right) _{4,i+1}I_{S}^{\left( \mu \right) }\left( m_{E_{k}},m_{H_{i}^{0}}\right) \notag \\ &&+\frac{m_{\mu }^{2}\func{Re}\left( \kappa _{2}\vartheta _{2}^{\ast }\right) }{8\pi ^{2}}\left[ I_{S}^{\left( \mu \right) }\left( m_{E^{\prime }},m_{S_{1}}\right) -I_{P}^{\left( \mu \right) }\left( m_{E^{\prime }},m_{P_{1}}\right) -I_{S}^{\left( \mu \right) }\left( m_{E^{\prime }},m_{S_{2}}\right) +I_{P}^{\left( \mu \right) }\left( m_{E^{\prime }},m_{P_{2}}\right) \right] \sin \theta \cos \theta \notag \\ &&+\frac{\left\vert y_{22}^{\left( L\right) }\right\vert ^{2}m_{\mu }^{2}}{8\pi ^{2}}\left[ \dsum\limits_{i=1}^{3}\left\vert \left( R_{CP-\text{even}}^{T}\right) _{3,i+1}\right\vert ^{2}I_{S}^{\left( \mu \right) }\left( m_{\mu },m_{H_{i}^{0}}\right) +\dsum\limits_{i=1}^{2}\left\vert \left( R_{CP-\text{odd}}^{T}\right) _{4,i+2}\right\vert ^{2}I_{P}^{\left( \mu \right) }\left( m_{\mu },m_{A_{i}^{0}}\right) \right] \notag \\ \Delta a_{e} &=&\dsum\limits_{k=1}^{2}\frac{\func{Re}\left( \beta _{1k}\gamma _{k1}^{\ast }\right) m_{e}^{2}}{8\pi ^{2}}\left(
R_{CP-\text{even}}^{T}\right) _{21}\left( R_{CP-\text{even}}^{T}\right) _{41}I_{S}^{\left( e\right) }\left( m_{E_{k}},m_{h^{0}}\right) \\ &&+\dsum\limits_{k=1}^{2}\frac{\func{Re}\left( \beta _{1k}\gamma _{k1}^{\ast }\right) m_{e}^{2}}{8\pi ^{2}}\dsum\limits_{i=1}^{3}\left( R_{CP-\text{even}}^{T}\right) _{2,i+1}\left( R_{CP-\text{even}}^{T}\right) _{4,i+1}I_{S}^{\left( e\right) }\left( m_{E_{k}},m_{H_{i}^{0}}\right) \notag \\ &&+\frac{m_{e}^{2}\func{Re}\left( \kappa _{1}\vartheta _{1}^{\ast }\right) }{8\pi ^{2}}\left[ I_{S}^{\left( e\right) }\left( m_{E^{\prime }},m_{S_{1}}\right) -I_{P}^{\left( e\right) }\left( m_{E^{\prime }},m_{P_{1}}\right) -I_{S}^{\left( e\right) }\left( m_{E^{\prime }},m_{S_{2}}\right) +I_{P}^{\left( e\right) }\left( m_{E^{\prime }},m_{P_{2}}\right) \right] \sin \theta \cos \theta \notag \\ &&+\frac{\left\vert y_{11}^{\left( L\right) }\right\vert ^{2}m_{e}^{2}}{8\pi ^{2}}\left[ \dsum\limits_{i=1}^{3}\left\vert \left( R_{CP-\text{even}}^{T}\right) _{3,i+1}\right\vert ^{2}I_{S}^{\left( e\right) }\left( m_{e},m_{H_{i}^{0}}\right) +\dsum\limits_{i=1}^{2}\left\vert \left( R_{CP-\text{odd}}^{T}\right) _{4,i+2}\right\vert ^{2}I_{P}^{\left( e\right) }\left( m_{e},m_{A_{i}^{0}}\right) \right] \notag \end{eqnarray} where $\theta =\theta _{S}=-\theta _{P}$, with $\theta _{S}$ and $\theta _{P}$ being the $\func{Re}\phi _{L}^{0}-\func{Re}\phi _{R}^{0}$ and $\func{Im}\phi _{L}^{0}-\func{Im}\phi _{R}^{0}$ mixing angles, respectively.
Furthermore, the loop function $I_{S\left( P\right) }^{\left( e,\mu \right) }\left( m_{E},m_{S,P}\right) $ has the form \cite{Diaz:2002uk,Jegerlehner:2009ry,Kelso:2014qka,Lindner:2016bgg,Kowalska:2017iqv}: \begin{equation} I_{S\left( P\right) }^{\left( e,\mu \right) }\left( m_{E},m_{S,P}\right) =\int_{0}^{1}\frac{x^{2}\left( 1-x\pm \frac{m_{E}}{m_{e,\mu }}\right) }{m_{e,\mu }^{2}x^{2}+\left( m_{E}^{2}-m_{e,\mu }^{2}\right) x+m_{S,P}^{2}\left( 1-x\right) }dx \end{equation} and the dimensionless parameters $\beta _{1k}$, $\beta _{2k}$, $\gamma _{k1}$, $\gamma _{k2}$, $\kappa _{1}$, $\kappa _{2}$, $\vartheta _{1}$, $\vartheta _{2}$ are given by: \begin{eqnarray} \beta _{1k} &=&\dsum\limits_{i=1}^{3}x_{ik}^{\left( E\right) }\left( V_{lL}^{\dagger }\right) _{1i},\hspace{0.7cm}\hspace{0.7cm}\gamma _{k1}=\dsum\limits_{j=1}^{3}z_{kj}^{\left( E\right) }\left( V_{lR}\right) _{j1}, \\ \beta _{2k} &=&\dsum\limits_{i=1}^{3}x_{ik}^{\left( E\right) }\left( V_{lL}^{\dagger }\right) _{2i},\hspace{0.7cm}\hspace{0.7cm}\gamma _{k2}=\dsum\limits_{j=1}^{3}z_{kj}^{\left( E\right) }\left( V_{lR}\right) _{j2}, \\ \kappa _{1} &=&\dsum\limits_{i=1}^{3}w_{i}^{\left( E^{\prime }\right) }\left( V_{lL}^{\dagger }\right) _{1i},\hspace{0.7cm}\hspace{0.7cm}\vartheta _{1}=\dsum\limits_{j=1}^{3}r_{j}^{\left( E^{\prime }\right) }\left( V_{lR}\right) _{j1}, \\ \kappa _{2} &=&\dsum\limits_{i=1}^{3}w_{i}^{\left( E^{\prime }\right) }\left( V_{lL}^{\dagger }\right) _{2i},\hspace{0.7cm}\hspace{0.7cm}\vartheta _{2}=\dsum\limits_{j=1}^{3}r_{j}^{\left( E^{\prime }\right) }\left( V_{lR}\right) _{j2}, \end{eqnarray} where $V_{lL}$ and $V_{lR}$ are the rotation matrices that diagonalize $\widetilde{M}_{E}$ according to the relation: \begin{equation} V_{lL}^{\dagger }\widetilde{M}_{E}V_{lR}=diag\left( m_{e},m_{\mu },m_{\tau }\right) \end{equation} The muon and electron anomalous magnetic moments are constrained to be in the ranges \cite{Abi:2021gix,Morel:2020dww}: \begin{eqnarray} \left( \Delta a_{\mu }\right) _{\exp } &=&\left(
2.51\pm 0.59\right) \times 10^{-9}, \notag \\ (\Delta a_{e})_{\text{exp}} &=&(4.8\pm 3.0)\times 10^{-13}. \end{eqnarray} We plot in Figure \ref{gminus2} the correlations of the muon and electron anomalous magnetic moments with the masses $m_{A^0_{1}}$ and $m_{A^0_{2}}$ of the CP-odd neutral scalars (top plots) as well as the correlation between the electron and muon anomalous magnetic moments (bottom plot). We find that our model can successfully accommodate the experimental values of the muon and electron anomalous magnetic moments. \begin{figure}[tbp] \centering \includegraphics[width=8.9cm, height=6.0cm]{deltaamuvsmA1} \includegraphics[width=8.9cm, height=6.0cm]{deltaamuvsmA2}\newline \includegraphics[width=8.9cm, height=6.0cm]{deltaaevsmA1} \includegraphics[width=8.9cm, height=6.0cm]{deltaaevsmA2}\newline \includegraphics[width=8.9cm, height=6.0cm]{deltaaevsdeltaamu.jpg} \caption{Correlations of the muon and electron anomalous magnetic moments with the masses $m_{A^0_{1}}$ and $m_{A^0_{2}}$ of the CP-odd neutral scalars (top plots). Correlation between the electron and muon anomalous magnetic moments (bottom plot).} \label{gminus2} \end{figure} \section{Heavy scalar production at the LHC} \label{HeavyScalar} In this section we discuss single production of the heavy scalar $H_{1}^{0}$ at a proton-proton collider. Such production at the LHC is dominated by the gluon fusion mechanism, which is a one-loop process mediated by the top quark.
Thus, the total $H_{1}^{0}$ production cross section in proton-proton collisions with center of mass energy $\sqrt{S}$ takes the form: \begin{equation} \sigma _{pp\rightarrow gg\rightarrow H_{1}^{0}}\left( S\right) =\frac{\alpha _{S}^{2}a_{H_{1}^{0}t\bar{t}}^{2}m_{H_{1}^{0}}^{2}}{64\pi v^{2}S}\left[ I\left( \frac{m_{H_{1}^{0}}^{2}}{m_{t}^{2}}\right) \right] ^{2}\int_{\ln \sqrt{\frac{m_{H_{1}^{0}}^{2}}{S}}}^{-\ln \sqrt{\frac{m_{H_{1}^{0}}^{2}}{S}}}f_{p/g}\left( \sqrt{\frac{m_{H_{1}^{0}}^{2}}{S}}e^{y},\mu ^{2}\right) f_{p/g}\left( \sqrt{\frac{m_{H_{1}^{0}}^{2}}{S}}e^{-y},\mu ^{2}\right) dy, \end{equation} where $f_{p/g}\left( x_{1},\mu ^{2}\right) $ and $f_{p/g}\left( x_{2},\mu ^{2}\right) $ are the distributions of gluons in the proton carrying momentum fractions $x_{1}$ and $x_{2}$ of the proton, respectively. Furthermore, $\mu =m_{H_{1}^{0}}$ is the factorization scale, whereas $I(z)$ has the form: \begin{equation} I(z)=\int_{0}^{1}dx\int_{0}^{1-x}dy\frac{1-4xy}{1-zxy}. \label{g1a} \end{equation} \begin{figure}[tbh] \resizebox{8.5cm}{8cm}{\includegraphics{pptoH1at14TeV}} \resizebox{8.5cm}{8cm}{\includegraphics{pptoH1at28TeV}} \caption{Total cross section for $H_{1}^{0}$ production via the gluon fusion mechanism at the LHC for $\protect\sqrt{S}=14$ TeV (left panel) and $\protect\sqrt{S}=28$ TeV (right panel) as a function of the heavy scalar mass $m_{H_{1}^{0}}$.} \label{pptoH1} \end{figure} Figure~\ref{pptoH1} shows the $H_{1}^{0}$ total production cross section at the LHC via the gluon fusion mechanism for $\sqrt{S}=14$ TeV (left plot) and $\sqrt{S}=28$ TeV (right plot), as a function of the scalar mass $m_{H_{1}^{0}}$, which is taken to range from $400$ GeV up to $600$ GeV. Furthermore, the coupling $a_{H_{1}^{0}t\bar{t}}$ of the heavy scalar $H_{1}^{0}$ to the top-antitop quark pair has been set equal to $0.4$, which is consistent with our numerical analysis of the scalar potential.
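The two-dimensional loop integral $I(z)$ can be checked with a simple midpoint quadrature. The sketch below is only an illustration, valid for $z<4$ (below the point where the integrand develops a pole on the integration domain); it reproduces the analytic limit $I(0)=1/3$:

```python
def I(z, n=400):
    """Midpoint-rule evaluation of
    I(z) = int_0^1 dx int_0^{1-x} dy (1 - 4 x y) / (1 - z x y),
    reliable for z < 4 (no pole of the integrand on the domain)."""
    total = 0.0
    h = 1.0 / n
    for i in range(n):
        x = (i + 0.5) * h
        ymax = 1.0 - x
        m = max(1, int(n * ymax))       # adapt the y-grid to the strip width
        hy = ymax / m
        for j in range(m):
            y = (j + 0.5) * hy
            total += (1 - 4 * x * y) / (1 - z * x * y) * h * hy
    return total

# Analytic limit: I(0) = 1/2 - 4 * (1/24) = 1/3
assert abs(I(0.0) - 1/3) < 1e-3

# Below-threshold example: a hypothetical m_H = 300 GeV, m_t = 172.5 GeV
z = (300.0 / 172.5)**2
# Since 4 x y <= 1 on the domain, the integrand is pointwise >= its z = 0
# value for z > 0, so I(z) must exceed 1/3
assert I(z) > 1/3
```

For $m_{H_{1}^{0}}>2m_{t}$ the loop function acquires an imaginary part and a principal-value treatment is needed, which this naive quadrature does not provide.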
In the aforementioned mass region for the heavy $H_{1}^{0}$ scalar, we find that the total production cross section ranges from $1.2$ pb down to $0.3$ pb. At the proposed energy upgrade of the LHC with $\sqrt{S}=28$ TeV, the total cross section for $H_{1}^{0}$ production is enhanced, reaching values between $5$ pb and $1.5$ pb in the aforementioned mass range, as indicated in the right panel of Figure~\ref{pptoH1}. The heavy neutral $H_{1}^{0}$ scalar, after being produced, will have dominant decay modes into top-antitop quark pairs, SM Higgs boson pairs as well as pairs of SM gauge bosons. Thus, the observation of an excess of events in the multilepton or multijet final states over the SM background would be a smoking-gun signature of this model, whose observation would be crucial to assess its viability. \section{$Z^\prime$ gauge boson production at the LHC} \label{Zprime} In this section we discuss single heavy $Z^{\prime }$ gauge boson production via the Drell-Yan mechanism at a proton-proton collider.
We consider the dominant contributions due to the parton distribution functions of the light up, down and strange quarks, so that the total cross section for the production of a $Z^{\prime }$ via quark-antiquark annihilation in proton-proton collisions with center of mass energy $\sqrt{S}$ takes the form: \begin{equation} \sigma _{pp\rightarrow Z^{\prime }}^{\left( \text{Drell-Yan}\right) }(S)=\frac{g_{R}^{2}\pi }{24S}\int_{\ln \sqrt{\frac{m_{Z^{\prime }}^{2}}{S}}}^{-\ln \sqrt{\frac{m_{Z^{\prime }}^{2}}{S}}}\dsum\limits_{q=u,d,s}f_{p/q}\left( \sqrt{\frac{m_{Z^{\prime }}^{2}}{S}}e^{y},\mu ^{2}\right) f_{p/\overline{q}}\left( \sqrt{\frac{m_{Z^{\prime }}^{2}}{S}}e^{-y},\mu ^{2}\right) dy \end{equation} where $f_{p/u}\left( x_{1},\mu ^{2}\right) $ ($f_{p/\overline{u}}\left( x_{2},\mu ^{2}\right) $), $f_{p/d}\left( x_{1},\mu ^{2}\right) $ ($f_{p/\overline{d}}\left( x_{2},\mu ^{2}\right) $) and $f_{p/s}\left( x_{1},\mu ^{2}\right) $ ($f_{p/\overline{s}}\left( x_{2},\mu ^{2}\right) $) are the distributions of the light up, down and strange quarks (antiquarks) in the proton, carrying momentum fractions $x_{1}$ ($x_{2}$) of the proton. The factorization scale is taken to be $\mu =m_{Z^{\prime }}$. \begin{figure}[tbh] \resizebox{8.5cm}{8cm}{\includegraphics{sigmaqqtoZprime14TeV}} \resizebox{8.5cm}{8cm}{\includegraphics{sigmaqqtoZprime28TeV}} \caption{Total cross section for $Z^{\prime }$ production via the Drell-Yan mechanism at a proton-proton collider for $\protect\sqrt{S}=14$ TeV (left panel) and $\protect\sqrt{S}=28$ TeV (right panel) as a function of the $Z^{\prime }$ mass.} \label{qqtoZprime} \end{figure} Fig.~\ref{qqtoZprime} displays the $Z^{\prime }$ total production cross section at the LHC via the Drell-Yan mechanism for $\sqrt{S}=14$ TeV (left panel) and $\sqrt{S}=28$ TeV (right panel) as a function of the $Z^{\prime }$ mass $m_{Z^{\prime }}$ in the range from $7$ TeV up to $8$ TeV.
We consider $Z^{\prime }$ gauge boson masses larger than $7$ TeV and we set $g_R=1$, which is consistent with the constraint $\frac{M_{Z^{\prime }}}{g_R}>7$ TeV arising from LEP I and II measurements of $e^{+}e^{-}\rightarrow l^{+}l^{-}$ \cite{LEP:2004xhf,Carena:2004xs,Das:2021esm} as well as with the ones resulting from LHC searches \cite{ATLAS:2019erb,CMS:2021ctt}. Limits on the ratio $\frac{M_{Z^{\prime }}}{g_R}$ are derived in Ref. \cite{Das:2021esm}, both for LEP II and for different values of the center of mass energy $\sqrt{s}$ of the future International Linear Collider (ILC). In this work we use the LEP II bound $\frac{M_{Z^{\prime }}}{g_R}>7$ TeV, since the other bounds correspond to projected limits for experiments that have not yet started. With respect to the bounds on the $W^{\prime }$ gauge boson mass, the CMS and ATLAS experiments at CERN have found that the $W^{\prime }$ gauge boson should be heavier than $6$ TeV \cite{CMS:2021qef} and $5$ TeV \cite{ATLAS:2018dcj}, respectively. For this region of $Z^{\prime }$ masses we find that the total production cross section ranges from $0.85$ fb down to $0.01$ fb. The heavy neutral $Z^{\prime }$ gauge boson, after being produced, will subsequently decay into SM fermion-antifermion pairs, thus implying that the observation of an excess of events in the dilepton or dijet final states over the SM background would be a signal in support of this model at the LHC. On the other hand, at the proposed energy upgrade of the LHC with $28$ TeV center of mass energy, the total cross section for the Drell-Yan production of a heavy $Z^{\prime }$ neutral gauge boson gets significantly enhanced, reaching values ranging from $26$ fb down to $12$ fb, as indicated in the right panel of Fig.~\ref{qqtoZprime}.
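The structure of the rapidity integral in the Drell-Yan cross section can be sketched with a toy parton density. The snippet below is purely illustrative: a realistic evaluation requires fitted PDF sets (e.g. CT18 or MSTW accessed through LHAPDF) and a sum over the individual quark flavours, whereas here a single toy density is used for both quark and antiquark:

```python
import math

def toy_pdf(x):
    # Toy valence-like parton density (illustration only)
    return 5.0 * x**-0.5 * (1.0 - x)**3

def sigma_DY(mZp, sqrtS, gR=1.0, n=2000):
    """Midpoint evaluation of sigma = (gR^2 pi / 24 S) * integral over
    rapidity y of pdf(x1) pdf(x2), with x1,2 = sqrt(tau) e^{+-y} and
    tau = mZp^2 / S; the result is in GeV^-2."""
    S = sqrtS**2
    tau = mZp**2 / S
    Y = -math.log(math.sqrt(tau))       # rapidity integration limit
    h = 2.0 * Y / n
    lum = 0.0
    for i in range(n):
        y = -Y + (i + 0.5) * h
        x1 = math.sqrt(tau) * math.exp(y)
        x2 = math.sqrt(tau) * math.exp(-y)
        lum += toy_pdf(x1) * toy_pdf(x2) * h
    return gR**2 * math.pi / (24.0 * S) * lum

s14 = sigma_DY(7000.0, 14000.0)
s28 = sigma_DY(7000.0, 28000.0)
assert s14 > 0.0
assert s28 > s14   # the cross section grows with the collider energy
```

Even with this toy density, the qualitative trend of Fig.~\ref{qqtoZprime} is visible: at fixed $m_{Z^{\prime }}$ the cross section rises steeply with $\sqrt{S}$ because the parton momentum fractions probed become smaller.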
\newpage \section{Meson oscillations} \label{FCNC} In this section we discuss the implications of our model for the flavour-changing neutral current (FCNC) interactions in the down-type quark sector. The FCNC Yukawa interactions in the down-type quark sector give rise to meson oscillations. The following effective Hamiltonians describe the $K^{0}-\bar{K}^{0}$, $B_{d}^{0}-\bar{B}_{d}^{0}$ and $B_{s}^{0}-\bar{B}_{s}^{0}$ mixings: \begin{equation} \mathcal{H}_{eff}^{\left( K^{0}-\bar{K}^{0}\right) }=\frac{G_{F}^{2}m_{W}^{2}}{16\pi ^{2}}\sum_{i=1}^{3}C_{i}^{\left( K^{0}-\bar{K}^{0}\right) }\left( \mu \right) O_{i}^{\left( K^{0}-\bar{K}^{0}\right) }\left( \mu \right) , \end{equation} \begin{equation} \mathcal{H}_{eff}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }=\frac{G_{F}^{2}m_{W}^{2}}{16\pi ^{2}}\sum_{i=1}^{3}C_{i}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }\left( \mu \right) O_{i}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }\left( \mu \right) , \end{equation} \begin{equation} \mathcal{H}_{eff}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }=\frac{G_{F}^{2}m_{W}^{2}}{16\pi ^{2}}\sum_{i=1}^{3}C_{i}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }\left( \mu \right) O_{i}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }\left( \mu \right) . \end{equation} In our analysis of meson oscillations we follow the approach of \cite{Dedes:2002er,Aranda:2012bv}.
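With the effective-Hamiltonian normalization above, a dimensionless Wilson coefficient is obtained from the tree-level scalar-exchange combination (of the schematic form $y^{2}/m^{2}$, with CP-even and CP-odd scalar contributions entering with opposite signs) by a simple rescaling. A minimal sketch with hypothetical couplings and masses (illustration only, not fit values of the model):

```python
import math

G_F = 1.1663788e-5   # Fermi constant, GeV^-2
m_W = 80.377         # W boson mass, GeV

# Hypothetical flavour-violating Yukawas and scalar masses (illustration only)
cp_even = {125.25: 1.0e-6, 800.0: 5.0e-4}   # mass (GeV) -> coupling y
cp_odd = {600.0: 5.0e-4}                    # mass (GeV) -> coupling y

# Tree-level scalar exchange: CP-even and CP-odd contributions enter
# with opposite signs, so partial cancellations are possible
C1_tilde = (sum(y**2 / m**2 for m, y in cp_even.items())
            - sum(y**2 / m**2 for m, y in cp_odd.items()))   # GeV^-2

# With H_eff = (G_F^2 m_W^2 / 16 pi^2) sum_i C_i O_i, the dimensionless
# Wilson coefficient is C_1 = 16 pi^2 / (G_F^2 m_W^2) * C1_tilde
C1 = 16.0 * math.pi**2 / (G_F**2 * m_W**2) * C1_tilde
print(C1)
```

For these inputs the lighter CP-odd scalar dominates, so $C_{1}$ comes out negative; the same rescaling applies to every coefficient in the $K^{0}$, $B_{d}^{0}$ and $B_{s}^{0}$ systems.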
The $K^{0}-\bar{K}^{0}$, $B_{d}^{0}-\bar{B}% _{d}^{0}$ and $B_{s}^{0}-\bar{B}_{s}^{0}$ meson mixings receive tree level contributions corresponding to the exchange of neutral CP even and CP odd scalars, thus giving rise to the following operators: \begin{eqnarray} O_{1}^{\left( K^{0}-\bar{K}^{0}\right) } &=&\left( \overline{s}P_{L}d\right) \left( \overline{s}P_{L}d\right) ,\hspace{1cm}O_{2}^{\left( K^{0}-\bar{K}% ^{0}\right) }=\left( \overline{s}P_{R}d\right) \left( \overline{s}% P_{R}d\right) ,\hspace{1cm}O_{3}^{\left( K^{0}-\bar{K}^{0}\right) }=\left( \overline{s}P_{L}d\right) \left( \overline{s}P_{R}d\right) , \label{op3f} \\ O_{1}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) } &=&\left( \overline{d}% P_{L}b\right) \left( \overline{d}P_{L}b\right) ,\hspace{1cm}O_{2}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }=\left( \overline{d}P_{R}b\right) \left( \overline{d}P_{R}b\right) ,\hspace{1cm}O_{3}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) }=\left( \overline{d}P_{L}b\right) \left( \overline{d}% P_{R}b\right) ,\hspace{0.7cm} \\ O_{1}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) } &=&\left( \overline{s}% P_{L}b\right) \left( \overline{s}P_{L}b\right) ,\hspace{1cm}O_{2}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }=\left( \overline{s}P_{R}b\right) \left( \overline{s}P_{R}b\right) ,\hspace{1cm}O_{3}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) }=\left( \overline{s}P_{L}b\right) \left( \overline{s}% P_{R}b\right) , \end{eqnarray}% where the corresponding Wilson coefficients are given by: \begin{eqnarray} C_{1}^{\left( K^{0}-\bar{K}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{1}^{\left( K^{0}-\bar{K}^{0}\right) },% \hspace{0.7cm}\hspace{0.7cm}\widetilde{C}_{1}^{\left( K^{0}-\bar{K}% ^{0}\right) }=\frac{y_{h\overline{s}_{R}d_{L}}^{2}}{m_{h}^{2}}+\sum_{i=1}^{3}% \frac{y_{H_{i}^{0}\overline{s}_{R}d_{L}}^{2}}{m_{H_{i}^{0}}^{2}}% -\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{s}_{R}d_{L}}^{2}}{% m_{A_{i}^{0}}^{2}}, \\ C_{2}^{\left( K^{0}-\bar{K}^{0}\right) } 
&=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{2}^{\left( K^{0}-\bar{K}^{0}\right) },% \hspace{0.7cm}\hspace{0.7cm}\widetilde{C}_{2}^{\left( K^{0}-\bar{K}% ^{0}\right) }=\frac{y_{h\overline{s}_{L}d_{R}}^{2}}{m_{h}^{2}}+\sum_{i=1}^{3}% \frac{y_{H_{i}^{0}\overline{s}_{L}d_{R}}^{2}}{m_{H_{i}^{0}}^{2}}% -\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{s}_{L}d_{R}}^{2}}{% m_{A_{i}^{0}}^{2}}, \\ C_{3}^{\left( K^{0}-\bar{K}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{3}^{\left( K^{0}-\bar{K}^{0}\right) },% \hspace{0.3cm}\widetilde{C}_{3}^{\left( K^{0}-\bar{K}^{0}\right) }=\frac{y_{h% \overline{s}_{R}d_{L}}y_{h\overline{s}_{L}d_{R}}}{m_{h}^{2}}+\sum_{i=1}^{3}% \frac{y_{H_{i}^{0}\overline{s}_{R}d_{L}}y_{H_{i}^{0}\overline{s}_{L}d_{R}}}{% m_{H_{i}^{0}}^{2}}-\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{s}% _{R}d_{L}}y_{A_{i}^{0}\overline{s}_{L}d_{R}}}{m_{A_{i}^{0}}^{2}}, \end{eqnarray}% \begin{eqnarray} C_{1}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{1}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) },\hspace{0.7cm}\hspace{0.7cm}\widetilde{C}_{1}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }=\frac{y_{h\overline{d}_{R}b_{L}}^{2}}{% m_{h}^{2}}+\sum_{i=1}^{3}\frac{y_{H_{i}^{0}\overline{d}_{R}b_{L}}^{2}}{% m_{H_{i}^{0}}^{2}}-\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{d}_{R}b_{L}}^{2}% }{m_{A_{i}^{0}}^{2}}, \\ C_{2}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{2}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) },\hspace{0.7cm}\hspace{0.7cm}\widetilde{C}_{2}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }=\frac{y_{h\overline{d}_{L}b_{R}}^{2}}{% m_{h}^{2}}+\sum_{i=1}^{3}\frac{y_{H_{i}^{0}\overline{d}_{L}b_{R}}^{2}}{% m_{H_{i}^{0}}^{2}}-\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{d}_{L}b_{R}}^{2}% }{m_{A_{i}^{0}}^{2}}, \\ C_{3}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{3}^{\left( B_{d}^{0}-\bar{B}% 
_{d}^{0}\right) },\hspace{0.3cm}\widetilde{C}_{3}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) }=\frac{y_{h\overline{d}_{R}b_{L}}y_{h\overline{d}_{L}b_{R}}% }{m_{h}^{2}}+\sum_{i=1}^{3}\frac{y_{H_{i}^{0}\overline{d}% _{R}b_{L}}y_{H_{i}^{0}\overline{d}_{L}b_{R}}}{m_{H_{i}^{0}}^{2}}% -\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{d}_{R}b_{L}}y_{A_{i}^{0}\overline{% d}_{L}b_{R}}}{m_{A_{i}^{0}}^{2}}, \end{eqnarray}% \begin{eqnarray} C_{1}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{1}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) },\hspace{0.7cm}\hspace{0.7cm}\widetilde{C}_{1}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }=\frac{y_{h\overline{s}_{R}b_{L}}^{2}}{% m_{h}^{2}}+\sum_{i=1}^{3}\frac{y_{H_{i}^{0}\overline{s}_{R}b_{L}}^{2}}{% m_{H_{i}^{0}}^{2}}-\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{s}_{R}b_{L}}^{2}% }{m_{A_{i}^{0}}^{2}}, \\ C_{2}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{2}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) },\hspace{0.7cm}\hspace{0.7cm}\widetilde{C}_{2}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }=\frac{y_{h\overline{s}_{L}b_{R}}^{2}}{% m_{h}^{2}}+\sum_{i=1}^{3}\frac{y_{H_{i}^{0}\overline{s}_{L}b_{R}}^{2}}{% m_{H_{i}^{0}}^{2}}-\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{s}_{L}b_{R}}^{2}% }{m_{A_{i}^{0}}^{2}}, \\ C_{3}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) } &=&\frac{16\pi ^{2}}{% G_{F}^{2}m_{W}^{2}}\widetilde{C}_{3}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) },\hspace{0.3cm}\widetilde{C}_{3}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) }=\frac{y_{h\overline{s}_{R}b_{L}}y_{h\overline{s}_{L}b_{R}}% }{m_{h}^{2}}+\sum_{i=1}^{3}\frac{y_{H_{i}^{0}\overline{s}% _{R}b_{L}}y_{H_{i}^{0}\overline{s}_{L}b_{R}}}{m_{H_{i}^{0}}^{2}}% -\sum_{i=1}^{2}\frac{y_{A_{i}^{0}\overline{s}_{R}b_{L}}y_{A_{i}^{0}\overline{% s}_{L}b_{R}}}{m_{A_{i}^{0}}^{2}}. \end{eqnarray}% Furthermore, the $K^{0}-\bar{K}^{0}$, $B_{d}^{0}-\bar{B}% _{d}^{0}$ and $B_{s}^{0}-\bar{B}_{s}^{0}$\ mass splittings 
can be written as: \begin{equation} \Delta m_{K}=\left( \Delta m_{K}\right) _{SM}+\Delta m_{K}^{\left( NP\right) },\hspace{1cm}\Delta m_{B_{d}}=\left( \Delta m_{B_{d}}\right) _{SM}+\Delta m_{B_{d}}^{\left( NP\right) },\hspace{1cm}\Delta m_{B_{s}}=\left( \Delta m_{B_{s}}\right) _{SM}+\Delta m_{B_{s}}^{\left( NP\right) }, \label{Deltam} \end{equation}% where $\left( \Delta m_{K}\right) _{SM}$, $\left( \Delta m_{B_{d}}\right) _{SM}$ and $\left( \Delta m_{B_{s}}\right) _{SM}$ are the SM contributions, whereas $\Delta m_{K}^{\left( NP\right) }$, $\Delta m_{B_{d}}^{\left( NP\right) }$ and $\Delta m_{B_{s}}^{\left( NP\right) }$ are the new physics contributions. In our model, the new physics contributions to the meson mass differences are given by: \begin{eqnarray} \Delta m_{K}^{\left( NP\right) } &=&\frac{G_{F}^{2}m_{W}^{2}}{6\pi ^{2}}% m_{K}f_{K}^{2}\eta _{K}B_{K}\left[ P_{2}^{\left( K^{0}-\bar{K}^{0}\right) }C_{3}^{\left( K^{0}-\bar{K}^{0}\right) }+P_{1}^{\left( K^{0}-\bar{K}% ^{0}\right) }\left( C_{1}^{\left( K^{0}-\bar{K}^{0}\right) }+C_{2}^{\left( K^{0}-\bar{K}^{0}\right) }\right) \right] \notag \\ &=&\frac{8}{3}m_{K}f_{K}^{2}\eta _{K}B_{K}\left[ P_{2}^{\left( K^{0}-\bar{K}% ^{0}\right) }\widetilde{C}_{3}^{\left( K^{0}-\bar{K}^{0}\right) }+P_{1}^{\left( K^{0}-\bar{K}^{0}\right) }\left( \widetilde{C}_{1}^{\left( K^{0}-\bar{K}^{0}\right) }+\widetilde{C}_{2}^{\left( K^{0}-\bar{K}% ^{0}\right) }\right) \right] \end{eqnarray}% \begin{eqnarray} \Delta m_{B_{d}}^{\left( NP\right) } &=&\frac{G_{F}^{2}m_{W}^{2}}{6\pi ^{2}}% m_{B_{d}}f_{B_{d}}^{2}\eta _{B_{d}}B_{B_{d}}\left[ P_{2}^{\left( B_{d}^{0}-% \bar{B}_{d}^{0}\right) }C_{3}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }+P_{1}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }\left( C_{1}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }+C_{2}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) }\right) \right] \notag \\ &=&\frac{8}{3}m_{B_{d}}f_{B_{d}}^{2}\eta _{B_{d}}B_{B_{d}}\left[ P_{2}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) 
}\widetilde{C}_{3}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }+P_{1}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) }\left( \widetilde{C}_{1}^{\left( B_{d}^{0}-\bar{B}% _{d}^{0}\right) }+\widetilde{C}_{2}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }\right) \right] \end{eqnarray}% \begin{eqnarray} \Delta m_{B_{s}}^{\left( NP\right) } &=&\frac{G_{F}^{2}m_{W}^{2}}{6\pi ^{2}}% m_{B_{s}}f_{B_{s}}^{2}\eta _{B_{s}}B_{B_{s}}\left[ P_{2}^{\left( B_{s}^{0}-% \bar{B}_{s}^{0}\right) }C_{3}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }+P_{1}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }\left( C_{1}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }+C_{2}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) }\right) \right] \notag \\ &=&\frac{8}{3}m_{B_{s}}f_{B_{s}}^{2}\eta _{B_{s}}B_{B_{s}}\left[ P_{2}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }\widetilde{C}_{3}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }+P_{1}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) }\left( \widetilde{C}_{1}^{\left( B_{s}^{0}-\bar{B}% _{s}^{0}\right) }+\widetilde{C}_{2}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }\right) \right] \end{eqnarray}% Using the following parameters \cite% {Dedes:2002er,Aranda:2012bv,Khalil:2013ixa,Queiroz:2016gif,Buras:2016dxz,Ferreira:2017tvy,Duy:2020hhk,Branco:2021vhs,Zyla:2020zbs}% : \begin{eqnarray} \Delta m_{K} &=&\left( 3.484\pm 0.006\right) \times 10^{-12}MeV,\hspace{1.5cm% }\left( \Delta m_{K}\right) _{SM}=3.483\times 10^{-12}MeV \notag \\ f_{K} &=&\left( 155.7\pm 0.3\right) MeV,\hspace{1.5cm}B_{K}=0.717\pm 0.024,% \hspace{1.5cm}\eta _{K}=0.57, \notag \\ P_{1}^{\left( K^{0}-\bar{K}^{0}\right) } &=&-9.3,\hspace{1.5cm}P_{2}^{\left( K^{0}-\bar{K}^{0}\right) }=30.6,\hspace{1.5cm}m_{K}=\left( 497.611\pm 0.013\right) MeV,\hspace{1.5cm} \end{eqnarray}% \begin{eqnarray} \left( \Delta m_{B_{d}}\right) _{\exp } &=&\left( 3.334\pm 0.013\right) \times 10^{-10}MeV,\hspace{1.5cm}\left( \Delta m_{B_{d}}\right) _{SM}=3.582\times 10^{-10}MeV, \notag \\ f_{B_{d}} &=&\left( 190.0\pm 1.3\right) 
MeV,\hspace{1.5cm}B_{B_{d}}=1.30\pm 0.10,\hspace{1.5cm}\eta _{B_{d}}=0.55, \notag \\ P_{1}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) } &=&-0.52,\hspace{1.5cm}% P_{2}^{\left( B_{d}^{0}-\bar{B}_{d}^{0}\right) }=0.88,\hspace{1.5cm}% m_{B_{d}}=\left( 5279.65\pm 0.12\right) MeV,\hspace{1.5cm} \end{eqnarray}% \begin{eqnarray} \left( \Delta m_{B_{s}}\right) _{\exp } &=&\left( 1.1683\pm 0.0013\right) \times 10^{-8}MeV,\hspace{1.5cm}\left( \Delta m_{B_{s}}\right) _{SM}=1.21103\times 10^{-8}MeV, \notag \\ f_{B_{s}} &=&\left( 230.3\pm 1.3\right) MeV,\hspace{1.5cm}B_{B_{s}}=1.35\pm 0.06,\hspace{1.5cm}\eta _{B_{s}}=0.55, \notag \\ P_{1}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) } &=&-0.52,\hspace{1.5cm}% P_{2}^{\left( B_{s}^{0}-\bar{B}_{s}^{0}\right) }=0.88,\hspace{1.5cm}% m_{B_{s}}=\left( 5366.88\pm 0.14\right) MeV.\hspace{1.5cm} \end{eqnarray}% Figure \ref{BBbar} displays the correlation between the $\Delta m_{B_{d}}$ mass splitting and the heavy CP even scalar mass $m_{H_{1}^{0}}$. In our numerical analysis, for the sake of simplicity, we have set the couplings of the flavor changing neutral Yukawa interactions that produce the $B_{d}^{0}-% \bar{B}_{d}^{0}$ oscillations to be equal to $10^{-4}$. Furthermore, we have fixed $m_{H_{3}^{0}}=10$ TeV and we have varied the masses of $H_{1}^{0}$, $% H_{2}^{0}$ and $A_{1}^{0}$ in the ranges $200$ GeV$\leqslant m_{H_{1}^{0}}\leqslant $ $400$ GeV, $350$ GeV$\leqslant m_{H_{2}^{0}}\leqslant $ $550$ GeV and $300$ GeV$\leqslant m_{A_{1}^{0}}\leqslant $ $450$ GeV, whereas we have also set $% m_{A_{2}^{0}}=m_{A_{1}^{0}}+150$ GeV. It is worth mentioning that the ranges of scalar masses described above are consistent with the ones shown in the correlation plots of heavy scalar masses in Figure \ref% {scalarcorrelations}. As indicated in Figure \ref{BBbar}, the experimental constraints arising from $B_{d}^{0}-\bar{B}_{d}^{0}$ meson oscillations are successfully fulfilled for the aforementioned range of parameter space. 
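As a rough cross-check of these numbers, $\Delta m_{B_{d}}^{\left( NP\right)}$ can be evaluated directly from the expression above. The sketch below uses the hadronic parameters quoted in the text, flavor-violating Yukawa couplings of $10^{-4}$, and illustrative scalar masses inside the scanned ranges; the simplified spectrum (one CP-even and one CP-odd scalar, with all three Wilson coefficients taken equal) is our assumption for illustration, not the model's full scalar content.

```python
# Illustrative evaluation of
#   Delta m_Bd^(NP) = (8/3) m_Bd f_Bd^2 eta_Bd B_Bd
#                     * [P2 * C3~ + P1 * (C1~ + C2~)]
# with a simplified one-CP-even / one-CP-odd scalar spectrum.

m_Bd   = 5279.65        # MeV (quoted in the text)
f_Bd   = 190.0          # MeV
B_Bd   = 1.30
eta_Bd = 0.55
P1, P2 = -0.52, 0.88

y   = 1.0e-4            # flavor-violating Yukawa coupling (as in the scan)
m_H = 300.0e3           # CP-even scalar mass, MeV (assumed, within scanned range)
m_A = 400.0e3           # CP-odd scalar mass, MeV (assumed, within scanned range)

# Simplified Wilson coefficient: CP-even minus CP-odd exchange,
# taken equal for C1~, C2~ and C3~ (illustration only).
C = y**2 / m_H**2 - y**2 / m_A**2   # MeV^-2

dm_NP = (8.0 / 3.0) * m_Bd * f_Bd**2 * eta_Bd * B_Bd * (P2 * C + P1 * 2.0 * C)

dm_exp = 3.334e-10      # MeV, measured Delta m_Bd
dm_SM  = 3.582e-10      # MeV, SM prediction

print(f"Delta m_Bd^(NP)   = {dm_NP:.3e} MeV")
print(f"|exp - SM| window = {abs(dm_exp - dm_SM):.3e} MeV")
```

With these inputs the new-physics contribution comes out at the few $\times 10^{-12}$ MeV level, comfortably below the gap between the measured and SM values of $\Delta m_{B_{d}}$, in line with the parameter scan described above.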
We have numerically checked that, in the range of masses described above, the obtained values for the $\Delta m_{B_{s}}$ and $\Delta m_{K}$ mass splittings are consistent with the experimental data on meson oscillations for flavor-violating Yukawa couplings equal to $2.5\times 10^{-4}$ and $% 10^{-6}$ for the $B_{s}^{0}-\bar{B}_{s}^{0}$ and $K^{0}-\bar{K}^{0}$ mixings, respectively. \begin{figure}[h] \includegraphics[width=0.7\textwidth]{plotDeltamBdvsmH1.pdf} \caption{Correlation between the $\Delta m_{B_{d}}$ mass splitting and the heavy CP even scalar mass $m_{H_{1}^{0}}$. The couplings of the flavor changing neutral Yukawa interactions have been set equal to $10^{-4}$.} \label{BBbar} \end{figure} \section{Conclusions} \label{conclusions} We have built a renormalizable left-right symmetric theory with the additional symmetry $Z_{4}^{\left( 1\right) }\times Z_{4}^{\left( 2\right) }$, consistent with the observed SM fermion mass hierarchy, the tiny values of the light active neutrino masses, the lepton and baryon asymmetries of the Universe, the constraints arising from meson oscillations and from charged lepton flavor violation, as well as the muon and electron anomalous magnetic moments. As the main appealing feature of the proposed model, the top quark and the exotic fermions get their masses at tree level, whereas the masses of the bottom, charm and strange quarks and of the tau and muon leptons are generated from a tree-level Universal Seesaw mechanism thanks to their mixings with charged exotic vector-like fermions. The masses of the first-generation SM charged fermions are generated from a one-loop radiative seesaw mechanism mediated by charged vector-like fermions and electrically neutral scalars. The tiny masses of the light active neutrinos arise from an inverse seesaw mechanism at one-loop level. 
Furthermore, we have also shown that the proposed model successfully accommodates the current Higgs diphoton decay rate constraints, yielding a Higgs diphoton decay rate lower than the SM expectation but inside the $3\sigma $ experimentally allowed range. We also studied the heavy $H_{1}^{0}$ scalar and $Z^{\prime }$ gauge boson production at a proton-proton collider at $\sqrt{S}=14$ TeV and $\sqrt{S}=28$ TeV, via the gluon fusion and Drell-Yan mechanisms, respectively. We found that the single $H_{1}^{0}$ scalar production cross section reaches values of $1.2$ pb and $5$ pb at $\sqrt{S}=14$ TeV and $\sqrt{S}=28$ TeV, respectively, for a $400$ GeV heavy scalar mass. On the other hand, we found that the total cross section for $Z^{\prime }$ gauge boson production takes values of $0.85$ fb and $26$ fb at $\sqrt{S}=14$ TeV and $\sqrt{S}=28$ TeV, respectively, for a $7$ TeV $Z^{\prime }$ gauge boson mass. \section*{Acknowledgments} A.E.C.H. and I.S. are supported by ANID-Chile FONDECYT 1210378, ANID-Chile FONDECYT 1180232, ANID-Chile FONDECYT 3150472, ANID PIA/APOYO AFB180002 and Milenio-ANID-ICN2019\_044.
1,314,259,992,812
arxiv
\section{Introduction} \input{introduction} \section{Related Work} \label{sec:rw} \input{relatedwork} \section{Preliminaries} \label{sec:preliminaries} \input{preliminaries} \section{Methods for Estimating and Altering Personalities}\label{sec:Methods} \input{methods} \section{Experiments} \label{sec:experiments} \input{experiments} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \bibliographystyle{plainnat} \subsection{Traits of Datasets} \label{subsec:exp-datasets} \subsubsection{Setup:} We explore the personality traits of the datasets used to train the language models discussed in Section~\ref{sec:preliminaries}, namely \textsc{BookCorpus}, \textsc{English Wikipedia}, \textsc{Wikitext103}, and the \textsc{WebText Test Set}. \begin{itemize} \item \textsc{BookCorpus}~\citep{zhu2015aligning} is a large collection of free novel books written by unpublished authors; it contains 11,038 books of 16 different sub-genres and is used to train XLNET. \item \textsc{English Wikipedia} contains cleaned articles built from the Wikipedia dump and is used to train XLNET and GPT-3. However, the exact versions of the dataset used to develop those models are not publicly known. We use a version of this data that was available on May 1st, 2020. \item \textsc{Wikitext103}~\citep{merity2016wikitext} contains more than 100 million tokens retrieved from the set of verified good and featured articles on Wikipedia and is used to train TransformerXL. \item \textsc{WebText Test Set}~\citep{gokaslan2019openwebtext} is provided by OpenAI. The corresponding training dataset was used to train GPT-2 and has not been publicly released; hence, we use the test set for our experiments. \end{itemize} Evaluating each of the datasets discussed above in full requires extensive computational resources due to their size. To overcome this issue, we infer the personality traits using random sub-samples of the datasets, as indicated in Table~\ref{tab:dataset-table}.
These samples can be evaluated at different levels of granularity: sentence level, paragraph level, or document level. In our experiments, we notice that when a long document containing many paragraphs and sentences is passed as a standalone input, the ZSC that estimates personality traits does not perform well, predicting a score close to $3$ (neutral) for most samples. The reason is that such paragraphs and sentences may exhibit multiple conflicting traits or contain many trait-less sentences (e.g., statements of fact). Therefore, we process the data so that all samples are at the sentence or small-paragraph level. Each processed sample is passed independently as an input to the ZSC using \textit{Approach 3} discussed in Section~\ref{sec:Methods}, and the resulting outputs are collated to obtain the personality trait distributions. \begin{table}[] \centering \resizebox{0.6\textwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline Dataset & Size & \begin{tabular}[c]{@{}c@{}}Percent used for\\inference\end{tabular} & Models\\ \hline \textsc{Wikitext103} & 0.70 GB & 100\% & TransformerXL\\ \hline \textsc{BookCorpus} & 5.75 GB & 10\% & XLNET\\ \hline \textsc{English Wikipedia} & 34.88 GB & 2\% & GPT-3, XLNET\\ \hline \textsc{WebText Test Set} & 1.28 GB & 20\% & GPT-2\\ \hline \end{tabular}% } \vspace{3mm} \caption{Summary statistics of the datasets} \label{tab:dataset-table} \end{table} \begin{figure}[htbp] \captionsetup[subfigure]{font=scriptsize,labelfont=scriptsize} \begin{multicols}{2} \subcaptionbox{Wikitext103}{\includegraphics[width=\linewidth]{wikitext_inf}} \par \subcaptionbox{BookCorpus}{\includegraphics[width=\linewidth]{bookcorpus_inf}}\par \end{multicols} \begin{multicols}{2} \subcaptionbox{English Wikipedia}{\includegraphics[width=\linewidth]{wikipedia_inf}}\par \subcaptionbox{WebText Test Set}{\includegraphics[width=\linewidth]{webtext_inf}}\par \end{multicols} \caption{Personality trait distributions of datasets} \label{fig:dist_datasets} \end{figure}
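The sentence-level collation step described above can be sketched as follows; `score_fn` is a hypothetical stand-in for the actual zero-shot classifier, and the naive sentence splitter is purely illustrative of the preprocessing:

```python
from statistics import median

TRAITS = ["agreeableness", "conscientiousness", "extraversion",
          "emotional stability", "openness"]

def split_into_sentences(text):
    # Naive splitter for illustration; the actual preprocessing may differ.
    return [s.strip() for s in text.split(".") if s.strip()]

def collate_trait_scores(documents, score_fn):
    """Score every sentence of every document on each trait (scale 1-5)
    and collate the per-trait score distributions."""
    dist = {t: [] for t in TRAITS}
    for doc in documents:
        for sent in split_into_sentences(doc):
            for trait in TRAITS:
                dist[trait].append(score_fn(sent, trait))
    return dist

def trait_medians(dist):
    # The median of each distribution summarises the prominence of a trait.
    return {t: median(v) for t, v in dist.items() if v}

# Toy scoring function standing in for the ZSC of Approach 3.
toy_score = lambda sent, trait: 4.0 if trait == "extraversion" else 3.0
```

The medians and spreads of the collated distributions correspond to the box-plot summaries reported in the results.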
\subsubsection{Results:} Figure~\ref{fig:dist_datasets} depicts the personality trait distributions across the datasets. In particular, we observe that the distributions of all five traits are skewed to the right of the neutral score of 3.0, reflecting a positive sense of personality. The medians of the traits in the box plots give us the average trait scores, which we use to compare the prominence of the traits, while the lengths of the boxes indicate the spreads (variances) of the traits. The most prominent trait for \textsc{Wikitext103} is \textit{Extraversion}, and the least prominent trait is \textit{Conscientiousness}; \textit{Agreeableness} scores spread the widest, followed by \textit{Openness}. The most prominent traits in the \textsc{BookCorpus} dataset are \textit{Extraversion} and \textit{Agreeableness}, followed by \textit{Openness}, \textit{Emotional stability}, and \textit{Conscientiousness}. Again, \textit{Agreeableness} scores have the highest variance, followed by \textit{Openness}. The trait scores for the \textsc{English Wikipedia} dataset have smaller spreads than those of the other datasets. The most prominent trait is \textit{Agreeableness}, followed by \textit{Extraversion}, \textit{Emotional stability}, \textit{Openness}, and \textit{Conscientiousness}. Finally, the \textsc{WebText Test Set}'s prominent traits are \textit{Agreeableness} and \textit{Extraversion}, followed by \textit{Emotional stability}, \textit{Openness}, and \textit{Conscientiousness}; the spreads of \textit{Agreeableness} and \textit{Openness} are wider than those of the other traits. Our findings suggest that we can target a desired personality in a language model by training it on datasets with a high median score and a low spread for the corresponding trait.
\subsection{Traits of Models} \label{subsec:traits_models} \subsubsection{Setup:} To quantify the personality traits of language models, we trigger them with the Big Five questionnaire and classify the generated text responses using the ZSC framework discussed in \textit{Approach 3}. In our experiments, we select a fixed set of hyper-parameters for text generation. To evaluate GPT-3, we use the programmatic API from OpenAI~\citep{brown2020language} made publicly available for research and commercial purposes. We use the open-ended text generation setting with \textit{maxseqlength} set to 256 and both \textit{temperature} and \textit{top-p} set to 1. For the remaining language models, namely GPT-2, TransformerXL, and XLNET, we use the text generation pipeline from Hugging Face's \texttt{transformers} package~\citep{wolf2019huggingface}, setting \textit{temperature} to 1, \textit{top-k} to 40, \textit{top-p} to 1, and \textit{maxseqlength} to 256. \begin{table}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|} \hline \textbf{Prompt} & \textbf{Generated Response} \\ \hline I don’t talk a lot. & \begin{tabular}[c]{@{}l@{}}I'm more of an introvert so I'm less than happy to take time to talk. \\ I prefer to be able to take control over how I work and make sure I be able to talk a lot.\end{tabular} \\ \hline I’m relaxed most of the time & \begin{tabular}[c]{@{}l@{}}In all my experiences I did not notice a single problem about myself or others I dealt with, \\ because I was focused on my personal well being and that was all….\end{tabular} \\ \hline I feel comfortable around people. & \begin{tabular}[c]{@{}l@{}}I get a lot done and it's nice working out or just enjoying myself and being around people. \\ A lot of that comes with me…..\end{tabular} \\ \hline I am full of ideas & \begin{tabular}[c]{@{}l@{}}I am also a very creative person, so it is important to me to develop my own ideas.
\\ I have a lot of inspiration and a lot of love for the world.\end{tabular} \\ \hline \end{tabular}% } \vspace{3mm} \caption{Examples of text responses generated using GPT-2} \label{tab:gpt2-text} \end{table} \subsubsection{Results:} Table \ref{tab:gpt2-text} shows a few examples of generated text responses and the corresponding input prompts for GPT-2. These responses, generated in an auto-regressive manner, inherit the traits learned during model training. Consequently, we observe noticeable variations in trait distributions across language models due to differences in their training corpora, as shown in Figure~\ref{fig:trait_eval_dist}. Furthermore, Table \ref{tab:eval-table} shows the median \emph{five factor} scores evaluated for all language models. Based on these, we can make a few observations. Firstly, the \emph{Agreeableness} scores are highest for GPT-3, suggesting that its generated text reflects a more generous and empathetic personality. Secondly, \emph{Conscientiousness} scores are highest for TransformerXL. Thirdly, GPT-3 has the highest median \emph{Extraversion} score, implying that its generated responses emulate extroverted rather than introverted personalities. Finally, TransformerXL has the highest \emph{Emotional stability} score, reflecting a more emotionally stable personality compared to the other language models.
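The decoding hyper-parameters used in the setup above (temperature, top-k, top-p) can be illustrated with a self-contained sketch of top-$k$/nucleus filtering over a toy next-token distribution; this illustrates the sampling scheme only and is not the actual Hugging Face implementation:

```python
import math

def filter_next_token_probs(logits, top_k=40, top_p=1.0, temperature=1.0):
    """Illustrative top-k / nucleus (top-p) filtering of a toy
    next-token logit vector, mirroring the settings used above."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]          # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    # Keep only the top-k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    # Within those, keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalise over the kept tokens and sample from this distribution.
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

With top-p set to 1 (as in our setup), only the top-k truncation is active; lowering top-p would further restrict sampling to the high-probability nucleus.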
\begin{table}[htbp] \centering \resizebox{0.7\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|} \hline Trait & GPT-2 & GPT-3 & TransformerXL & XLNET \\ \hline Agreeableness & 3.41 (0.73) & 4.19 (1.14) & 3.86 (0.66) & 3.64 (0.87) \\ \hline Conscientiousness & 3.18 (0.39) & 3.66 (0.77) & 3.96 (0.78) & 3.73 (0.64) \\ \hline Extraversion & 3.07 (0.60) & 3.94 (1.10) & 3.43 (0.69) & 3.63 (0.91) \\ \hline Emotional stability & 3.15 (0.46) & 2.79 (1.11) & 3.36 (0.74) & 3.01 (0.70) \\ \hline Openness & 2.97 (0.47) & 3.78 (1.06) & 4.02 (0.83) & 3.55 (0.71) \\ \hline \end{tabular}% } \vspace{3mm} \caption{Personality scores (with uncertainties) of the language models} \label{tab:eval-table} \end{table} \begin{figure}[htbp] \captionsetup[subfigure]{font=scriptsize,labelfont=scriptsize} \centering \begin{multicols}{2} \subcaptionbox{GPT-2}{\includegraphics[width=\linewidth]{images/gpt2.png}} \par \subcaptionbox{GPT-3}{\includegraphics[width=\linewidth]{images/gpt3.png}}\par \end{multicols} \begin{multicols}{2} \subcaptionbox{TransformerXL}{\includegraphics[width=\linewidth]{images/transfoxl.png}}\par \subcaptionbox{XLNET}{\includegraphics[width=\linewidth]{images/xlnet.png}}\par \end{multicols} \caption{Personality trait distributions of language models} \label{fig:trait_eval_dist} \end{figure} \textit{Scoring Modes:} As discussed earlier, the datasets are composed of text at different levels of granularity. Similarly, the text responses generated by these language models can be multiple sentences long. As a result, the personality trait distributions may vary depending on whether we evaluate individual sentences or complete responses. To address this, we compute personality trait scores using the different output modes listed below. \begin{itemize} \item \textit{Mode 1:} Trait score of the entire generated response. \item \textit{Mode 2:} Trait score of the first sentence in the generated response.
\item \textit{Mode 3:} Median of the trait scores of all sentences present in the generated response. \end{itemize} Figure~\ref{eval_models} shows the trait distributions for all the language models obtained using the modes listed above. The distributions remain the same for GPT-3, XLNET, and TransformerXL, because all the responses generated by these models were single sentences. However, the distribution spread varies for GPT-2 across the different modes of evaluation, implying that the structure of the output influences the personality score estimation. \begin{figure}[htbp] \captionsetup[subfigure]{font=scriptsize,labelfont=scriptsize} \begin{multicols}{3} \subcaptionbox{GPT-2 (\textit{Mode 1})}{\includegraphics[width=\linewidth]{images/gpt2.png}}\par \subcaptionbox{GPT-2 (\textit{Mode 2})}{\includegraphics[width=\linewidth]{images/gpt2_256_split_first.png}}\par \subcaptionbox{GPT-2 (\textit{Mode 3})}{\includegraphics[width=\linewidth]{images/gpt2_256_split.png}}\par \end{multicols} \begin{multicols}{3} \subcaptionbox{GPT-3 (\textit{Mode 1})}{\includegraphics[width=\linewidth]{images/gpt3.png}}\par \subcaptionbox{GPT-3 (\textit{Mode 2})}{\includegraphics[width=\linewidth]{images/gpt3.png}}\par \subcaptionbox{GPT-3 (\textit{Mode 3})}{\includegraphics[width=\linewidth]{images/gpt3.png}}\par \end{multicols} \begin{multicols}{3} \subcaptionbox{TransformerXL (\textit{Mode 1})}{\includegraphics[width=\linewidth]{images/transfoxl.png}}\par \subcaptionbox{TransformerXL (\textit{Mode 2})}{\includegraphics[width=\linewidth]{images/transfoxl.png}}\par \subcaptionbox{TransformerXL (\textit{Mode 3})}{\includegraphics[width=\linewidth]{images/transfoxl.png}}\par \end{multicols} \begin{multicols}{3} \subcaptionbox{XLNET (\textit{Mode 1})}{\includegraphics[width=\linewidth]{images/xlnet.png}}\par \subcaptionbox{XLNET (\textit{Mode 2})}{\includegraphics[width=\linewidth]{images/xlnet.png}}\par \subcaptionbox{XLNET (\textit{Mode
3})}{\includegraphics[width=\linewidth]{images/xlnet.png}}\par \end{multicols} \caption{Personality trait distributions of language models obtained from different modes of evaluation} \label{eval_models} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{font=scriptsize,labelfont=scriptsize} \begin{multicols}{3} \subcaptionbox{WebText Test }{\includegraphics[width=\linewidth]{webtext_inf}}\par \subcaptionbox{GPT-2 evaluated considering input prompts from Big Five Questionnaire}{\includegraphics[width=\linewidth]{images/gpt2_256_split_first.png}}\par \subcaptionbox{GPT-2 evaluated without considering input prompts }{\includegraphics[width=\linewidth]{small-117M_inf.png}}\par \end{multicols} \caption{Distributions of different traits in (a) the WebText Test Set, (b) GPT-2 evaluated with input prompts from the Big Five questionnaire, and (c) GPT-2 evaluated without input prompts.} \label{fig:dist_comp} \end{figure} \subsection{Traits of Datasets vs Models} To validate our hypothesis that language models inherit the personality traits of the datasets they are trained on, we compare the trait distributions of the language models and their underlying corpora. Specifically, we evaluate the traits of the \textsc{WebText Test Set} and of text responses generated using GPT-2. We observe that the trait distributions of the \textsc{WebText Test Set} and of text generated by GPT-2 without any input prompts are very similar, as shown in Figure~\ref{fig:dist_comp}, suggesting that the traits of the dataset are largely inherited by GPT-2 during the training phase. On the other hand, there is a noticeable difference between the trait distribution of the \textsc{WebText Test Set} and that of GPT-2 when the Big Five questionnaire is passed as input. This is probably because GPT-2 captures personality from both the input prompts and the training data. Overall, we conclude that the dataset and the input prompts together influence the inferred personality traits of a language model.
\subsection{Altering the Traits of Models} \subsubsection{Setup:} To investigate the methods for altering traits, we restrict ourselves to GPT-2 due to limitations of computational resources. To evaluate Method 1 discussed in Section~\ref{modify_trait}, we filter the \cite{siop:big5} dataset by retaining text responses labeled with Big Five factor scores greater than $4$ for the individual traits. We finetune the GPT-2 model on the filtered dataset corresponding to each trait and subsequently evaluate the generated text responses. For finetuning, we set the batch size to $16$ and the number of epochs to $20$, with the warmup proportion set to $0$, the learning rate set to $10^{-5}$, and the weight decay set to $0.01$. Similar to Method 1, we analyze Method 2 using the same dataset \cite{siop:big5}, filtered according to the following criteria. Since the personality trait scores in the annotated dataset are continuous values from 1 to 5, we threshold the dataset at different values from the set $\{2.5, 3, 3.5, 4, 4.5\}$ to obtain labels suitable for defining a binary classification problem. We finetune the original model on this classification task, using the standard cross-entropy loss and the ADAM optimizer \cite{kingma2014adam} with the learning rate set to $5 \times 10^{-5}$ and the number of epochs set to $10$. \subsubsection{Results:} Table \ref{tab:finetune_table} summarizes the results of using Method 1 to finetune GPT-2. We observe a notable improvement in the personality scores for \emph{Conscientiousness}, \emph{Extraversion}, \emph{Emotional Stability}, and \emph{Openness} once the language model is finetuned on the respective filtered datasets. These changes are also reflected in the personality trait distributions of the finetuned language models shown in Figure~\ref{ft_dist}.
We can conclude that the derived language models learn from the new data corpus during finetuning, thus allowing one to alter their personality traits in an open-loop setting. However, we also notice that finetuning changes the personality scores of traits other than the focal trait (the one represented by the filtered dataset). This is undesirable, as we lose precise control over improving a specific trait during finetuning; addressing this aspect is left for future work. \begin{table}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline Trait & Before finetuning & Agreeableness & Conscientiousness & Extraversion & Emotional stability & Openness \\ \hline Agreeableness & 3.41 (0.73) & 3.33 (0.43) & 3.30 (0.21) & 3.27 (0.36) & 3.09 (0.27) & 3.33 (0.37) \\ \hline Conscientiousness & 3.18 (0.39) & 3.41 (0.39) & \textbf{3.26 (0.25)} & 3.15 (0.22) & 3.13 (0.34) & 3.50 (0.38) \\ \hline Extraversion & 3.07 (0.60) & 3.27 (0.26) & 3.39 (0.30) & \textbf{3.39 (0.32)} & 3.15 (0.24) & 3.63 (0.44) \\ \hline Emotional stability & 3.15 (0.50) & 3.35 (0.38) & 3.26 (0.29) & 3.27 (0.28) & \textbf{3.23 (0.37)} & 3.42 (0.36) \\ \hline Openness & 2.97 (0.47) & 3.42 (0.36) & 3.37 (0.29) & 3.31 (0.31) & 3.27 (0.38) & \textbf{3.47 (0.39)} \\ \hline \end{tabular}% } \vspace{3mm} \caption{Personality scores (with uncertainties) of the finetuned language models; columns indicate the trait used to filter the finetuning data} \label{tab:finetune_table} \end{table} \begin{figure}[htbp] \captionsetup[subfigure]{font=scriptsize,labelfont=scriptsize} \begin{multicols}{3} \subcaptionbox{Original}{\includegraphics[width=\linewidth]{images/gpt2.png}} \par \subcaptionbox{Extraversion}{\includegraphics[width=\linewidth]{images/extraft.png}}\par \subcaptionbox{Agreeableness}{\includegraphics[width=\linewidth]{images/agreeft.png}}\par \end{multicols} \begin{multicols}{3} \subcaptionbox{Openness}{\includegraphics[width=\linewidth]{images/openft.png}}\par \subcaptionbox{Emotional
Stability}{\includegraphics[width=\linewidth]{images/neuroft.png}}\par \subcaptionbox{Conscientiousness}{\includegraphics[width=\linewidth]{images/consentft.png}}\par \end{multicols} \caption{Personality trait distributions of finetuned language models} \label{ft_dist} \end{figure} \begin{table}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline Trait & Before finetuning & \begin{tabular}[c]{@{}c@{}}After finetuning \\ at threshold (2.5)\end{tabular} & \begin{tabular}[c]{@{}c@{}}After finetuning \\ at threshold (3.0)\end{tabular} & \begin{tabular}[c]{@{}c@{}}After finetuning\\ at threshold (3.5)\end{tabular} & \begin{tabular}[c]{@{}c@{}}After finetuning \\ at threshold (4.0)\end{tabular} & \begin{tabular}[c]{@{}c@{}}After finetuning \\ at threshold (4.5)\end{tabular} \\ \hline Agreeableness & 3.41 (0.73) & 3.02 (0.48) & 3.19 (0.54) & 3.33 (0.64) & 3.14 (0.76) & 3.24 (0.57) \\ \hline Conscientiousness & 3.18 (0.39) & 3.19 (0.53) & 3.11 (0.49) & 3.17 (0.67) & 3.04 (0.57) & 3.19 (0.57) \\ \hline Extraversion & 3.07 (0.60) & 2.89 (0.51) & 3.19 (0.53) & 3.24 (0.75) & 2.90 (0.74) & 3.25 (0.49) \\ \hline Emotional Stability & 3.15 (0.50) & 3.12 (0.67) & 2.99 (0.45) & 3.19 (0.64) & 3.00 (0.74) & 3.11 (0.46) \\ \hline Openness & 2.97 (0.47) & 2.96 (0.51) & 3.09 (0.55) & 3.13 (0.67) & 2.88 (0.80) & 3.15 (0.54) \\ \hline \end{tabular}% } \vspace{3mm} \caption{Personality scores before and after finetuning GPT-2 on \emph{Extraversion}-labeled data at different thresholds using Method 2} \label{tab:Extraversion-table} \end{table} Table \ref{tab:Extraversion-table} summarizes the results of using Method 2 to finetune GPT-2 on \emph{Extraversion}-labeled data. We observe that some \emph{Extraversion} scores improve compared to GPT-2 before finetuning. However, there are noticeable variations in the scores of the other factors, which is not desirable (as with Method 1).
We observe similar changes in personality trait scores when the model is finetuned using the other personality-annotated datasets; these results are summarized in Appendix \ref{appendix}. Overall, the results demonstrate that our methodology is a promising first step towards altering the personality traits of language models, and further work in this direction would be a fruitful endeavour. \subsection{Evaluating Personalities of Datasets} Extracting quantifiable personality traits from datasets requires defining suitable labels for the ZSC, followed by a scoring scheme based on the ZSC outputs. As discussed in Section \ref{sec:preliminaries}, the ZSC takes a premise and a hypothesis as input and predicts whether the hypothesis is true (entailment), false (contradiction), or undetermined (neutral) given the premise. So the first step in casting our problem as entailment is to convert the desired trait labels into hypotheses. In this work, we use the following hypothesis template: \emph{This response is characterized by $\{$label$\}$.}, where the placeholder $\{$\emph{label}$\}$ is replaced by trait-specific keywords. We investigate different approaches to finalize our model setup and extract personality trait scores in terms of the Big Five factors. \paragraph{Approach 1:} In this approach, we assume that the premise and the hypothesis are either positively or negatively related. Accordingly, we set up the ZSC to quantify the five personality traits independently, using the five labels openness, conscientiousness, extraversion, agreeableness, and neuroticism to generate the hypotheses. We transform the output probabilities obtained for each label to Big Five factor scores using a linear fit on a scale from $1$ to $5$. \paragraph{Approach 2:} In this approach, we assume that the premise and the hypothesis are either positively related or unrelated.
Under this assumption, we measure the personality scores using labels for the two extreme ends of each trait independently. The output probabilities for the two extremes are then passed through a softmax function and interpolated to obtain the final score for each Big Five factor. The labels for the extreme ends of each trait are provided in Table \ref{tab:extreme}. \begin{table}[] \centering \resizebox{0.4\textwidth}{!}{% \begin{tabular}{|l|c|} \hline & \textbf{Labels} \\ \hline 1 & [agreeableness, antagonism] \\ \hline 2 & [conscientiousness, disinhibition] \\ \hline 3 & [extraversion, introversion] \\ \hline 4 & [emotional stability, neuroticism] \\ \hline 5 & [openness, closeness] \\ \hline \end{tabular}% } \vspace{3mm} \caption{Labels for creating the input hypotheses for the ZSC in Approaches 2 and 3.} \label{tab:extreme} \end{table} \paragraph{Approach 3:} In this approach, we consider all three scenarios: the premise and hypothesis can be positively related, negatively related, or unrelated. Under the assumptions made in the previous two approaches, measuring the five personality traits independently using the ZSC might lead to incorrect results, because the ZSC assigns a low probability to a non-synonymous or antithetical hypothesis for a given premise. Therefore, one cannot tell whether a low score for a particular label is due to a negative relation or to no relation between the hypothesis and the premise. Hence, in this third approach, we set up one ZSC per trait to measure the two extreme ends in a dependent manner. Under the hood, the ZSC calculates a single probability score using only the NLI model's (DistilBart-MNLI) entailment scores for the two extreme ends of a trait. We then use a one-dimensional linear interpolation step to map these probabilities, which take values from $0$ to $1$, to Big Five personality test scores ranging from $1$ to $5$.
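The mapping at the heart of this third approach can be sketched as follows; the softmax over the two entailment scores and the linear interpolation to the $1$--$5$ scale follow the description above, while the function name and raw inputs are illustrative:

```python
import math

def trait_score(entail_pos, entail_neg):
    """Map the NLI entailment scores for the two extreme labels of a
    trait (e.g. 'extraversion' vs. 'introversion') to a Big Five score.
    A softmax over the two scores gives the probability of the positive
    pole; linear interpolation then maps [0, 1] onto [1, 5]."""
    p_pos = math.exp(entail_pos) / (math.exp(entail_pos) + math.exp(entail_neg))
    return 1.0 + 4.0 * p_pos
```

Equal entailment scores for both poles yield the neutral score of 3, while a strong preference for one pole pushes the score towards 5 or 1.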
We use the same labels for the extreme ends of each trait as discussed in \textit{Approach 2}. \paragraph{Discussion:} Our first approach works well whenever the hypothesis and premise entail or contradict each other. However, when the ZSC is prompted with unrelated premises and hypotheses, the output label probabilities are low, which prevents an accurate measurement of the personality traits. The second approach also has its limitations: the ZSC uses the NLI model's entailment and contradiction scores behind the scenes to calculate the probability of the output label, so we cannot conclude that a low score means the hypothesis and the premise are neutral to each other, since the ZSC does not use the neutral score of the NLI model. Our third approach overcomes these pitfalls of the first two approaches and enables a more precise assessment of the personality traits of datasets. \subsection{Evaluating Personalities of Language Models} \label{ss:evaluate} We adopt an assessment questionnaire that measures personality traits using the Big Five factor markers. The questionnaire is a list of fifty statements, each referring to a different characteristic of an individual; each statement is designed to elicit behavior associated with a specific Big Five factor. In general, individuals respond to every statement in the questionnaire by opting for one of the following choices: (a) very inaccurate, (b) moderately inaccurate, (c) neither inaccurate nor accurate, (d) moderately accurate, and (e) very accurate. The response to every statement is scored against a predetermined Big Five factor on a scale of 1 to 5, as shown in Table \ref{tab:prob-table}. There are ten statements evaluated for the Extraversion factor, ten statements for Agreeableness, and so forth. Finally, the scores aggregated for each of the Big Five factors are averaged to obtain the quantifiable trait scores.
\begin{table}[htbp] \centering \resizebox{0.4\textwidth}{!}{% \begin{tabular}{|c|c|} \hline \textbf{Response} & \textbf{Score} \\ \hline Very Inaccurate & 1 \\ \hline Moderately Inaccurate & 2 \\ \hline Neither Inaccurate nor Accurate & 3 \\ \hline Moderately Accurate & 4 \\ \hline Very Accurate & 5 \\ \hline \end{tabular}% } \vspace{3mm} \caption{Scoring scheme to determine the traits based on text responses} \label{tab:prob-table} \end{table} Acquiring responses in the format discussed above is not feasible in open-ended text generation, since the language model output is a sequence of words. Instead, we set the statements from the questionnaire as prompts to the language model and generate text responses, as shown in Figure~\ref{fig:model}. To account for the stochastic nature of the responses, we trigger the model $N \in \mathbb{Z}_{+}$ times with the same prompt to observe $N$ different text completions for every statement in the questionnaire. Moreover, we prompt the language models with each statement independently, so the order in which statements are fed to the model does not affect the final results. \begin{figure}[hp] \centering \includegraphics[width=0.9\textwidth]{model.png} \caption{\small{Steps involved in evaluating the personality of a pretrained language model.}} \label{fig:model} \end{figure} Each generated text response is passed independently as an input premise to the ZSC setup discussed in \textit{Approach 3}. The resulting outputs are the independent probabilities of the response being characterized by each of the five factors; we obtain $N$ such probabilities for each input prompt. We then use a one-dimensional linear interpolation to map the probabilities to scores ranging from 1 to 5. Finally, we compute the median of the scores aggregated over all the respective statements, which represents the personality of the language model under consideration.
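The evaluation loop just described can be sketched as follows; `generate` and `score` are hypothetical stand-ins for the language model and the ZSC-based scorer, and the dictionary layout of `statements` is an assumption for illustration:

```python
from statistics import median

def evaluate_model(statements, generate, score, n=10):
    """Prompt the model `n` times with each questionnaire statement,
    score every completion on the statement's Big Five factor (scale
    1-5), and summarise each factor by the median of its scores.
    `statements` maps each prompt to its associated factor."""
    per_factor = {}
    for prompt, factor in statements.items():
        for _ in range(n):
            response = generate(prompt)
            per_factor.setdefault(factor, []).append(score(response, factor))
    return {f: median(v) for f, v in per_factor.items()}
```

Because each prompt is handled independently, the order of the statements does not affect the resulting per-factor medians.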
\subsection{Modifying Personalities of Language Models} \label{modify_trait} Pretrained language models possess varied personality traits owing to their training on diverse datasets and to differences in their model definitions and training approaches. We propose the following two methods for altering the personality traits of language models. \subsubsection{\textbf{Method 1:}} \label{method1} Modifying the personality of a language model in a desired fashion amounts to updating the model's parameters so that the generated text responses move closer to the desired personality traits. One way to achieve this, at least partially, is to finetune the language model on suitably chosen personality-annotated text data. Since training these language models from scratch requires substantial computational resources and large datasets, finetuning lets the model partially adapt to a new data corpus and change the traits of the generated text without much computational overhead. Accordingly, when we trigger the finetuned language model with a prompt from our questionnaire, we expect the generated response to reflect the altered personality. For finetuning, we leverage a personality-annotated text dataset made available as part of a machine learning competition \cite{siop:big5}. The dataset includes text responses to open-ended situational judgment items (SJIs) designed to elicit trait-relevant behaviors, together with aggregate trait scores based on the Big Five personality traits. To modify the personality of a language model with respect to a specific trait, we train the model on the filtered textual responses corresponding to that factor.
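The data preparation behind this finetuning setup can be sketched as follows; the field names and the strict-versus-inclusive threshold conventions are assumptions based on the description above:

```python
def filter_by_trait(examples, trait, threshold=4.0):
    """Method 1: keep only the responses whose annotated score for
    `trait` exceeds the threshold; these form the finetuning corpus."""
    return [ex["text"] for ex in examples if ex[trait] > threshold]

def binarize_labels(examples, trait, threshold):
    """Method 2: turn the continuous 1-5 annotations into binary labels
    for the auxiliary classification task at a given threshold."""
    return [(ex["text"], int(ex[trait] >= threshold)) for ex in examples]
```

In our experiments the Method 1 threshold is fixed at 4, while Method 2 sweeps thresholds over $\{2.5, 3, 3.5, 4, 4.5\}$.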
Note that under this approach, precise control over changing the personality traits to a specific desired set of values, while maintaining language generation quality, is generally non-trivial. \subsubsection{\textbf{Method 2:}} \label{method2} In this approach, which is closely related to Method 1, we start by finetuning a pretrained model on personality-annotated text data, but we use an auxiliary classification task instead of the original text generation objective. In particular, using the text annotated with a specific Big Five factor, we finetune the model on a binary classification task; for example, for \emph{Extraversion}, the task is to classify text responses as exhibiting either the extroverted or the introverted trait. Once the model has been finetuned via the auxiliary task, we use the same model weights for text generation. Irrespective of the method, we evaluate the resulting model using the process discussed in Section~\ref{ss:evaluate} above and report the altered trait measurements, if any. \subsection{Big Five personality traits} Our study quantifies personality traits using the Big Five model, also known as the \emph{five-factor model}~\citep{digman1990personality}. Under this model, personality can be reduced to the following five core factors: \begin{itemize} \item \emph{Extraversion}: sociable and energetic versus reserved and solitary. \item \emph{Neuroticism}: sensitive and nervous versus secure and confident. \item \emph{Agreeableness}: trustworthy, straightforward, generous, and modest versus unreliable, complicated, stingy, and boastful. \item \emph{Conscientiousness}: efficient and organized versus sloppy and careless. \item \emph{Openness}: inventive and curious versus dogmatic and cautious. \end{itemize} The \emph{Neuroticism} factor has a negative connotation, in contrast to the other four factors; therefore, in the rest of the paper we estimate \emph{Emotional stability} instead, to be consistent with the other factors.
Assessment of these personality traits typically makes use of two types of data sources: self-reports and peer reports (e.g., from friends, colleagues). Of the two, the more popular approach is via self-reports, in which people describe how they see themselves while responding to a personality assessment questionnaire. For example, a participant is expected to respond to statements such as ``I am someone who is outgoing, sociable'' on a Likert-type scale (e.g., from 1 = strongly disagree to 5 = strongly agree). Self-reports tap people's explicit self-concepts about their traits, which are parts of their identities. Peer reports, on the other hand, help understand how an individual is perceived by the people around them. Unlike personality recognition from self-reports, perceived personality analysis targets the personality attributed to an individual through their interactions with others. These peers fill out a similar personality assessment questionnaire about the individual, which then determines the perceived personality of that individual. In this study, our approach to measuring personality traits is analogous to self-reports. Moreover, peer reports would require users to fill out a questionnaire on how they perceive the language models, which, while feasible, is beyond the scope of our work. \emph{We hypothesize that, when prompted, the language models generate text responses that carry the personality traits of the datasets they were trained upon.} Subsequently, we process the text responses generated by language models using auxiliary prediction models (which can be quite sophisticated themselves) to quantify their personalities. In addition, we also quantify the personality traits of the datasets used to train these language models as a way to partially validate our hypothesis. \subsection{Zero-shot classifier} ZSC, proposed by~\citep{yin2019benchmarking}, effectively predicts the class label without any prior training data pertaining to that label.
Compared to traditional supervised learning approaches, which rely on a large number of examples for each class, the key idea of ZSC is the semantic transfer of information from observed labels to unseen labels. ZSC uses a Natural Language Inference (NLI) model, which is a pre-trained sequence-pair transformer/classifier that uses both a \emph{premise} and a \emph{hypothesis} input to predict whether the hypothesis is true (entailment), false (contradiction), or undetermined (neutral) given the premise. For instance, the Bart-large model is an NLI model trained on the MultiNLI (MNLI) dataset~\citep{williams2017broad}, a collection of 433,000 multi-genre spoken and written sentence pairs annotated with textual entailment information. While this model is publicly available, performing inference with it requires extensive computational resources. Hence, we use a distilled version of Bart-large-mnli~\citep{patelvalhalla} created using the \emph{No Teacher Distillation} idea to speed up the inference process without sacrificing much performance. Our work comprehensively explores different ways to set up ZSC for accurately assessing personality traits from a given text response. \subsection{Language Models in Open Ended Text Generation}\label{sec:lm} We study multiple \emph{pretrained} language models that differ in their training strategy and corpora. All these models make use of auto-regressive language generation, which is based on the assumption that the probability distribution of a word sequence can be decomposed into the product of conditional next word distributions: \begin{equation} P(w_{1:T}| W_{0}) = \prod_{t=1}^{T} P(w_t|w_{1:t-1}, W_{0}), \end{equation} where $w_{1:0} := \varnothing$ and $W_0$ is the initial context word sequence.
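The factorization above can be illustrated with a toy auto-regressive sampler; the bigram table below is a made-up stand-in for $P(w_t|w_{1:t-1}, W_{0})$ (not any of the models studied here), and greedy decoding is used for simplicity.

```python
# Toy illustration of auto-regressive generation. The bigram
# probabilities are invented for illustration, not a trained model.
BIGRAMS = {
    "i": {"am": 0.7, "think": 0.3},
    "am": {"outgoing": 0.6, "<eos>": 0.4},
    "outgoing": {"<eos>": 1.0},
    "think": {"<eos>": 1.0},
}

def generate(context, max_len=10):
    """Greedily pick the most likely next word until <eos> is produced."""
    words = list(context)
    for _ in range(max_len):
        dist = BIGRAMS[words[-1]]
        w = max(dist, key=dist.get)  # argmax of P(. | previous word)
        if w == "<eos>":
            break
        words.append(w)
    return words

def seq_prob(words):
    """P(w_{1:T} | W_0) as the product of the conditional probabilities."""
    p = 1.0
    for prev, w in zip(words, words[1:]):
        p *= BIGRAMS[prev][w]
    return p

sentence = generate(["i"])        # ["i", "am", "outgoing"]
probability = seq_prob(sentence)  # 0.7 * 0.6 = 0.42
```

In a real model the conditional distribution comes from a neural network and decoding is typically done by sampling rather than the greedy rule used here.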
The length $T$ of the word sequence is usually determined on-the-fly and corresponds to the timestep $t$ at which a special end-of-sentence (EOS) token is generated from $P(w_t|w_{1:t-1}, W_{0})$. Below we briefly discuss the language models studied in the paper. \subsubsection{\textbf{GPT-2:}} GPT-2 is a transformer-based language model that is trained with a causal language modeling objective: predicting the next word given a sequence of previous words~\citep{radford2019language}. GPT-2 was pretrained on the WebText dataset that was collected by scraping and filtering web pages from sources such as Reddit (a popular social networking website). \subsubsection{\textbf{GPT-3:}} GPT-3 is the third release in the GPT series and an upgraded version of GPT-2. The GPT-3 model has 175 billion parameters~\citep{brown2020language}, over 10x the size of its predecessor, GPT-2. With its superior performance, GPT-3 can generate text that human evaluators typically find difficult to distinguish from text written by humans. GPT-3 was pretrained on an open-source dataset called \emph{Common Crawl}, and other text corpora from sources such as Wikipedia (a popular online encyclopedia). \subsubsection{\textbf{TransformerXL:}} TransformerXL is a transformer-based language model capable of learning dependencies beyond a fixed length without disrupting temporal coherence~\citep{dai2019transformer}. The model's architecture comprises a segment-level recurrence mechanism and a novel positional encoding scheme, which together capture longer-term dependencies and resolve the so-called context fragmentation problem. TransformerXL was pretrained on the WikiText language modeling dataset, a collection of over 100 million tokens extracted from the set of verified \emph{good} and \emph{featured} articles on Wikipedia. \subsubsection{\textbf{XLNET:}} XLNET is an extension of the TransformerXL model~\citep{yang2019xlnet}.
The model learns bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization orders. The auto-regressive objective provides a natural way to use the product rule for factorizing the joint probability of the predicted tokens, eliminating the specific independence assumption made in BERT~\citep{devlin2018bert}. This model was trained on the BooksCorpus~\citep{zhu2015aligning} and English Wikipedia datasets in a self-supervised fashion. \subsection{Personality Measures} Researchers have used various schemes for personality modeling, such as the 16PF~\citep{schuerger2000sixteen}, EPQ-R~\citep{miles2004eysenck}, Myers–Briggs Type Indicator (MBTI)~\citep{miles2004eysenck}, and three-trait personality models (PEN)~\citep{eysenck2012model}, among others. For instance, MBTI is one of the most widely adopted personality measures. It relies on the theory that random variation in human behavior is quite orderly and consistent, owing to certain basic differences in the way people prefer to use perception and judgment. The MBTI personality measure sorts people into one of two categories along each of four dimensions: introversion versus extroversion; sensing versus intuiting; thinking versus feeling; and judging versus perceiving. Another popular measure used in the literature on automated personality detection is the Big Five personality traits measure~\citep{digman1990personality} given by: \emph{Extraversion, Neuroticism, Agreeableness, Conscientiousness, and Openness}. In this work, we estimate the Big Five personality traits of language models and their underlying datasets using a novel methodology described in Section~\ref{sec:Methods}. \subsection{Methods for Automatic Personality Detection} Automatic detection of personality traits from text has gained significant attention in Natural Language Processing research due to its applicability in various fields.
~\citep{pennebaker1999linguistic} compiled a dataset of anonymous essays tagged with the authors' personalities based on the Big Five traits. The authors used the so-called Linguistic Inquiry and Word Count (LIWC) features to determine the correlation between the essays and personality.~\citep{liu2016analyzing} used deep learning-based models in combination with the atomic features of text, i.e., the characters, to predict personality traits of individuals using hierarchical and vectorized word and sentence representations.~\citep{akrami2019automatic} developed a model that can extract Big Five personality traits from text using machine learning techniques.~\citep{jeremy2019identifying} performed experiments to automatically predict a user's personality based on Big Five personality traits on Twitter.~\citep{ribeiro2020beyond} proposed an evaluation methodology and discussed the accompanying tool for comprehensive behavioral testing of NLP models.~\citep{mehta2020recent} provides an overview of state-of-the-art machine learning models for automatic personality detection with a specific focus on multi-modal approaches. Recently~\citep{zero-shotclassify} used Zero-shot learning (ZSL) to classify text responses from a self-report questionnaire in terms of the Big Five personality traits. Through their experiments, the authors show that a strong positive relationship (e.g., correlation) exists between the ZSL scores and the scores on the self-report questionnaire for each specific trait. Building upon their work, we quantify the traits of our models by using the ZSL framework (see Section~\ref{sec:Methods}). Our novel approach overcomes the drawbacks of the previous work by exploring all the possible scenarios for defining the personality trait labels, and thus robustly adapts the ZSL framework to language models. 
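For illustration, NLI logits are commonly converted into a zero-shot label score by discarding the neutral class and renormalizing over the contradiction and entailment logits; the sketch below mirrors this common convention and is not necessarily the exact scoring used in our experiments.

```python
import math

def zsc_score(entail_logit, contra_logit):
    """Convert NLI logits for one (premise, hypothesis) pair into a
    zero-shot score: softmax over (contradiction, entailment), keeping
    P(entailment) as the probability that the label applies."""
    e = math.exp(entail_logit)
    c = math.exp(contra_logit)
    return e / (e + c)

# Equal logits give an uninformative score of 0.5; a higher entailment
# logit pushes the score toward 1.
score = zsc_score(2.0, 0.0)
```

Here the premise would be a generated text response and the hypothesis a template such as ``This text is about extraversion''; the resulting score serves as the trait measurement for that label.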
\subsection{Study of Biases in Language Generation} Recent works have explored multiple biases that are learned by language models, which may sometimes be at odds with prevailing societal values.~\citep{bolukbasi2016man} quantitatively demonstrate that word embeddings contain biases in their geometry that reflect gender stereotypes present in the broader society.~\citep{sheng2019woman} perform experiments to analyze different textual contexts where biases can occur for different demographics in NLG systems.~\citep{bordia2019identifying} evaluate the magnitude of gender bias in word-level language models that are trained on a text corpus. The authors of~\citep{nadeem2020stereoset} evaluate popular models like BERT~\citep{devlin2018bert}, GPT2~\citep{radford2019language}, and XLNET~\citep{yang2019xlnet} using a large-scale dataset and show that these models exhibit strong stereotypical biases in four domains: gender, profession, race, and religion. Our work is complementary to all these studies in the following sense: while we aim to understand human tendencies captured by language models similar to these prior studies, our narrow but well-defined focus on characterizing the learned personality traits and potentially altering them is different, novel, and, to our knowledge, a first of its kind.
\section{Introduction} \label{Introduction} This work is the continuation of our series of papers~\cite{MBFB13, BMFB13, BMB13a}, where we computed the next-to-next-to-leading spin-orbit effects in the dynamics and gravitational radiation of black hole binary systems. These next-to-next-to-leading contributions are 2PN $\sim 1/c^{4}$ orders beyond the leading spin-orbit effect which arises at 1.5PN $\sim 1/c^{3}$ order --- thus being of absolute 3.5PN $\sim 1/c^{7}$ order.\footnote{As usual we refer to $n$PN as the post-Newtonian (PN) terms with formal order $\mathcal{O}(c^{-2n})$.} More specifically, we derived in Ref.~\cite{MBFB13} the corresponding contributions to the equations of motion in harmonic coordinates, and proved the equivalence of our result with the one obtained previously within the ADM Hamiltonian formalism~\cite{HS11so, HSS13}. In Ref.~\cite{BMFB13} we presented explicit results for the conserved integrals of the motion, the precession equations for the spins and the near-zone PN metric. In Ref.~\cite{BMB13a} we obtained the corresponding results for the radiative multipole moments, energy flux and orbital phasing. In the present paper, we address the computation of the tail contributions to the emitted energy flux and to the phasing of the binary to the next-to-leading order, which corresponds to 4PN $\sim 1/c^{8}$, thus extending the computation performed in Ref.~\cite{BBF11} where these tail effects were obtained at the leading 3PN $\sim 1/c^{6}$ order. Hereafter we shall refer to the works~\cite{BBF11} and~\cite{BMB13a} as Papers~I \& II respectively. The above PN counting for spin effects refers to maximally spinning black holes. 
In keeping with the conventions used in Papers~I \& II, we use as a spin variable $S\equiv c S_{\mathrm{true}} = G m^{2} \chi$, where $m$ is the compact body's mass and $S_{\mathrm{true}}$ has the dimension of an angular momentum, with $\chi$ the dimensionless spin parameter, which is 1 for a maximally spinning Kerr black hole. With this definition, the spins of the two bodies are considered as ``Newtonian'' quantities, and all spin effects include (at least) an explicit $1/c$ factor with respect to non-spinning effects. One should keep in mind that the spin-orbit effects will be formally half a PN order smaller --- and our computations will thus be half a PN order more accurate --- for non-maximally spinning objects like neutron stars. Computing high-order PN corrections to the gravitational waveform emitted by compact binaries permits a better comparison with numerical relativity results, and improves the accuracy of the templates that will be used in the data analysis of gravitational wave ground-based detectors such as LIGO, Virgo and KAGRA, and, further ahead, space-based LISA-like detectors. Including the effects of spins is essential, as recent astrophysical evidence indicates that stellar-mass black holes~\cite{AK01, Strohmayer01, McClint06, Gou11, Nowak12} and supermassive black holes \cite{FM05, BrennR06, Brenn11} (see Ref.~\cite{Reynolds13} for a review) can be generically close to maximally spinning. The presence of spins crucially affects the dynamics of the binary, in particular leading to orbital plane precession if they are not aligned with the orbital angular momentum (see for instance \cite{CF94, ACST94}), and to strong modulations in the observed signal frequency and phase. The spin-orbit effects have been known at the leading order (1.5PN) since the seminal works~\cite{BOC75, BOC79, KWW93, K95}. 
They have been extended more recently to the next-to-leading order (2.5PN) in Refs.~\cite{TOO01,FBB06, DJSspin, Levi10, Porto10} for the equations of motion and in Ref.~\cite{BBF06} for the radiation field. Spin-spin interactions are also known: see Refs.~\cite{K95, GR06, Porto06, BFH12} for the leading (2PN) order in the equations of motion and radiation field;~\cite{HSS10, SHS08b, PR08a, PR08b, Levi08} for the next-to-leading (3PN) order in the equations of motion; and~\cite{HS11ss, Levi12} for the next-to-next-to-leading (4PN) order in the equations of motion for the coupling of different spins. In line with Papers I \& II, we use the multipolar post-Newtonian approach to gravitational radiation, which combines a multipolar-post-Minkowskian expansion for the vacuum field in the exterior of the matter source~\cite{BD86}, together with a matching to the post-Newtonian field inside the source~\cite{B98mult} (see Ref.~\cite{Bliving} for a review). In that formalism, the tails, which are physically due to the backscatter of linear waves from the curvature of space-time generated by the total mass of the source, appear as integrals over the past of the source, which enter the relationships between the radiative multipole moments which are observed at infinity from the source, and the source-rooted multipole moments. From a data analysis point of view, such tail contributions are very important features of the waveform of inspiralling compact binaries, and will likely be decoded by the next generation of detectors, \textit{i.e.} the advanced versions of LIGO and Virgo on ground, and by the future LISA-like detectors in space. 
More specifically, we shall show, using an estimate of the number of cycles of the waveform in the appropriate frequency bands (based on the Taylor T2 approximant), that the spin-orbit tail contribution at leading and next-to-leading orders is relevant to the future data analysis of these detectors and should be included in the gravitational wave templates. The plan of this paper is as follows. In Sec.~\ref{sec:Tails} we briefly recall the general formalism for gravitational wave generation and the various types of contributions to the waveform and flux, including the tails. We also show that, at the 4PN order and at the spin-orbit level for circular orbits, the only contribution to the flux originates from the tails. In Sec.~\ref{sec:Dynamics}, we describe the dynamics of the precessing binary, and we give an explicit analytical solution for the precession, formally valid up to any PN order but neglecting radiation reaction and limited to the spin-orbit level. In Sec.~\ref{sec:Results}, we provide the necessary expressions for the source moments (taken from Paper II), explain our calculations of the tail integrals both in the Fourier and time domains, and give our final results for the emitted flux and the orbital phasing of the binary. Appendix A provides some further technical explanations. \section{Gravitational wave tails in the energy flux} \label{sec:Tails} \subsection{Radiative versus source multipole moments} The total gravitational-wave energy flux, emitted in all directions around the source, is \begin{equation}\label{fluxdef} \mathcal{F} \equiv \left(\frac{\mathrm{d}\mathcal{E}}{\mathrm{d} t}\right)^\mathrm{GW} \equiv \left(\int \mathrm{d}\Omega\,\frac{\mathrm{d}\mathcal{E}}{\mathrm{d} t\,\mathrm{d}\Omega}\right)^\mathrm{GW}\,, \end{equation} where $\mathcal{E}$ denotes the energy carried away in the gravitational waves. 
In the most general case the flux is given as an infinite series of multipolar contributions (starting at the quadrupole level $\ell=2$), by~\cite{Th80} \begin{equation}\label{flux} \mathcal{F} = \sum_{\ell = 2}^{+ \infty} \frac{G}{c^{2\ell +1}}\,\biggl[ \frac{(\ell+1)(\ell+2)}{(\ell-1) \ell \, \ell! (2\ell+1)!!} U_L^{(1)} U_L^{(1)} + \frac{4\ell (\ell+2)}{c^2 (\ell-1) (\ell+1)! (2\ell+1)!!} V_L^{(1)} V_L^{(1)}\biggr]\,. \end{equation} The mass-type radiative multipole moments $U_L$ and the current-type ones $V_L$ parametrize (by definition) the asymptotic transverse-traceless spatial waveform at leading order in the distance to a general matter source. Consequently they also parametrize the various gravitational wave fluxes like the energy flux.\footnote{The notation for multi-indices and symmetric-trace-free (STF) tensors like $U_L$ and $V_L$ is the same as in Papers I \& II. Thus we denote by $L=i_1\cdots i_\ell$ a multi-index composed of $\ell$ multipolar spatial indices $i_1, \cdots, i_\ell$ ranging from 1 to 3. In the case of summed-up (dummy) multi-indices $L$, we do not write the $\ell$ summations from 1 to 3 over their indices. Time derivatives are indicated with a superscript $(n)$.} The radiative moments are functions of the retarded time $T_R\equiv T-R/c$ in a radiative coordinate system which by definition is a system for which $T_R$ coincides with a null coordinate asymptotically in the limit $R \equiv \vert X^i\vert\to\infty$. In order to define a wave generation formalism, the radiative moments $U_L(T_R)$ and $V_L(T_R)$ are to be related to the matter content of the source. This is done in two steps. First, they are expressed in terms of some ``canonical'' multipole moments $M_L$ and $S_L$. The relations between the radiative moments $U_L$, $V_L$ and the canonical ones $M_L$, $S_L$ encode the non-linearities in the wave propagation between the source and the detector.
Those relations are re-expanded in a PN approximation and are then seen to contain, at the leading 1.5PN order, the contribution of the gravitational-wave tails, which take the form of ``hereditary'' type integrals, formally depending on all the infinite past of the source. Explicitly we have~\cite{BD92, B95} \begin{subequations} \label{tails} \begin{align} U_L(T_R) &= M_L^{(\ell)}(T_R) + \frac{2 G M}{c^3} \int_0^{+\infty}\! \mathrm{d}\tau \, M_L^{(\ell +2)}(T_R-\tau) \biggl[\ln \biggl(\frac{\tau}{2\tau_0} \biggr)+ \kappa_\ell \biggr] +\mathcal{O}\Bigl(\frac{1}{c^5}\Bigr)\, ,\label{eq:tailsU}\\ V_L(T_R) &= S_L^{(\ell)}(T_R) + \frac{2 G M}{c^3} \int_0^{+\infty}\! \mathrm{d}\tau \, S_L^{(\ell +2)}(T_R-\tau) \biggl[\ln \biggl(\frac{\tau}{2\tau_0} \biggr)+ \pi_\ell \biggr] +\mathcal{O}\Bigl(\frac{1}{c^5}\Bigr)\, .\label{eq:tailsV} \end{align} \end{subequations} The constant ADM mass $M$ of the source (or mass monopole) is responsible for the backscattering of the linear waves producing tails. The logarithmic kernels of the tail integrals involve a freely specifiable time scale $\tau_0$ entering the relation between the radiative time $T_R$ and the corresponding retarded time $t_r\equiv t - r/c$ in harmonic coordinates: \begin{equation}\label{TR} T_R=t_r-\frac{2GM}{c^3}\ln\left(\frac{r}{c \tau_0}\right)\,. \end{equation} The numerical constants $\kappa_\ell$ and $\pi_\ell$ appearing in Eqs.~\eqref{tails} (which depend on the choice of harmonic coordinates used to cover the source) are given by \begin{subequations}\label{kappapi} \begin{align} \kappa_\ell &= {2\ell^2 +5\ell+4\over \ell(\ell+1)(\ell+2)} + \sum^{\ell-2}_{k=1} {1\over k} \,,\\ \pi_\ell &= {\ell-1\over \ell(\ell+1)} + \sum^{\ell-1}_{k=1} {1\over k} \,. 
\end{align} \end{subequations} Since spin-orbit effects start at order $\mathcal{O}(c^{-3})$ in the mass-type moments and at order $\mathcal{O}(c^{-1})$ in the current-type moments~\cite{BBF06}, one can easily check that in order to obtain the spin-orbit terms at 4PN in the flux we need only the tails in the mass and current quadrupole moments $U_{ij}$ and $V_{ij}$ (\textit{i.e.} having $\ell=2$), and these will have to be computed at 1PN relative order, and in the mass and current octupoles $U_{ijk}$ and $V_{ijk}$ ($\ell=3$), to be computed at Newtonian order. As a second step, the canonical moments $M_L$ and $S_L$ are related to a particular set of six source-rooted multipole moments, that admit explicit analytic closed form expressions as integrals over the matter and gravitational fields in the source~\cite{B98mult}. This new set of moments can be divided into two ``source'' multipole moments $I_L$ and $J_L$ (mass-type and current-type), and four so-called ``gauge'' multipole moments $W_L$, $X_L$, $Y_L$, $Z_L$ which play a role only at high post-Newtonian orders. For our purpose, it will be sufficient to know that $M_L$ and $S_L$ coincide with the source moments $I_L$ and $J_L$ up to small PN remainders $\mathcal{O}(c^{-5})$: \begin{subequations}\label{MLSL}\begin{align} M_L &= I_L + \mathcal{O}\left(\frac{1}{c^5}\right)\,,\\ S_L &= J_L + \mathcal{O}\left(\frac{1}{c^5}\right)\,. \end{align}\end{subequations} The PN remainders $\mathcal{O}(c^{-5})$ in both Eqs.~\eqref{tails} and \eqref{MLSL} contain different sorts of non-linear interactions between (time derivatives of the) multipole moments. These can be divided into \textit{hereditary} terms~\cite{BD92}, which involve various integrals over the whole past of the multipole moments like in the tails~\eqref{tails}, and \textit{instantaneous} terms which depend only on the current values of the multipole moments at instant $T_R$. 
Here our nomenclature refers to terms which are hereditary or instantaneous functionals of the source and gauge moments $I_L$, $J_L$, $W_L$, $\cdots$, $Z_L$ (\textit{i.e.} after due replacement of the canonical moments $M_L$, $S_L$ in terms of $I_L$, $J_L$, $\cdots$, $Z_L$). For instance the hereditary terms in Eqs.~\eqref{tails} comprise at order $\mathcal{O}(c^{-5})$ the so-called non-linear memory effect which is a quadratic interaction between multipole moments,\footnote{Actually this effect appears only in the $\mathcal{O}(c^{-5})$ correction of the mass-type radiative multipole moment $U_L$, but not in the current-type radiative moment $V_L$.} and, at order $\mathcal{O}(c^{-6})$, the so-called tail-of-tail term which is cubic. The non-linear memory integral is simply given by an anti-derivative of an instantaneous term, while the tail-of-tail involves a logarithmic kernel similar to the one in Eqs.~\eqref{tails} --- although more complicated. In addition there are many couplings between moments which are just instantaneous; see the explicit formulas given in Refs.~\cite{BFIS08, FMBI12}. Recalling that spin-orbit contributions bring at least an additional factor $1/c$, we see that we should in principle take into account all these instantaneous corrections up to the order $\mathcal{O}(c^{-7})$ in the mass quadrupole moment $U_{ij}$ and $\mathcal{O}(c^{-5})$ in the current quadrupole moment $V_{ij}$ (as given in Refs.~\cite{BFIS08,FMBI12}). \subsection{Contributions to the flux for circular orbits} \label{subsec:contributionscircular} We now restrict ourselves to compact binaries whose orbit has been circularized by the emission of gravitational radiation, so that it can be considered as quasi-circular. That is to say, the orbital elements (except for precession effects due to the presence of spins) are assumed to vary only on long timescales, because of radiation reaction. 
This restriction to quasi-circular orbits will also allow us to model simply the dynamics of the binary in the past and therefore to compute the hereditary tail integrals \eqref{tails}. Anticipating the notation used for compact binaries in the following section, the orbital separation $r$ and orbital frequency $\omega$ will thus be assumed to vary according to\footnote{As we shall check later, the orbital frequency for circular orbits is constant at linear order in the spins.} \begin{equation}\label{romegadot} \dot{r}=\mathcal{O}\left(\frac{1}{c^5}\right)\,, \quad\dot{\omega}=\mathcal{O}\left(\frac{1}{c^5}\right)\,. \end{equation} An important point is that, when restricting the calculation to quasi-circular orbits, purely instantaneous terms cannot give any spin-orbit contribution at 4PN order in the energy flux~\eqref{flux}. We show this fact by a simple dimensional analysis. Indeed, we can write the general structure of such instantaneous terms in the flux as \begin{equation}\label{fluxinst} \left(\mathcal{F}\right)_\text{inst} \sim \sum \,\frac{(G m)^n}{c^a \,r^k} \,(n,v,S) \,(\bm{v}^2)^p\,(\bm{n}\cdot\bm{v})^q\,, \end{equation} where $m$ is either of the two masses in the binary system, $\bm{v}^2\equiv\dot{r}^2+r^2\omega^2$ is the squared Euclidean norm of the relative velocity between the two bodies, and $\bm{n}\cdot\bm{v}\equiv\dot{r}$ is the Euclidean scalar product of the unit separation vector between the two particles with their relative velocity. We are assuming that the expression of the flux is given in the frame of the center of mass. There is no dependence on the relative acceleration since it is assumed to have been consistently replaced by the equations of motion --- the normal practice in PN approximations.
Note that since we are dealing with instantaneous (non-hereditary) terms, the velocity $\bm{v}$ and unit direction $\bm{n}$ are taken at the same time, which is the current instant $T_R$; there is no integration over some intermediate time in between which would couple together some of these vectors at different instants. The dependence on the two spin vectors can only arise through the mixed product $(n,v,S)\equiv\varepsilon_{ijk}n^iv^jS^k$, where $S^i$ denotes either of the two spin vectors, with any of the usual conventions adopted for them. This is easily proven if one remembers that the spin vectors are actually pseudo-vectors with respect to parity transformations, while the flux must be a scalar, \textit{i.e.} not a pseudo-scalar. In Eq.~\eqref{fluxinst} we are considering only terms linear in the spins, neglecting quadratic spin-spin coupling terms. As recalled in the Introduction, with the convention used in this series of papers~\cite{MBFB13,BMFB13,BMB13a}, the dimension of the spin tensor and of all spin variables is that of an angular momentum times the speed of light $c$. With that convention it is easy to check that in order for the flux to have the correct dimension of a power (energy per unit time), we need $k=n+2$ and $2p+q+2n=a$. For a 4PN term, we should have $a=13$ in Eq.~\eqref{fluxinst} because this corresponds to 4PN $\sim 1/c^{8}$ beyond the leading radiation reaction at 2.5PN $\sim 1/c^{5}$ order, hence 6.5PN $\sim 1/c^{13}$ absolute order. Hence we deduce that $q=13-2p-2n$. The point is that $q$ is then an odd integer for a 4PN term, and thus that this term contains at least one factor $\bm{n}\cdot\bm{v}$. Since for quasi-circular orbits we have $\bm{n}\cdot\bm{v}=\dot{r}=\mathcal{O}(c^{-5})$, the real order of magnitude of this term is very small, being at least 6.5PN (or 9PN absolute).
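For completeness, here is a sketch of the dimensional check. With $[Gm]=\mathrm{m}^{3}\,\mathrm{s}^{-2}$, $[S]=\mathrm{kg}\,\mathrm{m}^{3}\,\mathrm{s}^{-2}$ (in our convention the spins carry an extra factor of $c$ with respect to an angular momentum), $[v]=\mathrm{m}\,\mathrm{s}^{-1}$ and $[(n,v,S)]=[v][S]$, a generic term of Eq.~\eqref{fluxinst} has dimension
\begin{equation*}
\left[\frac{(G m)^n}{c^a \,r^k} \,(n,v,S) \,(\bm{v}^2)^p\,(\bm{n}\cdot\bm{v})^q\right] = \mathrm{kg}\;\mathrm{m}^{\,4+3n+2p+q-k-a}\;\mathrm{s}^{\,a-3-2n-2p-q}\,.
\end{equation*}
Equating this to the dimension of a power, $[\mathcal{F}]=\mathrm{kg}\,\mathrm{m}^{2}\,\mathrm{s}^{-3}$, the time exponent gives $a=2n+2p+q$, and the length exponent then gives $k=n+2$, in agreement with the constraints used in the argument.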
Thus, we have proved that instantaneous terms (\textit{i.e.} terms which do not involve any hereditary integral) will be negligible for our purposes. Now, let us show that the only truly hereditary integrals which can contribute spin-orbit terms at 4PN order in the flux are the tails given in \eqref{tails}. The tail-of-tail term which appears at order $\mathcal{O}(c^{-6})$ in $U_{ij}$ involves the mass quadrupole moment, and therefore the spin-orbit contributions therein, which are $\mathcal{O}(c^{-3})$ for mass moments, will appear only at higher order. On the other hand, we have already remarked that the non-linear memory integrals at orders $\mathcal{O}(c^{-5})$ and $\mathcal{O}(c^{-7})$ are given by some simple anti-derivatives. They therefore become instantaneous in the energy flux~\eqref{flux}, in which all the radiative moments are differentiated with respect to time; so the previous argument applies to such terms as well. Our conclusion is that the only contributions coming from the spin-orbit effect at the 4PN order in the case of quasi-circular orbits are due to the hereditary tail integrals given in Eqs.~\eqref{tails}. There are no contributions from other hereditary terms or instantaneous ones, either coming from non-linear interactions between canonical moments in the remainders of~\eqref{tails}, or from the correspondence between canonical and source and gauge moments~\eqref{MLSL}.
In particular, we can ignore the 4PN spin-orbit terms in the relative acceleration which is used in this calculation to order-reduce the time derivatives of the moments.\footnote{Such 4PN spin-orbit terms in the equations of motion are instantaneous, and correspond to a 1.5PN spin-orbit modification of the standard 2.5PN radiation reaction force~\cite{W05}.} Notice that this argument about instantaneous terms shows that the arbitrary scale $\tau_0$ used to adimensionalize the logarithmic kernel of the tail integrals~\eqref{tails} will disappear from the final result, since it multiplies an instantaneous term. The same is true for the numerical constants $\kappa_\ell$ and $\pi_\ell$, which are irrelevant for this calculation. We emphasize that all these statements are limited to quasi-circular orbits, neglecting their possible eccentricity, and to the computation of the energy flux. They do not apply to the computation of the full waveform with its two polarizations. The two polarizations $h_+$ and $h_\times$, although scalars, depend on the direction of the source and on the polarization vectors, so the structure analogous to \eqref{fluxinst} is more complicated. The calculation of hereditary integrals like the tail integrals in Eqs.~\eqref{tails} in principle requires knowing explicitly the dynamics of the binary system in the past. One must first supplement the computation with some physical assumption regarding the behaviour of the source in the infinite past. Following Refs.~\cite{BD92,BS93} and Paper~I, we can assume that at very early times the binary system was formed from freely falling black holes moving initially on some hyperbolic-like orbits. This ensures that the integrals in~\eqref{tails} are convergent (see \textit{e.g.} the discussion in Sec.~II B of Paper~I).
It was then shown~\cite{BD92,BS93} that under such an assumption the tail integrals are very weakly sensitive to the past history of the source, and can essentially be computed by inserting the current dynamics (at current time $T_R$) of the binary into the integrals --- \textit{i.e.} neglecting the secular changes of the orbit by radiation reaction over the past. Quite naturally, as proved in the Appendix of Ref.~\cite{BS93}, one can proceed in that way modulo some PN remainder terms of the order of the radiation reaction scale, \textit{i.e.} $\mathcal{O}(c^{-5})$ and more precisely $\mathcal{O}(\ln c/c^{5})$. Nevertheless, even if we can always neglect the evolution of the orbit by gravitational radiation in the past, one still has to worry about the details of the current dynamics, which has to be plugged into the tail integrals and consistently integrated. This is dealt with in the next section. \section{Analytical solution for the spin-orbit dynamics} \label{sec:Dynamics} In this section, we present an analytical solution for the dynamics of the binary of compact spinning objects on quasi-circular orbits, including the precession effects due to the presence of the spins. This solution will be valid formally at any post-Newtonian order, if radiation reaction effects are neglected, but will be restricted to the linear order in spins. The leading-order solution was already obtained in Paper~I, but we shall show that the solution found there in fact remains valid at higher PN orders, provided that we restrict to spin-orbit contributions. To show this, we parallel the presentation given in Paper~I, repeating all the necessary definitions for completeness, and pointing out where the validity of the solution can be extended to higher order. 
\subsection{Equations of motion and spin precession for quasi-circular orbits} Throughout this paper, we will work in the center-of-mass frame, defined by the cancellation of the center-of-mass integral of motion $\bm{G}=0$, and we will use conserved-norm spin variables as they are defined in Ref.~\cite{BMFB13}, where a systematic construction, fixing the convention, is proposed.\footnote{Notice that the definition used here for the conserved-norm spin vectors is distinct from the one used in Ref.~\cite{BBF06}. However, the difference between the two variables is of order 2PN and vanishes in the center-of-mass frame. For reference we give here the relation between these two conserved-norm variables: $$\mathbf{S}_1=\mathbf{S}_1^\text{BBF}+\frac{2G m^2}{c^4 r_{12}}\Bigl[(S_1v_1)\bm{v}_2-(S_1v_2)\bm{v}_1\Bigr] + \mathcal{O}\Bigl(\frac{1}{c^6}\Bigr)\,,$$ where $r_{12}$ is the orbital separation and $\bm{v}_{1,2}$ are the two velocities. In Paper~I we worked at leading order, where all spin variables are equivalent.} This choice allows one to write the evolution equations of the spin vectors as simple precession equations, see Eq.~\eqref{eq:defprecession} below, and, as discussed in Papers I \& II, it is crucial when applying the energy balance condition relating the emitted flux and the decrease of the orbital energy, since these variables will be secularly constant. It is convenient to introduce two combinations of the individual spins defined by \begin{equation}\label{eq:defSSigma} \bm{S} \equiv \bm{S}_{1} + \bm{S}_{2} \,, \qquad \bm{\Sigma} \equiv \frac{m}{m_{2}}\bm{S}_{2} - \frac{m}{m_{1}}\bm{S}_{1} \,, \end{equation} with $m\equiv m_{1}+m_{2}$ the total mass. Later we will also use the symmetric mass ratio $\nu \equiv m_{1}m_{2}/m^{2}$ and the mass difference $\delta m \equiv m_{1} - m_{2}$. 
\begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{geometry.pdf} \caption{Geometric definitions to describe the precessional motion of the binary, identical to the ones used in Paper~I. The conserved angular momentum $\bm{J}$ gives a fixed direction $\bm{z}$, completed with two constant unit vectors $\bm{x}$ and $\bm{y}$ forming with $\bm{z}$ an orthonormal triad; $\bm{\ell}$ is the normal to the instantaneous orbital plane (shown in yellow), described by the Euler angles $\alpha,\iota$, and defines the auxiliary vectors $\bm{x}_{\ell}$, $\bm{y}_{\ell}$, see Eqs.~\eqref{eq:defxlyl}. The position of the unit separation vector $\bm{n}$ defines the third Euler angle $\Phi$, and the moving triad is completed by $\bm{\lambda}=\bm{\ell}\times\bm{n}$.}\label{fig:geometry} \end{figure} In the following, we will extensively employ the total angular momentum of the system, that we denote by $\bm{J}$, and which is conserved, \begin{equation}\label{eq:Jconserved} \frac{\mathrm{d}\bm{J}}{\mathrm{d} t} = 0 \,, \end{equation} neglecting radiation-reaction effects. It is customary to decompose the conserved angular momentum as $\bm{J} = \bm{L} + \bm{S}/c$, with $\bm{S}$ being specified by our choice of conserved-norm spin variables, and with $\bm{L}$ including both spin and non-spin PN contributions. We shall give $\bm{L}$ explicitly in Eq.~\eqref{eq:L} below for the case of circular orbits. To describe the relative motion of the binary in the center-of-mass frame, we keep the same geometric definitions as in Paper~I, which are recalled in Fig.~\ref{fig:geometry}. We introduce an orthonormal triad $(\bm{n},\bm{\lambda},\bm{\ell})$ defined as follows: $\bm{n}$ is the unit-norm separation vector, such that $\bm{x}=r\bm{n}$ with $\bm{x}\equiv\bm{y}_{1}-\bm{y}_{2}$. 
From the relative velocity $\bm{v}\equiv\bm{v}_{1}-\bm{v}_{2}$, we define the unit normal $\bm{\ell}$ to the instantaneous orbital plane, as $\bm{\ell}=\bm{n}\times\bm{v}/|\bm{n}\times\bm{v}|$ (excluding the head-on collision case). The orthonormal triad is then completed by $\bm{\lambda}=\bm{\ell}\times\bm{n}$. In the following, the components of a vector on this basis will be denoted by a subscript, for instance $A_{n}\equiv\bm{A}\cdot\bm{n}$. Next, denoting the time derivative by a dot, the orbital angular frequency $\omega$ and precession angular frequency $\varpi$ are defined by $\dot{\bm{n}}=\omega \bm{\lambda}$ and $\dot{\bm{\ell}}=-\varpi \bm{\lambda}$ respectively. This leads to the following system of equations for the time evolution of the triad vectors,\footnote{Notice that we changed our notation with respect to Paper~I; our $\varpi$ corresponds to $-\omega_{\mathrm{prec}}$ there.} \begin{subequations}\label{eq:precessionbasis} \begin{align} \dot{\bm{n}} &= \omega \bm{\lambda} \,, \\ \dot{\bm{\lambda}} &= - \omega\bm{n} + \varpi \bm{\ell} \,, \\ \dot{\bm{\ell}} &= -\varpi \bm{\lambda} \,. \end{align} \end{subequations} We also introduce a fixed orthonormal basis $(\bm{x},\bm{y},\bm{z})$, with the $\bm{z}$ direction along the total angular momentum $\bm{J}$ (which is conserved, as we said, if we neglect radiation reaction effects). It is convenient to introduce Euler angles to mark the position of the binary with respect to this fixed basis. Two additional vectors lying in the orbital plane are defined according to \begin{equation}\label{eq:defxlyl} \bm{x}_{\ell} =\frac{\bm{J}\times\bm{\ell}}{|\bm{J}\times\bm{\ell}|} \,, \qquad \bm{y}_{\ell} = \bm{\ell}\times\bm{x}_{\ell}\,, \end{equation} and the Euler angles $\alpha$, $\iota$, and $\Phi$ are defined as indicated in Fig.~\ref{fig:geometry}. 
The relation between $(\bm{n},\bm{\lambda})$ and $(\bm{x}_{\ell},\bm{y}_{\ell})$ is then \begin{subequations}\label{eq:xlylnlambda} \begin{align} \bm{n} &= \cos\Phi \, \bm{x}_{\ell} + \sin\Phi \, \bm{y}_{\ell} \,, \\ \bm{\lambda} &= -\sin\Phi \, \bm{x}_{\ell} + \cos\Phi \, \bm{y}_{\ell}\,. \end{align} \end{subequations} We also have for the inclination angle $\iota$: \begin{equation}\label{eq:siniotaexpiphi} \sin \iota =\frac{|\bm{J}\times\bm{\ell}|}{|\bm{J}|} \,. \end{equation} Computing the product $\sin \iota \, \bm{x}_{\ell}\cdot(\bm{n}+\mathrm{i}\bm{\lambda})$ in two different ways, using \eqref{eq:defxlyl} and \eqref{eq:xlylnlambda}, yields a relation which will be important in the following: \begin{equation} \sin \iota \, e^{-\mathrm{i} \Phi} = - \mathrm{i} \frac{J_{+}}{|\bm{J}|} \,, \end{equation} where we defined $J_{+}\equiv J_{n} + \mathrm{i} J_{\lambda}$. Using the derivatives of the basis vectors as given by \eqref{eq:precessionbasis}, we arrive at the following system of equations for the time derivatives of the Euler angles: \begin{subequations}\label{eq:derivativeeuler} \begin{align} \frac{\mathrm{d} \alpha}{\mathrm{d} t } &= \varpi\frac{\sin\Phi}{\sin\iota} \,, \\ \frac{\mathrm{d} \iota}{\mathrm{d} t } &= \varpi \cos\Phi \,, \\ \frac{\mathrm{d} \Phi}{\mathrm{d} t } &= \omega - \varpi\frac{\sin\Phi}{\tan\iota} \,. \end{align} \end{subequations} Notice that the only assumption we made in deriving Eqs.~\eqref{eq:derivativeeuler} was to treat the total angular momentum as a constant, that is to say neglecting the radiation reaction effects. The above relations are valid, in particular, for general orbits and not only for quasi-circular ones. They are suitable for insertion into the tail integrals modulo negligible radiation reaction corrections $\mathcal{O}(\ln c/c^{5})$. 
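For completeness, here is a brief sketch of the two evaluations mentioned above, using only Eqs.~\eqref{eq:defxlyl}, \eqref{eq:xlylnlambda}, \eqref{eq:siniotaexpiphi} and the right-handed character of the triad $(\bm{n},\bm{\lambda},\bm{\ell})$:

```latex
% First way, using \eqref{eq:xlylnlambda}:
\sin\iota\,\bm{x}_{\ell}\cdot(\bm{n}+\mathrm{i}\bm{\lambda})
  = \sin\iota\,(\cos\Phi - \mathrm{i}\sin\Phi)
  = \sin\iota\, e^{-\mathrm{i}\Phi}\,.
% Second way: by \eqref{eq:defxlyl} and \eqref{eq:siniotaexpiphi} we have
% \sin\iota\,\bm{x}_{\ell} = (\bm{J}\times\bm{\ell})/|\bm{J}|, hence
\sin\iota\,\bm{x}_{\ell}\cdot(\bm{n}+\mathrm{i}\bm{\lambda})
  = \frac{\bm{J}\cdot(\bm{\ell}\times\bm{n})
    + \mathrm{i}\,\bm{J}\cdot(\bm{\ell}\times\bm{\lambda})}{|\bm{J}|}
  = \frac{J_{\lambda} - \mathrm{i}\,J_{n}}{|\bm{J}|}
  = -\mathrm{i}\,\frac{J_{+}}{|\bm{J}|}\,.
```

Equating the two expressions yields the quoted relation $\sin\iota\, e^{-\mathrm{i}\Phi} = -\mathrm{i}\,J_{+}/|\bm{J}|$.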
The general expression for the relative acceleration $\bm{a}\equiv\mathrm{d}\bm{v}/\mathrm{d} t$ decomposed in the moving frame is given by $\bm{a} = (\ddot{r}-r\omega^{2})\bm{n} +(r\dot{\omega}+2\dot{r}\omega)\bm{\lambda} + r\omega\varpi\bm{\ell}$. In the following, we will restrict ourselves to quasi-circular orbits, where we can impose Eqs.~\eqref{romegadot}, namely $\dot{r},\,\dot{\omega}=\mathcal{O}(c^{-5})$. Thus, the moving point will stay on a sphere of constant radius, and we have \begin{equation}\label{eq:ageneral} \bm{a} = -r\omega^{2}\bm{n} + r\omega\varpi\bm{\ell} + \mathcal{O}\Bigl(\frac{1}{c^5}\Bigr)\,. \end{equation} The component of the acceleration along $\bm{\ell}$, proportional to $\varpi=\mathcal{O}(S)$, is responsible for the slow precession of the orbital plane. All the information about the orbital dynamics of quasi-circular orbits is encoded in two equations: one relating the orbital frequency $\omega$ to the orbital separation $r$, and one relating $\varpi$ to $\omega$. As usual we introduce two dimensionless PN parameters $\gamma$ and $x$, both of order $\mathcal{O}(c^{-2})$ and respectively linked to $r$ and to $\omega$ by \begin{equation}\label{gamx} \gamma \equiv \frac{G m}{rc^{2}}\,, \qquad x \equiv \left(\frac{G m \omega}{c^{3}}\right)^{2/3}\,. \end{equation} We give here $\omega$ and $\varpi$ including the spin-orbit contribution to next-to-leading order, \textit{i.e.} at 2.5PN order; we include all non-spin contributions up to this order, but notice that in fact we shall only need the next-to-leading order for the non-spin terms, \textit{i.e.} 1PN. 
We have (see \textit{e.g.} Ref.~\cite{BMFB13}) \begin{subequations}\begin{align} \label{eq:omega2ofgamma} \omega^2&=\frac{G m}{r^3}\Bigg\{ 1 +\gamma \left(-3+\nu\right)+\gamma^2 \left(6 + \frac{41}{4} \nu + \nu^2\right) +\frac{\gamma^{3/2}}{G m^2} \left[-5S_\ell-3\frac{\delta m}{m}\Sigma _\ell\right] \nonumber \\ &\qquad\qquad+\frac{\gamma^{5/2}}{G m^2} \left[\left(\frac{45}{2} -\frac{27}{2} \nu\right)S_\ell+\frac{\delta m}{m}\left(\frac{27}{2} -\frac{13}{2} \nu\right)\Sigma _\ell\right] \Bigg\} + \mathcal{O}\left(\frac{1}{c^6}\right)\,,\\ \label{eq:varpi} \varpi &= \frac{c^3 x^{3}}{G^2 m^3}\Bigg\{ \left[7S_n+3\frac{\delta m}{m}\Sigma _n\right] +x \left[\left(-3 -12 \nu\right)S_n+\frac{\delta m}{m}\left(-3 -\frac{11}{2} \nu\right)\Sigma _n\right] \Bigg\}+\mathcal{O}\left(\frac{1}{c^7}\right)\,. \end{align} \end{subequations} In the following, we will mostly use the PN parameter $x$ instead of $\gamma$. In fact, we will write down a solution for the dynamics directly from the conserved angular momentum $\bm{J}$ without resorting to the acceleration, so that we will not use the expression of $\varpi$ as such. An important point is that, as shown in Eq.~\eqref{eq:omega2ofgamma}, at linear order in the spins only the components of the conserved-norm spin vectors along $\bm{\ell}$ can contribute to $\omega$. As we shall show in Eq.~\eqref{eq:dtSlambda} below, these components are in fact constant at linear order in spin, when neglecting radiation reaction effects. Thus we can treat the orbital frequency $\omega$ as a constant for our purposes. The central result that encompasses the information we need for our solution of the spin-orbit dynamics is the expression of the conserved angular momentum $\bm{J}$. Again, we give here its expression at 2.5PN order but the non-spin part could be truncated at 1PN order for our purposes. The leading-order spin contribution is just $\bm{S}/c$. 
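As a consistency check, since we will mostly use $x$, Eq.~\eqref{eq:omega2ofgamma} can be inverted perturbatively. Taking the cube root of $x^{3} = (Gm\omega/c^{3})^{2}$ expressed through \eqref{eq:omega2ofgamma} and iterating gives the following sketch of the $\gamma$--$x$ relation, keeping only the 1PN non-spin and leading spin-orbit terms displayed (higher orders are in the remainder):

```latex
\gamma = x\left\{1 + x\left(1-\frac{\nu}{3}\right)
  + \frac{x^{3/2}}{G m^{2}}\left[\frac{5}{3}S_\ell
    + \frac{\delta m}{m}\Sigma_\ell\right]
  + \mathcal{O}\left(\frac{1}{c^{4}}\right)\right\}\,.
```

Note that, consistently with the remark below Eq.~\eqref{eq:varpi}, only the components $S_\ell$ and $\Sigma_\ell$ along $\bm{\ell}$ enter this relation at linear order in spin.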
Having defined $\bm{J}=\bm{L}+\bm{S}/c$, we have then (see \textit{e.g.}~\cite{BMFB13}) \begin{align}\label{eq:L} \bm{L}=\frac{G m^2 \nu}{c\, x^{1/2}} \Bigg\{ & \bm{\ell}\left[ 1 +x \left(\frac{3}{2} + \frac{1}{6} \nu\right) +x^2 \left(\frac{27}{8} -\frac{19}{8} \nu + \frac{1}{24} \nu^2\right)\right]\\ &+\frac{x^{3/2}}{G m^2}\Bigg( \bm{\ell}\left[-\frac{35}{6}S_\ell-\frac{5}{2}\frac{\delta m}{m}\Sigma _\ell\right] +\bm{\lambda}\left[-3S_{\lambda }-\frac{\delta m}{m}\Sigma _{\lambda }\right] +\bm{n}\left[\frac{1}{2}S_n+\frac{1}{2}\frac{\delta m}{m}\Sigma _n\right] \Bigg)\nonumber\\ &+\frac{x^{5/2}}{G m^2}\Bigg( \bm{\ell}\left[\left(-\frac{77}{8} + \frac{427}{72} \nu\right)S_\ell+\frac{\delta m}{m}\left(-\frac{21}{8} + \frac{35}{12} \nu\right)\Sigma _\ell\right]\nonumber\\ &\qquad\qquad+\bm{\lambda}\left[\left(-\frac{7}{2} + 3 \nu\right)S_{\lambda }+\frac{\delta m}{m}\left(-\frac{1}{2} + \frac{4}{3} \nu\right)\Sigma _{\lambda }\right]\nonumber\\ &\qquad\qquad+\bm{n}\left[\left(\frac{11}{8} -\frac{19}{24} \nu\right)S_n+\frac{\delta m}{m}\left(\frac{11}{8} -\frac{5}{12} \nu\right)\Sigma _n\right] \Bigg)\Bigg\} +\mathcal{O}\left(\frac{1}{c^6}\right) \;.\nonumber \end{align} The use of Euclidean conserved-norm spin vectors allows us to write their evolution equations as ordinary precession equations (with $A=1,2$) \begin{equation}\label{eq:defprecession} \frac{\mathrm{d} \bm{S}_{A}}{\mathrm{d} t} = \bm{\Omega}_{A} \times \bm{S}_{A} \,. \end{equation} As already argued in Paper~II, the precession vectors $\bm{\Omega}_{A}$ are necessarily directed along $\bm{\ell}$ at linear order in spin, so we write $\bm{\Omega}_{A} \equiv \Omega_{A} \bm{\ell}$. 
We have $\Omega_{A} = \mathcal{O}(c^{-2})$, and the expression for $\Omega_{1}$ reads \begin{align} \label{eq:Omega1} \Omega_1=\omega\,x\Bigg\{ \left(\frac{3}{4} + \frac{1}{2} \nu -\frac{3}{4}\frac{\delta m}{m}\right) +x \left[\frac{9}{16} + \frac{5}{4} \nu -\frac{1}{24} \nu^2+\frac{\delta m}{m}\left(-\frac{9}{16} + \frac{5}{8} \nu\right)\right]\Bigg\} +\mathcal{O}\left(\frac{1}{c^6}\right) \,, \end{align} with $\Omega_{2}$ being obtained by replacing $\delta m \rightarrow - \delta m$. Using the time derivatives of the basis vectors \eqref{eq:precessionbasis} and the fact that $\bm{\Omega}_{A} \propto \bm{\ell}$, the exact evolution equations of the components of the spins are obtained as \begin{subequations} \begin{align} \frac{\mathrm{d} S^{A}_{n}}{\mathrm{d} t} &= (\omega-\Omega_{A}) S^{A}_{\lambda} \,, \\ \frac{\mathrm{d} S^{A}_{\lambda}}{\mathrm{d} t} &= -(\omega-\Omega_{A}) S^{A}_{n} + \varpi S^{A}_{\ell} \,, \\ \frac{\mathrm{d} S^{A}_{\ell}}{\mathrm{d} t} &= - \varpi S^{A}_{\lambda} \,, \end{align} \end{subequations} which readily translate, at linear order in spin, into \begin{subequations}\label{eq:dtSnlambdal} \begin{align} \frac{\mathrm{d} S^{A}_{n}}{\mathrm{d} t} &= (\omega-\Omega_{A}) S^{A}_{\lambda} \,, \\ \frac{\mathrm{d} S^{A}_{\lambda}}{\mathrm{d} t} &= -(\omega-\Omega_{A}) S^{A}_{n} + \mathcal{O}(S^{2}) \,, \\ \frac{\mathrm{d} S^{A}_{\ell}}{\mathrm{d} t} &= \mathcal{O}(S^{2}) \label{eq:dtSlambda}\,. \end{align} \end{subequations} We see, as stated before, that the spin components along $\bm{\ell}$ are constant, and so is the orbital frequency $\omega$ given by~\eqref{eq:omega2ofgamma}. \subsection{Analytical solution for the spin-orbit dynamics} \label{analsol} We now turn to the derivation of the explicit solution for the dynamics of the binary. 
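At linear order in spin, the first two equations of \eqref{eq:dtSnlambdal} can be combined into a single complex equation, which makes their integration immediate:

```latex
\frac{\mathrm{d}}{\mathrm{d} t}\bigl(S^{A}_{n} + \mathrm{i} S^{A}_{\lambda}\bigr)
  = (\omega-\Omega_{A})\,S^{A}_{\lambda}
    - \mathrm{i}\,(\omega-\Omega_{A})\,S^{A}_{n} + \mathcal{O}(S^{2})
  = -\mathrm{i}\,(\omega-\Omega_{A})
    \bigl(S^{A}_{n} + \mathrm{i} S^{A}_{\lambda}\bigr) + \mathcal{O}(S^{2})\,.
```

Since $\omega$ and $\Omega_{A}$ are constant at this order, $S^{A}_{n} + \mathrm{i} S^{A}_{\lambda}$ is simply a complex exponential $\propto e^{-\mathrm{i}(\omega-\Omega_{A})(t-t_{0})}$.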
We show that two relations from Paper~I which were indicated to be valid neglecting higher PN terms of order $\mathcal{O}(c^{-4})$ are in fact valid formally to any PN order, neglecting radiation reaction and working at linear order in spin. First, considering Eqs.~\eqref{eq:derivativeeuler}, we see that \begin{equation}\label{eq:dtphiplusalpha} \frac{\mathrm{d} (\Phi+\alpha)}{\mathrm{d} t} = \omega + \varpi \sin \Phi \frac{1-\cos \iota}{\sin \iota} = \omega + \mathcal{O}(S^{2}) \,, \end{equation} since both the inclination angle $\iota$ and the precession frequency $\varpi$ are of order $\mathcal{O}(S)$. Thus we arrive at \begin{equation}\label{eq:phiplusalpha} \Phi+\alpha = \phi + \mathcal{O}(S^{2}) \,, \end{equation} introducing the ``carrier'' phase $\phi$ as \begin{equation}\label{eq:defphi} \phi \equiv \int \mathrm{d} t\, \omega = \omega (t-t_{0}) + \phi_{0} \,, \end{equation} with $\phi_{0}$ the reference phase at some time $t_{0}$. Secondly, we turn to Eq.~\eqref{eq:siniotaexpiphi}. From a structural argument already presented in Paper~II, the non-spin part of the angular momentum must be directed along $\bm{\ell}$, since it is a pseudo-vector built only from the vectors $\bm{n}$ and $\bm{\lambda}$. Note that this is valid in fact for general orbits and not only for circular ones. This means that the components of the angular momentum along $\bm{n}$ and $\bm{\lambda}$ come only from the presence of spins, \textit{i.e.} $J_{+} = \mathcal{O}(S)$, as can be seen explicitly in~\eqref{eq:L}. 
Thus, using also \eqref{eq:phiplusalpha}, we have \begin{subequations}\label{eq:siniotaexpiphialpha} \begin{align} \sin \iota \, e^{-\mathrm{i} \Phi} &= - \mathrm{i} \frac{J_{+}}{|\bm{L}_{\mathrm{NS}}|} + \mathcal{O}(S^{2}) \,, \\ \sin \iota \, e^{\mathrm{i} \alpha} &= - \mathrm{i} \frac{J_{+}}{|\bm{L}_{\mathrm{NS}}|}e^{\mathrm{i} \phi} + \mathcal{O}(S^{2}) \,,\label{eq:siniotaexpialpha} \end{align} \end{subequations} with $\bm{L}_{\mathrm{NS}}$ denoting the non-spin part of $\bm{L}$ (or $\bm{J}$). We will see later that these relations, together with the post-Newtonian expansion of the angular momentum which is given by \eqref{eq:L} and of the spin precession frequencies \eqref{eq:Omega1}, are the only ones we will need to write down our dynamical solution. If we introduce an arbitrary reference time $t_{0}$, say the same as in Eq.~\eqref{eq:defphi}, and relate each of the triads $(\bm{n},\bm{\lambda},\bm{\ell})$ at time $t$ and $(\bm{n}_{0},\bm{\lambda}_{0},\bm{\ell}_{0})$ at time $t_{0}$ to the fixed triad $(\bm{x},\bm{y},\bm{z})$, and then eliminate the triad $(\bm{x},\bm{y},\bm{z})$, one obtains \begin{subequations}\label{eq:solutionnlambdal} \begin{align} \bm{n} =& \cos(\phi-\phi_{0})\bm{n}_{0} + \sin(\phi-\phi_{0}) \bm{\lambda}_{0} \nonumber\\&\quad + \bigl( \sin \iota \, \sin(\phi-\alpha) - \sin\iota_{0} \, \sin(\phi-\alpha_{0})\bigr)\bm{\ell}_{0} + \mathcal{O}(S^{2}) \,, \\ \bm{\lambda} =& -\sin(\phi-\phi_{0})\bm{n}_{0} + \cos(\phi-\phi_{0}) \bm{\lambda}_{0} \nonumber\\&\quad + \bigl( \sin \iota \, \cos(\phi-\alpha) - \sin\iota_{0} \, \cos(\phi-\alpha_{0})\bigr)\bm{\ell}_{0} + \mathcal{O}(S^{2}) \,, \\ \bm{\ell} =& ~\bm{\ell}_{0} + \bigl( \sin \iota \, \sin(\alpha-\phi_{0}) - \sin\iota_{0} \, \sin(\alpha_{0}-\phi_{0}) \bigr)\bm{n}_{0} \nonumber \\ & \quad+ \left( -\sin \iota \, \cos(\alpha-\phi_{0}) + \sin\iota_{0} \, \cos(\alpha_{0}-\phi_{0}) \right)\bm{\lambda}_{0} + \mathcal{O}(S^{2}) \,, \end{align} \end{subequations} where we used 
\eqref{eq:phiplusalpha} again together with $\cos \iota = 1+ \mathcal{O}(S^{2}) $. The previous result can be reformulated in a more compact form if we introduce the complex null vector $\bm{m} \equiv \frac{1}{\sqrt{2}}(\bm{n}+\mathrm{i}\bm{\lambda})$ and its complex conjugate $\overline{\bm{m}}$. The normalization is chosen so that $\bm{m}\cdot\overline{\bm{m}}=1$. In terms of these vectors, the result \eqref{eq:solutionnlambdal} now becomes: \begin{subequations}\label{eq:solutionml} \begin{align} \bm{m} &= e^{-\mathrm{i}(\phi-\phi_{0})} \bm{m}_{0} + \frac{\mathrm{i}}{\sqrt{2}} \left( \sin \iota \, e^{\mathrm{i}\alpha} - \sin\iota_{0} \, e^{\mathrm{i} \alpha_{0}}\right) e^{-\mathrm{i}\phi} \bm{\ell}_{0} + \mathcal{O}(S^{2}) \,,\label{eq:solutionm} \\ \bm{\ell} &= \bm{\ell}_{0} + \left[ \frac{\mathrm{i}}{\sqrt{2}} \left( \sin \iota \, e^{-\mathrm{i}\alpha} - \sin\iota_{0} \, e^{-\mathrm{i} \alpha_{0}}\right) e^{\mathrm{i}\phi_{0}} \bm{m}_{0} + \mathrm{c.c.} \right] + \mathcal{O}(S^{2}) \,,\label{eq:solutionl} \end{align} \end{subequations} and we see that the precession effects in the dynamical solution for the evolution of the basis vectors $(\bm{n},\bm{\lambda},\bm{\ell})$, which are represented by the second term in the above equations, are all encompassed in the combination $\sin\iota\,e^{\mathrm{i} \alpha}$ and its complex conjugate $\sin\iota\,e^{- \mathrm{i} \alpha}$, which is given in terms of the spin and non-spin contributions to the angular momentum by Eq.~\eqref{eq:siniotaexpialpha}. Now our program is to insert the latter solution for the dynamics, Eqs.~\eqref{eq:solutionnlambdal} or~\eqref{eq:solutionml}, into the tail integrals~\eqref{tails}. For that purpose it is convenient to think of $t_{0}$ as being the current retarded time $T_{R}$ and to look at the orbital evolution backwards in time. 
On the other hand, the solution of the evolution equations \eqref{eq:dtSnlambdal} for the components of the spins is readily obtained as \begin{subequations}\label{eq:solutionspincomponents} \begin{align} S^{A}_{n} + \mathrm{i} S^{A}_{\lambda} &= S^{A}_{\perp} e^{-\mathrm{i} \psi_{A}} + \mathcal{O}(S^{2})\,, \\S^{A}_{\ell} &= S^{A}_{\parallel} + \mathcal{O}(S^{2})\,, \end{align}\end{subequations} in which we have introduced the two integration constants $S^{A}_{\perp}$ and $S^{A}_{\parallel}$, and where the two spin phases are defined by \begin{equation}\label{eq:defpsi} \psi_{A} = (\omega - \Omega_{A})(t-t_{0}) + \psi^{A}_{0} \,, \end{equation} with $\psi^{A}_{0}$ the phases at the reference time $t_{0}$. We are now able to analyze in more detail the dependence on time of the solution for the basis vectors and for the spins. In Eq.~\eqref{eq:siniotaexpiphialpha}, $|\bm{L}_{\mathrm{NS}}|$ is simply a constant, and $J_{+}$ depends on the spin components $S^{A}_{n},S^{A}_{\lambda}$ which are given by Eqs.~\eqref{eq:solutionspincomponents} and \eqref{eq:defpsi}. Thus, we see that the complete time dependence of the triad $(\bm{n},\bm{\lambda},\bm{\ell})$, at linear order in spin, takes the simple form of complex exponentials $e^{\pm \mathrm{i} \omega t}$ and $e^{\pm \mathrm{i} \psi_A}$, so that the general structure of the time-dependent part of any product or combination of the latter basis vectors and of spin vectors is of the type (see also Paper~I): \begin{equation}\label{eq:structure} e^{\mathrm{i} (m \omega + p \Omega_{1} + q \Omega_{2}) t} \,, \quad \text{with} ~m \in \mathbb{Z} ~\text{and} ~(p,q) \in \{-1,0,1\}\,. \end{equation} The restriction on the range of values for $p$ and $q$ comes from the fact that we are limited to the linear order in spins. This general structure will also be that of the time dependence of any of the source multipole moments, so that we shall be able to integrate the tail integrals using a simple formula in the Fourier domain. 
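As an illustration of the structure \eqref{eq:structure}, consider the combination $S^{1}_{n} + \mathrm{i} S^{1}_{\lambda}$; by Eqs.~\eqref{eq:solutionspincomponents} and \eqref{eq:defpsi}, its time-dependent part reads

```latex
e^{-\mathrm{i}\psi_{1}} \propto e^{-\mathrm{i}(\omega-\Omega_{1})t}
  = e^{\mathrm{i}(m\omega + p\,\Omega_{1} + q\,\Omega_{2})t}
  \quad\text{with}\quad (m,p,q) = (-1,1,0)\,,
```

while, for instance, the carrier rotation $e^{-\mathrm{i}\phi}$ appearing in \eqref{eq:solutionm} corresponds to $(m,p,q) = (-1,0,0)$.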
Finally, we turn to the leading PN order of precession effects. A superficial look at Eqs.~\eqref{eq:solutionml}, \eqref{eq:siniotaexpiphialpha} and \eqref{eq:L} would tell us that precession effects in the dynamical solution for the moving basis start at order $\mathcal{O}(c^{-1})$, which is the order of the first spin contribution in the angular momentum $\bm{J}$. However, we notice that only the combination $\sin \iota \, e^{\mathrm{i} \alpha} - \sin \iota_{0} \, e^{\mathrm{i} \alpha_{0}}$ and its complex conjugate enter the solution~\eqref{eq:solutionml}. At leading order, since $J_{+} = (S_{n} + \mathrm{i} S_{\lambda})/c + \mathcal{O}(c^{-3})$, and using $|\bm{L}_{\mathrm{NS}}| = G m^{2} \nu/(cx^{1/2}) + \mathcal{O}(c^{-2})$, we have \begin{subequations}\label{eq:leadingsiniotaeialpha} \begin{align} \sin \iota \, e^{\mathrm{i} \alpha} &= - \mathrm{i}\frac{x^{1/2}}{G m^{2}\nu} S^{1}_{\perp} e^{\mathrm{i} (\phi - \psi_{1})} + 1 \leftrightarrow 2 +\mathcal{O}\left(\frac{1}{c^3}\right) \nonumber\\ &= - \mathrm{i}\frac{x^{1/2}}{G m^{2}\nu} S^{1}_{\perp} e^{\mathrm{i}[\phi_{0}-\psi^{1}_{0}+\Omega_{1} (t-t_{0})]} + 1 \leftrightarrow 2 +\mathcal{O}\left(\frac{1}{c^3}\right) \,, \end{align} \end{subequations} where $1\leftrightarrow 2$ means the expression obtained by the exchange of the two particles. Now, by Taylor-expanding around the reference time $t_{0}$, we find that the combination $\sin \iota \, e^{\mathrm{i} \alpha} - \sin \iota_{0} \, e^{\mathrm{i} \alpha_{0}}$ is made of terms proportional to $\Omega_{1}/c$ or $\Omega_{2}/c$ and therefore is of order $\mathcal{O}(c^{-3})$, since the spin precession frequencies $\Omega_{A}$ are small and known to be already of 1PN order; \textit{cf.} Eq.~\eqref{eq:Omega1}. Thus, we see that the precession effects due to the spins in our solution \eqref{eq:solutionml} are in fact of order $\mathcal{O}(c^{-3})$ or 1.5PN, as one could expect from their corresponding order in the acceleration. 
\section{Tail-induced spin-orbit effects in the flux} \label{sec:Results} The spin-orbit couplings in the relevant source moments $I_L$ and $J_L$ have been computed in Paper~II up to next-to-next-to-leading order. To compute the 4PN spin-orbit tail contributions, we will need the mass and current quadrupole moments $I_{ij}$ and $J_{ij}$ (with $\ell=2$) at relative order 1PN (for both the spin-orbit terms and the non-spin ones), and the mass and current octupoles $I_{ijk}$ and $J_{ijk}$ ($\ell=3$) at Newtonian order. The non-spin terms are well known at the corresponding 1PN order, see \textit{e.g.} Ref.~\cite{BFIS08}. However, we point out that we need for this computation not only the quadrupole and octupole moments at 1PN order, but also the mass monopole $M$ at 1PN order, since it is this mass monopole that is responsible for the tails in Eqs.~\eqref{tails}. The 1PN non-spin monopole for circular orbits reads \begin{equation} M = m\left(1 - \frac{\nu}{2}x \right)+ \mathcal{O}\left(\frac{1}{c^4}\right)\,. \end{equation} Similarly, we also need to include the spin-orbit terms into the mass monopole moment $M$. Recall that $M=m+E/c^2$, where $E$ is the conservative energy associated with the equations of motion. The spin-orbit effects in $E$ arise at 1.5PN order and have been given in Eqs.~(3.9) of Ref.~\cite{BMFB13}. This means that the dominant spin-orbit effect in $M$ is not at order 1.5PN but rather at order 2.5PN; for the present computation we need only the dominant 2.5PN spin-orbit term given by \begin{equation} \mathop{M}_\text{S} = \frac{G m\nu}{c^5 r^2}\bigg\{ - (n,S,v) - \frac{\delta m}{m}(n,\Sigma,v)\bigg\}+ \mathcal{O}\left(\frac{1}{c^7}\right)\,. \end{equation} The spin-orbit contribution is indicated by a subscript S and we give the result already reduced to the center-of-mass frame. 
For the other moments we shall simply report the results taken from Paper~II: \begin{subequations}\begin{align} \mathop{I}_\text{S}{}_{ij}&= \frac{r\nu}{c^3}\bigg\{ -\frac{8}{3}(\mathbf{S}\times\bm{v})^{<i} n^{j>} - \frac{8}{3}\frac{\delta m}{m}(\mathbf{\Sigma}\times\bm{v})^{<i} n^{j>} \nonumber \\ &\quad\quad - \frac{4}{3}(\bm{n}\times\mathbf{S})^{<i} v^{j>} - \frac{4}{3}\frac{\delta m}{m}(\bm{n}\times\mathbf{\Sigma})^{<i} v^{j>}\bigg\} \nonumber \\ &+ \frac{r\nu}{c^5}\Bigg[\bigg\{ (\mathbf{S}\times\bm{v})^{<i} n^{j>}\left(-\frac{26}{21} + \frac{26}{7} \nu\right) v^{2} + (\mathbf{\Sigma}\times\bm{v})^{<i} n^{j>}\frac{\delta m}{m}\left(-\frac{26}{21} + \frac{116}{21} \nu\right) v^{2} \nonumber \\ & \quad\quad+ (\bm{n}\times\mathbf{S})^{<i} v^{j>}\left(-\frac{4}{21} + \frac{4}{7} \nu\right) v^{2} + (\bm{n}\times\mathbf{\Sigma})^{<i} v^{j>}\frac{\delta m}{m}\left(-\frac{4}{21} + \frac{12}{7} \nu\right) v^{2} \nonumber \\ & \quad\quad+ (\mathbf{S}\times\bm{v})^{<i} v^{j>}\left(\frac{4}{21} -\frac{4}{7} \nu\right) (nv) + (\mathbf{\Sigma}\times\bm{v})^{<i} v^{j>}\frac{\delta m}{m}\left(\frac{4}{21} -\frac{20}{21} \nu\right) (nv) \nonumber \\ & \quad\quad+ (n,S,v) v^{<i} v^{j>}\left(-\frac{3}{7} + \frac{9}{7} \nu\right) + (n,\Sigma ,v) v^{<i} v^{j>}\frac{\delta m}{m}\left(-\frac{3}{7} + \frac{40}{21} \nu\right) \bigg\}\nonumber \\ &+\frac{Gm}{r}\bigg\{ (n,S,v) n^{<i} n^{j>}\left(-\frac{38}{21} -\frac{4}{7} \nu\right) + (n,\Sigma ,v) n^{<i} n^{j>}\frac{\delta m}{m}\left(-\frac{16}{7} + \frac{26}{21} \nu\right) \nonumber \\ & \quad\quad+ (\bm{n}\times\mathbf{S})^{<i} n^{j>}\left(\frac{17}{21} + \frac{61}{21} \nu\right) (nv) + (\bm{n}\times\mathbf{\Sigma})^{<i} n^{j>}\frac{\delta m}{m}\left(1 + \frac{34}{21} \nu\right) (nv) \nonumber \\ & \quad\quad+ (nS) (\bm{n}\times\bm{v})^{<i} n^{j>}\left(-2 + \frac{10}{3} \nu\right) + (n\Sigma ) (\bm{n}\times\bm{v})^{<i} n^{j>}\frac{\delta m}{m}\left(-2 + \frac{4}{3} \nu\right) \nonumber \\ & \quad\quad+ (\mathbf{S}\times\bm{v})^{<i} 
n^{j>}\left(-\frac{11}{7} -\frac{125}{21} \nu\right) + (\mathbf{\Sigma}\times\bm{v})^{<i} n^{j>}\frac{\delta m}{m}\left(-\frac{1}{3} -\frac{16}{3} \nu\right) \nonumber \\ & \quad\quad+ (\bm{n}\times\mathbf{S})^{<i} v^{j>}\left(-\frac{22}{3} -\frac{10}{3} \nu\right) + (\bm{n}\times\mathbf{\Sigma})^{<i} v^{j>}\frac{\delta m}{m}\left(-\frac{8}{3} -\frac{34}{21} \nu\right) \bigg\}\Bigg] + \mathcal{O}\left(\frac{1}{c^7}\right)\,,\nonumber\\\\ \mathop{J}_\text{S}{}_{ij}&= \frac{r\nu}{c}\bigg\{-\frac{3}{2}\Sigma^{<i} n^{j>}\bigg\}\nonumber\\ &+\frac{r\nu}{c^3}\Bigg[\bigg\{ -\frac{2}{7}\frac{\delta m}{m} v^{2}S^{<i} n^{j>} + \Sigma^{<i} n^{j>}\left(-\frac{29}{28} + \frac{143}{28} \nu\right) v^{2} \nonumber \\ &\quad\quad + \frac{33}{28}\frac{\delta m}{m}(Sv) n^{<i} v^{j>} + (\Sigma v) n^{<i} v^{j>}\left(\frac{33}{28} -\frac{155}{28} \nu\right) \nonumber \\ & \quad\quad+ \frac{3}{7}\frac{\delta m}{m} (nv)S^{<i} v^{j>} + \Sigma^{<i} v^{j>}\left(\frac{3}{7} -\frac{16}{7} \nu\right) (nv) \nonumber \\ & \quad\quad- \frac{11}{14}\frac{\delta m}{m}(nS) v^{<i} v^{j>} + (n\Sigma ) v^{<i} v^{j>}\left(-\frac{11}{14} + \frac{47}{14} \nu\right)\bigg\}\nonumber \\ &\quad+\frac{Gm}{r}\bigg\{ -\frac{29}{14}\frac{\delta m}{m}(nS) n^{<i} n^{j>} + (n\Sigma ) n^{<i} n^{j>}\left(-\frac{4}{7} + \frac{31}{14} \nu\right) \nonumber \\ &\quad\quad + \frac{10}{7}\frac{\delta m}{m}S^{<i} n^{j>} + \Sigma^{<i} n^{j>}\left(\frac{61}{28} -\frac{71}{28} \nu\right)\bigg\}\Bigg] + \mathcal{O}\left(\frac{1}{c^5}\right)\,,\\ \mathop{I}_\text{S}{}_{ijk}&= \frac{r^{2}\nu}{c^3}\bigg\{ \frac{9}{2}\frac{\delta m}{m}(\mathbf{S}\times\bm{v})^{<i} n^{j} n^{k>} + (\mathbf{\Sigma}\times\bm{v})^{<i} n^{j} n^{k>}\left(\frac{9}{2} -\frac{33}{2} \nu\right) \nonumber \\ & \quad\quad+ 3\frac{\delta m}{m}(\bm{n}\times\mathbf{S})^{<i} n^{j} v^{k>} + (\bm{n}\times\mathbf{\Sigma})^{<i} n^{j} v^{k>}\left(3 -9 \nu\right) \bigg\} + \mathcal{O}\left(\frac{1}{c^5}\right)\,,\\ \mathop{J}_\text{S}{}_{ijk}&= \frac{r^{2}\nu}{c}\bigg\{ 
2S^{<i} n^{j} n^{k>} + 2\frac{\delta m}{m}\Sigma^{<i} n^{j} n^{k>} \bigg\} + \mathcal{O}\left(\frac{1}{c^3}\right)\,. \end{align}\end{subequations} We recall that these spin parts of multipole moments are expressed in terms of the conserved-norm spins and of the useful variables \eqref{eq:defSSigma}. We also recall our notation, \textit{e.g.} $(v S)\equiv\bm{v}\cdot\mathbf{S}$ for the ordinary Euclidean scalar product, $(\bm{x}\times\mathbf{\Sigma})^i\equiv\varepsilon^{ijk}x^j\Sigma^k$ for the ordinary cross product, and $(S,x,v)\equiv\mathbf{S}\cdot (\bm{x}\times\bm{v})=\varepsilon^{ijk}S^ix^jv^k$ for the mixed product. We now turn to the calculation of the tail integrals~\eqref{tails}, where, as we have already shown, we can replace the canonical moments $M_{L}$, $S_{L}$ by the source moments $I_{L}$, $J_{L}$. Following Paper~I, we find it more convenient to perform this computation in the Fourier domain. We denote by $K_{L}$ a generic source moment $I_{L}$ or $J_{L}$, and we define its Fourier transform as \begin{equation}\label{eq:deffourier} K_{L}(t) = \int_{-\infty}^{+\infty}\frac{\mathrm{d} \Omega}{2\pi} \,\tilde{K}_{L}(\Omega) \,e^{-\mathrm{i} \Omega t} \,,\qquad \tilde{K}_{L}(\Omega) = \int_{-\infty}^{+\infty}\mathrm{d} t \,K_{L}(t) \,e^{\mathrm{i} \Omega t}\,. 
\end{equation} It was shown in Ref.~\cite{BS93} (see also Sec.~II~B in Paper~I) that, under the assumption that the binary formed in the remote past from some quasi-hyperbolic orbits by gravitational radiation, a generic integral of the form \begin{equation}\label{eq:defintegral} \mathcal{U}_{L} (T_{R}) \equiv \int_0^{+\infty}\!\mathrm{d}\tau \, K_{L}^{(\ell+2)}(T_R-\tau) \ln\left( \frac{\tau}{2\hat{\tau}_{0}} \right)\,, \end{equation} where $\hat{\tau}_{0}$ means either $\tau_0e^{-\kappa_\ell}$ or $\tau_0e^{-\pi_\ell}$, takes the following expression in the Fourier domain: \begin{equation}\label{eq:fourier} \mathcal{U}_{L} (T_{R})= \mathrm{i} \int_{-\infty}^{+\infty} \frac{\mathrm{d} \Omega}{2\pi} (-\mathrm{i} \Omega)^{\ell+1} \tilde{K}_{L}(\Omega) e^{-\mathrm{i} \Omega T_{R}} \left[ \frac{\pi}{2}s(\Omega) + \mathrm{i}\bigl( \ln(2|\Omega|\hat{\tau}_{0}) + \gamma_\text{E} \bigr) \right] \,, \end{equation} where $s(\Omega)$ is the sign of $\Omega$ and $\gamma_\text{E}$ is the Euler constant. Now, given the general structure of the frequency modes \eqref{eq:structure}, we see that the Fourier coefficients $\tilde{K}_{L}(\Omega)$ consist of a finite sum over frequencies, \begin{equation}\label{eq:structfourier} \tilde{K}_{L}(\Omega) = 2\pi \sum_{m,p,q} A_{L}^{m,p,q} \,\delta(\Omega-\omega_{m,p,q}) \,, \end{equation} in which $\omega_{m,p,q} = m\omega+p \Omega_{1}+q\Omega_{2}$, and where the sum is finite, limited to $-1 \leqslant p,q \leqslant 1$ and with $m$ taking a finite number of integer values (depending on the order of approximation). The amplitudes $A_{L}^{m,p,q}$ can be readily read off the explicit expressions of the source moments. 
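Before specializing to the discrete spectrum, the elementary kernel underlying Eq.~\eqref{eq:fourier} can be checked numerically. A sketch in Python follows, with an exponential damping $e^{-\epsilon\tau}$ regularizing the oscillatory integral; the value of $\epsilon$ and the integration grid are illustrative choices, not part of the original derivation:

```python
import numpy as np

gamma_E = float(np.euler_gamma)  # Euler constant

def kernel_numeric(Omega, tau0, eps):
    """int_0^inf e^{(i*Omega - eps)*tau} * ln(tau/(2*tau0)) dtau,
    by trapezoidal quadrature on a graded grid (dense near the
    integrable log singularity at tau = 0, then uniform)."""
    t = np.concatenate([np.geomspace(1e-12, 1e-2, 4000),
                        np.arange(1e-2, 40.0/eps, 2e-3)])
    f = np.exp((1j*Omega - eps)*t) * np.log(t/(2.0*tau0))
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))

def kernel_closed(Omega, tau0):
    """eps -> 0 limit of the kernel, as used in the tail formula:
    -(pi/2 * sign(Omega) + i*(ln(2|Omega|tau0) + gamma_E)) / Omega."""
    return -(0.5*np.pi*np.sign(Omega)
             + 1j*(np.log(2.0*abs(Omega)*tau0) + gamma_E))/Omega
```

For small $\epsilon$ the damped quadrature reproduces $-[\tfrac{\pi}{2}s(\Omega)+\mathrm{i}(\ln(2|\Omega|\hat\tau_0)+\gamma_\text{E})]/\Omega$ up to corrections of order $\epsilon$; multiplied by $(-\mathrm{i}\Omega)^{\ell+2}$, this is exactly the factor that converts Eq.~\eqref{eq:defintegral} into Eq.~\eqref{eq:fourier}.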
Then Eq.~\eqref{eq:fourier} transforms into \begin{equation}\label{eq:fourierresult} \mathcal{U}_{L} (T_{R})= \mathrm{i} \sum_{m,p,q} A_{L}^{m,p,q} (-\mathrm{i}\omega_{m,p,q})^{\ell+1} e^{-\mathrm{i}\omega_{m,p,q}T_{R}} \left[ \frac{\pi}{2}s(\omega_{m,p,q}) + \mathrm{i}\Bigl( \ln(2|\omega_{m,p,q}|\hat{\tau}_{0}) + \gamma_\text{E} \Bigr) \right] \,. \end{equation} When applying this formula, in agreement with the dimensional argument presented in Sec.~\ref{subsec:contributionscircular}, we find that the constant $\hat{\tau}_{0}$ cancels out in the flux (and so does $\gamma_{E}$). It also turns out that the various precessional corrections cancel out. That is to say, ignoring the precessional contributions given by the second terms in Eqs.~\eqref{eq:solutionml} would yield the same final result for the flux. This is due to the fact that we are computing a scalar, and can be explained by a structural argument presented in Appendix~\ref{appA}. Finally, we give our main result for the emitted energy flux of quasi-circular orbits. The spin-orbit part of the flux up to 4PN order, thus including the new next-to-leading 4PN tail-induced term, reads \begin{align} \label{fluxres} \mathop{\mathcal{F}}_\text{S} &=\frac{32 c^5}{5 G}\,x^5\,\nu^2\left(\frac{x^{3/2}}{G\,m^2}\right)\left\{ -4S_\ell -\frac{5}{4}\frac{\delta m}{m}\Sigma_\ell \right. \nonumber\\&\left.\qquad+ x \left[ \left(-\frac{9}{2}+\frac{272}{9}\nu\right)S_\ell +\left(-\frac{13}{16}+\frac{43}{4}\nu\right)\frac{\delta m}{m}\Sigma_\ell\right]\right.\nonumber\\&\left.\qquad+ x^{3/2} \left[ -16 \pi\,S_\ell -\frac{31\pi}{6}\,\frac{\delta m}{m}\Sigma_\ell\right]\right. 
\nonumber\\ &\qquad+ x^2 \left[\left(\frac{476645}{6804}+\frac{6172}{189}\nu -\frac{2810}{27}\nu^2\right)S_\ell +\left(\frac{9535}{336}+\frac{1849}{126}\nu -\frac{1501}{36}\nu^2\right)\frac{\delta m}{m}\Sigma_\ell \right] \nonumber\\ &\qquad+ x^{5/2} \left[ \left( -\frac{3485 \pi}{96} + \frac{13879 \pi}{72}\nu \right) S_{\ell} + \left( -\frac{7163 \pi}{672} + \frac{130583 \pi}{2016}\nu \right)\frac{\delta m}{m} \Sigma_{\ell} \right] \nonumber\\ &\left.\qquad+ \mathcal{O}\left(\frac{1}{c^6}\right)\right\}\,. \end{align} As usual, the spin-orbit contributions due to the absorption by the black-hole horizons have to be added to the post-Newtonian result computed here~\cite{PS95, Alvi01, Tagoshi:1997, Chatziioannou:2012}. The result \eqref{fluxres} for the spin-orbit contribution to the energy flux is to be added to the non-spin contributions given up to 3.5PN order by Eq.~(230) in Ref.~\cite{Bliving}. The spin-spin effects in the flux are known to leading order from Refs.~\cite{KWW93, K95,P97}. We have also derived the 4PN tail-induced terms in the energy flux through an alternative, but equivalent computation that uses Eq.~(2.9) in Ref.~\cite{ABIQ08} extended through 4PN order (\textit{i.e.} we have also added the term that involves the current octupole moment). For this derivation we have worked in the time domain, computed derivatives of the relevant multipole moments, reduced to quasi-circular orbits and then calculated the tail integrals in the complex plane, \textit{e.g.}, as described in Sec. IVB and Appendix C of Ref.~\cite{Racine:2008}. Moreover, quite satisfactorily, the result \eqref{fluxres} is, in the test-mass limit $\nu\to 0$, in complete agreement with the result of black-hole perturbation theory on a Kerr background~\cite{TSTS96}. To obtain the evolution of the orbital phase for quasi-circular orbits we apply, as in Papers I \& II, the usual energy balance equation. 
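For completeness, we recall that this balance equation simply equates the decrease of the conservative binding energy to the emitted flux,
\begin{equation}
\frac{\mathrm{d} E}{\mathrm{d} t} = -\mathcal{F}\,,
\end{equation}
which, once $E$ and $\mathcal{F}$ are known as functions of $x$, determines the secular evolution of the orbital frequency and phase.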
The conservative energy $E$ in the balance equation does not contain any spin-orbit term at 4PN order --- this can be seen dimensionally, as for the absence of instantaneous terms in the flux. Therefore it is the same as that used in Paper~II (and was computed at the right order in the previous works~\cite{MBFB13,BMFB13}). We obtain the secular evolution of the orbital frequency $\omega$ and carrier phase $\phi\equiv\int\omega\,\mathrm{d} t$ as \begin{subequations} \label{phaseres}\begin{align} \left(\frac{\dot{\omega}}{\omega^2}\right)_\text{S} &= \frac{96}{5}\nu\,x^{5/2}\,\left(\frac{x^{3/2}}{G\,m^2}\right)\left\{ -\frac{47}{3}S_\ell -\frac{25}{4}\frac{\delta m}{m}\Sigma_\ell \right. \nonumber\\ & \qquad+x \left[ \left(-\frac{5861}{144}+\frac{1001}{12}\nu\right)S_\ell +\left(-\frac{809}{84}+\frac{281}{8}\nu\right)\frac{\delta m}{m}\Sigma_\ell\right] \nonumber\\ &\left.\qquad+ x^{3/2} \left[ - \frac{188\pi}{3}\,S_\ell -\frac{151\pi}{6}\,\frac{\delta m}{m}\Sigma_\ell\right]\right. \nonumber\\ &\qquad+ x^2 \left[ \left(-\frac{4323559}{18144}+\frac{436705}{672}\nu -\frac{5575}{27}\nu^2\right)S_\ell\right. \nonumber\\ &\qquad\qquad\qquad\left. +\left(-\frac{1195759}{18144} +\frac{257023}{1008}\nu -\frac{2903}{32}\nu^2\right)\frac{\delta m}{m}\Sigma_\ell \right] \nonumber\\ &\qquad+ x^{5/2} \left[ \left( -\frac{15271 \pi}{72} + \frac{3317 \pi}{6}\nu \right) S_{\ell} + \left( -\frac{1665 \pi}{28} + \frac{50483 \pi}{224}\nu \right)\frac{\delta m}{m} \Sigma_{\ell} \right] \nonumber\\ &\left.\qquad+ \mathcal{O}\left(\frac{1}{c^6}\right)\right\}\,.\\ \mathop{\phi}_\text{S} &=-\frac{x^{-5/2}}{32\nu}\left(\frac{x^{3/2}}{G\,m^2}\right)\left\{ \frac{235}{6}S_\ell +\frac{125}{8}\frac{\delta m}{m}\Sigma_\ell \right. \nonumber\\&\left.\qquad+x \ln x \left[ \left(-\frac{554345}{2016}-\frac{55}{8}\nu\right)S_\ell +\left(-\frac{41745}{448}+\frac{15}{8}\nu\right)\frac{\delta m}{m}\Sigma_\ell\right]\right. 
\nonumber\\ &\left.\qquad+ x^{3/2} \left[ \frac{940\pi}{3}\,S_\ell +\frac{745\pi}{6}\,\frac{\delta m}{m}\Sigma_\ell\right]\right. \nonumber\\ &\qquad+ x^2 \left[ \left(-\frac{8980424995}{6096384}+\frac{6586595}{6048}\nu -\frac{305}{288}\nu^2\right)S_\ell\right. \nonumber\\ &\qquad\qquad\qquad\left. +\left(-\frac{170978035}{387072} +\frac{2876425}{5376}\nu+\frac{4735}{1152}\nu^2\right)\frac{\delta m}{m}\Sigma_\ell \right]\nonumber\\ &\qquad+ x^{5/2} \left[ \left( \frac{2388425 \pi}{3024} - \frac{9925 \pi}{36}\nu \right) S_{\ell} + \left( \frac{3237995 \pi}{12096} - \frac{258245 \pi}{2016}\nu \right)\frac{\delta m}{m} \Sigma_{\ell} \right] \nonumber\\ &\left.\qquad+ \mathcal{O}\left(\frac{1}{c^6}\right)\right\}\,. \end{align}\end{subequations} The expressions \eqref{fluxres} and \eqref{phaseres} constitute the main theoretical inputs needed for the construction of gravitational wave templates. The non-spin terms in the carrier phase can be found in Eq. (235) of Ref.~\cite{Bliving}, and those in $\dot{\omega}/\omega^2$ in e.g. Eq. (32) of Ref. \cite{BCP07}. However, recall that in the case of precessing binaries we must add to the carrier phase $\phi$ the precessional correction arising from the precession of the orbital plane, namely $\Phi=\phi-\alpha$ in the notation of Eq.~\eqref{eq:phiplusalpha}. For this precessional correction one can directly use the results of Sec.~\ref{analsol}. 
\begin{table*}[b] \begin{center} {\scriptsize \begin{tabular}{|r|c|c|c|} \hline LIGO/Virgo & $1.4 M_{\odot} + 1.4 M_{\odot}$ & $10 M_{\odot} + 1.4 M_{\odot}$ & $10 M_{\odot} + 10 M_{\odot}$ \\ \hline \hline Newtonian & $15952.6$ & $3558.9$ & $598.8$ \\ 1PN & $439.5$ & $212.4$ & $59.1$ \\ 1.5PN & $-210.3+65.6 \kappa_1\chi_1+65.6 \kappa_2\chi_2$ & $-180.9+114.0 \kappa_1\chi_1+11.7 \kappa_2\chi_2$ & $-51.2+16.0 \kappa_1\chi_1+16.0 \kappa_2\chi_2$ \\ 2PN & $9.9$ & $9.8$ & $4.0$ \\ 2.5PN & $-11.7+9.3 \kappa_1\chi_1+9.3 \kappa_2\chi_2$ & $-20.0+33.8 \kappa_1\chi_1+2.9 \kappa_2\chi_2$ & $-7.1+5.7 \kappa_1\chi_1+5.7 \kappa_2\chi_2$ \\ 3PN & $2.6-3.2 \kappa_1\chi_1-3.2 \kappa_2\chi_2$ & $2.3 - 13.2\kappa_1\chi_1 - 1.3 \kappa_2\chi_2$ & $2.2-2.6 \kappa_1\chi_1-2.6 \kappa_2\chi_2$ \\ 3.5PN & $-0.9+1.9 \kappa_1\chi_1+1.9 \kappa_2\chi_2$ & $-1.8+11.1 \kappa_1\chi_1+0.8 \kappa_2\chi_2$ & $-0.8+1.7 \kappa_1\chi_1+1.7 \kappa_2\chi_2$\\ 4PN & $ (\mathrm{NS}) -1.5 \kappa_1\chi_1 - 1.5 \kappa_2\chi_2 $ & $ (\mathrm{NS}) -8.0 \kappa_1\chi_1 - 0.7 \kappa_2\chi_2 $ & $(\mathrm{NS}) -1.5 \kappa_1\chi_1 - 1.5 \kappa_2\chi_2 $\\ \hline \end{tabular} }\end{center} \caption{Spin-orbit contributions to the number of gravitational-wave cycles $\mathcal{N}_\mathrm{GW} = (\phi_\mathrm{max}-\phi_\mathrm{min})/\pi$. For binaries detectable by ground-based detectors LIGO/Virgo, we show the number of cycles accumulated from $\omega_\mathrm{min} = \pi\times 10\,\mathrm{Hz}$ to $\omega_\mathrm{max} = \omega_\mathrm{ISCO}=c^3/(6^{3/2}G m)$. For each compact object we define the magnitude $\chi_A$ and the orientation $\kappa_A$ of the spin by $\mathbf{S}_A\equiv G \,m_A^2\,\chi_A\,\hat{\mathbf{S}}_A$ and $\kappa_A\equiv\hat{\mathbf{S}}_A \cdot \bm{\ell}$. For comparison, we give all the non-spin contributions up to 3.5PN order, but the non-spin 4PN terms (NS) are yet unknown. We neglect all the spin-spin terms. 
\label{table}} \end{table*} As an illustration of the significance of the new terms, we show in Table~\ref{table} the contribution of each post-Newtonian order to the number of accumulated gravitational-wave cycles, computed using the so-called Taylor T2 approximant. For neutron star or stellar mass black hole binaries targeted by ground-based detectors similar to LIGO and Virgo, the number of cycles is computed between a minimal frequency, corresponding to a seismic noise cut-off at $10\, \mathrm{Hz}$, and a maximal frequency taken to be the Schwarzschild ISCO frequency $\omega_\mathrm{max} = \omega_\mathrm{ISCO}=c^3/(6^{3/2}G m)$. Recall that the parameter $\chi$ is small for a neutron star but can be close to one for astrophysical black holes~\cite{Reynolds13}. As we see, the 4PN spin-orbit terms computed in the present paper can be significant and are worth including in the gravitational wave templates. In particular, these terms are comparable to, although somewhat smaller than, the previous 3.5PN spin-orbit terms. Interestingly, the 4PN terms tend to cancel numerically a significant part of the 3.5PN contributions. At the 3.5PN order the effect of spin-orbit terms can be larger than the effect of the non-spinning terms, especially in the case of asymmetric binaries. At the 4PN order we do not know if this happens, since the 4PN non-spin terms have not yet been computed. We emphasize that it will be important in the future to improve the knowledge of the phasing by computing spin-spin and even spin-spin-spin terms through at least 4PN and 3.5PN order, respectively, and also spin effects induced by the black-hole horizon-absorbed energy flux~\cite{PS95, Alvi01, Tagoshi:1997, Chatziioannou:2012}. Those terms may give a contribution to the phasing of the same order as the one computed in this paper, especially when the black holes carry large spins and the orbit approaches the ISCO. 
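As a rough consistency check of the Newtonian row of Table~\ref{table}, the leading-order phase $\phi=-x^{-5/2}/(32\nu)$ already determines the cycle count between the two frequencies quoted above. The sketch below assumes standard values of the physical constants, so it reproduces the tabulated Newtonian entries only to sub-percent accuracy:

```python
import math

# assumed physical constant: G*Msun/c^3 in seconds (standard value;
# small differences shift the result at the sub-percent level)
G_MSUN_OVER_C3 = 4.92549e-6

def newtonian_cycles(m1, m2, f_seismic=10.0):
    """Newtonian-order number of GW cycles between the seismic cut-off
    and the Schwarzschild ISCO, from phi = -x^{-5/2}/(32*nu)."""
    m = m1 + m2                    # total mass in solar masses
    nu = m1*m2/m**2                # symmetric mass ratio
    Gm_c3 = m*G_MSUN_OVER_C3
    omega_min = math.pi*f_seismic  # orbital frequency at f_GW = 10 Hz
    x_min = (Gm_c3*omega_min)**(2.0/3.0)
    x_max = 1.0/6.0                # x at the Schwarzschild ISCO
    phi = lambda x: -x**(-2.5)/(32.0*nu)
    return (phi(x_max) - phi(x_min))/math.pi

# e.g. newtonian_cycles(1.4, 1.4) is close to the 1.4+1.4 Msun
# Newtonian entry of the table (~1.6e4 cycles)
```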
As a last comment, one should obviously keep in mind that the numerical results in Table~\ref{table} only give an illustration of the order of magnitude of the various contributions. Indeed a precise analysis should take into account the details of the noise spectral density of the detectors, and one should focus on the impact of the various contributions on parameter estimation rather than simply counting the number of cycles. In addition, note that the numerical values reported in Table~\ref{table} depend on the type of approximant that one uses, here the T2 approximant. We find that using the Taylor T1 and Taylor T4 approximants leads to similar conclusions for our new 4PN tail contribution, \textit{i.e.} a variation of the order of one or a few cycles for maximally spinning black holes. \section*{Acknowledgements} It is a pleasure to thank Guillaume Faye for discussions. A. Boh\'e is grateful for the support of the Spanish MIMECO grant FPA2010-16495, the European Union FEDER funds, and the Conselleria d'Economia i Competitivitat del Govern de les Illes Balears. A. Buonanno acknowledges partial support from NSF Grant No. PHY-1208881 and NASA Grant NNX12AN10G. Our computations were done using Mathematica\textregistered{} and the symbolic tensor calculus package xAct~\cite{xtensor}.
\section{INTRODUCTION} Modern cosmology now faces a dilemma: on the one hand, the standard model of cosmology, known as $\Lambda$CDM, describes a vast range of observations, from the Cosmic Microwave Background (CMB) radiation \cite{Komatsu:2010fb} to observations of the Large Scale Structure (LSS) of the Universe \cite{Tegmark:2003ud,Tegmark:2003uf,Eisenstein:2005su}. On the other hand, $95\%$ of the Universe is made of unknowns (Dark Energy (DE) $\sim 68\%$ and Dark Matter (DM) $\sim 27\%$) \cite{Ade:2013zuv}. Besides this, the physics of the early universe, despite the successes of the inflationary paradigm \cite{Guth:1980zm}, is also still unknown. Regarding the accelerated expansion of the Universe, there are three main categories of probable solutions: a) the Cosmological Constant (CC) \cite{Carroll:2000fy}; b) Dark Energy models \cite{Peebles:2002gy}; and c) Modified Gravity (MG) theories \cite{Nojiri:2006ri,Sotiriou:2008rp, DeFelice:2010aj}. In the context of early universe cosmology, deviations from 1) a scale invariant primordial power spectrum \cite{Adams:2001vc}, 2) statistical isotropy of the perturbations \cite{Erickcek:2008sm}, 3) adiabatic perturbations \cite{Gordon:2000hv} and 4) Gaussian initial conditions \cite{Bartolo:2004if} are under intense study as tests of inflationary models. Any observed deviation from the above conditions will open a new horizon for studying the physics of inflation. Cosmological observations and the vast data obtained from ground-based telescopes and satellites now open a new era for testing cosmological models. One important question is whether early universe physics and late time cosmology leave simultaneous imprints on observational parameters. The LSS observations can be affected both by inflationary models and by dark energy models. In this work we mainly address these simultaneous effects on the LSS. 
The clustering and the statistics of the LSS of the Universe (i.e. galaxies, clusters of galaxies, voids) have long been known as a useful tool to constrain cosmology \cite{Seljak:2004sj}. However, to constrain cosmological models there are three types of obstacles in using LSS observations: a) {\it{Non-linear structure formation}}: On scales corresponding to wavenumbers $k\sim 0.2 h/Mpc$ and larger (smaller scales), the structure formation process becomes non-linear \cite{Scoccimarro:2000gm}. Consequently we cannot use linear perturbation theory to study cosmological models, and we need semi-analytical models \cite{Smith:2002dz} to probe further, or N-body simulations \cite{Springel:2005nw}. In this work {\it {we mainly focus on the wavenumber interval $0.01 h/Mpc < k< 0.2 h/Mpc$, where the perturbations are in the linear/quasi-linear regime}}. On these scales we also have statistically significant data from the clustering of galaxies \cite{Ahn:2012fh}. b) {\it{Redshift space distortion (RSD)}}: The second complication comes from the RSD effect. Observationally, instead of measuring the radial coordinate, we measure the redshift of the sources through spectroscopy. The redshift measurement is in turn affected by the peculiar velocities of the structures, which mix with the Hubble expansion redshift and add complications to the interpretation of cosmological results from LSS observations \cite{Hatton:1997xs,Seljak:2000jg}. However, RSD can be used as a measure of the matter density, and in this work we use it as an observational probe of MG theories. c) {\it{Bias}}: The third obstacle in LSS observations is the bias parameter. The bias parameter at linear order is defined as the ratio of the density contrasts of the luminous matter ($\delta_g$) and the underlying dark matter ($\delta_m$), $b=\delta_g/\delta_m$. 
In observations, what we measure is the clustering of luminous matter, which has to be related to the underlying dark matter halo distribution, which in turn must be related to the dark matter density perturbation. This is a complicated process affected by baryon physics, non-linear structure formation, and galaxy formation and evolution. However, this parameter can be used to study the effect of primordial NG on the distribution of matter in the Universe. To be more precise, the above mentioned complications are themselves useful probes for studying cosmological models. The redshift space distortion is used as a probe to measure the growth rate of the structures, $f\equiv d\ln\delta/d\ln a$, to underpin the expansion history of the Universe and to test the gravitational law on cosmological scales. It is difficult to measure the growth rate parameter directly in observations. Instead it is obtained via knowledge of the bias parameter and the redshift space distortion parameter $\beta\equiv f/b$ \cite{Kaiser:1987qv} obtained from galaxy power spectra/correlation functions \cite{Blake:2011rj}. On the other hand, the bias parameter has recently been introduced \cite{Dalal:2007cu} as a new probe to detect the fingerprint of primordial non-Gaussianity in the LSS. Primordial NG introduces a scale dependent behavior in the bias parameter. {\it{The main question is how the LSS observables are affected by MG and NG when both of these deviations from $\Lambda$CDM are present at the same time.}} In this work we study the effect of MG and primordial non-Gaussianity (NG) on the growth rate and RSD parameters. The main point here is that it has been shown that the growth of structures in MG theories (i.e. $f(R)$) is scale dependent \cite{Bertschinger:2008zb,Tsujikawa:2009ku}. This scale dependence is manifested in the growth rate. 
Consequently the growth rate can change the redshift space distortion, and can be a potentially powerful observable to test gravity \cite{Raccanelli:2012gt}. We assert that if our universe has a slight deviation from a CC dominated Universe and also a small local NG, $f_{NL}\sim 5$ (which parameterizes the strength of the NG), allowed by recent Planck data \cite{Ade:2013ydc}, then the growth rate and the bias parameter will have a non-trivial scale dependence. These features will affect the galaxy power spectrum measurement due to redshift space distortion. This will cause a simultaneous effect between cosmological parameters and primordial NG \cite{Carbone:2010sb}, worth studying in detail. We choose the specific model of $f(R)$ introduced by Hu-Sawicki (HS) \cite{Hu:2007nk} to show the discussed effects, and we show the relation of our chosen model with the more general parametrization of MG introduced by Bertschinger and Zukin (BZ) \cite{Bertschinger:2008zb}. In this direction, Parfrey, Hui and Sheth propose that the emergence of scale dependence in MG will affect the linear bias \cite{Parfrey:2010uy}. We assume that the linear bias modification due to MG is smaller than the MG effect on the NG-bias. Also, a very recent study considers the effect of NG and MG on the 3D correlation function of the matter power spectrum \cite{Raccanelli:2013dza}. The structure of this work is as follows: In Sec.~(\ref{Sec-Mg}) we introduce the $f(R)$ modified gravity theories and discuss the modified matter growth rate and its scale dependence. In Sec.~(\ref{Sec-bias}) we study the bias parameter with primordial NG and the deviation from $\Lambda$CDM. In Sec.~(\ref{Sec-galaxy}) we study the redshift space distortion parameter and the galaxy power spectrum in MG theories with primordial NG, and show how both of these deviations from the standard case affect the observables. Then we define the galaxy growth rate parameter. In Sec.~(\ref{Sec-Conc}) we conclude and discuss the future prospects of this work. 
In this work we use the cosmological parameters from recent Planck data: $\Omega_m=0.32$, $\Omega_{\Lambda}=0.68$, $n_s=0.96$ and $\ln(10^{10}A_{s})=3.1$ \cite{Ade:2013zuv}. \section{Modified Growth rate} \label{Sec-Mg} One of the probable solutions of the accelerated expansion of the Universe is the modification of gravity on cosmological scales. There is a large literature on different modified gravity models which are supposed to produce the late time acceleration of the universe, such as $f(R)$ theories of gravity in the metric formalism \cite{Carroll:2003wy,Hu:2007nk}, in the Palatini formalism \cite{Olmo:2011uz}, brane world models of modified gravity \cite{Dvali:2000hr}, massive gravity \cite{D'Amico:2011jj}, etc. In this work we examine $f(R)$ gravity in the metric formalism as a candidate for the accelerated expansion of the Universe and as a parameterized model of deviations from $\Lambda$CDM cosmology, in order to study the simultaneous effect of primordial NG and the modified growth rate. In the first subsection we consider the Hu-Sawicki \cite{Hu:2007nk} model. In the second subsection we introduce the BZ parametrization as an alternative representation of MG theories, and its relation to the HS $f(R)$ model is discussed. In subsequent sections we use the Hu-Sawicki $f(R)$ model. \subsection{Hu-Sawicki $f(R)$ model} One of the main concerns about $f(R)$ modifications of gravity is the solar system gravity test \cite{Erickcek:2006vf}. Khoury and Weltman \cite{Khoury:2003aq} propose a screening mechanism to evade solar system tests. In this work we will use an $f(R)$ model that is viable and evades solar system tests. The $f(R)$ action of modified gravity is written as: \begin{equation} \label{eq-EH} S=\frac{1}{2\kappa^2}\int d^4x\sqrt{-g}f(R)+S_m(g_{\mu\nu},\chi_m) \end{equation} where $f(R)$ is a function of the Ricci scalar, $\kappa^2\equiv 8\pi G$, and $\chi_m$ represents the matter fields. In the case of $f(R)=R$ we recover the Einstein-Hilbert (EH) action. 
The deviation from the EH action on cosmological scales is the cause of the cosmic expansion. In order to see this, we have to rewrite the modified Einstein field equations. The corresponding modified Friedmann equations are obtained from the variation of the action in Eq.(\ref{eq-EH}) with respect to the metric as \cite{Tsujikawa:2009ku}: \begin{eqnarray} \label{Eq-MFried} 3FH^2&=&8\pi G\rho_m +\frac{FR-f}{2}-3H\dot{F},\\ \nonumber -2F\dot{H}&=&8\pi G\rho_m+\ddot{F}-H\dot{F} \end{eqnarray} where $\rho_m$ is the matter density, $F\equiv\partial f/\partial R$ is the first derivative of the action with respect to the Ricci scalar, which represents the degree of freedom of the action in the equivalent scalar tensor theory of $f(R)$ \cite{Tsujikawa:2008uc}, and finally $H=\dot{a}/a$ is the Hubble parameter. The Hubble parameter is related to the Ricci scalar by $R=6(2H^2 + \dot{H})$, where a dot represents the derivative with respect to cosmic time hereafter. The expansion rate obtained from Eqs.(\ref{Eq-MFried}) for viable $f(R)$ theories is very close to the expansion rate obtained from $\Lambda$CDM \cite{Hu:2007pj}. In order to quantify the deviation from $\Lambda$CDM we define a dimensionless parameter $m$ as \cite{Tsujikawa:2009ku}: \begin{equation} m=\frac{R F_{,R}}{F} \end{equation} where $F_{,R}\equiv\partial F/\partial R$, and the Ricci scalar is controlled by the modified Hubble expansion rate. In the case $m=0$ we recover the $\Lambda$CDM universe. However, a small deviation $m\ll 1$ is difficult to detect in the expansion history of the Universe. Also, MG theories at the background level are indistinguishable from smooth Dark Energy models \cite{Baghram:2010mc}. Consequently, in studying models of the accelerating universe, we are interested in the behavior of cosmological models in perturbation theory and in the LSS. The main point here is that the growth of structures in the Universe is a very promising tool to distinguish MG models from DE/CC. 
This happens by introducing a scale dependent growth of perturbations. In order to study the LSS in MG theories, we need to use perturbation theory. In Fourier space the evolution of the density perturbation is obtained as \cite{Song:2006ej}: \begin{eqnarray}\label{Eq-pert} \ddot{\delta}_m+\left(2H+\frac{\dot{F}}{2F}\right)\dot{\delta}_m-\frac{8\pi G\rho_m}{2F}\delta_m&=&\frac{1}{2F}\left[(-6H^2+\frac{k^2}{a^2})\delta F+3H\dot{\delta F}+3\ddot{\delta F}\right], \\ \nonumber \ddot{\delta F}+3H\dot{\delta F}+\left(\frac{k^2}{a^2}+\frac{f_{,R}}{3f_{,RR}}-\frac{R}{3}\right){\delta F}&=&\frac{8\pi G\rho_m\delta_m}{3}+\dot{F}\dot{\delta}_m \end{eqnarray} Considering the fact that the expansion history indicates that the time derivative of $F$ is small, and also that $\ddot{\delta F}\ll H\dot{\delta F}\ll H^2$, the second order differential equation for the evolution of the density contrast, obtained by combining Eqs.(\ref{Eq-pert}), is \cite{Tsujikawa:2008uc}: \begin{equation} \label{eq-delta} \ddot{\delta}_m+2H\dot{\delta}_m-4\pi G_{eff}\rho_m\delta_m=0 \end{equation} where $G_{eff}$ is defined as: \begin{equation} \label{Eq-geff} G_{eff}\equiv\frac{G}{F}\left[1+\frac{1}{3}\left(\frac{k^2}{a^2M^2/F+k^2}\right)\right] \end{equation} in which $M^2\equiv \frac{R}{3m} = \frac{F}{3F_{,R}}$ is the effective mass of the scalaron, corresponding to the scalar field \cite{Tsujikawa:2008uc}. It is obvious that the scale dependence of the evolution of the matter density is introduced via the effective Newtonian constant in Eq.(\ref{Eq-geff}). Now we are ready to study the deviation from $\Lambda$CDM due to the MG theory. The first quantity is the growth function, defined by $\delta_m(k,z)=D(k,z)\delta_m^i$, where $\delta_m^i$ is the initial value of the matter density contrast fixed at an initial redshift. (We can instead normalize the growth function at the present time as well.) The other useful quantity is the logarithmic rate of change of the matter density with respect to the scale factor, known as the growth rate. 
The growth rate is defined as: \begin{equation} f(k,z)=\frac{d\ln\delta (k,z)}{d\ln a} \end{equation} where the growth function now depends on the wavenumber as well. Now, by using $\dot{\delta}=Hf\delta$ and $\ddot{\delta}=(H^2f^2+\dot{H}f+H\dot{f})\delta$, we can rewrite Eq.(\ref{eq-delta}) in terms of the growth rate and its derivative as: \begin{equation} \label{eq-f} \dot{f}+Hf^2+\left(2H+\frac{\dot{H}}{H}\right)f -\frac{3}{2}\Omega_m\frac{G_{eff}}{G}\frac{H^2_0}{H}=0 \end{equation} where the growth rate is related to the scalaron mass $M$, the action derivative $F$, the expansion rate of the cosmos $H$, and the wavenumber $k$ at which we observe the structures, via Eq.(\ref{Eq-geff}). Now we write Eq.(\ref{eq-f}) in terms of redshift: \begin{equation} \label{Eq-f} f'(k,z)-\frac{f^2(k,z)}{1+z}+\left(\frac{E'(z)}{E(z)}-\frac{2}{1+z}\right)f(k,z)+\frac{3}{2}\Omega^0_m\frac{(1+z)^2}{E^2(z)}\frac{G_{eff}(k,z)}{G}=0 \end{equation} where $E(z)\equiv H(z)/H_0$, and $\Omega^0_m$ is the present value of the matter density. A prime denotes the derivative with respect to redshift. Now, by solving Eq.(\ref{Eq-MFried}) and Eq.(\ref{Eq-f}), we can find the growth rate of the structures, which depends on the wavenumber via the $G_{eff}$ term \cite{Baghram:2010mc}. One type of MG parametrization is to use a free parameter $\gamma$ to parameterize the growth rate as $f=\Omega(z)^{\gamma}$. In Table \ref{table-b}, we report the constraints on the free parameter $\gamma$. \begin{figure}[t] \centering \includegraphics[width=10cm]{growth.eps} \caption {The ratio of the growth functions, $D_{MG}(k,z)/D_{\Lambda CDM}-1$, plotted versus redshift for the MG (HS) model with $|f_0|=10^{-2}$ and $k=0.2 h/Mpc$ (black solid line). 
The red long-dashed line indicates the ratio for $|f_0|=10^{-4}$, $k=0.2 h/Mpc$, and the MG model with $|f_0|=10^{-2}$, $k=0.01 h/Mpc$ is plotted as the blue dotted line.} \label{Fig-D} \end{figure} In order to be more precise, and to discuss the effect of the growth rate on the redshift distortion and the galaxy power spectrum in Sec.(\ref{Sec-galaxy}), we choose a specific model of MG: the Hu-Sawicki \cite{Hu:2007nk} model, \begin{equation}\label{Eq-hu} f(R)=R-\mu R_c\frac{(R/R_c)^2}{1+(R/R_c)^{2}} \end{equation} in which $\mu>0$ and $R_c>0$ are free parameters. This model evades the solar system constraints on MG. In order to quantify the deviation from $\Lambda$CDM with only one free parameter, we write the first derivative of the action as $F\equiv 1+\tilde{F}$, assuming that the action is $f(R)=R+\tilde{f}(R)$ with $\tilde{F}\equiv\partial\tilde{f}/\partial R$. For the action introduced in Eq.(\ref{Eq-hu}) we have: \begin{equation} \tilde{F}=-2f_0 \frac{R}{H^2_0}\left[1+(\frac{R}{R_c})^2\right]^{-2} \end{equation} where $|f_0|\equiv (\mu H_0^2)/{R_c}$ is the free parameter of the model, $H_0$ is the present value of the Hubble parameter, and $R_c$ is taken to be the present value of the Ricci scalar. In Table (\ref{table-b}) we summarize the observational constraints on the HS model obtained from different geometrical and dynamical observations. We also mention the different conventions used to parameterize the degrees of freedom of the model. \begin{table}[ht] \caption{Observational constraints on modified gravity parameters} \centering \begin{tabular}{c c c c c} \hline\hline Parameter & Observations & Constraints & Note & Ref. 
\\ [0.5ex] \hline $|f_0|$ & SNIa \cite{Kowalski:2008ez}, BAO \cite{Eisenstein2006}, age \cite{Simon:2004tf} & $<0.03\ (1\sigma)$ & one more free parameter & \cite{Martinelli:2009ek} \\ $B_0$ & cluster abundance + SNIa + BAO + gISW & $1.1\times 10^{-3} (2\sigma)$ & $B_0=\frac{F_{,R}}{F}\frac{dR}{dz}\frac{H}{dH/dz}$ & \cite{Lombriser:2010mp} \\ $B_0$ & CMB + BAO + $H_0$ (HST) + SNIa (Union2.1) & $0.079 (1\sigma)$ & BZ parametrization \cite{Bertschinger:2008zb} & \cite{Hu:2013aqa}\\ $B_0$ & CMB (WMAP 9y) + lensing & $0.473 (1\sigma)$ & BZ parametrization \cite{Bertschinger:2008zb} & \cite{Hu:2013aqa}\\ $B_0$ & CMB (Planck) + Polarization (WMAP) & $0.849 (1\sigma)$ & BZ parametrization \cite{Bertschinger:2008zb} & \cite{Hu:2013aqa}\\ $\gamma$ & CMB power spectrum + CMB bispectrum & $0.555^{+0.034}_{-0.042} (2\sigma)$ & $f=\Omega^\gamma(z)$ & \cite{DiValentino:2012yg} \\ $\mu$ & galaxy growth rate $f\sigma_8$ & $\mu>12$ & HS model with one free parameter $n=1.5$ & \cite{Okada:2012mn} \\ \hline \end{tabular} \label{table-b} \end{table} Now, using the derivative of $\tilde{F}$ with respect to the Ricci scalar and substituting it into the definition of the effective mass $M$, and accordingly into $G_{eff}$, we can solve the differential Eqs.(\ref{eq-delta}) and (\ref{Eq-f}) for different wavenumbers with respect to redshift, in order to find the growth function $D(k,z)$ and the growth rate $f(k,z)$, respectively. The $M$ parameter for the HS model is defined as: \begin{equation} M^2=\frac{1-2f_0\tilde{R}\left[1+({\tilde{R}}/{\tilde{R}_c})^2\right]^{-2}}{2f_0[1+({\tilde{R}}/{\tilde{R}_c})^2]^{-3}\times[3({\tilde{R}}/{\tilde{R}_c})^2-1]} \end{equation} where $\tilde{R}=R/H_0^2$ and $\tilde{R}_c=R_c/H_0^2$. \begin{figure}[t] \centering \includegraphics[width=10cm]{fzkMG.eps} \caption {The growth rate versus redshift is plotted for $k=0.2 h/Mpc$ for $|f_0|=10^{-2},10^{-4},10^{-6}$ with red long dashed, green dashed and blue dotted lines, respectively. 
For comparison we also plot the $\Lambda$CDM growth rate (solid black curve). The data points are taken from the LSS surveys listed in Table (\ref{table-f}) with $1\sigma$ error-bars.} \label{Fig-fzkMG} \end{figure} \begin{table}[ht] \caption{Growth rate $f_{obs}$ observational constraints} \centering \begin{tabular}{c c c c c} \hline\hline $z$ & $f_{obs}$ & $\sigma$ & Survey & Ref. \\ [0.5ex] \hline 0.15 & 0.51 & 0.11 & 2dF & \cite{Hawkins:2002sg,Linder:2007nu, Verde:2001sf} \\ 0.22 & 0.60 & 0.10 & WiggleZ & \cite{Blake:2011rj} \\ 0.32 & 0.654 & 0.18 & SDSS & \cite{Reyes:2010tr} \\ 0.35 & 0.70 & 0.18 & SDSS & \cite{Tegmark:2006az} \\ 0.41 & 0.70 & 0.07 & SDSS & \cite{Blake:2011rj} \\ 0.55 & 0.75 & 0.18 & 2dF-SDSS & \cite{Ross:2006me} \\ 0.60 & 0.73 & 0.07 & SDSS & \cite{Blake:2011rj} \\ 0.77 & 0.91 & 0.36 & VIMOS-VLT & \cite{Guzzo:2008ac} \\ 0.78 & 0.70 & 0.08 & SDSS & \cite{Blake:2011rj}\\ 1.4 & 0.90 & 0.24 & 2dF-SDSS & \cite{daAngela:2006mf} \\ 3.0 & 1.46 & 0.29 & SDSS & \cite{McDonald:2004xn} \\ [1ex] \hline \end{tabular} \label{table-f} \end{table} In Fig.(\ref{Fig-D}) we plot the ratio of growth functions, $D_{MG}(k,z)/D_{\Lambda CDM}-1$, versus redshift for the HS model of MG with $|f_0|=10^{-2}$, $k=0.2 h/Mpc$ (black solid line). The red long-dashed line indicates the ratio for $|f_0|=10^{-4}$, $k=0.2 h/Mpc$, and the model with $|f_0|=10^{-2}$, $k=0.01 h/Mpc$ is plotted with a blue dotted line. Fig.(\ref{Fig-D}) shows that the deviation from $\Lambda$CDM is larger on small scales. In Fig.(\ref{Fig-fzkMG}) we plot the growth rate versus redshift for a specific wavenumber $k=0.2 h/Mpc$ and different values of the free parameter $|f_0|$. The viable $f(R)$ gravity models have two regimes: I) for $a^2M^2\ll k^2$ we are in the scalar-tensor regime, where the effective gravitational constant reaches $4G/3$; II) in the regime of a very massive scalaron, $k^2\ll a^2M^2$, we recover the $\Lambda$CDM case.
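These two regimes can be illustrated numerically. The sketch below integrates the linear growth equation, written for $f=d\ln\delta/d\ln a$ as $df/d\ln a+f^2+\left(2+\frac{d\ln H}{d\ln a}\right)f=\frac{3}{2}\Omega_m(a)\,\frac{G_{eff}}{G}$, on a flat $\Lambda$CDM background. This is only a hedged sketch of the kind of calculation behind Fig.(\ref{Fig-fzkMG}): the cosmological parameter values, the forward-Euler integrator and the constant $G_{eff}/G$ passed in are illustrative assumptions, not the exact system Eq.(\ref{eq-delta})-Eq.(\ref{Eq-f}) solved in the text.

```python
import math

# Illustrative background parameters (assumed, not fitted)
Om0, Ol0 = 0.3, 0.7

def E2(a):
    """(H/H0)^2 for a flat LambdaCDM background."""
    return Om0 / a**3 + Ol0

def Om(a):
    """Omega_m(a)."""
    return Om0 / a**3 / E2(a)

def growth(geff=lambda a: 1.0, a_ini=1e-3, n=20000):
    """Forward-Euler integration of the growth system in ln(a).

    geff(a) is G_eff/G; returns a list of (a, D, f) samples,
    starting from matter-dominated initial data D ~ a, f = 1."""
    lna, dlna = math.log(a_ini), -math.log(a_ini) / n
    D, f = a_ini, 1.0
    out = []
    for _ in range(n):
        a = math.exp(lna)
        dlnH = -1.5 * Om(a)  # dlnE/dlna for flat LambdaCDM
        df = (1.5 * Om(a) * geff(a) - f * f - (2.0 + dlnH) * f) * dlna
        D += f * D * dlna
        f += df
        lna += dlna
        out.append((math.exp(lna), D, f))
    return out
```

In the massive-scalaron regime `geff(a)=1` recovers the $\Lambda$CDM growth rate today (close to $\Omega_m^{0.55}\simeq 0.52$ for these assumed parameters), while `geff(a)=4/3` mimics the deep scalar-tensor regime and yields a visibly larger $f$, in line with the enhancement seen in Fig.(\ref{Fig-fzkMG}).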
It is obvious from Fig.(\ref{Fig-fzkMG}) that for all values of $f_0$ the growth rate is higher than in the $\Lambda$CDM case. This is because of the enhancement of the gravitational constant in all cases, with an enhancement factor $G_{eff}/G$ running from $1$ to $4/3$. Fig.(\ref{Fig-fzkMG}) also shows that by increasing the deviation parameter from the cosmological constant (CC), $|f_0|$, the growth rate deviates more from the CC case. The observational data points are from different surveys with $1\sigma$ error-bars, listed in Table (\ref{table-f}). Fig.(\ref{Fig-fzkMG}) shows that the recent growth rate measurements cannot rule out MG with small deviations $f_0\leq 10^{-5}$. Future observations like LSST \cite{Zhan:2006gi} and Euclid \cite{Amendola:2012ys} can constrain the growth rate at intermediate redshifts. Another important point is that at the wavenumber $k\simeq aM$ (which depends on the model parameters) the growth of the structures changes its regime; we will see this effect in the matter power spectrum in the upcoming sections. \begin{figure}[t] \centering \includegraphics[width=10cm]{fzk.eps} \caption {In this figure we plot the growth rate versus redshift with $|f_0|=10^{-2}$ for wavenumbers $k=0.001,0.01,0.2 ~h/Mpc$ with blue dotted, green dashed and red long dashed lines, respectively. For comparison we also plot the $\Lambda$CDM growth rate (solid black curve). The data points are taken from LSS surveys listed in Table (\ref{table-f}) with $1\sigma$ error-bars. } \label{Fig-fzk} \end{figure} In Fig.(\ref{Fig-fzk}) we study the scale dependence of the growth rate in MG. We set the free parameter $|f_0|=10^{-2}$ and plot the growth rate versus redshift for the wavenumbers $k=0.001,0.01,0.2 h/Mpc$, respectively. As discussed previously, for very small wavenumbers (large structures) $f(R)$ gravity is completely indistinguishable from $\Lambda$CDM.
The main effect of these theories is in the quasi-linear regime, at large wavenumbers (smaller structures), where the growth rate has a strong scale dependence and deviates from the standard case. The observational data points are taken from Table (\ref{table-f}). In this subsection we have discussed the scale-dependent growth rate in the HS model. In the upcoming subsection we introduce the BZ parametrization and its relation to the HS model. \subsection{Bertschinger-Zukin parametrization} In general, the chameleon, symmetron and dilaton screened MG theories can be written in the Einstein frame as an Einstein-Hilbert action plus a new degree of freedom, which is coupled to dark matter/baryons via a conformal factor: \begin{equation} S_{E}=\int d^4x\sqrt{-\tilde{g}}\left[\frac{M^2_{pl}}{2}\tilde{R} - \frac{1}{2}\tilde{g}^{\mu\nu}(\tilde{\nabla _{\mu}\phi})(\tilde{\nabla}_{\nu}\phi)-V(\phi)\right]+S_{i}[\chi_i, e^{-\kappa\alpha_i(\phi)}\tilde{g}_{\mu\nu}] \end{equation} where $\chi_i$ represent the matter fields and the Jordan frame metric is related to the Einstein frame metric by the conformal factor: \begin{equation} g_{\mu\nu}=e^{-\kappa\alpha_i(\phi)}\tilde{g}_{\mu\nu} \end{equation} Now, in order to study the effect of MG on linear perturbations, Bertschinger and Zukin \cite{Bertschinger:2008zb} define the scale-dependent effective Newtonian constant $G\mu(k,z)$ and the gravitational slip parameter $\gamma(k,z)$ as: \begin{equation} k^2\Psi = -4\pi Ga^2\mu(k,z)\rho_m\delta_m \end{equation} \begin{equation} \frac{\Phi}{\Psi}=\gamma (k,z) \end{equation} where $\Psi$ and $\Phi$ are the metric perturbations in the perturbed FRW metric, $ds^2=-(1+2\Psi)dt^2+a^2(1-2\Phi)dx^idx_i$.
Now, using the fact that the deviation from the $\Lambda$CDM model can be parameterized by one parameter \cite{Song:2006ej}, Bertschinger and Zukin parameterize the deviation functions as: \begin{equation} \mu(k,z)=\frac{1+{\frac{4}{3}\lambda^2 k^2}/{(1+z)^4}}{1+{\lambda ^2k^2}/{(1+z)^4}}, \end{equation} \begin{equation} \gamma (k,z)=\frac{1+{\frac{2}{3}\lambda ^2 k^2}/{(1+z)^4}}{1+{\frac{4}{3}\lambda^2 k^2}/{(1+z)^4}} \end{equation} where the free parameter $\lambda$ is the Compton wavelength of the new degree of freedom; it is related to the free parameter $B_0$ as: \begin{equation} \lambda ^2 \equiv \frac{B_0}{2H_0^2} \end{equation} In Table (\ref{table-b}) we report some observational constraints on $B_0$. The $B_0$ parameter is related to the derivatives of $f(R)$ as: \begin{equation} B=\frac{F_{,R}}{F}R\frac{H}{H'} \end{equation} where $R$ is the Ricci scalar, which can be assumed to take the same value as in $\Lambda$CDM, because the viable $f(R)$ models can mimic the background evolution of $\Lambda$CDM. In this case we can relate the free parameter of the HS model, $f_0$, to the model-independent free parameter of the BZ parametrization as: \begin{equation} |f_0|\simeq|\frac{1}{B_0}\frac{R(z=0)H_0}{H'(z=0)}-1| \end{equation} In the next section we will investigate the effect of primordial NG on the bias parameter, and then study the simultaneous effect of the modified gravity growth and the scale-dependent bias in the special case of the HS model. The general study of this effect in MG models is beyond the scope of this work. \section{Non-Gaussian Bias} \label{Sec-bias} In this section we discuss the bias parameter and the effect of primordial NG and MG on it. As mentioned in the introduction, the statistics of the luminous matter in the Universe is a promising tool to constrain cosmological models.
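The limiting behaviour of these BZ functions is easy to check numerically. The snippet below is a minimal sketch (the parametrization $\lambda^2=B_0/2H_0^2$ is folded into a single input `lam2`, with $k$ in the same units as $\lambda^{-1}$), verifying that $\mu\to 1$, $\gamma\to 1$ on very large scales and $\mu\to 4/3$, $\gamma\to 1/2$ deep inside the Compton wavelength.

```python
def mu_bz(k, z, lam2):
    """BZ effective gravitational-coupling function mu(k, z)."""
    x = lam2 * k * k / (1.0 + z) ** 4
    return (1.0 + 4.0 * x / 3.0) / (1.0 + x)

def gamma_bz(k, z, lam2):
    """BZ gravitational slip gamma(k, z) = Phi / Psi."""
    x = lam2 * k * k / (1.0 + z) ** 4
    return (1.0 + 2.0 * x / 3.0) / (1.0 + 4.0 * x / 3.0)
```

Both functions depend on $k$ and $z$ only through $x=\lambda^2k^2/(1+z)^4$, so at fixed $k$ the deviation from GR shrinks with increasing redshift.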
To study their statistics we need to know about structure formation in the nonlinear regime and the relation between the dark matter halos and the luminous matter \cite{Mo:1995cs}. The structures in the Universe (i.e. galaxies and clusters of galaxies) are hosted by dark matter halos. These dark matter halos form when a region of mass $M$, with a density contrast $\delta =(\rho-\bar{\rho})/\bar{\rho}$, exceeds the critical density threshold (for spherical collapse $\delta_c\simeq 1.68$) \cite{Gunn:1972sv}. The abundance of dark matter halos is modulated by the background density of matter perturbations. This modulation is formalized by the halo bias parameter \cite{Bardeen:1985tr}. In this context the bias parameter is defined as the ratio of the halo density contrast $\delta_h$ to the background matter density contrast $\delta_m$ (in this work we assume that the galaxy-halo bias is unity, $\delta_g=\delta_h$): \begin{equation} b=\frac{\delta_h}{\delta_m} \end{equation} The bias parameter can be obtained in the framework of Peak-Background Splitting \cite{Sheth:1999mn}. In that case the bias can be obtained from the probability of structures forming above a threshold height $\nu\equiv \delta_c / \sigma(M)$, where $\sigma(M)$ is the variance of the density contrast. In this picture the bias is defined as: \begin{equation} \label{eq-bl} b_L= - \frac{1}{\sigma(M)}\frac{d\ln \bar{n}(M,\nu)}{d \nu} \end{equation} where $b_L$ is the Lagrangian bias and $\bar{n}$ is the mean number density of structures with mass $M$ and height $\nu$. In Eq.(\ref{eq-bl}) we assume that we can split the matter density as $\delta_m=\delta_s+\delta_l$, where $\delta_s$ indicates the short-wavelength density contrast of matter corresponding to the structures and $\delta_l$ is the long-wavelength mode, with the condition $\delta_l\ll\delta_s$.
If we assume the very simple case of the Press-Schechter mass function, \begin{equation} \bar{n}(M,z)=-2\frac{\bar{\rho}}{M^2}f(\nu)\frac{d\ln\sigma(M,z)}{d\ln M} \end{equation} with \begin{equation} f(\nu)=\frac{\nu}{\sqrt{2\pi}}e^{-\nu^2/2} \end{equation} the Eulerian bias parameter can be written as: \begin{equation} b_{E}(z)=1+\frac{\nu^2(z)-1}{\delta_c} \end{equation} With the Press-Schechter mass function \cite{Press:1973iz} and Gaussian initial conditions, the bias parameter is scale independent. This holds for essentially all Gaussian initial conditions under the assumption of the universality of the mass function, which means that the probability of finding a structure in a given mass range depends only on the height parameter. Recently, it was shown that primordial NG introduces a scale-dependent bias \cite{Dalal:2007cu}. Since then, a large body of literature has been devoted to the study of the effect of NG on LSS observations, especially the bias parameter \cite{LoVerde:2007ri,Matarrese:2008nc, Afshordi:2008ru,Jeong:2009vd,Verde:2010wp,Desjacques:2010nn,Norena:2012yi}. The idea is that in local NG the potential is modulated as: \begin{equation} \Phi_{NG}=\Phi_G+f_{NL}\Phi^2_{G} \end{equation} where $\Phi$ is the Bardeen potential, which can be used instead of the density perturbation to study the effect of NG. We can use the Bardeen potential and the matter density contrast interchangeably, with the Poisson equation as the link between them. Dalal et al. \cite{Dalal:2007cu} showed that in the case of local non-Gaussianity the bias correction is: \begin{equation}\label{Eq-bNG} b_{NG}=\frac{2f_{NL}(b_E-1)\delta_c}{{\cal{M}}(k,z)} \end{equation} where ${\cal{M}}$ is the function relating the primordial curvature perturbation ${\cal{R}}$ to the linear density contrast $\delta_m$ as ${\cal{M}}=\delta_m/{\cal{R}}$.
The form of ${\cal{M}}$ is as follows: \begin{equation} {\cal{M}}=\frac{2}{5}\frac{k^2 T(k)D(z)}{H^2_0\Omega^0_m} \end{equation} where $D(z)$ is the growth function and $T(k)$ is the transfer function. It is worth noting that Eq.(\ref{Eq-bNG}) can also be obtained from the Excursion Set Theory approach \cite{Bond:1990iw}; this is discussed in App.(1) of \cite{Baghram:2013lxa}. In this section, the linear Gaussian and local NG bias for $\Lambda$CDM and MG theories are calculated. For this task, we use the Sheth-Tormen mass function \cite{Sheth:1999mn} to find the linear bias. The Sheth-Tormen probability function $f(\nu)$ is defined as: \begin{equation} f_{ST}=A\sqrt{\frac{\alpha\nu^2}{2\pi}}\left[1+\frac{1}{(\alpha\nu^2)^p}\right]e^{-\frac{\alpha\nu^2}{2}} \end{equation} where $A\simeq0.32$, $\alpha\simeq 0.707$ and $p\simeq 0.3$ are free parameters obtained from N-body simulations \cite{Sheth:1999mn}. The Sheth-Tormen linear bias is then: \begin{equation} b^{L}_{ST}(M,z)=1+\frac{1}{\delta_c}\left[\alpha\nu^2(z)-1+\frac{2p}{1+(\alpha\nu^2(z))^p}\right] \end{equation} In order to calculate the NG bias defined in Eq.(\ref{Eq-bNG}) for the $\Lambda$CDM model, we use the standard growth function and the Bardeen, Bond, Kaiser and Szalay (BBKS) transfer function \cite{Bardeen:1985tr}, defined respectively as follows: \begin{equation} D(z)=\frac{5}{2}\frac{1}{1+z}\Omega_{m}\left[\Omega^{4/7}_{m}-\Omega_{\Lambda}+(1+\frac{\Omega_m}{2})(1+\frac{\Omega_{\Lambda}}{70})\right]^{-1} \end{equation} and \begin{equation} T(k=q\Omega_m h^2 Mpc^{-1})\approx \frac{\ln[1+2.34q]}{2.34q}\times\left[1+3.89q+(16.2q)^2+(5.47q)^3+(6.71q)^4\right]^{-1/4} \end{equation} where $\Omega_m=\Omega^0_m a^{-3}/(\Omega^0_m a^{-3}+\Omega^0_{\Lambda})$ and $\Omega_{\Lambda}=\Omega^0_{\Lambda}/(\Omega^0_m a^{-3}+\Omega^0_{\Lambda})$.
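Putting the last three ingredients together, the sketch below evaluates ${\cal{M}}(k,z)$ with the BBKS transfer function and the scale-dependent NG correction of Eq.(\ref{Eq-bNG}). It is a hedged illustration: the parameter values ($\Omega^0_m=0.3$, $h=0.7$, $\delta_c=1.686$, $H_0=1/2998\,h/Mpc$ in $c=1$ units) are assumptions chosen only to exhibit the $k^{-2}$ scaling of $b_{NG}$ on large scales.

```python
import math

# Assumed illustrative parameters
Om0, h = 0.3, 0.7
H0 = 1.0 / 2998.0          # H0 in h/Mpc units (c = 1)
delta_c = 1.686

def t_bbks(k):
    """BBKS transfer function; k in h/Mpc, shape parameter q = k/(Om0 h)."""
    q = k / (Om0 * h)
    if q < 1e-8:
        return 1.0
    poly = 1 + 3.89 * q + (16.2 * q) ** 2 + (5.47 * q) ** 3 + (6.71 * q) ** 4
    return math.log(1.0 + 2.34 * q) / (2.34 * q) * poly ** -0.25

def curly_m(k, D=1.0):
    """curly-M(k, z) = (2/5) k^2 T(k) D(z) / (H0^2 Om0)."""
    return 0.4 * k * k * t_bbks(k) * D / (H0 * H0 * Om0)

def b_ng(k, f_nl, b_e, D=1.0):
    """Scale-dependent local-NG bias correction of Eq. (Eq-bNG)."""
    return 2.0 * f_nl * (b_e - 1.0) * delta_c / curly_m(k, D)
```

Halving $k$ roughly quadruples $b_{NG}$ (up to the mild variation of $T(k)$), and the sign of the correction follows the sign of $f_{NL}$, as in Fig.(\ref{Fig-bias}).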
\begin{figure}[t] \centering \includegraphics[width=9cm]{bias.eps} \caption {The bias parameter versus redshift is plotted for $f_{NL}=+5$ and $k=0.2 h/Mpc$ (red long dashed line), $f_{NL}=+5$ and $k=0.01 h/Mpc$ (green dashed line) and $f_{NL}=-5$ and $k=0.01 h/Mpc$ (blue dotted line). For comparison, we also plot the Gaussian Sheth-Tormen linear bias with a solid black line.} \label{Fig-bias} \end{figure} In Fig.(\ref{Fig-bias}) we plot the bias parameter versus redshift for the linear case using the Sheth-Tormen mass function. We also plot the local non-Gaussian bias with $f_{NL}=\pm 5$ for different wavenumbers. As expected, there is a $k^{-2}$ dependence in the bias, so the largest deviation appears for small wavenumbers. A positive $f_{NL}$ increases the bias parameter, whereas a negative $f_{NL}$ decreases the bias in comparison to the linear case. The NG bias for $k=0.2 h/Mpc$ is almost indistinguishable from the $\Lambda$CDM case. Current bias parameter measurements are done in redshift-averaged bins, which are not accurate enough to distinguish between cosmological models with small local non-Gaussianity \cite{Blake:2011rj}. However, future observations are promising and may reach the precision of the Planck satellite in detecting local NG \cite{Verde:2010wp}. \begin{figure}[t] \centering \includegraphics[width=10cm]{bias-NGMG.eps} \caption {The NG bias ratio $b^{MG}/b_{\Lambda CDM}-1$ is plotted versus redshift. The red long dashed line is for the deviation parameter $|f_0|=10^{-4}$ with wavenumber $k=0.01 h/Mpc$. For a deviation of $|f_0|=10^{-2}$ we plot the bias ratio for the wavenumbers $k=0.001 , 0.01 , 0.2 h/Mpc$ with green dashed, solid black and blue dotted lines, respectively.} \label{Fig-biasMG} \end{figure} In the case of MG, the non-Gaussian bias is also modified through the Poisson equation.
Assuming that the universality of the mass function is unchanged in MG, the only effect is imprinted in the relation between the curvature perturbation and the matter density through the modified ${\cal{M}}_{MG}$: \begin{equation} \label{Eq-Meff} {\cal{M}}_{MG}= \frac{2}{5}\frac{G_{eff}}{G}\frac{k^2 T(k)D_{MG}(z,k)}{H^2_0\Omega^0_m} \end{equation} where $D_{MG}$ is the MG growth function and $G_{eff}/G$ is the gravitational constant enhancement discussed in Sec.(\ref{Sec-Mg}). The modified growth function can be obtained from the relation $\dot{D}_{MG}(k,z)=H f_{MG}(k,z)D_{MG}(k,z)$ with the same initial conditions as $\Lambda$CDM in the dark matter dominated era. The modified growth rate can be obtained by solving Eq.(\ref{Eq-f}). A crucial point is the appearance of a new scale dependence in the NG bias parameter, via the modified growth function and also the effective Newtonian constant in Eq.(\ref{Eq-Meff}). An important point to indicate is that this modification is most relevant for small wavenumbers (the local NG case). The study of the general case for a desired bispectrum is beyond the scope of this work. In Fig.(\ref{Fig-biasMG}) we plot the ratio of the total bias (Gaussian + NG) in the MG model introduced in the previous section to that of $\Lambda$CDM, versus redshift. We plot this ratio for an MG deviation of $|f_0|=10^{-4}$ with wavenumber $k=0.01 h/Mpc$, and also for a larger deviation of $|f_0|=10^{-2}$ for the wavenumbers $k=0.001, 0.01$ and $0.2 h/Mpc$, respectively. The NG bias with MG is the same as in the $\Lambda$CDM non-Gaussian case for large wavenumbers; this is because there the NG bias is very small and indistinguishable from the linear bias. Instead, for small wavenumbers, where the local NG effect is most efficient, a new scale dependence is introduced by MG. Future very large scale surveys are promising for constraining the modified gravity NG bias parameter. An important point is that this ratio goes to zero at higher redshifts.
This is because the scale dependence introduced by MG vanishes at higher redshifts. In the next section we discuss the redshift space distortion parameter $\beta=f/b$ and its scale dependence introduced by NG and MG, and then the RSD effect on the galaxy power spectrum. \section{Large Scale Structure Observables} \label{Sec-galaxy} In the previous sections we showed that the growth rate and the bias parameter are both modified by primordial NG and by the deviation from $\Lambda$CDM. In the first subsection we investigate the effect of these two parameters on Redshift Space Distortion (RSD) and the galaxy power spectrum. In the second subsection we introduce the galaxy growth rate and discuss its relation with the matter growth rate. \subsection{Redshift space distortion and galaxy power spectrum} The idea of redshift space, as explained in the introduction, is that what we observe is the luminous matter distribution in redshift space instead of in spatial coordinates. The clustering of matter changes the peculiar velocities of the dark matter tracers and affects the observed redshift. Accordingly, the amount of distortion incurred in transforming from redshift coordinates to real coordinates is a measure of the amount of clustered matter. The density contrast of matter in redshift space is related to that in real space via: \begin{equation} \delta_z(k)=\delta_r(k)\left(1+\beta\mu^2\right) \end{equation} where $\mu={\bf{k}}\cdot{\bf{r}}/kr$ is the cosine of the angle between the wavenumber and the line of sight, and $\beta$ is the redshift space parameter defined as: \begin{equation} \beta(k,z)=\frac{f(k,z)}{b(k,z)} \end{equation} In contrast to the $\Lambda$CDM case, the $\beta$-function here is a scale-dependent parameter. This scale-dependent behavior is introduced by the NG bias on one hand and by the MG growth rate function on the other.
\begin{figure}[t] \centering \includegraphics[width=10cm]{beta.eps} \caption {The redshift space distortion parameter $\beta=f/b$ is plotted versus redshift. The solid black line shows the $\Lambda$CDM growth rate with the Sheth-Tormen linear bias. The green dashed line shows $\beta$ with the non-Gaussian bias with $f_{NL}=+5$ and wavenumber $k=0.2 h/Mpc$. The long dashed red line is with the non-Gaussian bias with $f_{NL}=+5$ and wavenumber $k=0.01 h/Mpc$, and the blue dotted line is the $\beta$-function for $f_{NL}=-5$ and wavenumber $k=0.01 h/Mpc$. In all NG cases we use the $\Lambda$CDM growth rate.} \label{Fig-beta} \end{figure} In Fig.(\ref{Fig-beta}) we plot the $\beta$-function for the non-Gaussian case. In the NG case, $\beta$ for the wavenumber $k=0.2 h/Mpc$ is almost indistinguishable from the linear case; this is because NG shows up at small wavenumbers. The important point here is that for positive local NG, say $f_{NL}=+5$, the NG bias is increased, which causes a decrease in the $\beta$-function, since $\beta$ is inversely proportional to the bias. Negative NG increases the redshift space distortion at higher redshifts in comparison with the linear case. Now we want to explore the effect of MG on the $\beta$-function. \begin{figure}[t] \centering \includegraphics[width=10cm]{betaMG.eps} \caption {The redshift space distortion parameter is plotted versus redshift for $f(R)$. The black solid line represents the $\Lambda$CDM RSD parameter with linear bias. The red long dashed line represents $|f_{0}|=10^{-4}$ with wavenumber $k=0.01 h/Mpc$. The green dashed line shows $|f_{0}|=10^{-2}$ with $k=0.01 h/Mpc$, and the blue dotted line shows $|f_{0}|=10^{-2}$ with $k=0.2 h/Mpc$. In all cases the bias parameter is linear.} \label{Fig-betaMG} \end{figure} In Fig.(\ref{Fig-betaMG}) we plot the RSD parameter versus redshift for the MG case with deviations $|f_0|=10^{-4}$ and $|f_0|=10^{-2}$. The deviation from the $\Lambda$CDM case is almost negligible for $|f_{0}|=10^{-4}$.
For the larger deviation, the scale dependence of the RSD is affected by MG; as expected, the largest deviation from the standard case occurs at large wavenumbers. \begin{figure}[t] \centering \includegraphics[width=10cm]{betaMGNG.eps} \caption {The $\beta$-function is plotted versus redshift for $f_{NL}=+5$ with wavenumbers $k=0.2 h/Mpc$ and $k=0.01 h/Mpc$ with purple long dashed and green dashed lines. We also plot the RSD parameter for negative NG, $f_{NL}=-5$, with wavenumbers $k=0.2 h/Mpc$ and $k=0.01 h/Mpc$ with blue dotted and dash-dotted red lines. For comparison we plot the linear $\Lambda$CDM case with a solid black line.} \label{Fig-betaMGNG} \end{figure} An interesting point here is the combination of the two effects of NG and MG on the $\beta$-function. As the effect of NG is at small wavenumbers and the effect of MG at large wavenumbers, we expect a non-trivial combination of scale dependences. In Fig.(\ref{Fig-betaMGNG}) we plot the $\beta$-function versus redshift for positive and negative local NG amplitudes $f_{NL}=\pm 5$ and two wavenumbers $k=0.01, 0.2 h/Mpc$. The green dashed line shows the $\beta$-function for MG with deviation $|f_0|=10^{-2}$ and a positive local NG $f_{NL}=+5$ with wavenumber $k=0.01 h/Mpc$. At low redshifts the deviation from the CC has the dominant effect on the RSD parameter, whereas at higher redshifts the effect of positive NG dominates. For the wavenumber $k=0.2 h/Mpc$ the effect of NG is small, so at higher redshifts the effects of positive and negative NG are indistinguishable. The largest $\beta$-function value at higher redshifts corresponds to negative NG and the small wavenumber $k=0.01 h/Mpc$, which causes a decrease in the bias parameter and correspondingly a higher $\beta$-function. The other LSS observable is the power spectrum of galaxies.
Now we can transfer the RSD effect to the galaxy power spectrum using the relation between the real and redshift space power spectra, $P^{(z)}=P^{(r)}(1+\beta\mu^2)^2$, and averaging over the angle variable $\mu$: \begin{equation} P^{(z)}_g(k,z)= b^2 P^{(r)}_m(k,z)\left[1+\frac{2}{3}\beta +\frac{1}{5}\beta^2\right] \end{equation} where $b$ is the total bias parameter defined in Sec.(\ref{Sec-bias}), which is the sum of the linear and non-Gaussian terms, $b=b_L+b_{NG}$, and $f$ is the growth rate of the MG theory. $P^{(r)}_m$ is the linear matter power spectrum, related to the growth function $D(z)$ and the transfer function $T(k)$ as: \begin{equation} P_m(k,z)=A k^{n_s}T^2(k)D^2(z) \end{equation} where $A$ is the amplitude of matter fluctuations and $n_s$ is the spectral index. For the MG case we substitute $D(z)$ with $D_{MG}(z,k)$ and assume that the transfer function is the same for both theories. In Fig.(\ref{Fig-Pg}) we plot the galaxy power spectrum for the $\Lambda$CDM case with linear bias (red solid line) and for the modified gravity theory with deviation parameter $|f_0|=10^{-2}$ and local NG with $f_{NL}=+5$ (blue dotted line), with non-linear corrections. The data points are the galaxy power spectrum from the Luminous Red Galaxy (LRG) data of the SDSS survey \cite{Tegmark:2006az}. As shown in Fig.(\ref{Fig-Pg}), the effect of primordial NG with the MG background is an enhancement of the galaxy power spectrum. To show the scale dependence of this enhancement, in Fig.(\ref{Fig-Pgrel}) we plot the ratio of the galaxy power spectrum for the modified gravity theory with deviation parameter $|f_0|=10^{-2}$ and local NG initial conditions with $f_{NL}=+5$ ($P_g^{MG-NG}$) to the $\Lambda$CDM galaxy power spectrum with linear bias.
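The angular prefactor in brackets follows from averaging $(1+\beta\mu^2)^2$ uniformly over $\mu\in[-1,1]$: $\frac{1}{2}\int_{-1}^{1}(1+\beta\mu^2)^2\,d\mu=1+\frac{2}{3}\beta+\frac{1}{5}\beta^2$. A quick numerical sketch of this average (independent of any cosmological input) is:

```python
def kaiser_boost(beta, n=20001):
    """Trapezoid-rule mean of (1 + beta*mu^2)^2 over mu in [-1, 1]."""
    total = 0.0
    for i in range(n):
        mu = -1.0 + 2.0 * i / (n - 1)
        weight = 0.5 if i in (0, n - 1) else 1.0  # trapezoid end weights
        total += weight * (1.0 + beta * mu * mu) ** 2
    return total / (n - 1)  # mean = (dx * sum) / (interval length 2)
```

The numerical mean agrees with the closed form $1+\frac{2}{3}\beta+\frac{1}{5}\beta^2$ to high accuracy for any $\beta$, which is a convenient sanity check when implementing the redshift-space mapping.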
Accordingly, this ratio is given by: \begin{equation}\label{eq-ratio} \frac{P_g^{NG-MG}}{P^{\Lambda CDM}}-1 =\frac{(b_L+b_{NG})^2}{b^2_L}\frac{D_{MG}(k,z)}{D(k,z)}\frac{1+\frac{2}{3}\tilde{\beta}+\frac{1}{5}\tilde{\beta}^2}{1+\frac{2}{3}\beta +\frac{1}{5}\beta ^2}-1 \end{equation} where $\tilde{\beta}=f_{MG}/(b_L+b_{NG})$. Since we probe this ratio at low redshifts, where the contribution of the NG bias terms is small due to Eq.(\ref{Eq-bNG}) (note $b_{NG}\propto (1+z)$), Eq.(\ref{eq-ratio}) can be approximated by \begin{equation} \frac{P_g^{NG-MG}}{P^{\Lambda CDM}}-1 \simeq \left(1+2\frac{b^{NG}}{b_L}\right)\frac{D_{MG}(k,z)}{D(k,z)}-1 \end{equation} where we neglect the redshift space distortion term at low redshifts, motivated by Fig.(\ref{Fig-betaMGNG}). The main contribution to the galaxy power spectrum comes from the modified gravity (on small scales) rather than from the bias parameter. This is because we compare the two power spectra in the redshift range $0.155 < z < 0.474$ of the LRG sample, with median $z\sim0.3$, in which the non-Gaussian bias is small in comparison to the linear bias, as shown in Fig.(\ref{Fig-bias}). Consequently, the main contribution comes from the modified growth function, which deviates from the standard case on small scales. The ratio of growth functions is plotted in Fig.(\ref{Fig-D}). In the next subsection we introduce the galaxy growth rate as a new observational parameter. \begin{figure}[t] \centering \includegraphics[width=10cm]{Pg.eps} \caption {The galaxy power spectrum is plotted for the $\Lambda$CDM case with linear bias (red solid line) and for the modified gravity theory with deviation parameter $|f_0|=10^{-2}$ and local NG with $f_{NL}=+5$ (blue dotted line), with non-linear corrections.
The data points are the galaxy power spectrum from the Luminous Red Galaxy (LRG) data of the SDSS survey \cite{Tegmark:2006az}.} \label{Fig-Pg} \end{figure} \begin{figure}[t] \centering \includegraphics[width=10cm]{Pg-rel1.eps} \caption {The ratio of the galaxy power spectrum for the modified gravity theory with deviation parameter $|f_0|=10^{-2}$ and local NG with $f_{NL}=+5$ to the $\Lambda$CDM case with linear bias. } \label{Fig-Pgrel} \end{figure} \subsection{Galaxy Growth Rate} In the previous subsection we discussed the effect of MG and NG on the RSD and the growth rate of dark matter. Observationally, we measure the RSD through the power spectrum or the galaxy correlation function, and by knowing the bias parameter we obtain the growth rate. Now we define a new parameter, known as the galaxy growth rate: \begin{equation} f_{g}=\frac{d\ln\delta_g}{d\ln a} \end{equation} where $\delta_g$ is the galaxy number density contrast. The $f_g$ parameter could be a direct observable if we have enough statistics of galaxies in redshift bins very close to each other to obtain the differential quantity $d\delta_g/dz$. To go further, we can relate the galaxy growth rate to the dark matter growth rate as follows: \begin{equation} f_g(k,z)=\frac{d\ln(b\delta_m)}{d\ln a}=f_m(k,z)+\frac{d\ln b(k,z)}{d\ln a}=f_m-(1+z)\frac{b'(k,z)}{b(k,z)} \end{equation} where $'$ denotes the derivative with respect to redshift. In the case of linear bias, under the universality assumption, the bias can be written as a function of the height parameter, $b=b(\nu)$. Consequently, the redshift dependence only appears in $\sigma(M,z)$.
Accordingly, we can write the galaxy growth rate as: \begin{equation} f_g(k,z)=f_m(k,z)+(1+z)\frac{d\ln b(\nu)}{d\ln \nu}\frac{\delta_c}{\nu}\frac{d\sigma_M(z)}{dz}\sigma_M^{-2}(z) \end{equation} Now, using the facts that $\sigma_M(z)=\sigma_M(z=0){D(z)}/{D(z=0)}$ and $f_m=-(1+z)D'(z)/D(z)$, we find: \begin{equation}{\label{Eq-fg1}} f_g(k,z)=\left[1-\frac{d\ln b(\nu)}{d\ln(\nu)}\right]f_m(k,z) \end{equation} Eq.(\ref{Eq-fg1}) shows that there is a linear bias between the growth rate of galaxies and the growth rate of matter, where we have defined the {\it{linear growth rate bias}} as: \begin{equation} b^{(L)}_{(f)}(z)\equiv\left[1-\frac{d\ln b(\nu)}{d\ln(\nu)}\right] \end{equation} in which the superscript $(L)$ indicates the linearity of the growth rate bias. For the Press-Schechter and the Sheth-Tormen mass functions the growth rate bias is obtained as \begin{equation} b^{L}_{PS(f)}(z)=\frac{\delta_c-1-\nu^2}{\delta_c-1+\nu^2} \end{equation} \begin{equation}\label{Eq-bfST} b^{L}_{ST(f)}(z)=1-\frac{2\alpha\nu^2}{\delta_c+\alpha\nu^2-1+\frac{2p}{1+(\alpha\nu^2)^p}}\left[1-2p^2 \frac{(\alpha\nu^2)^{p-1}}{\left(1+(\alpha\nu^2)^p\right)^2}\right] \end{equation} In Eq.(\ref{Eq-bfST}), if we set $p=0$ and $\alpha=1$, the Sheth-Tormen galaxy growth rate bias, not surprisingly, reduces to the Press-Schechter bias. Now we turn our attention to the NG galaxy growth rate bias.
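The closed-form Press-Schechter growth rate bias above can be cross-checked against the defining relation $b_f=1-d\ln b/d\ln\nu$ applied to the Eulerian PS bias $b_E=1+(\nu^2-1)/\delta_c$. The sketch below (with $\delta_c=1.686$ assumed for illustration) compares the analytic expression with a central-difference derivative:

```python
import math

delta_c = 1.686

def b_eulerian(nu):
    """Press-Schechter Eulerian bias b_E(nu)."""
    return 1.0 + (nu * nu - 1.0) / delta_c

def b_f_analytic(nu):
    """Closed-form PS growth rate bias b^L_{PS(f)}."""
    return (delta_c - 1.0 - nu * nu) / (delta_c - 1.0 + nu * nu)

def b_f_numeric(nu, eps=1e-6):
    """b_f = 1 - dln b_E / dln nu, evaluated by central differences."""
    lo, hi = nu * (1.0 - eps), nu * (1.0 + eps)
    dlnb = math.log(b_eulerian(hi)) - math.log(b_eulerian(lo))
    return 1.0 - dlnb / (math.log(hi) - math.log(lo))
```

Note that $b_f$ changes sign at $\nu^2=\delta_c-1$, so for rare, high-$\nu$ objects the bias-evolution term dominates over the matter growth rate term.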
In this case the bias will be a function of the height parameter and of the cosmological evolution of perturbations via ${\cal{M}}$: \begin{equation} \tilde{b}^{(L)}_f(k,z)=\tilde{b}^{(L)}(\nu, {\cal{M}}(k,z)) \end{equation} If the dependence on $\nu$ and ${\cal{M}}$ is separable, $\tilde{b}^{(L)}_f(k,z)=\bar{b}_{{\cal{M}}}[{\cal{M}}(k,z)]\times\bar{b}_{\nu}[\nu(z)]$, the growth rate bias will be: \begin{equation} \tilde{b}^{(L)}_f(k,z)=\left[b^{(L)}_f+\frac{1}{f_m}\frac{d\ln \bar{b}_{{\cal{M}}}[{\cal{M}}(z,k)]}{d\ln a}\right], \end{equation} where a new scale-dependent term is added to the growth rate bias. In the case of MG, this term is modified by using ${\cal{M}}_{MG}(z,k)$ instead of the standard expression. \section{Conclusion and Discussion} In this work we study the simultaneous effect of Hu-Sawicki $f(R)$ modified gravity and local primordial NG on large scale structure observables. We use this specific $f(R)$ model as a toy model to show the potentially important effect of MG on the interpretation of LSS observations. The modification of gravity introduces a scale dependence in the growth of the structures; on the other hand, primordial NG makes the bias parameter a scale-dependent quantity. In order to study the effect of NG/MG, we assert that the galaxy power spectrum and the redshift space distortion are promising observables, as both of them are affected by the bias parameter and the growth rate of the structures. In the case of modified gravity, the bias parameter and the growth rate become scale-dependent quantities. In this work we assume that the scale dependence of the linear bias parameter due to the modification of gravity is small, and that the new scale dependence comes from the contribution of the modified growth rate to the non-Gaussian bias.
We show that at higher redshifts and larger scales the redshift space distortion parameter can be used to distinguish between positive and negative $f_{NL}$ (plotted in Fig.(\ref{Fig-beta})) when the background is $\Lambda$CDM. However, in the case of $f(R)$ gravity with a deviation of $|f_0|=10^{-2}$ and the presence of primordial NG, the RSD shows a degenerate effect at higher redshifts. On the other hand, the galaxy power spectrum is affected by redshift space distortion and bias simultaneously, where the scale dependence shows up again. Generally, introducing primordial NG or a modification of gravity enhances the galaxy power spectrum (Fig.(\ref{Fig-Pg})). This enhancement has a slight scale dependence, shown in Fig.(\ref{Fig-Pgrel}), which is mainly sourced by the modification of gravity rather than by the bias parameter; consequently, it shows up on smaller scales. Finally, in order to have a new cross-check for our observations and to break the degeneracies, we introduce the galaxy growth rate. The galaxy growth rate, which measures the growth of the galaxy density contrast with respect to the scale factor on a logarithmic scale, leads us to the growth rate bias parameter, which relates the galaxy growth rate to the dark matter growth rate. In the $\Lambda$CDM case this galaxy growth rate bias is scale independent, while by introducing NG/MG a scale dependence emerges. It is worth mentioning that future observations of large scale structure, mainly galaxy counting with better precision and larger statistics, will help us constrain cosmological models that deviate from the six-parameter standard model. The galaxy power spectrum, redshift space distortion and bias parameter are potentially promising observables in which to detect any scale dependence.
There are, however, many complications in the detection of a departure from $\Lambda$CDM, which we list below with corresponding discussions: a) The detection of a small NG in LSS observables (i.e. the bias parameter) is a difficult task due to the statistics and the noise. However, there are optimistic forecasts for the future. On the other hand, there is room for a {\it{scale-dependent bias}}, where we can get a larger NG on sub-CMB scales. b) Constraining cosmological models with the galaxy power spectrum (correlation function) on small scales is always a venturesome task because of non-linear effects. The deviation from the linear power spectrum, which is most affected by primordial NG and the background evolution, could be a result of the non-linear growth of the structures. Future very large scale surveys can probe the linear power spectrum. c) The interpretation of the bias parameter is a complicated task, as the bias parameter depends on the sample of luminous matter that we choose (i.e. the color, mass, redshift, ... of the galaxy sample). Consequently, there is a degeneracy between the astrophysical effects and the cosmological ones. d) The scale dependence of the linear bias due to the scale-dependent growth function will be a source of uncertainty as well. As a future prospect of this work, it is possible to extend the study to the simultaneous effect of a general NG shape with a general modified growth function. Also, future LSS observations will provide statistically meaningful data to constrain $f_{NL}$ and the deviation from $\Lambda$CDM simultaneously. \label{Sec-Conc} \acknowledgments We would like to thank Sarah Shandera, Hassan Firouzjahi and Sohrab Rahvar for their insightful comments and discussions. We also thank the anonymous referee for useful comments which helped us to improve our presentation and results.
NM thanks the School of Astronomy of Institute for Research in Fundamental Science (IPM) for their kind hospitality during the preparation of this work.
\section{Introduction} Fix an irrational number $\alpha\in \mathbb R$, and consider the family of Markov processes with the evolution governed by the transition kernel \begin{equation}\label{E:1.1} p(x, \cdot ) = \mathfrak{p}(x) \delta_{x+\alpha} + \mathfrak{q}(x) \delta_{x-\alpha}, \quad p : \mathbb{T} \times \mathcal B (\mathbb{T} ) \to [0,1], \end{equation} where $\mathcal B (\mathbb{T} )$ stands for the $\sigma$-algebra of Borel subsets of $\mathbb{T}$ and $\mathfrak{q}(x)=1-\mathfrak{p}(x)$, $x\in \mathbb{T}$. We call the function $\mathfrak{p}$ symmetric if $$\int_{\mathbb{T}} f(x) dx = 0,$$ where \begin{equation}\label{E:1.2} f(x)=\ln\frac{\mathfrak{p}(x)}{\mathfrak{q}(x)}, \quad x\in\mathbb{T}, \end{equation} and asymmetric otherwise. We call a measure $\mu$ invariant for the transition kernel (\ref{E:1.1}) if distributing the starting point according to $\mu$ makes the Markov process with this transition kernel stationary (thus $\mu$ is also often called a stationary measure). Since $\mathbb{T}$ is compact, the Krylov-Bogoliubov technique yields the existence of an invariant distribution for (\ref{E:1.1}) for every choice of continuous $\mathfrak{p}$. However, it is far from obvious whether there exists more than one invariant distribution. The earliest paper known to the author dealing with a similar (though slightly different) system is by Sine \cite{Sine_1979}. More recently it was proven by Sinai in \cite{Sinai_1999} that if $\mathfrak{p}\in C^\infty(\mathbb{T})$ is asymmetric, or $\mathfrak{p}\in C^\infty(\mathbb{T})$ is symmetric and $\alpha$ is Diophantine, then uniqueness follows. One year later Conze and Guivarc'h proved in \cite{Conze_Guivarc'h_2000} that in the symmetric case $\frac{\mathfrak{p}(x)}{\mathfrak{q}(x+\alpha)}\in BV$ implies uniqueness regardless of whether $\alpha$ is Diophantine. The present paper contains another proof of the latter statement assuming $\mathfrak{p}\in C^1$ is symmetric.
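To make the setting concrete, the kernel (\ref{E:1.1}) can be sketched numerically. The particular choices below of the rotation number $\alpha$ and of $\mathfrak{p}$ (a sigmoid of a zero-mean trigonometric function, which makes $\mathfrak{p}$ symmetric in the sense of (\ref{E:1.2})) are assumptions made for illustration only and do not appear in the text.

```python
import math

alpha = math.sqrt(2) - 1            # illustrative irrational rotation number

def f(x):                           # illustrative choice of f = ln(p/q), zero mean over T
    return 0.5 * math.sin(2.0 * math.pi * x)

def p(x):                           # recover p from f: p = e^f / (1 + e^f), so ln(p/q) = f
    return 1.0 / (1.0 + math.exp(-f(x)))

def q(x):
    return 1.0 - p(x)

def step(x, u):
    """One transition of the kernel (1.1): x+alpha w.p. p(x), x-alpha w.p. q(x);
    u is a uniform [0,1) sample driving the move."""
    return (x + alpha) % 1.0 if u < p(x) else (x - alpha) % 1.0

# symmetry check: the Riemann sum of ln(p/q) over the circle vanishes
n = 10_000
integral = sum(math.log(p(k / n) / q(k / n)) for k in range(n)) / n
assert abs(integral) < 1e-9
# p is separated from 0 and 1, as required later for uniqueness
assert all(0.0 < p(k / n) < 1.0 for k in range(n))
```

Any other smooth $f$ with $\int_{\mathbb{T}} f = 0$ would do equally well; the sigmoid construction merely guarantees $0<\mathfrak{p}<1$ automatically.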
The advantage of the new proof is that it gives more insight into the problem of mixing and the problem of uniqueness in higher dimensional analogs (where $\mathbb{T}$ is replaced by $\mathbb{T}^d$). See Section 5 for more details. The strategy is based on Sinai's. To explain it, fix $x\in \mathbb{T}$ and consider a Markov process $(X_n)$ started at $x$ with transition kernel (\ref{E:1.1}). It is evident that the process can reach only the points of the form $x+j\alpha$, $j\in\mathbb{Z}$. Thus to learn the distribution of $(X_n)$ on $\mathbb{T}$ we consider a Markov chain $(\xi_n)$ on $\mathbb{Z}$, started at 0, with $$\mathbb{P}(\xi_{n+1}=k+1 | \xi_n=k )= \mathfrak{p}(x+k\alpha)$$ and $$\mathbb{P}(\xi_{n+1}=k-1 | \xi_n=k )= \mathfrak{q}(x+k\alpha)$$ for $n\ge 0$ and $k\in \mathbb{Z}$. Let us now restrict to the symmetric case, which is the case of interest here. In that case the system on $\mathbb{Z}$ is recurrent. If $\mathfrak{p}\in C^\infty(\mathbb{T})$ is symmetric and $\alpha$ is Diophantine then the cohomological equation $f(x)=g(x+\alpha)-g(x)$, where $f$ is defined in (\ref{E:1.2}), possesses a solution. Using the solution $g$ we can easily check that the measure with density $h(z)/\mathfrak{q}(z)$ is invariant, where $h=\exp(g)$. Now the whole difficulty in Sinai's approach was to show the local limit theorem for $(\xi_n)$ on $\mathbb{Z}$. More precisely, in the symmetric case Sinai has proven that $$\mathbb{P}(\xi_n=k)\sim \frac{h(x+k\alpha)}{\mathfrak{p}(x+k\alpha)} \frac{1}{\sqrt{2\pi\sigma^2 n}}\exp\left(\frac{-k^2}{2n\sigma^2}\right),$$ for some $\sigma>0$ and all $x\in \mathbb{T}$, where $\sim$ means that the ratio of both sides tends to one. With this fact one can show that $$\mathbb{E} \varphi(X_n) \to \int_{\mathbb{T}} \varphi(z) \frac{h(z)}{\mathfrak{q}(z)}dz,$$ which easily implies unique ergodicity (in fact this is an even stronger property, called mixing or stability).
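The reduction to the walk $(\xi_n)$ on $\mathbb{Z}$ described above can be illustrated by evolving its distribution exactly: the one-step kernel moves unit mass to the two neighbouring sites, so a forward recursion over dictionaries computes $\mathbb{P}(\xi_n=k)$ up to floating-point error. The concrete $\alpha$ and $\mathfrak{p}$ below are again illustrative assumptions, not taken from the text.

```python
import math

alpha = math.sqrt(2) - 1            # illustrative irrational rotation number

def f(x):                           # illustrative zero-mean choice of ln(p/q)
    return 0.5 * math.sin(2.0 * math.pi * x)

def p(x):                           # p = e^f / (1 + e^f), so 0 < p < 1
    return 1.0 / (1.0 + math.exp(-f(x)))

def evolve(x, n):
    """Exact distribution of xi_n (started at 0) by forward recursion on Z."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for k, mass in dist.items():
            pk = p((x + k * alpha) % 1.0)   # site-dependent up-probability p(x + k*alpha)
            new[k + 1] = new.get(k + 1, 0.0) + mass * pk
            new[k - 1] = new.get(k - 1, 0.0) + mass * (1.0 - pk)
        dist = new
    return dist

dist = evolve(x=0.3, n=50)
assert abs(sum(dist.values()) - 1.0) < 1e-12   # total mass is preserved
assert all(k % 2 == 0 for k in dist)           # period two: only even sites after 50 steps
assert dist[0] > 0.0                           # positive mass back at the origin
```

The parity assertion reflects the period-two structure of the chain that is used later, in the proof of Lemma 2.2.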
Unfortunately, we cannot follow exactly the same path when generalizing the result to all irrational $\alpha$. Recently Dolgopyat, Fayad and Saprykina \cite{DFS_2021} have proven that if $\alpha$ is Liouville then the behaviour of $(\xi_n)$ on $\mathbb{Z}$ is erratic for a generic choice of smooth and symmetric $\mathfrak{p}$ (see Theorems A-E therein). In particular, neither the annealed nor the quenched central limit theorem holds (see Corollary D and G therein). However, we can still modify something in Sinai's idea to get the desired assertion. The main result of this work is the following. \begin{thm}\label{T:1} If $\mathfrak{p}\in C^1(\mathbb{T})$ is symmetric and separated from $0$ and $1$ (i.e. $0<\mathfrak{p}(x)<1$ for each $x\in \mathbb{T}$) then there exists exactly one invariant measure for the transition kernel (\ref{E:1.1}). \end{thm} As mentioned, the proof is in some sense in the spirit of Sinai's. We still concentrate on the process $(\xi_n)$ on $\mathbb{Z}$ but instead of proving the local limit theorem we focus on the limits $$\lim_{n\to\infty} \frac{\mathbb{P}(\xi_0=k)+\cdots+\mathbb{P}(\xi_{n-1}=k)}{\mathbb{P}(\xi_0=m)+\cdots+\mathbb{P}(\xi_{n-1}=m)},$$ where $k, m\in \mathbb{Z}$ are two states. The problem of the existence of such limits for general (including countable space, null recurrent) Markov chains was raised by Kolmogorov in 1936 and answered two years later by Doeblin \cite{Doeblin_1938}, without identification of the value of the limit. The value was identified only later by Chung \cite{Chung_1950}. It turns out we can define a certain infinite measure on $\mathbb{Z}$, $k\longmapsto a_{x,k}$ (depending on $x\in\mathbb{T}$, since $(\xi_n)$ depends on $x\in\mathbb{T}$), such that the limit above equals $a_{x, k}/a_{x, m}$ for arbitrary two states $k$ and $m$. In Section 2 we identify the measure $k\longmapsto a_{x,k}$ on $\mathbb{Z}$ and reproduce the proof of the Doeblin ratio limit theorem.
In Section 3 it is proved that if one takes a large interval of integers $A$ of length $q$ and projects the measure $k\longmapsto a_{x,k}$ to the circle (by identifying $k$ with $x+k\alpha$) then what we obtain is, after normalization and up to $\varepsilon$, independent of the choice of the interval $A$ and the point $x$, provided $q$ is sufficiently large. Section 4 shows how to complete the proof of Theorem \ref{T:1} using the above results. Section 5 contains some final remarks. \section{Acknowledgments and personal remarks} When I proved the main theorem I wasn't aware of the Conze-Guivarc'h result. After discovering it, I started thinking whether my proof can be used to show something more. I realized the advantage of mine is that it can be modified to obtain mixing (assuming $\mathfrak{p}$ is $C^1$ and symmetric, no matter if $\alpha$ is Diophantine or not). Then I gave several talks about it, e.g. at the conference ``Probabilistic techniques in random and time-varying dynamical systems'', Luminy 3-7.10.2022, or at the KTH dynamical systems seminar, where I announced the ``mixing'' result. Although I still think this result is true, I did not predict certain difficulties in the proof and I need more time and effort to complete it. Meanwhile I'm publishing the proof of uniqueness. It's not going to be submitted to any journal. The research was supported by the Polish National Science Center grant Preludium UMO-2019/35/N/ST1/02363. \section{Basic facts about symmetric random walks on $\mathbb{Z}$} Fix $x\in \mathbb{T}$ and define $(\xi_n)$ to be the Markov process on $\mathbb{Z}$, started at 0, with $$\mathbb{P}(\xi_{n+1}=k+1 | \xi_n=k )= \mathfrak{p}(x+k\alpha)$$ and $$\mathbb{P}(\xi_{n+1}=k-1 | \xi_n=k )= \mathfrak{q}(x+k\alpha)$$ for $n\ge 0$ and $k\in \mathbb{Z}$. In the present section we prove the recurrence of this random walk and some related results. We say that $(\xi_n)$ is recurrent if almost surely there exists $n>0$ with $\xi_n=0$.
We say $(\xi_n)$ is null recurrent if it is recurrent and the expected time of the first return to $0$ is infinite. \begin{prop}\label{P:1} If $\mathfrak{p}$ is of bounded variation, symmetric and separated from $0$ and $1$ then the process $(\xi_n)$ is recurrent. Moreover, for every $r>0$ there exists $m_0$ that can be chosen uniformly in $x\in \mathbb{T}$ such that the expected number of returns of $(\xi_n)$ to zero up to time $m_0$ is greater than $r$, i.e. $$\mathbb{P}(\xi_1 =0 ) +\cdots +\mathbb{P}(\xi_n =0 ) = \mathbb{E}\big( \mathds{1}_{\{0\}}(\xi_1) + \cdots + \mathds{1}_{\{0\}}(\xi_n) \big) > r$$ for $n\ge m_0$, whatever $x\in \mathbb{T}$. \end{prop} \begin{proof} To show the recurrence of $(\xi_n)$, we reproduce the analysis from \cite{DFS_2021}, Section 3.2. Let us define a function $M : \mathbb{Z} \rightarrow \mathbb R$ by $M(0)=0$, $M(1)=1$, $$M(n)=1+\sum_{k=1}^{n-1}\prod_{j=1}^k \frac{\mathfrak{q}(x+j\alpha)}{\mathfrak{p}(x+j\alpha)} \quad \textrm{for $n\ge 2$,}$$ and $$M(-n)=-\sum_{k=0}^n \prod_{j=0}^k \frac{\mathfrak{p}(x-j\alpha)}{\mathfrak{q}(x-j\alpha)} \quad \textrm{for $n\ge 1$.}$$ To avoid complicated notation, we do not stress the dependence of $M$ on $x$. It can be checked that $(M(\xi_n))$ is a martingale. Let $a<0<b$ and let us define $\tau$ to be the first moment when $(\xi_n)$ hits $a$ or $b$. By Doob's optional stopping theorem $\mathbb{E} M(\xi_\tau)=M(\xi_0)=0$. On the other hand $$\mathbb{E} M(\xi_\tau)=M(a)\mathbb{P}(\xi_\tau=a)+M(b) \mathbb{P}(\xi_\tau=b)$$ $$=M(a)\mathbb{P}(\xi_\tau=a)+M(b)(1-\mathbb{P}(\xi_\tau=a)),$$ which combined with $\mathbb{E} M(\xi_\tau)=0$ yields $$\mathbb{P}(\xi_\tau=a)=\frac{M(b)}{M(b)-M(a)}.$$ If $\xi_\tau=a$ then $(\xi_n)$ returns to $0$ before hitting $b$. Setting $a=-1$ above we get therefore \begin{equation}\label{E:2.1} \mathbb{P}\bigg(\textrm{$(\xi_n)$ returns to $0$ before hitting $b$}\bigg)\ge \frac{M(b)}{M(b)-M(-1)}.
\end{equation} Similarly \begin{equation}\label{E:2.2} \mathbb{P}\bigg(\textrm{$(\xi_n)$ returns to $0$ before hitting $a$}\bigg)\ge \frac{-M(a)}{M(1)-M(a)}. \end{equation} This easily implies that the random walk $(\xi_n)$ is recurrent provided $M(n)\to \infty$ as $n\to \infty$ and $M(n)\to -\infty$ as $n\to-\infty$. The latter is implied by the following consequence of the Denjoy-Koksma inequality. \begin{lem}\label{L:2.1} For every $A>0$ there exists $n_0>0$ that is independent of $x\in \mathbb{T}$ such that $M(n)>A$ for $n\ge n_0$ and $M(n)<-A$ for $n \le -n_0$. \end{lem} \begin{proof} Take $n>0$. The function $M(n)$ is a sum of expressions of the form $$\prod_{j=1}^k \frac{\mathfrak{q}(x+j\alpha)}{\mathfrak{p}(x+j\alpha)} \quad \textrm{for $k<n$,}$$ therefore to show the assertion it is sufficient to find $\delta>0$ such that the product above is greater than $\delta$ for infinitely many $k$'s. Define $f(x)=\ln \mathfrak{q}(x) - \ln \mathfrak{p}(x)$, $x\in \mathbb{T}$ (the negative of the function in (\ref{E:1.2})), and observe we can write $$\prod_{j=1}^k \frac{\mathfrak{q}(x+j\alpha)}{\mathfrak{p}(x+j\alpha)} = \exp\bigg(\sum_{j=1}^k f(x+j\alpha)\bigg).$$ The function $f$ is of bounded variation and $\int_{\mathbb{T}} f(t)dt=0$, so the Denjoy-Koksma inequality (Theorem 3.1 in \cite{Herman_1979}, p. 73) yields $$\bigg|\sum_{j=1}^q f(x+j\alpha)\bigg| = \bigg|\sum_{j=1}^q f(x+j\alpha) - q\int_\mathbb{T} f(t)dt \bigg| < \textrm{var}(f)$$ for an arbitrary $x\in \mathbb{T}$ and an arbitrary closest return time $q$. But this means that for an arbitrary closest return time $q$ we have $$\exp\bigg(\sum_{j=1}^q f(x+j\alpha)\bigg)>e^{-\textrm{var}(f)}>0.$$ Thus the assertion follows with $\delta=e^{-\textrm{var}(f)}$. \end{proof} To show the remaining part of the Proposition, fix $r>0$ and take $\varepsilon>0$ so small that $(1-\varepsilon)^{2r}>1/2$.
By (\ref{E:2.1}), (\ref{E:2.2}) and Lemma \ref{L:2.1} there exists $a>0$ (suitable for all $x\in\mathbb{T}$) such that $$\mathbb{P}\bigg(\textrm{$(\xi_n)$ returns to $0$ before hitting $-a$ or $a$}\bigg)\ge 1-\frac{\varepsilon}{2}.$$ Since $\mathfrak{p}$ and $\mathfrak{q}$ are separated from 0, there exists $n_0$ so large (suitable for all $x\in \mathbb{T}$) that the probability that $(\xi_n)$ stays in $(-a,a)$ for the first $n_0$ steps is less than $\varepsilon/2$. Combining these two facts yields $$\mathbb{P}\bigg( \textrm{$(\xi_n)$ returns to $0$ before $n_0$}\bigg)>1-\varepsilon.$$ By the strong Markov property $$\mathbb{P}\bigg( \textrm{$(\xi_n)$ returns $2r$ times to $0$ before $2rn_0$}\bigg)>(1-\varepsilon)^{2r}>1/2,$$ by the choice of $\varepsilon$. The assertion follows with $m_0=2rn_0$: since with probability greater than $1/2$ there are at least $2r$ returns to $0$ before $m_0$, the expected number of returns to $0$ before $m_0$ is greater than $r$. \end{proof} It is advantageous to use the following notation in the remaining part of this section. Let $p_{i,j}^n$ denote the probability of transition from state $i$ to state $j$ in $n$ steps. We simply write $p_{i,j}$ instead of $p_{i,j}^1$. Let $\prescript{}{k}{p}_{i,j}^n$ stand for the probability of transition from state $i$ to state $j$ in $n$ steps under the restriction that state $k$ is visited in none of the steps $1, \ldots, n-1$. Again, these values depend on the chosen point $x\in \mathbb{T}$ but we refrain from stressing that in the notation. Clearly $\pr{j}{k}{j}{n}$ is the probability of the first visit in $j$ starting at $k$ occurring in step $n$, and $\pr{k}{k}{j}{n}$ is the probability of transition to $j$ from $k$ in $n$ steps with the restriction that the state $k$ is not visited in steps $1,\ldots, n$. The series $\sum_{n=1}^\infty \pr{k}{k}{j}{n}$ is interpreted as the expected number of visits in $j$ starting at $k$ before the first return to $k$. It is not difficult to show the convergence of this series.
\begin{lem}\label{L:2.2} If $\mathfrak{p}$ is of bounded variation, symmetric and separated from $0$ and $1$ then the series $\sum_{n=1}^\infty \pr{k}{i}{j}{n}$ is convergent. Moreover, for any $q\ge 1$ its sum is uniformly bounded over all $k$, $i$, $j$ with $|k-i|$, $|k-j|$, $|j-i|<q$, $x\in \mathbb{T}$. For every $\varepsilon>0$ and natural $q\ge 1$ there exists $N$ with $\sum_{n=N}^\infty \pr{k}{i}{j}{n}<\varepsilon$ whatever $x\in \mathbb{T}$, provided $|k-j|\le q$. \end{lem} \begin{proof} Let $m\in \mathbb N$ be such that $\pr{k}{j}{k}{m}>\eta$ for some $\eta>0$ and all $j, k$ with the same parity and $|j-k|\le q$ (remember the Markov chain is periodic with period two). It is clear that $m$ and $\eta$ can be chosen uniformly in $x\in \mathbb{T}$ since $\mathfrak{p}$ is separated from $0$ and $1$. We have $$\pr{k}{i}{j}{n}\cdot \pr{k}{j}{k}{m} \le \pr{k}{i}{k}{n+m}$$ for $n\in \mathbb N$, hence $$\sum_{n=N}^\infty \pr{k}{i}{j}{n}\le \frac{1}{\pr{k}{j}{k}{m}} \sum_{n=N}^\infty \pr{k}{i}{k}{n+m}\le \frac{1}{\eta} \sum_{n=N}^\infty \pr{k}{i}{k}{n+m}.$$ The last series represents the probability that the first transition to $k$ starting at $i$ occurs at the earliest at step $N+m$. This number is bounded from above by $\varepsilon$ if $N$ is sufficiently large. Moreover, $N$ can be chosen to be suitable for all $x\in \mathbb{T}$ by a reasoning similar to the proof of Lemma \ref{L:2.1}. \end{proof} It is also not difficult to recover the value of $\sum_{n=1}^\infty \pr{k}{k}{j}{n}$, which represents the expected number of appearances in state $j$ of the process started at $k$ before it returns to $k$. \begin{lem}\label{L:2.3} If $\mathfrak{p}$ is of bounded variation, symmetric and separated from $0$ and $1$ and $a_{x, n}$ is defined by\footnote{In contrast to other symbols here we stress the dependence on $x\in \mathbb{T}$.
That is because this symbol appears in the next section where the dependence on $x$ is significant.} $a_{x,0}=1$ and \begin{equation}\label{E:2.3} a_{x, n}= \frac{\mathfrak{q}(x)}{\mathfrak{q}(x+n\alpha)} \prod_{j=0}^{n-1} \frac{\mathfrak{p}(x+j\alpha)}{\mathfrak{q}(x+j\alpha)} \end{equation} and \begin{equation}\label{E:2.4} a_{x, -n}= \frac{\mathfrak{p}(x)}{\mathfrak{p}(x-n\alpha)} \prod_{j=0}^{n-1} \frac{\mathfrak{q}(x-j\alpha)}{\mathfrak{p}(x-j\alpha)} \end{equation} for $n>0$. Then $$\sum_{n=1}^\infty \pr{k}{k}{j}{n}= \frac{a_{x,j}}{a_{x,k}}$$ for any two states $k, j\in \mathbb Z$. \end{lem} \begin{proof} Fix $k$. First of all, the aim is to show the assertion for $j=k+1$. Notice that if the process started at $k$ visits $k-1$ in the first step then it necessarily visits $k$ before ever reaching $k+1$. Thus the probability of exactly one appearance in $k+1$ before returning to $k$ is $\mathfrak{p}(x+k\alpha)\cdot \mathfrak{q}(x+(k+1)\alpha)$ and the probability of exactly $r$ appearances is $\mathfrak{p}(x+k\alpha)\cdot \mathfrak{p}(x+(k+1)\alpha)^{r-1}\cdot \mathfrak{q}(x+(k+1)\alpha)$ (since after each of the first $r-1$ visits it ``jumps'' to the state $k+2$ with probability $\mathfrak{p}(x+(k+1)\alpha)$, and by recurrence it then comes back to $k+1$ before reaching $k$, while right after the $r$-th visit it moves to $k$ with probability $\mathfrak{q}(x+(k+1)\alpha)$). Hence the expected number of appearances is $$\sum_{n=1}^\infty \pr{k}{k}{j}{n} = \sum_{r=1}^\infty r\cdot \mathfrak{p}(x+k\alpha) \cdot \mathfrak{p}(x+(k+1)\alpha)^{r-1} \cdot \mathfrak{q}(x+(k+1)\alpha)$$ $$= \mathfrak{p}(x+k\alpha) \mathfrak{q}(x+(k+1)\alpha) \sum_{r=1}^\infty r\mathfrak{p}(x+(k+1)\alpha)^{r-1}$$ $$= \frac{\mathfrak{p}(x+k\alpha) \mathfrak{q}(x+(k+1)\alpha)}{(1-\mathfrak{p}(x+(k+1)\alpha))^2}= \frac{\mathfrak{p}(x+k\alpha) \mathfrak{q}(x+(k+1)\alpha)}{ \mathfrak{q}(x+(k+1)\alpha)^2}=\frac{\mathfrak{p}(x+k\alpha)}{\mathfrak{q}(x+(k+1)\alpha)},$$ where in passing from the second line to the third the formula $\sum_{r=1}^\infty rz^{r-1}=\frac{1}{(1-z)^2}$ was used.
Since the last expression equals $\frac{a_{x,k+1}}{a_{x,k}}$, this completes the proof for $j=k+1$. To end the proof we proceed by induction. Let us assume the assertion holds for $k+1, k+2, ..., j$ for some $j>k$. Let us consider the process started at $k$. Take $r>0$. It is easy to conclude that the expected number of appearances of this process in $j+1$, under the condition that the number of appearances in $k+1$ is $r$, equals, by the induction assumption, $r\cdot\frac{a_{x,j+1}}{a_{x, k+1}}$. In turn, the probability of exactly $r$ visits in $k+1$ before returning to $k$ is, as before, $\mathfrak{p}(x+k\alpha)\cdot \mathfrak{p}(x+(k+1)\alpha)^{r-1}\cdot \mathfrak{q}(x+(k+1)\alpha)$. In view of the foregoing, the expected number of appearances in $j+1$ of the process started at $k$ before returning to $k$ equals $$\sum_{r=1}^\infty r\cdot\frac{a_{x,j+1}}{a_{x, k+1}} \mathfrak{p}(x+k\alpha)\cdot \mathfrak{p}(x+(k+1)\alpha)^{r-1}\cdot \mathfrak{q}(x+(k+1)\alpha)$$ $$= \frac{a_{x,j+1}}{a_{x, k+1}} \cdot \frac{\mathfrak{p}(x+k\alpha)}{\mathfrak{q}(x+(k+1)\alpha)}= \frac{a_{x,j+1}}{a_{x, k+1}}\cdot \frac{a_{x,k+1}}{a_{x,k}}= \frac{a_{x,j+1}}{a_{x,k}}.$$ This completes the proof of Lemma \ref{L:2.3} in the case of any two integers with $j>k$. The case $j<k$ is symmetric. \end{proof} The last result of this section is basically the Doeblin ratio limit theorem (cf. Corollary 2 to Theorem 4 in Section I.9, p. 48, in \cite{Chung_1960}). However, reproducing the proof is necessary because we need a kind of uniform convergence result over all $x\in \mathbb T$ and states $j$, $k$ that are sufficiently close to each other.
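The one-step structure of the measure $k\longmapsto a_{x,k}$ established above is easy to check numerically: by (\ref{E:2.3}), the ratio $a_{x,k+1}/a_{x,k}$ should equal $\mathfrak{p}(x+k\alpha)/\mathfrak{q}(x+(k+1)\alpha)$, the expected number of visits computed in the proof of Lemma \ref{L:2.3}. The concrete $\alpha$ and $\mathfrak{p}$ below are illustrative assumptions only.

```python
import math

alpha = math.sqrt(2) - 1            # illustrative irrational rotation number

def f(x):                           # illustrative zero-mean choice of ln(p/q)
    return 0.5 * math.sin(2.0 * math.pi * x)

def p(x):
    return 1.0 / (1.0 + math.exp(-f(x)))

def q(x):
    return 1.0 - p(x)

def a(x, n):
    """a_{x,n} from (2.3) for n >= 0, with a_{x,0} = 1."""
    if n == 0:
        return 1.0
    prod = 1.0
    for j in range(n):
        xj = (x + j * alpha) % 1.0
        prod *= p(xj) / q(xj)
    return q(x) / q((x + n * alpha) % 1.0) * prod

# one-step ratio of the measure matches the expected visit count from the proof
x = 0.3
for k in range(10):
    lhs = a(x, k + 1) / a(x, k)
    rhs = p((x + k * alpha) % 1.0) / q((x + (k + 1) * alpha) % 1.0)
    assert abs(lhs - rhs) < 1e-9
```

This is just the telescoping of the product in (\ref{E:2.3}): the $\mathfrak{q}$ prefactors cancel except at the endpoints.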
\begin{prop}\label{P:2.2} If $\mathfrak{p}$ is of bounded variation, symmetric and separated from $0$ and $1$ then for every $\varepsilon>0$ and $q\ge 1$ there exists $N$ such that $$\bigg|\frac{\mathbb{P}(\xi_1=j)+\cdots+\mathbb{P}(\xi_n=j)}{\mathbb{P}(\xi_1=k)+\cdots+\mathbb{P}(\xi_n=k)} - \frac{a_{x, j}}{a_{x, k}} \bigg|<\varepsilon$$ for every $n\ge N$, $x\in \mathbb{T}$, provided $|k|,|j| \le q$ and $|k-j|\le q$. \end{prop} \begin{proof} Take $\varepsilon>0$. By Lemma \ref{L:2.2} there exists $B>0$ such that $\sum_{n=1}^N \pr{k}{0}{j}{n}\le B$ for every $N$ and states $k$, $j$ satisfying the assumptions. The number $B$ can be chosen also such that $$\max_{|j|, |k|\le q}\max_{x\in\mathbb{T}} \frac{a_{x,j}}{a_{x,k}}\le B.$$ Apply Lemmas \ref{L:2.2} and \ref{L:2.3} to get $N_0$ so large that \begin{equation}\label{E:2.5} \bigg|\sum_{n=1}^{N-\nu} \pr{k}{k}{j}{n} - \frac{ a_{x, j}}{a_{x, k}} \bigg| < \frac{\varepsilon}{3} \quad \textrm {for $N-\nu\ge N_0$.} \end{equation} The number $N'_0>N_0$ should be so large that \begin{equation}\label{E:2.6} \frac{2B}{\sum_{n=1}^N p_{0,k}^n}<\frac{\varepsilon}{3} \end{equation} for $N\ge N'_0$ and \begin{equation}\label{E:2.7} \frac{BN_0}{\sum_{n=1}^N p_{0,k}^n}<\frac{\varepsilon}{3} \end{equation} for $N\ge N'_0$. The easily proven decomposition formula $$p_{0,j}^n=\pr{k}{0}{j}{n}+\sum_{\nu=1}^{n-1} p_{0, k}^\nu \cdot \pr{k}{k}{j}{n-\nu}$$ yields $$\sum_{n=1}^N p_{0,j}^n= \sum_{n=1}^N \pr{k}{0}{j}{n}+\sum_{\nu=1}^{N-1} p_{0, k}^\nu \sum_{n=1}^{N-\nu} \pr{k}{k}{j}{n}.$$ We have $$\bigg|\frac{\mathbb{P}(\xi_1=j)+\cdots+\mathbb{P}(\xi_N=j)}{\mathbb{P}(\xi_1=k)+\cdots+\mathbb{P}(\xi_N=k)} - \frac{a_{x, j}}{a_{x, k}} \bigg| =\bigg| \frac{\sum_{n=1}^N p_{0,j}^n}{\sum_{n=1}^N p_{0,k}^n} - \frac{a_{x, j}}{a_{x, k}} \bigg|$$ $$=\bigg| \frac{ \sum_{n=1}^N \pr{k}{0}{j}{n}+\sum_{\nu=1}^{N-1} p_{0, k}^\nu \sum_{n=1}^{N-\nu} \pr{k}{k}{j}{n}}{\sum_{n=1}^N p_{0,k}^n} - \frac{\sum_{n=1}^N \frac{ a_{x, j}}{a_{x, k}} p_{0,k}^n}{\sum_{n=1}^N p_{0,k}^n} \bigg|$$ $$\le
\bigg| \frac{ \sum_{n=1}^N \pr{k}{0}{j}{n} -\frac{ a_{x, j}}{a_{x, k}} p_{0,k}^N +\sum_{\nu=1}^{N-1} p_{0, k}^\nu \big( \sum_{n=1}^{N-\nu} \pr{k}{k}{j}{n} - \frac{ a_{x, j}}{a_{x, k}}\big)}{\sum_{n=1}^N p_{0,k}^n} \bigg|$$ $$\le \bigg| \frac{ \sum_{n=1}^N \pr{k}{0}{j}{n} -\frac{ a_{x, j}}{a_{x, k}} p_{0,k}^N }{\sum_{n=1}^N p_{0,k}^n} \bigg| +\bigg|\frac{\sum_{\nu=N-N_0+1}^{N-1} p_{0, k}^\nu \big( \sum_{n=1}^{N-\nu} \pr{k}{k}{j}{n} - \frac{ a_{x, j}}{a_{x, k}}\big)}{\sum_{n=1}^N p_{0,k}^n} \bigg| $$ $$+\bigg| \frac{ \sum_{\nu=1}^{N-N_0} p_{0, k}^\nu \big( \sum_{n=1}^{N-\nu} \pr{k}{k}{j}{n} - \frac{ a_{x, j}}{a_{x, k}}\big)}{\sum_{n=1}^N p_{0,k}^n} \bigg|$$ By (\ref{E:2.5}) the third term is less than $\frac{\varepsilon}{3}$. By the very definition of $B$, the numerator of the first term is less than $2B$ and the numerator of the second expression is less than $BN_0$. Thus (\ref{E:2.6}) and (\ref{E:2.7}) complete the proof. \end{proof} \begin{remark}\label{R:2.1} Let us consider an interval $A\subseteq \mathbb{Z}$ of length $q$. Let $(\xi_n)$ be, as usual, the process started at $0$, and let $\tau$ be the moment of the first visit of $(\xi_n)$ in $A$. Let $N$ be as in Proposition \ref{P:2.2}. Since $N$ is independent of $x\in \mathbb{T}$, a conditioning argument easily implies $$\bigg| \frac{\mathbb{P}( \xi_0= j | \mathcal F_{\tau} ) + \cdots + \mathbb{P}( \xi_{n-1}= j |\mathcal F_{\tau} ) }{\mathbb{P}( \xi_0= k | \mathcal F_{\tau} ) + \cdots + \mathbb{P}( \xi_{n-1}=k |\mathcal F_{\tau} )} - \frac{a_{x,j}}{a_{x, k}} \bigg| < \varepsilon$$ almost surely on $\{ \tau < n-N\}$ for any two states $k, j\in A$. \end{remark} \begin{remark}\label{R:2.2} Let us now consider a certain function $\varphi : \mathbb Z \rightarrow \mathbb R$ with support contained in an interval $A$, as above, and $\|\varphi\|_\infty\le 1$.
An easy argument using Remark \ref{R:2.1} yields $$\bigg| \frac{\mathbb E\big( \varphi(\xi_0)+\cdots+\varphi(\xi_{n-1}) \big| \mathcal F_{\tau} \big)}{\mathbb{E}\big( \mathds{1}_{A}(\xi_0)+\cdots+\mathds{1}_{A}(\xi_{n-1}) \big| \mathcal F_{\tau} \big)} - \frac{\sum_{i \in A} \varphi(i) a_{x,i} }{ \sum_{i\in A} a_{x, i} } \bigg| < \varepsilon$$ almost surely on $\{ \tau < n-N\}$. It is clear that $N$ can be chosen uniformly over all intervals $A$ of fixed length $q$, $x\in \mathbb{T}$ and functions $\varphi$ as long as $\|\varphi\|_\infty \le 1$. \end{remark} \section{Projection of measures} Put $$a_{x, k}=\exp\bigg(\Phi(x)+\cdots+\Phi(x+(k-1)\alpha)\bigg)\frac{1+\exp \Phi(x+k\alpha)}{1+\exp{\Phi(x)}}$$ for $k\ge 1$ and $a_{x, 0}=1$, where $\Phi$ denotes the function $f$ defined in (\ref{E:1.2}) (a direct computation shows that this formula agrees with (\ref{E:2.3})). Define $$\mu_{x, n} = \frac{1}{M_{x, n}} \sum_{k=0}^{n-1} a_{x, k} \delta_{x+k\alpha}$$ for $x\in \mathbb{T}$ and $n\ge 1$, where $M_{x, n}$ is the normalizing constant, $$M_{x, n} = \sum_{k=0}^{n-1} a_{x, k}.$$ \begin{lem}\label{L:3.1} If $x\in \mathbb{T}$, $k_1, k_2 \in \mathbb N$, then $$a_{x, k_1+k_2}=a_{x, k_1} \cdot a_{x+k_1\alpha, k_2}$$ and $$ \mu_{x, k_1+k_2}=\frac{M_{x, k_1}}{M_{x,k_1+k_2}}\mu_{x, k_1} + a_{x,k_1}\frac{M_{x+k_1\alpha, k_2}}{M_{x, k_1+k_2}}\mu_{x+k_1\alpha, k_2}.$$ \end{lem} \noindent The proof is straightforward. \begin{lem}\label{L:3.2} For every $\varepsilon>0$ there exists $N$ such that if $q\ge N$ is a closest return time then $(1-\varepsilon) a_{y, n} \le a_{x, n} \le (1+\varepsilon) a_{y, n}$ for every natural $n\le q$ and $x, y\in \mathbb{T}$ with $|x-y|<\frac 2 q$. \end{lem} \begin{proof} Take $\delta>0$.
We can find $n_0$ so large that \begin{equation}\label{E:3.1} \bigg|\frac{1}{n} \bigg( f'\big(x\big)+\cdots+f'\big(x+(n-1)\alpha\big) \bigg)\bigg|< \delta \quad \textrm{for $n\ge n_0$ and every $x\in \mathbb{T}$.} \end{equation} Indeed, this is a consequence of the Birkhoff ergodic theorem applied to the rotation by angle $\alpha$ and the Lebesgue measure, together with $\int_{\mathbb{T}} f'(t)dt=0$ (uniform convergence in $x$ follows from unique ergodicity and continuity of $f'$, see e.g. Proposition 4.1.13 in \cite{Hasselblatt_Katok_1995}). Let $q\ge n_0$ be so large that \begin{equation}\label{E:3.2} \bigg|\frac{1}{q} \bigg( f'\big(x\big)+\cdots+f'\big(x+j\alpha\big) \bigg)\bigg|< \delta \quad \textrm{for $j \le n_0$ and every $x\in \mathbb{T}$.} \end{equation} Finally, by uniform continuity, let us assume $q$ to be so large that \begin{equation}\label{E:3.3} 1-\delta \le \frac{1+\exp f(x)}{1+\exp f(y)} \le 1+\delta \quad \textrm{for $x, y\in \mathbb{T}$, $|x-y|\le 2/q$.} \end{equation} Take $x, y\in \mathbb{T}$ with $|x-y|\le 2/q$ and a natural $n\le q$. By the mean value theorem there exists $z$ in the shorter arc joining $x$ and $y$ such that $$\frac{a_{x, n}}{a_{y, n}}=\exp \bigg( \big( f'(z)+\cdots+f'(z+(n-1)\alpha)\big)(x-y)\bigg)$$ $$\times \frac{1+\exp f(y)}{1+\exp f(x)}\cdot \frac{1+\exp f(x+n\alpha)}{1+\exp f(y+n\alpha)}.$$ If $n\ge n_0$ then apply (\ref{E:3.1}) and the fact that $|x-y|\le 2/q$ to get $$ \bigg|\big( f'(z)+\cdots+f'(z+(n-1)\alpha)\big)(x-y)\bigg| \le n\delta \cdot \frac 2 q \le 2 \delta,$$ as $n\le q$. This combined with (\ref{E:3.3}) yields $$e^{-2\delta}(1-\delta)^2\le \frac{a_{x, n}}{a_{y, n}} \le e^{2\delta}(1+\delta)^2.$$ Using (\ref{E:3.2}) and (\ref{E:3.3}) we can deduce a similar statement in the case $n<n_0$. If $\delta \to 0$ then the values on the left and right above tend to $1$, thus the assertion follows. \end{proof} \begin{prop}\label{P:3.1} Let $\varphi \in C(\mathbb{T})$.
For every $\varepsilon>0$ there exists $N$ such that if $q\ge N$ is a closest return time then $$\bigg|\int_\mathbb{T} \varphi d\mu_{x, q} - \int_\mathbb{T} \varphi d\mu_{y, q} \bigg| < \varepsilon$$ for every $x,y \in \mathbb{T}$. \end{prop} \begin{proof} Take $\eta>0$ and $\varphi\in C(\mathbb{T})$. Choose $\delta>0$ small (to be determined), and let $q$ be a closest return time such that Lemma \ref{L:3.2} is satisfied with $\varepsilon$ replaced by $\delta$. As a consequence \begin{equation}\label{E:P2.4} 1-\delta<\frac{a_{z_1,n}}{a_{z_2,n}}<1+\delta \quad \textrm{and} \quad 1-\delta<\frac{M_{z_1,n}}{M_{z_2,n}}<1+\delta \end{equation} for $n\le q$ and $z_1, z_2\in \mathbb{T}$ with $|z_1-z_2|<2/q$. Further, using the Denjoy-Koksma inequality (as in the proof of Lemma \ref{L:2.1}) we easily see that $a_{z, q_n} \to 1$ uniformly in $z$, when $(q_n)$ is the sequence of closest return times. Thus $q$ can be chosen so large that $1-\delta \le a_{z, q} \le 1+\delta$ for all $z\in \mathbb{T}$. Using the first assertion in Lemma \ref{L:3.1} this implies \begin{equation}\label{E:P2.5} 1-\delta \le a_{z, n}a_{z+n\alpha, q-n} \le 1+\delta \quad \textrm{for $n< q$ and $z\in \mathbb{T}$.} \end{equation} The last thing we assume about $q$ is that it is so large that \begin{equation}\label{E:P2.6} \sup_{z\in \mathbb{T}}\sup_{|h|\le \frac{2}{q}} |\varphi(z+h)-\varphi(z)| < \delta. \end{equation} Let us take $x, y\in \mathbb{T}$. Denote $x_j=x+ j\alpha$, $y_j=y+ j\alpha$, $j \in [0, q]$. Let $t$ be the smallest natural number with $d(x_t, y) \le \frac 1 q$. Since the rotation is an isometry we immediately see $d(x_{t+j}, y_j)\le\frac 1 q$ for $j=0,1,\cdots q-t$. In particular $d(x_q, y_{q-t})\le\frac 1 q$, hence $d(y_{q-t}, x)\le d(y_{q-t}, x_q)+d(x_q, x)\le 1/q+1/q=2/q$ and, since the rotation is an isometry, $d(y_{q-t+j}, x_j)\le\frac 2 q$ for $j=0,\cdots, t$. The measure $\mu_{x, q}$ is an atomic measure with atoms at the points $x, x+\alpha, \ldots, x+(q-1)\alpha$.
The idea is to represent $\mu_{x, q}$ as a convex combination of measures concentrated on two disjoint subsets $\{x, x+\alpha, \ldots, x+(t-1)\alpha \}$ and $\{ x+t\alpha, \ldots, x+(q-1)\alpha\}$ and, similarly, to represent $\mu_{y,q}$ as a convex combination of measures concentrated on two disjoint subsets $\{ y, y+\alpha, \ldots, y+(q-t-1)\alpha \}$ and $\{y+(q-t)\alpha, \ldots, y+(q-1)\alpha\}$. Namely, it is easy to check using Lemma \ref{L:3.1} that $$\mu_{x, q} = \frac{M_{x, t}}{M_{x,q}} \mu_{x, t} + a_{x, t}\frac{M_{x_t, q-t}}{M_{x, q}} \mu_{x_t, q-t}$$ and $$\mu_{y,q} = \frac{M_{y,q-t}}{M_{y,q}} \mu_{y, q-t} + a_{y, q-t} \frac{M_{y_{q-t}, t}}{M_{y,q}}\mu_{y_{q-t}, t}.$$ Since $d(x_t , y) \le 1/q$, in view of (\ref{E:P2.4}) we expect the second measure in the decomposition of $\mu_{x, q}$ to be close to the first measure in the decomposition of $\mu_{y,q}$. Similar reasoning applies to the two remaining terms since $d(y_{q-t}, x)\le 2/q$. We have $$\bigg| \int_\mathbb{T} \varphi d\mu_{x,q} - \int_\mathbb{T} \varphi d\mu_{y,q} \bigg|\le \bigg| \frac{M_{x, t}}{M_{x,q}} \int_\mathbb{T} \varphi d\mu_{x, t} - a_{y, q-t} \frac{M_{y_{q-t}, t}}{M_{y,q}} \int_\mathbb{T} \varphi d\mu_{y_{q-t}, t} \bigg|$$ \begin{equation}\label{E:P2.3} + \bigg| a_{x, t}\frac{M_{x_t, q-t}}{M_{x, q}} \int_\mathbb{T} \varphi d\mu_{x_t, q-t} - \frac{M_{y,q-t}}{M_{y,q}} \int_\mathbb{T} \varphi d\mu_{y, q-t} \bigg|. \end{equation} Let us now focus on the second term on the right hand side. The analysis of the first term proceeds analogously.
We have $$\bigg| a_{x, t}\frac{M_{x_t, q-t}}{M_{x, q}} \int_\mathbb{T} \varphi d\mu_{x_t, q-t} - \frac{M_{y,q-t}}{M_{y,q}} \int_\mathbb{T} \varphi d\mu_{y, q-t} \bigg| $$ \begin{equation}\label{E:P2.0} \le \bigg| a_{x, t}\frac{M_{x_t, q-t}}{M_{x, q}} - \frac{M_{y,q-t}}{M_{y,q}} \bigg| \int_\mathbb{T} |\varphi| d\mu_{x_t, q-t} \end{equation} $$+ \frac{M_{y,q-t}}{M_{y,q}}\bigg| \int_\mathbb{T} \varphi d\mu_{x_t, q- t} - \int_\mathbb{T} \varphi d\mu_{y, q- t}\bigg|.$$ We are going to show that the first term in (\ref{E:P2.0}) is bounded by $\|\varphi\|_\infty\eta$ and the second by $\delta+\|\varphi\|_\infty\eta$. Since exactly the same estimates can be derived for the first term on the right-hand side of (\ref{E:P2.3}), it will give $$\bigg| \int_\mathbb{T} \varphi d\mu_{x,q} - \int_\mathbb{T} \varphi d\mu_{y,q} \bigg|\le 2\delta + 4\|\varphi\|_\infty\eta,$$ which is smaller than $\varepsilon$ once $\delta$ and $\eta$ are chosen small enough, and this will complete the proof. Thus what remains to be done is to find the desired bounds on the right-hand side of (\ref{E:P2.0}). \vspace{0.5cm} \noindent\textbf{A. Analysis of the first term on the right-hand side of (\ref{E:P2.0})} We have \begin{equation}\label{E:P2.1} \bigg| a_{x, t}\frac{M_{x_t, q-t}}{M_{x, q}} - \frac{M_{y,q-t}}{M_{y,q}} \bigg| = \frac{M_{y,q-t}}{M_{y,q}} \bigg| a_{x, t}\cdot \frac{M_{y,q}}{M_{x, q}}\cdot \frac{M_{x_t, q-t}}{M_{y,q-t}} -1 \bigg|. \end{equation} Since $d(y, x_t)\le 1/q\le 2/q$ we can apply (\ref{E:P2.4}) to get that $1-\delta\le \frac{M_{x_t, q-t}}{M_{y,q-t}}\le 1+\delta$. Further, $d(y_{q-t}, x)\le 2/q$, thus Lemma \ref{L:3.1} and (\ref{E:P2.4}) give $$M_{y,q}=M_{y, q-t} + a_{y, q-t} M_{y_{q-t}, t} \le (1+\delta) M_{x_t, q-t} + (1+\delta)^2 a_{x_t, q-t} M_{x, t}.$$ From (\ref{E:P2.5}) we have $a_{x_t, q-t}\le \frac{1+\delta}{a_{x,t}}$.
Finally $$M_{y,q}\le (1+\delta) M_{x_t, q-t} + (1+\delta)^2 a_{x_t, q-t} M_{x, t} \le (1+\delta) M_{x_t, q-t} + \frac{(1+\delta)^3}{a_{x, t}} M_{x, t}$$ $$\le (1+\delta)^3 \bigg( M_{x_t, q-t} + \frac{1}{a_{x, t}} M_{x, t} \bigg)= \frac{(1+\delta)^3}{a_{x, t}} \bigg( a_{x, t} M_{x_t, q-t}+M_{x, t} \bigg)$$ $$= (1+\delta)^3\frac{M_{x, q}}{a_{x,t}}.$$ So far we have used only the upper bounds in (\ref{E:P2.4}) and (\ref{E:P2.5}). Applying the same reasoning with the estimates from below we see that $$M_{y,q}\ge (1-\delta)^3\frac{M_{x, q}}{a_{x,t}}.$$ Going back to (\ref{E:P2.1}) we have $$ (1-\delta)^4 \le a_{x, t}\cdot \frac{M_{y,q}}{M_{x, q}}\cdot \frac{M_{x_t, q-t}}{M_{y,q-t}}\le (1+\delta)^4.$$ Take $\eta>0$. If $\delta$ is chosen sufficiently small then $$ \bigg| a_{x, t}\cdot \frac{M_{y,q}}{M_{x, q}}\cdot \frac{M_{x_t, q-t}}{M_{y,q-t}} -1 \bigg|<\eta.$$ Since $\frac{M_{y,q-t}}{M_{y,q}}\le 1$, this leads to the estimate $$\bigg| a_{x, t}\frac{M_{x_t, q-t}}{M_{x, q}} - \frac{M_{y,q-t}}{M_{y,q}} \bigg| < \eta.$$ Thus the first term on the right-hand side of (\ref{E:P2.0}) is bounded by $\eta \|\varphi\|_\infty$. \vspace{0.5cm} \noindent\textbf{B. 
Analysis of the second term on the right-hand side of (\ref{E:P2.0})} To deal with the second expression, we clearly have $\frac{M_{y,q-t}}{M_{y,q}}\le 1$ and $$\bigg| \int_\mathbb{T} \varphi d\mu_{x_t, q- t} - \int_\mathbb{T} \varphi d\mu_{y, q- t}\bigg|=\bigg| \sum_{k=0}^{q-t-1} \frac{a_{x_t, k}}{M_{x_t, q-t}} \varphi(x_t+k\alpha) - \sum_{k=0}^{q-t-1} \frac{a_{y, k}}{M_{y,q-t}} \varphi(y+k\alpha) \bigg|$$ $$\le \bigg| \sum_{k=0}^{q-t-1} \frac{a_{x_t, k}}{M_{x_t, q-t}} \varphi(x_t+k\alpha) - \sum_{k=0}^{q-t-1} \frac{a_{y, k}}{M_{y,q-t}} \varphi(x_t+k\alpha)\bigg|$$ $$ + \bigg| \sum_{k=0}^{q-t-1} \frac{a_{y, k}}{M_{y,q-t}} \varphi(x_t+k\alpha) - \sum_{k=0}^{q-t-1} \frac{a_{y, k}}{M_{y,q-t}} \varphi(y+k\alpha) \bigg|$$ $$\le \sum_{k=0}^{q-t-1} \frac{a_{x_t, k}}{M_{x_t, q-t}} \big| \varphi(x_t+k\alpha)\big| \bigg| 1-\frac{a_{y, k}}{a_{x_t, k}}\cdot \frac{M_{x_t, q-t}}{M_{y, q-t}}\bigg| $$ $$+ \sum_{k=0}^{q-t-1} \frac{a_{y, k}}{M_{y,q-t}} \big| \varphi(x_t+k\alpha) - \varphi(y+k\alpha) \big|. $$ Since $d(x_t, y)<1/q$, (\ref{E:P2.4}) yields $$(1-\delta)^2\le \frac{a_{y, k}}{a_{x_t, k}}\cdot \frac{M_{x_t, q-t}}{M_{y, q-t}}\le (1+\delta)^2,$$ thus $$ \bigg| 1-\frac{a_{y, k}}{a_{x_t, k}}\cdot \frac{M_{x_t, q-t}}{M_{y, q-t}}\bigg|<\eta$$ if $\delta$ is sufficiently small. This leads us to the estimate $$ \sum_{k=0}^{q-t-1} \frac{a_{x_t, k}}{M_{x_t, q-t}} \big| \varphi(x_t+k\alpha)\big| \bigg| 1-\frac{a_{y, k}}{a_{x_t, k}}\cdot \frac{M_{x_t, q-t}}{M_{y, q-t}}\bigg| \le \|\varphi\|_\infty\eta.$$ Clearly, $$\sum_{k=0}^{q-t-1} \frac{a_{y, k}}{M_{y,q-t}} \big| \varphi(x_t+k\alpha) - \varphi(y+k\alpha) \big|\le \delta$$ by (\ref{E:P2.6}), which completes the proof. \end{proof} \section{Proof of Theorem \ref{T:1}} We shall use the following criterion for the uniqueness of the stationary distribution. 
\vspace{0.5cm} \textit{\noindent If for every $\varepsilon>0$ and nonnegative $\varphi\in C(\mathbb{T})$ with $1/2 < \|\varphi\|_\infty<1$ there exist $\beta \in \mathbb R$ and $N>0$ such that $$\bigg| \frac{\varphi(x)+P\varphi(x)+\cdots+P^{n-1}\varphi(x)}{n} - \beta \bigg| < \varepsilon$$ for every $x\in \mathbb{T}$ and $n\ge N$, then there exists exactly one stationary distribution.} \vspace{0.5cm} Let us take $\varepsilon>0$ and $\varphi \in C(\mathbb{T})$ as stated in the criterion. Let $y\in \mathbb{T}$ be arbitrary, and let $\beta=\int_\mathbb{T}\varphi d\mu_{y,q}$, where $q$ is chosen so large that Proposition \ref{P:3.1} holds with $\varepsilon$ replaced by $\varepsilon/3$. Take $x\in\mathbb{T}$. Set $A_k=[kq, (k+1)q)$, $k\in \mathbb{Z}$, and define $$\varphi_k(j)= \mathds{1}_{A_k}(j) \cdot \varphi(x+j\alpha), \quad \varphi_k : \mathbb{Z} \rightarrow \mathbb R, \ k\in \mathbb{Z}.$$ Observe that $$\frac{\sum_{i \in A_k} \varphi_k(i) a_{x,i} }{ \sum_{i\in A_k} a_{x, i} } = \int_\mathbb{T} \varphi d\mu_{x+k\alpha, q}$$ for every $k$, thus Proposition \ref{P:3.1} gives \begin{equation}\label{E:5.2} \bigg| \frac{\sum_{i \in A_k} \varphi_k(i) a_{x,i} }{ \sum_{i\in A_k} a_{x, i} } - \beta \bigg| < \frac{\varepsilon}{3}, \end{equation} for arbitrary $k\in \mathbb{Z}$. For $k\in \mathbb{Z}$ denote by $\tau_k$ the time of the first visit of $(\xi_n)$ to $A_k$. Fix $n$ sufficiently large and let $\Gamma \subseteq \mathbb Z$ be the set of those $k$ for which $A_k$ is visited with positive probability before time $n$. Apply Proposition \ref{P:2.2} and Remark \ref{R:2.2} to get a number $N$ such that \begin{equation}\label{E:5.11} \bigg| \frac{\mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)}{\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)} - \beta \bigg| < \varepsilon \quad \textrm{a.s. 
on $\{\tau_k<n-N\}$.} \end{equation} Let $(X_n)$ be the process with transition kernel (\ref{E:1.1}) started at $x\in \mathbb{T}$. We have \begin{equation}\label{E:5.14} |\mathbb{E} \big( \varphi(X_0)+ \cdots + \varphi(X_{n-1})\big) - \beta n| \end{equation} $$= \bigg|\mathbb{E} \bigg(\sum_{k\in \Gamma} \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1})\bigg) - \beta \mathbb{E}\bigg( \sum_{k\in \Gamma} \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \bigg) \bigg|$$ $$\le \sum_{k\in\Gamma} \mathbb{E} \bigg| \mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) - \beta \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) \bigg|$$ $$=\sum_{k\in \Gamma} \mathbb{E} \bigg( \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) $$ $$\cdot \bigg| \frac{\mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)}{\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)} - \beta \bigg| \bigg).$$ Let us fix $k\in \Gamma$ and split the expectation above as follows. 
$$ \mathbb{E} \bigg( \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) \cdot \bigg| \frac{\mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)}{\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)} - \beta \bigg| \bigg)$$ $$= \mathbb{E} \mathds{1}_{\{\tau_k<n-N\}} \bigg( \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) $$ $$ \cdot \bigg| \frac{\mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)}{\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)} - \beta \bigg| \bigg)$$ $$+\mathbb{E} \mathds{1}_{\{\tau_k\ge n-N\}} \bigg( \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) $$ $$\cdot \bigg| \frac{\mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)}{\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)} - \beta \bigg| \bigg)$$ By (\ref{E:5.11}) the first expectation does not exceed $$\varepsilon \mathbb{E} \mathds{1}_{\{\tau_k<n-N\}}\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)$$ \begin{equation}\label{E:5.12} \le \varepsilon \mathbb{E} \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) = \varepsilon \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big). \end{equation} To deal with the second expectation we use the fact that $ \|\varphi_k\|_\infty \le 1$ and the support of $\varphi_k$ is contained in $A_k$. 
Combined, these facts easily imply $$\mathbb{E}\big( \varphi_k(\xi_0)+\cdots+\varphi_k(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) \le \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big)$$ for every $n$ and $k\in \Gamma$. Furthermore, $$\mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big| \mathcal F_{\tau_k} \big) \le N$$ almost surely on $\{\tau_k\ge n-N\}$. Summarizing, the second expectation does not exceed \begin{equation}\label{E:5.13} \mathbb{P} (\tau_k \ge n-N ) \cdot N \cdot (1+\beta)\le N(1+\beta) \mathbb{P}\big(\{\xi_{n-N} \in A_k \}\cup \cdots \cup \{\xi_{n-1} \in A_k \} \big). \end{equation} We can now combine (\ref{E:5.12}), (\ref{E:5.13}) and (\ref{E:5.14}) to get $$\bigg| \frac{\mathbb{E} \big( \varphi(X_0)+ \cdots + \varphi(X_{n-1})\big) }{n} -\beta \bigg|$$ $$ \le \frac{1}{n} \sum_{k\in \Gamma} \varepsilon \mathbb{E}\big( \mathds{1}_{A_k}(\xi_0)+\cdots+\mathds{1}_{A_k}(\xi_{n-1}) \big)$$ $$+ \frac{1}{n} \sum_{k\in \Gamma} N(1+\beta) \mathbb{P}\big(\{\xi_{n-N} \in A_k \}\cup \cdots \cup \{\xi_{n-1} \in A_k \} \big)$$ $$\le \frac{1}{n} \cdot \varepsilon \cdot n + \frac{1}{n} N(1+\beta) N.$$ This is less than $2\varepsilon$ if $n$ is sufficiently large. \section{Final remarks} \begin{enumerate} \item Theorem \ref{T:1} is less general than the result of Conze and Guivarc'h; Br\'{e}mont \cite{Bremont_1999} proved that the assumption there is optimal. \item One can replace the investigated system (a random circle rotation) by a higher dimensional analog, namely a toral rotation, and ask the same question about the uniqueness of the stationary distribution. Sinai \cite{Sinai_1999} considered it on the same footing as circle rotations, which means that Sinai's result holds there as well with the correct definition of a Diophantine vector $\alpha$. The ideas of Conze and Guivarc'h \cite{Conze_Guivarc'h_2000} cannot be generalized to that case. 
Moreover, it has been proven by Nicolas Chevallier \cite{Chevallier_2004} that given a Diophantine $\alpha\in \mathbb{R}^d$ there exists a Lipschitz $\mathfrak{p}$ on $\mathbb{T}^d$ for which one can find two different stationary distributions. When we try to generalize the proof of the present paper to higher dimensional tori, an obstacle appears at the very beginning, in the part devoted to recurrence. Indeed, one can define a martingale as in the proof of Proposition \ref{P:1} and state that recurrence is equivalent to $M(n) \to \infty$ when $n\to \infty$ and $M(n)\to - \infty$ when $n\to - \infty$. In the one-dimensional setting this was a consequence of symmetry and the Denjoy-Koksma inequality applied to $f(x)=\ln \mathfrak{p}(x) - \ln \mathfrak{q}(x)$. The question therefore is whether a higher dimensional analog of the Denjoy-Koksma inequality holds. A counterexample (with an analytic observable!) was given by J.-C. Yoccoz in his paper \cite{Yoccoz_1995}, Appendix 1. In my opinion this suggests the conjecture that for any $d\ge 2$ there exist $\alpha\in \mathbb R^d$ and an analytic $\mathfrak{p}$ such that the corresponding system has at least two different stationary measures. \item In \cite{DFS_2021} the authors asked about mixing (or stability) of the investigated system. The reasoning of Conze and Guivarc'h does not give any hope of obtaining this stronger property. However, in our paper one can replace the Doeblin ratio limit theorem by the strong ratio limit property (see \cite{Orey_1961}), which says that $$\bigg|\frac{\mathbb{P}(\xi_{2n}=j)}{\mathbb{P}(\xi_{2n}=k)}-\frac{a_{x,j}}{a_{x,k}}\bigg| \to 0$$ as $n\to \infty$ provided $j,k$ are both even (the same should be true for odd states and epochs). Analogs of Propositions \ref{P:2.2} and \ref{P:3.1} are still valid. However, the estimates from Section 5 become much more troublesome and delicate, and require much more work than I expected. 
\item A similar system was investigated in a sequence of papers by Dolgopyat and Goldsheid, see \cite{Goldsheid_2008}, \cite{Dolgopyat_Goldsheid_2013}, \cite{Dolgopyat_Goldsheid_2018}, \cite{Dolgopyat_Goldsheid_2019}, \cite{Dolgopyat_Goldsheid_2020}, \cite{Dolgopyat_Goldsheid_2021}. \item One can replace the circle rotation by an automorphism of a general space and ask about the properties of this system. General nonsymmetric systems with an ergodic automorphism were considered in \cite{Kaloshin_Sinai_2000a}. In \cite{Kaloshin_Sinai_2000b} the authors investigated typical behavior for Anosov diffeomorphisms. \end{enumerate} \bibliographystyle{alpha}
\section{Introduction} \label{intro} Solar neutrino flux measurements from the Super-Kamiokande (SK)~\cite{sk1} and the Sudbury Neutrino Observatory (SNO)~\cite{sno1} experiments provided direct evidence that the deficit of solar neutrinos observed by the Homestake~\cite{homestake} and other solar neutrino experiments is the result of solar neutrino flavor conversion. While this solar neutrino flavor conversion is well described by neutrino oscillations (in particular, oscillation parameters extracted using solar neutrinos agree with those extracted using reactor antineutrinos \cite{kamland}), there is still no direct evidence for this to be so. It is possible that the flavor conversion is driven by some other mechanism. However, based on the current model and parameters of solar neutrino oscillations, there are two testable signatures available for the SK experiment to look for. The first is the observation and precision measurement of the expected Mikheyev-Smirnov-Wolfenstein (MSW)~\cite{msw} resonance curve. Based on the current best-fit oscillation parameters extracted using both solar neutrino and reactor antineutrino data, there is an expected characteristic energy dependence of the flavor conversion. Higher energy solar neutrinos, such as $^8$B and $hep$ neutrinos, undergo complete resonant conversion within the Sun, while lower energy solar neutrinos, such as $pp$, $^7$Be, $pep$, CNO and the lowest energy $^8$B neutrinos, only undergo vacuum oscillations. After averaging the vacuum oscillations due to energy resolution, the survival probability for low energy electron flavor solar neutrinos must exceed $50\%$, while the resonant conversion of the higher energy solar neutrinos within the Sun leads to the currently observed survival probability of about $30\%$. 
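The statement that the averaged vacuum survival probability must exceed $50\%$ follows from the two-flavor formula $P_{ee} = 1 - \frac{1}{2}\sin^2 2\theta_{12}$; a minimal numerical sketch (the mixing angle value is a representative solar best fit assumed for illustration, not a number taken from this paper):

```python
# Illustrative only: two-flavor, energy-averaged vacuum survival
# probability, P_ee = 1 - 0.5 * sin^2(2*theta_12).
SIN2_THETA_12 = 0.31  # assumed representative value of sin^2(theta_12)

def averaged_vacuum_pee(sin2_theta=SIN2_THETA_12):
    """Energy-averaged two-flavor vacuum survival probability."""
    sin2_2theta = 4.0 * sin2_theta * (1.0 - sin2_theta)
    return 1.0 - 0.5 * sin2_2theta
```

Since $\sin^2 2\theta \le 1$, this averaged probability can never drop below one half, in contrast to the roughly $30\%$ observed for the higher energy, resonantly converted neutrinos.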
The transition between the vacuum dominated and solar resonance dominated oscillations should occur near three MeV, making $^8$B solar neutrinos the best choice when searching for the transition point within the energy spectrum. The second solar neutrino oscillation signature comes from the effect of the terrestrial matter density. This effect can be tested directly by comparing the rate of solar neutrino interactions during the daytime to the rate during the nighttime, when the solar neutrinos have passed through the Earth. After being resonantly converted into the second mass eigenstate within the Sun, the neutrinos which then pass through the Earth will generally have an enhanced electron neutrino flavor content. This will lead to an excess in the electron elastic scattering rate during the nighttime, and hence a negative ``day-night asymmetry'' $A_{\text{\tiny DN}}=(r_{\text{\tiny D}}-r_{\text{\tiny N}})/r_{\mbox{\tiny ave}}$, where $r_{\text{\tiny D}}$ ($r_{\text{\tiny N}}$) is the average daytime (nighttime) rate and $r_{\mbox{\tiny ave}}=\frac{1}{2}(r_{\text{\tiny D}}+r_{\text{\tiny N}})$ is the average rate. SK observes a wide range of $^8$B solar neutrinos, making it a prime detector to search for both of the solar neutrino oscillation signatures. The most recent solar neutrino results from the SK experiment have been presented. This includes the latest flux measurement from the fourth phase of SK (SK-IV), energy spectrum and day-night asymmetry analyses using all SK data and oscillation analyses using SK data only and then SK data plus all other relevant data (other solar neutrino and reactor anti-neutrino data). Complete details of these analyses can be found in~\cite{sk4,skall_dn}. \section{Super-Kamiokande IV Improvements} \label{sk4} Super-Kamiokande is a 40 m diameter, 40 m tall right cylindrical stainless steel tank filled with 50 kton of ultra-pure water, located in Kamioka, Japan. 
The detector is optically separated into 2 distinct volumes, a 32 kton inner detector (ID) and a 2 m active veto outer detector (OD) surrounding the ID. The structure used to divide the two volumes houses an array of 11,129 50 cm photo-multiplier tubes (PMTs) facing the ID and 1,885 20 cm PMTs facing the OD. The detector itself is currently in the same configuration as during the SK-III phase~\cite{sk3}, however improvements to the data acquisition system (DAQ) marked the end of SK-III and the beginning of SK-IV. SK-IV began data taking in September of 2008, after having all of its front-end electronics upgraded. The new boards, called QBEEs (QTC Based Electronics with Ethernet Readout)~\cite{qbee}, allowed for the development of a new online DAQ. The essential components of the QBEEs, used for the analog signal processing and digitization, are the QTC (high-speed Charge-to-Time Converter) ASICs, which achieve very high speed signal processing and allow the readout of every hit of every PMT. The resulting hit PMT information is sent to online computers which scan the data and use a software trigger to select time coincidences within 200 nsec, in order to pick out events. The software trigger ensures that a high rate of super low energy events does not impact the efficiency of triggering on high energy events and allows for flexible event time windows. The energy threshold using this software trigger is only limited by the speed of the online computers, and is set at 3.5 MeV recoil electron kinetic energy, the lowest of all SK phases. The triggering efficiency of SK-IV events is better than $99\%$ at 4.0 MeV and $\sim84\%$ between 3.5 and 4.0 MeV. Because of the large size of SK, it is necessary to continuously recirculate the water to maintain optimal water clarity. This is done by extracting water from the top of the detector, sending it through a water purification system and then re-injecting it into the bottom of the detector. 
If the temperature of the water being injected into the bottom of the tank is not closely matched to that of the rest of the detector, convection will occur within the tank. This allows radioactive radon (Rn) gas, which is most commonly produced near the edge of the detector by decays from the U/Th chain, to make its way into the central region of the detector. Radioactivity coming from the decay products of $^{222}$Rn, most commonly $^{214}$Bi, can mimic the recoil electron signal coming from the elastic scattering of a solar neutrino. In January of 2010, a new automated temperature control system was installed to control the temperature of the water being injected into the detector at the $\pm$0.01 K level. By controlling the supply water temperature and the rate at which water is extracted and injected to different places in the detector, convection within the tank has been kept to a minimum and the background level in the central region has become significantly lower, compared to SK-III. Besides the above hardware improvements to the detector, a new analysis method was introduced to separate background and signal events. Even at the low energies of solar neutrinos, it is still possible to use the PMT hit patterns to reconstruct the amount of multiple Coulomb scattering a recoil electron will incur. As the energy of the recoil electron is decreased, the amount of multiple scattering the electron will incur increases, thus leading to a more isotropic PMT hit pattern. The majority of the low energy background in SK is believed to be coming from the $\beta$-decay of $^{214}$Bi, which has an endpoint kinetic energy of $\sim2.8$ MeV. With the low energy threshold of SK-IV set at 3.5 MeV, the only way these lower energy $\beta$-decays contaminate the solar neutrino data set is due to Poisson fluctuations of the number of reconstructed photons, resulting in a larger reconstructed energy. 
However, despite these events fluctuating up in energy, they should still multiple scatter as electrons with kinetic energy less than 2.8 MeV. These $\beta$-decays should therefore undergo more multiple scattering than the solar neutrino interactions. SK-IV has introduced a new multiple Coulomb scattering goodness (MSG) variable, described in detail in~\cite{sk4}, allowing data events to be broken into sub-samples based on the amount of multiple scattering, before the solar neutrino signal is extracted. \section{Detector Performance} \label{detector_performance} The methods used for the vertex, direction and energy reconstructions are the same as those used for SK-III~\cite{sk3}. There is a very slight improvement in the vertex resolution during the SK-IV phase ($\sim50$ cm at 9.5 MeV), compared to SK-III, the result of improved timing resolution and timing residual agreement between data and MC simulated events coming from the upgraded front-end electronics. The angular and energy resolutions are nearly identical to the SK-III phase, $\sim25^{\circ}$ and $\sim14\%$ for 9.5 MeV electrons, respectively. The absolute energy scale is determined with a small electron linear accelerator (LINAC), which injects single monoenergetic electrons into the SK tank, in the downward direction, with energies between 4.2 and 18.5 MeV. More details are described in~\cite{linac}. The energy of the LINAC electrons is precisely measured by a germanium (Ge) detector. The directional and position dependence of the energy scale is further checked using a deuterium-tritium (DT) fusion neutron generator~\cite{dt}. The total error on the absolute energy scale resulting from these calibrations is found to be $0.54\%$, similar to the SK-III value of $0.53\%$. The water transparency (WT) in the MC simulation is defined using absorption and scattering coefficients as a function of wavelength (see~\cite{sk4calib} for details). 
The dominant contribution to the variation of the WT is a variation in the absorption length. The scattering coefficients are taken as constants, while the absorption coefficient is both time and position dependent. The time variation of the absorption coefficient is checked using the light attenuation of Cherenkov light from decay electrons, resulting from cosmic-ray $\mu$'s. The position dependence of the absorption coefficient arises from draining water from the top of the detector and re-injecting it into the bottom as it is continuously recirculated. Due to the precise control of the input water temperature, the convection inside the tank is minimized everywhere but the bottom, below $z=-11$ m. Due to a small amount of convection in the bottom of the tank and a constant rising temperature above, the absorption coefficient is modeled as a constant below $z=-11$ m and with a linear function above this height. This ``top-bottom'' asymmetry of the WT is determined by studying the distribution of hits coming from a Ni-Cf gamma-ray source (see~\cite{sk4calib}) in the ``top'', ``bottom'' and ``barrel'' regions of the detector. It is found that the hit rate of the top region of the detector is $3\sim5\%$ lower than that of the bottom region. The time dependence of this top-bottom asymmetry is monitored using the same Ni calibration, as well as an auto-xenon calibration~\cite{sk4calib}. The introduction of this time dependent absorption coefficient has much reduced the systematic uncertainty resulting from the directional dependence of the energy scale, especially useful for the solar neutrino day-night asymmetry analysis. \section{Data Reduction} \label{reduction} The majority of the analysis cuts are the same as used for the SK-III phase~\cite{sk3}, however, in order to optimize the significance $(S/\sqrt{BG})$, the applied energy regions have slightly changed and a new tight fiducial volume cut is applied. 
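The figure of merit being optimized is simply the expected signal count divided by the square root of the expected background count; as a trivial, self-contained sketch (the counts below are hypothetical):

```python
import math

def significance(signal, background):
    """Approximate cut-optimization figure of merit, S / sqrt(BG)."""
    return signal / math.sqrt(background)

# Hypothetical example: 100 signal events over 400 background events.
fom = significance(100.0, 400.0)
```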
Events between 4.5 and 5.0 MeV are cut if the radius squared $r^2$ is larger than 180 m$^2$ or the height $z$ is less than -7.5 m. Below 4.5 MeV, events are cut if they do not satisfy \begin{equation} \frac{r^2}{\mbox{m}^2}+\frac{150}{11.75^4}\times\left|\frac{z}{\mbox{m}}-4.25\right|^4 \le 150, \end{equation} with the coordinates given in meters. The remaining efficiency above 6.0 MeV is almost identical to SK-III, while for 5.0 to 6.0 MeV, SK-IV is better than SK-III. This results from removing the second vertex cut and making a looser ambient event cut. Using the new tight fiducial volume cut and a tighter ambient event cut for 3.5 to 5.0 MeV gives a lower selection efficiency; however, in exchange the background level has been much reduced. \section{Data Analysis} \label{data_analysis} \subsection{Total Flux} \label{flux} The start of SK-IV physics data taking occurred on October 6th, 2008. The results presented include data through December 31st, 2012, a total of 1306.3 live days. As opposed to SK-III, which had different livetimes for the different low energy threshold periods, SK-IV took all data with the same low energy threshold of 3.5 MeV recoil electron kinetic energy. SK observes all flavors of solar neutrinos through the process of neutrino-electron elastic scattering; however, the total cross section for electron flavor neutrinos is roughly six times larger than that of the muon or tau neutrinos. This comes from the inclusion of both the charged-current (CC) and neutral-current (NC) interactions for electron flavor neutrinos, whereas the muon and tau flavors interact via the NC interaction only, making SK most sensitive to the electron flavor solar neutrinos. The differential cross section for this interaction, at the energies of solar neutrinos, is strongly peaked in the direction of the incoming neutrino. 
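The factor-of-roughly-six suppression of the $\nu_{\mu,\tau}$ cross section quoted above can be reproduced from the tree-level neutrino-electron elastic scattering cross section; the sketch below is illustrative only (it drops overall constants, neglects radiative corrections, and assumes $\sin^2\theta_W \approx 0.231$, a value not taken from this paper):

```python
ME = 0.511     # electron mass [MeV]
SIN2W = 0.231  # assumed weak mixing angle, sin^2(theta_W)

def es_xsec_shape(e_nu, nu_e=True, n=2000):
    """Relative total nu-e elastic scattering cross section at tree
    level, from integrating
    dsigma/dT ~ gL^2 + gR^2*(1 - T/E)^2 - gL*gR*me*T/E^2
    over the recoil kinetic energy T (overall constants dropped)."""
    g_l = (0.5 if nu_e else -0.5) + SIN2W  # CC+NC for nu_e, NC only otherwise
    g_r = SIN2W
    t_max = 2.0 * e_nu ** 2 / (ME + 2.0 * e_nu)  # kinematic endpoint
    dt = t_max / n
    total = 0.0
    for i in range(n):  # midpoint-rule integration
        t = (i + 0.5) * dt
        total += (g_l ** 2
                  + g_r ** 2 * (1.0 - t / e_nu) ** 2
                  - g_l * g_r * ME * t / e_nu ** 2) * dt
    return total

# For a 10 MeV neutrino the nu_e / nu_mu,tau ratio comes out close to six.
ratio = es_xsec_shape(10.0, nu_e=True) / es_xsec_shape(10.0, nu_e=False)
```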
If $\theta_{\mbox{\tiny sun}}$ is the angle between the incoming solar neutrino direction (the directional vector from the Sun to the event vertex) and the reconstructed recoil electron direction, the solar neutrino signal should peak at $\cos\theta_{\mbox{\tiny sun}}=1$, while background events will be mostly uniformly distributed. SK utilizes this by using an extended maximum likelihood fit between 3.5 and 19.5 MeV recoil electron kinetic energy to extract the solar neutrino flux. The same method is used for SK-I~\cite{sk1}, SK-II~\cite{sk2} and SK-III~\cite{sk3}. The left panel of Fig.~\ref{fig:cossun} shows the $\cos\theta_{\mbox{\tiny sun}}$ distribution of the SK-IV final data sample (black points), along with the best-fit of the background (blue) and background plus solar neutrino signal (red). The systematic uncertainties on the total flux for SK-IV were calculated using the same methods as for SK-III~\cite{sk3} (see~\cite{sk4} for full systematic uncertainty details). The total systematic uncertainty of the SK-IV flux was found to be $1.7\%$, improved from the $2.2\%$ seen in SK-III, and the best value among all phases. The main contributions to the reduction come from improvements in the energy-bin uncorrelated uncertainties: the vertex shift, trigger efficiency and angular resolution. There is also a reduction in the uncertainties associated with the energy scale and resolution, coming from the addition of the two lowest energy bins, 3.5-4.5 MeV, for the entire period of SK-IV, compared to SK-III, which used a low energy threshold of 6.0 MeV for the first half of the phase and 4.5 MeV for the second half. The installation of the new front-end electronics has led to a slightly better timing resolution and agreement of the timing residuals between data and MC simulated events. 
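For concreteness, the solar angle entering the likelihood fit is just the opening angle between the Sun-to-vertex direction and the reconstructed recoil electron direction; a minimal sketch of the cosine computation (the direction vectors used in the example are hypothetical):

```python
import math

def cos_theta_sun(sun_dir, electron_dir):
    """Cosine of the angle between the Sun-to-vertex direction and
    the reconstructed recoil electron direction (both 3-vectors)."""
    dot = sum(s * e for s, e in zip(sun_dir, electron_dir))
    norm = (math.sqrt(sum(s * s for s in sun_dir))
            * math.sqrt(sum(e * e for e in electron_dir)))
    return dot / norm

# Hypothetical example: an electron scattered exactly forward
# gives cos(theta_sun) = 1.
c = cos_theta_sun([0.0, 0.0, 1.0], [0.0, 0.0, 2.0])
```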
The total number of solar neutrino events extracted via the extended maximum likelihood fit for the SK-IV phase is $25,253^{+252}_{-250}(\mbox{stat.})\pm455(\mbox{syst.})$. This number corresponds to a $^8$B solar neutrino flux of \begin{align*} \Phi_{^8\text{B}}(\text{SK-IV})= [2.36\pm0.02(\text{stat.})\pm0.04(\text{syst.})]\times 10^6 /(\text{cm}^2\text{sec}), \end{align*} assuming a pure $\nu_e$ flavor content. As seen in Table~\ref{tab:flux}, the flux measurements from each phase of SK agree within the statistical errors. These four measurements can be combined to give the total SK-I-IV combined flux of \begin{align*} \Phi_{^8\text{B}}(\text{SK})= [2.37\pm0.015(\text{stat.})\pm0.04(\text{syst.})]\times 10^6 /(\text{cm}^2\text{sec}). \end{align*} \begin{table}[h] \begin{center} \caption{SK measured solar neutrino flux by phase.} \begin{tabular}{l c c} \hline\hline & Energy Threshold & Flux ($\times10^6$/(cm$^2$sec)) \\ \hline SK-I & 4.5 MeV & $2.38\pm0.02\pm0.08$ \\ SK-II & 6.5 MeV & $2.41\pm0.05^{+0.16}_{-0.15}$ \\ SK-III & 4.5 MeV & $2.40\pm0.04\pm0.05$ \\ SK-IV & 3.5 MeV & $2.36\pm0.02\pm0.04$ \\ \hline Combined & & $2.37\pm0.02\pm0.04$ \\ \hline\hline \end{tabular} \label{tab:flux} \end{center} \end{table} \begin{figure}[t] \begin{subfigure}{0.49\textwidth} \vspace*{0.15cm} \includegraphics[keepaspectratio=false,height=8.8cm,width=\textwidth,clip]{solarangle_40_20mev.eps} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth,clip]{skiv_msg_cossun.eps} \end{subfigure} \caption{Left: SK-IV solar angle distribution for 3.5 to 19.5 MeV. $\theta_{\mbox{\tiny sun}}$ is the angle between the incoming neutrino direction and the reconstructed recoil electron direction. Black points are data while the blue and red histograms are best fits to the background and signal plus background, respectively. 
Right: Distribution of $\cos\theta_{\text{\tiny sun}}$ for the energy ranges 3.5-4.0 MeV, 4.0-4.5 MeV, 4.5-5.0 MeV and 7.0-7.5 MeV (from top to bottom), for each MSG bin (left to right). The colors are the same as the left panel.} \label{fig:cossun} \end{figure} \subsection{Solar Neutrino Energy Spectrum} \label{spectrum} Solar neutrino flavor oscillations above about 5.0 MeV are dominated by the solar MSW~\cite{msw} resonance, while low energy solar neutrino flavor changes are dominated by vacuum oscillations. Since the MSW effect rests solely on standard weak interactions, it is rather interesting to confront the expected resonance curve with data. Unfortunately multiple Coulomb scattering prevents the kinematic reconstruction of the neutrino energy in neutrino-electron elastic scattering interactions. However, the energy of the recoiling electron still provides a lower limit to the neutrino's energy. Thus, the neutrino spectrum is inferred statistically from the recoil electron spectrum. Moreover, the differential cross section of $\nu_{\mu,\tau}$'s is not just a factor of about six smaller than the one for $\nu_e$'s, but also has a softer energy dependence. In this way, the observed recoil electron spectrum shape depends both on the flavor composition and the energy-dependence of the composition of the solar neutrinos. So even a flat composition of $33\%$ $\nu_e$ and $67\%$ $\nu_{\mu,\tau}$ still distorts the recoil electron spectrum compared to one with $100\%$ $\nu_e$. The energy dependence of the day-night effect and rare $hep$ neutrino interactions (with a higher endpoint than $^8$B $\nu$'s) also distort the spectrum. To analyze the spectrum, we simultaneously fit the SK-I, II, III and IV spectra to their predictions, while varying the $^8$B and $hep$ neutrino fluxes within uncertainties. 
The $^8$B flux is constrained to $[5.25\pm0.20]\times10^6$ /(cm$^2$sec) and the $hep$ flux to $[2.3\pm2.3]\times10^4$ /(cm$^2$sec) (motivated by SNO's measurements~\cite{snothreephase,snohep}). \subsubsection{SK-IV Energy Spectrum} \label{sk4spec} The SK-IV $^8$B solar neutrino energy spectrum is extracted using the same method as the total flux, extracting the number of signal events in 23 energy bins separately. There are 20 0.5 MeV bins between 3.5 and 13.5 MeV, two 1.0 MeV bins between 13.5 and 15.5 MeV and one 4.0 MeV energy bin between 15.5 and 19.5 MeV. Below 7.5 MeV each energy bin is split into three sub-samples based on MSG, with the boundaries set at MSG=0.35 and 0.45. The three sub-samples in each of these low energy bins are simultaneously fit to a single signal and three independent background components, with the fraction of events in each sub-sample determined by MC simulated events. The right panel of Fig.~\ref{fig:cossun} shows the measured angular distributions and fit results for the energy ranges of 3.5-4.0 MeV, 4.0-4.5 MeV, 4.5-5.0 MeV and 7.0-7.5 MeV. As expected, in the lowest energy bins the background component is the largest in the sub-samples with the lowest MSG, while the signal component grows as the MSG is increased. Using this method of MSG sub-samples has reduced the total uncertainty by up to $15\%$ for the lowest energy bins. The left panel of Fig.~\ref{fig:spec} shows the resulting SK-IV recoil electron energy spectrum, where below 7.5 MeV sub-samples of MSG have been used and above 7.5 MeV the standard signal extraction method is used. 
\begin{figure}[t] \begin{subfigure}{0.48\textwidth} \begin{center} \includegraphics[width=\textwidth]{skiv_spectrum_with_msg.eps} \end{center} \end{subfigure} \begin{subfigure}{0.5\textwidth} \vspace*{-0.25cm} \includegraphics[trim=0cm 0cm 0cm 0.76cm,width=\textwidth,clip=true]{cspec.eps} \end{subfigure} \caption{Left: SK-IV energy spectrum using MSG sub-samples below 7.5 MeV, shown as the ratio of the measured rate to the MC simulated unoscillated rate. The horizontal dashed line gives the SK-IV total average (0.451). Error bars shown are statistical plus energy-uncorrelated systematic uncertainties. Right: SK-I+II+III+IV recoil electron spectrum compared to the no-oscillation expectation. The green (blue) shape is the MSW expectation using the SK (solar+KamLAND) best-fit oscillation parameters. The orange (black) line is the best-fit to SK data with a general exponential/quadratic (cubic) $P_{ee}$ survival probability.} \label{fig:spec} \end{figure} \subsubsection{SK Combined Solar Neutrino Energy Spectrum Analysis} \label{combinespec} The spectral data from SK-III has been refit using the same energy bins and MSG sub-samples as SK-IV, down to 4.0 MeV. The gain in precision in SK-III is similar to that in SK-IV. In SK-II, however, the same MSG sub-samples have been applied to all energy bins. In order to discuss the energy dependence of the solar neutrino flavor composition in a general way, the electron neutrino survival probability $P_{ee}$ has been parameterized using a general quadratic function $P_{ee}=c_0+c_1(E_{\nu}-10)+c_2(E_{\nu}-10)^2$, as SNO did in~\cite{snothreephase}, and then by general exponential and cubic functions as well. Each phase of SK is fit separately, and then combined using a minimum chi-squared method.
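As a small numerical illustration of this quadratic parameterization, expanded about $E_\nu = 10$ MeV, the sketch below evaluates $P_{ee}$ at a few energies. The coefficient values are hypothetical placeholders chosen only to mimic a mild low-energy ``upturn''; they are not the fitted SK or SNO values.

```python
def p_ee_quadratic(e_nu, c0, c1, c2):
    """General quadratic nu_e survival probability,
    P_ee = c0 + c1*(E_nu - 10) + c2*(E_nu - 10)^2 (E_nu in MeV)."""
    x = e_nu - 10.0
    return c0 + c1 * x + c2 * x * x

# Illustrative (not fitted) coefficients: survival probability near
# 0.32 at 10 MeV with a mild rise toward lower neutrino energies.
c0, c1, c2 = 0.32, -0.005, 0.001
for e in (6.0, 10.0, 14.0):
    print(f"E_nu = {e:4.1f} MeV  ->  P_ee = {p_ee_quadratic(e, c0, c1, c2):.3f}")
```

With these placeholder coefficients the survival probability is 0.356 at 6 MeV, 0.320 at 10 MeV, and 0.316 at 14 MeV, i.e. an ``upturn'' toward low energy of the kind discussed in the text.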
The right panel of Fig.~\ref{fig:spec} shows the statistical combination of the four phases of SK, along with the best-fits coming from the general quadratic/exponential (identical and shown in orange) and general cubic (black) function fits. Also shown in green (blue) are the expected MSW resonance curves assuming the best-fit neutrino oscillation parameters coming from a fit to SK data only (all solar neutrino plus KamLAND~\cite{kamland} data). This figure is shown only as an illustration of the resulting SK combined fit and should not be used for further analysis. Fig.~\ref{fig:pee} shows the resulting $1\sigma$ uncertainties on the spectrum fit to the general functions, along with the expected MSW curves (same as in Fig.~\ref{fig:spec}). \begin{figure}[t] \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{pee_exp.eps} \end{subfigure} \begin{subfigure}{0.495\textwidth} \vspace*{-.25cm} \includegraphics[width=\textwidth]{solarneutrinospectrum.eps} \end{subfigure} \caption{Left: Allowed survival probability $1\sigma$ band from SK data. The red (blue) area is based on an exponential (quadratic) fit and the green band is based on a cubic fit. The $^8$B flux is constrained to the measurement from SNO. The absolute value of the $^8$B flux does not affect the shape constraint much, just the average value. Also shown are predictions based on the oscillation parameters of a fit to all solar data (green) and a fit to all solar+KamLAND data (blue). Right: Predicted solar neutrino spectra~\cite{ssm}. Overlaid are expected MSW survival probabilities; green is the expectation assuming oscillation parameters from the SK best-fit, turquoise from the global solar neutrino best-fit and blue from the solar plus KamLAND best fit. The $1\sigma$ band from the combined data of SK and SNO is shown in red.
Also shown are measurements of the $^7$Be (green point), $pep$ (light green point) and $^8$B flux (red point) by Borexino~\cite{otherborexino}, as well as $pp$ (blue point) and CNO values (gold point) extracted from other experiments~\cite{othersolar}.} \label{fig:pee} \end{figure} There are added benefits when combining the results of the quadratic fit to the survival probability of SK and SNO together, since SK's correlation between the quadratic coefficients $c_1$ and $c_2$ is opposite to SNO's. The resulting combined $c_1-c_2$ correlation becomes much smaller. The addition of the SK data to the SNO data not only significantly increases the precision of the $c_0$ determination, but also reduces the uncertainties on the shape. While SK data by itself prefers an ``upturn'' when going from high to low neutrino energy and SNO data prefers a ``downturn'', the combined fit favors an ``upturn'' more strongly than the SK data by itself. SNO's sensitivity is dominated by charged-current interactions, which preserve the neutrino energy; however, the nuclear threshold energy takes away some of the advantage over SK, which has higher statistics in the elastic scattering data. As a consequence, SNO's uncertainties are smaller at higher neutrino energy, while SK's uncertainties are smaller at lower neutrino energy. The right panel of Fig.~\ref{fig:pee} superimposes the SK plus SNO $1\sigma$ $P_{ee}$ quadratic fit band (red) (on a logarithmic scale) on the SSM~\cite{ssm} solar neutrino spectrum. Also shown are the $pp$ and CNO neutrino flux constraints from all solar neutrino data~\cite{homestake,othersolar} and the $^7$Be, $pep$ and $^8$B flux measurements of the Borexino experiment~\cite{otherborexino}. The SK and SNO combined allowed band (and the other solar data) are in good agreement with the predicted MSW curves based on either SK data only, all solar neutrino data or all solar neutrino plus KamLAND data (shown in green, turquoise and blue, respectively).
\subsection{Solar Neutrino Day-Night Flux Asymmetry} \label{dn} The matter density of the Earth affects solar neutrino oscillations while the Sun is below the horizon. This so-called ``day-night effect'' will lead to an enhancement of the $\nu_e$ flavor content during the nighttime for most oscillation parameters. The most straightforward test of this effect uses the solar zenith angle $\theta_z$ at the time of each event to separately measure the solar neutrino flux during the day $\Phi_{\text{\tiny D}}$ (defined as $\cos\theta_z \leq 0$) and the night $\Phi_{\text{\tiny N}}$ (defined as $\cos\theta_z > 0$). The day-night asymmetry $A_{\text{\tiny DN}}=(\Phi_{\text{\tiny D}}-\Phi_{\text{\tiny N}})/\frac{1}{2}(\Phi_{\text{\tiny D}}+\Phi_{\text{\tiny N}})$ defines a convenient measure of the size of the effect. The SK-IV livetime during the day (night) is 626.4 days (679.9 days). The solar neutrino flux between 4.5 and 19.5 MeV, assuming no oscillations, is measured as $\Phi_{\text{\tiny D}}=[2.29\pm0.03(\mbox{stat.})\pm0.05(\mbox{sys.})]\times10^6$ /(cm$^2$sec) during the day and $\Phi_{\text{\tiny N}}=[2.42\pm0.03(\mbox{stat.})\pm0.05(\mbox{sys.})]\times10^6$ /(cm$^2$sec) during the night. By comparing the separately measured day and night fluxes, the measured day-night asymmetry for SK-IV is found to be $[-5.3\pm2.0(\mbox{stat.})\pm1.4(\mbox{sys.})]\%$. When this is combined with the previous three phases (see the center column of Table~\ref{tab:dn}), SK measures the day-night asymmetry in this simple way as $[-4.2\pm1.2(\mbox{stat.})\pm0.8(\mbox{sys.})]\%$~\cite{skall_dn}. This result deviates from zero by $2.8\sigma$. \begin{table}[t] \begin{center} \caption{Day-night asymmetry for each SK phase, coming from separate day and night rate measurements (middle column) and the amplitude fit (right column). The uncertainties shown are statistical and systematic.
The entire right column assumes the SK best-fit point of oscillation parameters.} \begin{tabular}{l c c} \hline\hline & $A_{\text{\tiny DN}}\pm(\text{stat})\pm(\text{syst})$ & $A_{\text{\tiny DN}}^{\text{\tiny fit}}\pm(\text{stat})\pm(\text{syst})$ \\ \hline SK-I & $(-2.1\pm2.0\pm1.3)\%$ & $(-2.0\pm1.7\pm1.0)\%$ \\ SK-II & $(-5.5\pm4.2\pm3.7)\%$ & $(-4.3\pm3.8\pm1.0)\%$ \\ SK-III & $(-5.9\pm3.2\pm1.3)\%$ & $(-4.3\pm2.7\pm0.7)\%$ \\ SK-IV & $(-5.3\pm2.0\pm1.4)\%$ & $(-3.4\pm1.8\pm0.6)\%$ \\ \hline Combined & $(-4.2\pm1.2\pm0.8)\%$ & $(-3.2\pm1.1\pm0.5)\%$ \\ \hline\hline \end{tabular} \label{tab:dn} \end{center} \end{table} To eliminate systematic effects and increase statistical precision, a more sophisticated method to test the day-night effect is given in~\cite{dn,sk1}. For a given set of oscillation parameters, the interaction rate as a function of the solar zenith angle is predicted. Only the shape of the calculated solar zenith angle variation is used; its amplitude is scaled by an arbitrary parameter. The extended maximum likelihood fit to extract the solar neutrino signal is expanded to allow time-varying signals. The likelihood is then evaluated as a function of the average signal rates, the background rates and a scaling parameter, termed the ``day-night amplitude''. The equivalent day-night asymmetry is calculated by multiplying the fit scaling parameter with the expected day-night asymmetry. In this manner the day-night asymmetry is measured with greater statistical precision and is less vulnerable to some key systematic effects. Because the amplitude fit depends on the assumed shape of the day-night variation (given for each energy bin in~\cite{dn} and \cite{sk1}), it necessarily depends on the oscillation parameters, although with very little dependence expected on the mixing angles (in or near the large mixing angle solution and for $\theta_{13}$ values consistent with reactor neutrino measurements~\cite{reactorexp}).
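The simple rate-based asymmetry can be cross-checked directly from the quoted SK-IV day and night fluxes. A minimal sketch: the rounded fluxes give about $-5.5\%$, consistent with the published $[-5.3\pm2.0(\mbox{stat.})]\%$ obtained from the full signal extraction with unrounded rates.

```python
def day_night_asymmetry(phi_day, phi_night):
    """A_DN = (Phi_D - Phi_N) / [(Phi_D + Phi_N)/2]."""
    return (phi_day - phi_night) / (0.5 * (phi_day + phi_night))

# SK-IV day/night fluxes in units of 1e6 /(cm^2 sec), 4.5-19.5 MeV:
a_dn = day_night_asymmetry(2.29, 2.42)
print(f"A_DN = {100 * a_dn:.1f}%")  # -5.5% from the rounded fluxes
```

The night flux exceeds the day flux, so the asymmetry is negative, reflecting the expected nighttime $\nu_e$ regeneration in the Earth.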
The fit is run for parameters covering the MSW region of oscillation parameters ($10^{-9}$ eV$^2\le\Delta{m_{21}^2}\le10^{-3}$ eV$^2$ and $10^{-4}\le\sin^2\theta_{12} < 1$), for values of $\sin^2\theta_{13}$ between 0.015 and 0.035. Details of the estimates of the systematic uncertainties resulting from this method are given in~\cite{sk4}. \begin{figure}[t] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{dnspectrum.eps} \end{subfigure} \begin{subfigure}{0.52\textwidth} \centering \includegraphics[width=\textwidth]{dndm2.eps} \end{subfigure} \caption{Left: SK day-night amplitude fit as a function of recoil electron kinetic energy, shown as the measured amplitude times the expected day-night asymmetry, for oscillation parameters chosen by the SK best-fit. The error bars shown are statistical uncertainties only and the expected dependence is shown in red. Right: Dependence of the measured day-night asymmetry (fitted day-night amplitude times the expected day-night asymmetry (red)) on $\Delta{m_{21}^2}$, for $\sin^2\theta_{12}=0.314$ and $\sin^2\theta_{13}=0.025$. The $1\sigma$ stat (stat+syst) uncertainties are given by the light (dark) gray band. Overlaid are the $1\sigma$ allowed ranges from the solar global fit (green box) and the KamLAND experiment (blue box).} \label{fig:dn} \end{figure} The resulting day-night asymmetry when using the extended maximum likelihood method can be seen for individual phases in the right column of Table~\ref{tab:dn}. The left panel of Fig.~\ref{fig:dn} shows the combined SK-I+II+III+IV day-night amplitude fit as a function of recoil electron energy. In each recoil electron energy bin $e$, the day-night variation is fit to an amplitude $\alpha_e$. 
The displayed day-night asymmetry values are the product of the fit amplitude $\alpha_e$ with the expected day-night asymmetry $A_{\text{\tiny DN, calc}}^e$ (red), when using the SK best-fit point of oscillation parameters ($\Delta{m_{21}^2}=4.84\times10^{-5}$ eV$^2$, $\sin^2\theta_{12}=0.342$ and $\sin^2\theta_{13}=0.025$). These parameters are chosen when using SK's spectral and time variation data along with constraints on the $^8$B solar neutrino flux and $\theta_{13}$. When all energy bins are fit together and the same oscillation parameters assumed, the resulting SK-measured day-night asymmetry coming from the amplitude fit is \begin{align*} A_{\mbox{\tiny DN}}^{\mbox{\tiny fit}}=[-3.2\pm1.1(\mbox{stat.})\pm0.5(\mbox{sys.})]\%\mbox{ \cite{skall_dn}}, \end{align*} with an asymmetry of $-3.3\%$ expected by numerical calculations (see \cite{dn} for details). This result deviates from zero by $2.7\sigma$, giving the first significant direct indication for matter enhanced neutrino oscillations. If this value is combined with SNO's measurement~\cite{snothreephase}, the resulting measured SK equivalent day-night asymmetry is $A_{\mbox{\tiny DN}}^{\mbox{\tiny fit}}=[-2.9\pm1.0(\mbox{stat.+sys.})]\%$, increasing the significance for a non-zero day-night asymmetry to $2.9\sigma$. While the expected day-night asymmetry at SK changes to $-1.7\%$ if the value of $\Delta m^2_{21}$ is changed to $7.41\times10^{-5}$ eV$^2$ (motivated by KamLAND data~\cite{kamland}), the measured value is found to be $A_{\mbox{\tiny DN}}^{\mbox{\tiny fit}}=[-3.0\pm1.0(\mbox{stat.})\pm0.5(\mbox{sys.})]\%$, reducing the significance for a non-zero day-night asymmetry from 2.7 to $2.6\sigma$. The dependence of the SK measured day-night asymmetry on $\Delta m^2_{21}$, for $\sin^2\theta_{12}=0.314$ and $\sin^2\theta_{13}=0.025$, can be seen in the right panel of Fig.~\ref{fig:dn}, with the expected day-night asymmetry shown by the red curve. 
Superimposed are the $1\sigma$ allowed ranges in $\Delta m^2_{21}$ from the solar global fit~\cite{sk4} (green) and from the KamLAND experiment~\cite{kamland}. The resulting day-night asymmetry has negligible dependence on the values of $\theta_{12}$ (within the LMA region) and $\theta_{13}$ (near the reactor antineutrino best-fit~\cite{reactorexp}). \subsection{Solar Neutrino Oscillation Analysis} \label{osc} \begin{figure}[t] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{globalcont.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{globalcontangle.eps} \end{subfigure} \caption{Left: Allowed contours of $\Delta m^2_{21}$ vs. $\sin^2\theta_{12}$ from solar neutrino data (green) at 1, 2, 3, 4 and $5\sigma$ and KamLAND data (blue) at the 1, 2 and $3\sigma$ confidence levels. Also shown is the combined result in red. For comparison, the almost identical result of the SK+SNO combined fit is shown by the dashed dotted lines. The filled regions give the $3\sigma$ confidence levels. $\theta_{13}$ is constrained by $\left(\frac{\sin^2\theta_{13}-0.0242}{0.0026}\right)^2$. Right: Allowed contours of $\sin^2\theta_{13}$ vs. $\sin^2\theta_{12}$, colors are the same as the left panel.} \label{fig:osc} \end{figure} We analyzed the SK-IV elastic scattering rate, the recoil electron spectral shape and the day-night variation to constrain the solar neutrino oscillation parameters. We then combined the SK-IV constraints with those of the previous three SK phases, as well as all other solar neutrino experiments. The allowed contours of all solar neutrino data (as well as KamLAND's constraints) are shown in Fig.~\ref{fig:osc}. SK and SNO dominate the combined fit to all solar neutrino data. This can be seen from the almost identical two sets of green contours in the left panel of Fig.~\ref{fig:osc}. 
In the side panel of this figure, some tension between the solar neutrino and reactor antineutrino measurements of the solar $\Delta m^2_{21}$ is evident, stemming from the SK day-night measurement. Even though the expected amplitude agrees within $\sim1.1\sigma$ with the fitted amplitude for any $\Delta m^2_{21}$, in either the KamLAND or the SK range, the SK data somewhat favor the shape of the variation predicted by values of $\Delta m^2_{21}$ that are smaller than KamLAND's. The right panel of Fig.~\ref{fig:osc} shows the results of the $\theta_{13}$ unconstrained fit. The significance of non-zero $\theta_{13}$ from the solar+KamLAND data combined fit is about $2\sigma$, measured as $\sin^2\theta_{13}=0.026^{+0.017}_{-0.012}$ and quite consistent with reactor antineutrino measurements~\cite{reactorexp}. \section{Conclusion} \label{conclusion} The fourth phase of SK measured the solar $^8$B neutrino-electron elastic scattering rate with the highest precision yet. When combined with the results from the previous three phases, the SK combined flux is\\ $[2.37\pm0.015$(stat)$\pm0.04$(syst)$]\times10^6$ /(cm$^{2}$sec). A quadratic fit of the electron-flavor survival probability as a function of energy to all SK data, as well as a combined fit with SNO solar neutrino data, slightly favors the presence of the MSW resonance. The solar neutrino elastic scattering day-night rate asymmetry is measured as [$-3.2\pm1.1$(stat)$\pm0.5$(syst)]$\%$. This solar zenith angle variation data gives the first significant indication for matter enhanced neutrino oscillation, and gives SK the world's most precise measurement of $\Delta m_{21}^2=4.8^{+1.8}_{-0.9}\times10^{-5}$ eV$^2$, using neutrinos rather than anti-neutrinos. There is a slight tension of $1.5\sigma$ between this value and KamLAND's measurement using reactor anti-neutrinos. The tension increases to $1.6\sigma$ if other solar neutrino data are included.
A $\theta_{13}$ constrained fit to all solar neutrino data and KamLAND yields $\sin^2\theta_{12}=0.305\pm0.013$ and $\Delta m_{21}^2=7.49^{+0.19}_{-0.17}\times10^{-5}$ eV$^2$. When this constraint is removed, solar neutrino experiments and KamLAND measure $\sin^2\theta_{13}=0.026^{+0.017}_{-0.012}$, a value in good agreement with reactor antineutrino measurements. \bibliographystyle{elsarticle-num}
\section{Introduction} \label{sec:intro} A remarkable fact of Nature is the left-handed chirality, or handedness, of nearly all the amino acids used by living creatures in the production of proteins, to the near exclusion of the right-handed forms. Molecular chirality was discovered in the Nineteenth Century by Pasteur \citep{pasteur48,flack09}, and the homochirality of the amino acids was deduced subsequently. However, an explanation of the origin of the amino acid chirality has remained a mystery. We define enantiomeric excess as $ee = (N_L-N_R)/(N_L+N_R)$, where $N_L$ ($N_R$) is the number of left- (right-) handed molecules in an ensemble. Thus Earth's amino acids have an $ee = 1.0$ (except for glycine, which is achiral), that is, they are left-handed and homochiral. If $ee = 0.0$, the ensemble is said to be racemic. Although laboratory experiments in the 1950s \citep{miller53,miller59} suggested that amino acids might have been produced in an early Earthly lightning storm, that scenario fails to explain how the amino acids might have become totally left-handed. Furthermore, the several suggested means of converting the racemic amino acids to homochirality via Earthly processes were discussed by \cite{bonner91}, and shown to be unlikely to produce the observed result. General discussions were also provided by \cite{mason84} and \cite{barron08}. However, analysis of meteorites has found that they do contain amino acids, showing that they are made in outer space \citep{kvenvolden70,bada83,cronin97,cronin98,glavin09,herd11}, and that some of them do exhibit nonzero enantiomeric excesses, typically at a level of a few percent, with a preference for left-handedness. Thus cosmic production of the amino acids becomes a strong contender for explaining how Earth was seeded with amino acids, and how they came to have a left-handed chirality.
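The enantiomeric excess defined above is simple to compute; as a minimal sketch (the molecule counts below are hypothetical):

```python
def enantiomeric_excess(n_left, n_right):
    """ee = (N_L - N_R) / (N_L + N_R): +1 for a purely left-handed
    (homochiral) ensemble, 0 for a racemic (50/50) mixture, and
    -1 for a purely right-handed ensemble."""
    return (n_left - n_right) / (n_left + n_right)

print(enantiomeric_excess(100, 0))   # homochiral left-handed: 1.0
print(enantiomeric_excess(50, 50))   # racemic: 0.0
print(enantiomeric_excess(51, 49))   # few-percent excess: 0.02
```

The last case corresponds to the few-percent left-handed excesses reported in meteoritic amino acids.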
The observed $ee$s, however, necessitate the existence of amplification via autocatalysis \citep{frank53,kondepudi85,goldanskii89}, which is thought, and demonstrated in laboratory experiments \citep{soai95,soai02,klussman06,breslow06,soai14}, to be able to convert small $ee$s to Earthly homochirality. One model that purports to explain how amino acids achieved their left-handed chirality in outer space has reached a sufficient stage of development that it now seems appropriate to consider its probability for producing chiral amino acids. The Supernova Neutrino Amino Acid Processing (SNAAP) Model \citep{boyd10,boyd11,boyd12,famiano14,famiano16,famiano18a}, has been developed over the past few years. Recent efforts using quantum molecular calculations have shown that this model appears to produce amino acids within its framework that do have a significant ee, and that it is positive for most of the amino acids studied. In this work we will address the issue of whether or not the SNAAP model can explain how the chiral amino acids observed in meteorites were made and to what extent they might have populated the galaxy. Other models have also been developed to explain how the amino acids developed $ee$s in outer space. Perhaps the best developed one is the Circularly Polarized Light (CPL) model, which relies on ultraviolet CPL, produced by first scattering the light from an extremely hot star to polarize it, then letting it process the amino acids. It was first suggested by \cite{flores77} and \cite{norden77}, and subsequently elaborated in detail by many groups \citep{rubenstein83,bailey98,meierhenrich05,takano07,meierhenrich08,takahashi09,meierhenrich10,meinert10,demarcellus11, meinert12,meinert14}. Although there are certainly other suggested explanations for the origin of a preferred amino acid chirality in outer space, we believe that they are less well developed than either the CPL model or the SNAAP model. 
In any event, they have been discussed in other publications \citep{bonner91,meierhenrich08,guijarro09,boyd12}. The essential features of any model include (i) how it generates some enantiomerism in the amino acids, (ii) how that gets amplified, if necessary, to the few percent level found in carbonaceous chondrite meteorites, (iii) how the model explains the processing of some of the enantiomeric amino acids throughout the volume of the carbonaceous chondrite meteorites, and (iv) how its amino acids can be delivered to present-day Earth via meteorites. In Section \ref{model} we will discuss the basics of the SNAAP model. Section \ref{results} will discuss how the above issues are solved within that model. Section \ref{conclusion} will give our conclusions. \section{The SNAAP Model} \label{model} In this model \citep{boyd10,boyd11,boyd12,famiano14,famiano16,famiano18a} large meteoroids might be processed in the intense magnetic field and electron anti-neutrino (hereafter denoted `anti-neutrino') flux from one of several stellar objects. The anti-neutrinos are selective in their destruction of the amino acids with right-handed helicity, a result of the weak interaction nuclear physics that describes their interaction with the $^{14}$N nuclei. The relevant nuclear reaction is \begin{equation} \bar{\nu}_e+^{14}N\rightarrow e^++^{14}C \end{equation} where $\bar{\nu}_e$ is an electron anti-neutrino and $e^+$ is an antielectron, a positron. If the $\bar{\nu}_e$ spin (1/2, in units of $\hbar$, Planck's constant divided by 2$\pi$) is antiparallel to the $^{14}$N (spin 1), then the total spin of 1/2 on the left-hand side of the equation will equal the sum of the spin of $^{14}$C (spin 0) and the positron (spin 1/2) on the right-hand side. 
However, if the $\bar{\nu}_e$ spin and the $^{14}$N spins are aligned, then conservation of angular momentum will require one unit of angular momentum to come from either the $\bar{\nu}_e$ wave function or the positron wave function in order for the total angular momentum on the right-hand side to equal the 3/2 on the left-hand side. This is known from basic nuclear physics \citep{boyd08} to introduce roughly an order of magnitude smaller cross section for the latter case compared to the former, and is the origin of the effect predicted for the SNAAP model. Detailed quantum molecular calculations have shown that the complex interactions of the molecules with the intense magnetic field of the nascent neutron star in a developing supernova or of the cooling neutron star following a supernova event and the electric field caused by the motion of the meteoroids through the magnetic field do produce an environment that is truly chiral \citep{barron86,barron08}. In this situation, the interactions of the $^{14}$N with the $\bar{\nu}_e$s are chirally selective, and will, at least in nearly every case, destroy more of the right-handed amino acids than the left-handed ones \citep{famiano18b}. The meteoroids that are processed by the anti-neutrinos can be as large as needed to survive the possibly intense fields of the stellar object they pass by or orbit. That isn't a particularly stringent assumption, since all that is needed is the magnetic field and the anti-neutrino flux, and there are several candidates that appear capable of satisfying those requirements: supernovae, cooling neutron stars, magnetars, Wolf-Rayet stars, and even ``silent supernovae;'' stars that are sufficiently massive that they collapse to black holes, develop strong magnetic fields, and emit the usual copious streams of neutrinos and anti-neutrinos while producing very few photons. 
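The angular momentum bookkeeping behind this chiral selectivity, described above, can be summarized as follows (spins in units of $\hbar$):
\begin{eqnarray*}
\mbox{antiparallel:} & \underbrace{\textstyle\frac{1}{2}}_{\bar{\nu}_e} \oplus \underbrace{1}_{^{14}\mathrm{N}} \rightarrow \textstyle\frac{1}{2} = \underbrace{0}_{^{14}\mathrm{C}} \oplus \underbrace{\textstyle\frac{1}{2}}_{e^+} & (L=0 \mbox{ allowed}),\\
\mbox{aligned:} & \textstyle\frac{1}{2} \oplus 1 \rightarrow \textstyle\frac{3}{2} \neq 0 \oplus \textstyle\frac{1}{2} & (L=1 \mbox{ required}).
\end{eqnarray*}
The aligned case can only conserve angular momentum if one unit of orbital angular momentum is carried by the lepton wave functions, which, as noted above, suppresses its cross section by roughly an order of magnitude relative to the antiparallel case.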
Calculations were performed \citep{famiano18a,famiano18b} with the quantum molecular code \texttt{Gaussian} to examine several possible ways in which the $^{14}$N, coupled to the molecular chirality, could undergo chirality-dependent destruction. This was done for twenty-one amino acids. The motion of the meteoroids in the magnetic field of the central object is critical, as it induces an electric field from the cross product of the velocity with that magnetic field. The angle that the nuclear magnetization makes with the anti-neutrino spin is then chirally dependent. The cross section for destruction of the $^{14}$N by the anti-neutrinos, hence of the molecule, depends on that angle, producing the chirality-dependent molecular destruction. The most promising scenario of the several studied \citep{famiano18a,famiano18b} appears to result from the coupling of the molecular electric dipole moment to the electric field induced in the meteoroid by its motion. This produces transverse magnetization components that differ between the two molecular chiral states. These components exist even without the coupling to the electric dipole moment \citep{buckingham04,buckingham06}, but that coupling enhances the difference between the angles that the two chiral states make with the anti-neutrino spin, hence the chirality-selective destruction of the amino acids \citep{famiano18a}. From the magnitude of these effects, one can determine the $ee$s that might be expected for amino acids from the SNAAP model. In principle, electron neutrinos could drive the $^{14}$N to $^{14}$O, but the threshold energy is higher for this reaction. Since the cross section for neutrino capture processes is proportional to the square of the energy above threshold \citep{boyd08}, this reaction has a smaller effect on the enantiomerism that results from the combined flux from anti-neutrinos and neutrinos.
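The $(E_\nu - E_{\mathrm{th}})^2$ scaling of the capture cross section can be sketched numerically. The threshold values below are illustrative placeholders only, not evaluated nuclear data; the key point is simply that the $\nu_e$ channel has the higher threshold and is therefore suppressed at a given neutrino energy.

```python
def capture_cross_section(e_nu, e_threshold, norm=1.0):
    """Schematic neutrino-capture cross section: proportional to the
    square of the energy above threshold, and zero below threshold."""
    excess = e_nu - e_threshold
    return norm * excess * excess if excess > 0 else 0.0

# Illustrative thresholds in MeV (placeholders, not measured values):
E_TH_ANTINU = 1.2  # nu_bar_e + 14N -> e+ + 14C (lower threshold)
E_TH_NU = 5.1      # nu_e + 14N -> e- + 14O (higher threshold)

# At a fixed neutrino energy, the higher-threshold channel is suppressed:
e = 10.0
ratio = capture_cross_section(e, E_TH_NU) / capture_cross_section(e, E_TH_ANTINU)
print(f"sigma(nu)/sigma(nubar) ~ {ratio:.2f} at E_nu = {e} MeV")
```

With these placeholder numbers the higher-threshold channel contributes only about a third as much at 10 MeV, and the suppression grows as the neutrino energy approaches the thresholds.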
\section{Results} \label{results} \paragraph{Can the SNAAP model produce $ee$s in the amino acids?} At present the quantum molecular calculations have assumed that the meteoroids pass by the central object, if it is a supernova or cooling neutron star, at mid-plane and normal to the axis that connects the poles. The resulting $ee$s, as high as one percent for the amino acid isovaline in an aqueous environment (as suggested by recent meteoritic analyses \citep{herd11}), are particularly noteworthy in that they are comparable to what is observed in the meteorites. However, if more sophisticated calculations fail to increase the predicted $ee$s over the one percent level, some autocatalysis will be necessary for the SNAAP model to explain the meteoritic $ee$s. \paragraph{Can the SNAAP model produce sufficiently large $ee$s that some autocatalysis can boost them to the levels observed in the meteoroids?} The required level of any $ee$-producing mechanism might be relaxed if autocatalysis \citep{frank53,kondepudi85,goldanskii89} can prevail in outer space. The experiments that have demonstrated autocatalysis \citep{soai95,soai02,breslow06,klussman06,soai14} have been performed in laboratory settings. Although the minimum $ee$ required for that to take effect is not known, it can be safely assumed to be less than the roughly one percent level in the experiments in which it has been demonstrated. Since the SNAAP model appears capable of producing $ee$s at roughly that level, the required $ee$ should not be a problem at all, unless autocatalysis is more restrictive in the cold confines of outer space than it is on Earth. Of course, that is a possibility. Thus experiments to determine the temperature dependence of autocatalysis would be very useful.
\paragraph{Can the SNAAP model predict that some of the carbonaceous chondrite meteorites that get to Earth will have nonzero ees?} \label{meteoroids} In order for any model to explain how some of the carbonaceous chondrite meteorites end up having ees, the model must either have a well-defined local source that can produce ees, or it must explain how it can process the space debris in some larger region of space. a) One possibility might be thought to be the processing of the planets around a single massive star as it becomes a supernova. KEPLER \citep{borucki16} has now detected planets around many stars. Thus it might be safe to argue that most, or at least many, stars do have planets associated with them. The inner ones will be completely processed by the anti-neutrinos, since nearly all of them will pass through any object, even a planet, as the star becomes a supernova. When the shock wave from the explosion hits the inner planets a few hours later, material will undoubtedly be spalled off, creating meteoroids. However, this model has a fundamental problem for the SNAAP model (and others) in that the magnetic field from the nascent neutron star extends to about 1 A.U., whereas the star, when it moves into its Red Giant phase will extend to about that same distance. Thus any meteoroids or planets that had any amino acids prior to the Red Giant phase would most likely have them destroyed when the star expanded. Although supernovae may be a major source of the galaxy's space debris, the amino acids in the resulting meteoroids would most likely have tiny enantiomeric excesses. b) Another possible scenario might result from a neutron star that is recoiling, after it has been produced in a supernova, typically at 1000 km/s or less, through the space debris of the galaxy for the 10$^5$ years it would be expected to continue to emit anti-neutrinos, processing each nearby floating planet \citep{sumi11} or piece of space rock as it goes. 
We investigated this scenario, but found that, even with generous estimates of the supernova frequency and the energies of the anti-neutrinos emitted by the cooling neutron star (they may be thermal, as described by \cite{bahcall65}, but may also have considerably higher energies from the nuclear processes taking place in the cooling star, as noted by \cite{fuller91,schatz14,misch16} and \cite{patton17}), the volume of the space that could be processed by all the neutron stars produced since the Big Bang was more than 10 orders of magnitude less than the volume of the galaxy. Furthermore, the space rocks so processed would be widely distributed, so would not be likely to populate a restricted region of space. c) A third possibility might be a Wolf-Rayet star. When the star became a supernova any amino acids that resided within a passing meteoroid or in the material surrounding the star within an A.U. of the star would be processed by the magnetic field and anti-neutrinos emitted. This does seem to be a plausible scenario for creating enantiomerism in the amino acids, although the trajectory of the passing meteoroid couldn't be too close to the hot star or too far from it to experience its magnetic field when it exploded. And dust grains within the surrounding cloud would have to have been in a sufficiently cool region for amino acids to form. d) Perhaps a more likely scenario is one in which a massive star exists as part of a close binary system in which the partner is a neutron star. In such a system, the neutron star gradually siphons off the outer layer of the massive star, producing a star that will ultimately become a Type Ib/c supernova, and creating an accretion disk around the neutron star \citep{wolsczan08}. The disk apparently ranges from close to, but slightly beyond, the disk surface of the neutron star \citep[see, e.g.,][]{ludlam17} to beyond 10$^5$ km \citep[see, e.g.,][]{pringle82}. 
The material would all be well inside the volume in which the combined magnetic field from the neutron star and from the supernova when it occurred would be sufficient to provide the necessary magnetic orientation, and close enough to the massive star to be subject to a robust anti-neutrino flux when it exploded. This scenario introduces a complex set of possibilities. Any planets that were in orbits around the massive star would lose some of their gravitational attraction to that star as its mass was transferred to the neutron star, so that those in outer orbits might assume new, possibly highly elongated, orbits around the binary-star system \citep[see, e.g.,][]{jain17}, or might undergo a hyperbolic trajectory pass-by of the neutron star. In either case, the planet might be shredded by the strong gravitational field gradient, or as it passed through the accretion disk, so the result might be to convert the mass of the planet into meteoroids. The accretion disk itself is thought to be a nursery for dust grains, meteoroids, and even planets \citep{lithwick09}, and the temperature falloff with radius in the disk, thought to be $r^{-3/4}$ for large enough distance from the neutron star \citep[see, e.g.,][]{mineshige94}, would eventually provide a sufficiently low temperature environment in the outer regions of the disk that racemic amino acids could form, awaiting the anti-neutrinos from the exploding supernova to create some enantiomerism. The anti-neutrinos emitted by the cooling neutron star might become thermal soon after the neutron star is created so, except for those far out on the high energy tail of the distribution, their energy would be insufficient to cause the conversion of $^{14}$N to $^{14}$C. However, as noted above, nuclear processes might modify that conclusion.
But when the massive star companion became a supernova, the matter in the accretion disk would all be well within the range of the neutrinos emitted from the supernova, which would process any amino acids that had developed in the accretion disk. Furthermore, the intense emissions from the X-ray binary and the shock wave from the supernova would surely cause sufficient disruption of at least some of the material in the disk to propel it beyond the gravitational well of the two stars. What would happen to the binary system that had now become two neutron stars? Recent gravitational wave and space-borne gamma-ray detectors \citep{abbott17,goldstein17,savchenko17} have shown that neutron star mergers can produce a huge abundance of neutron-rich material, and presumably enough of an accompanying shock wave to create a new stellar system, complete with r-process nuclides and enantiomeric amino acids, from the material ejected from what was originally two massive stars. e) A recent study \citep{schatz14} of the crust in a neutron star deserves special note. It suggests that the nuclei that are contained in the matter that is accreted from the companion star into the neutron star accretion disk, and subsequently onto the surface of the neutron star, would be absorbed into the surface region of the star. They would encounter the essentially pure neutron matter ultimately to a depth of about 150 meters, and would be driven to the neutron drip line by successive beta-decays and electron captures. The processes that would occur in one of the shells of the star would be \begin{eqnarray} (Z-1, A)&\rightarrow&(Z,A) + e^- + \bar{\nu}_e\\ \nonumber (Z,A) + e^-&\rightarrow&(Z-1, A) + \nu_e, \end{eqnarray} where $(Z,A)$ is a nucleus with proton number $Z$ and nucleon number $A$. The star is cooled by the emission of the neutrinos, $\nu_e$, and anti-neutrinos, $\bar{\nu}_e$.
As the nuclides are pushed more deeply into the neutron rich region below the crust, they become increasingly neutron rich until they reach the neutron drip line. The result could be a so-called URCA process \citep{gamow41} that would emit electron neutrinos and anti-neutrinos. The anti-neutrino end point energies would be expected to reach several MeV for some of the neutron-rich nuclides created. While the intensity of the resulting anti-neutrinos would not be as high as those emitted when the supernova explodes, they would be high enough in energy to process any amino acids that had been produced. Furthermore, the emission could continue for years, providing an additional opportunity for the electron anti-neutrinos to process any amino acids created in the accretion disk around the neutron star. Thus this scenario might enhance the enantiomerism produced in the accretion disk in a binary system discussed in Section \ref{meteoroids}d. \\ Could this model populate the entire Galaxy with enantiomeric amino acids? That is very doubtful. WR stars and binary systems of the type discussed are not extremely rare, but neither do they occur frequently. However, the meteoroids thrown out from the accretion disk of the binary system or the WR star would attain enough momentum from the Type Ib/c supernova to carry them to appreciable distances from the central system, and thus to populate a region that would ultimately be considerably larger than the Solar system. This would suggest that, although the potential for life would not be uniform throughout the Galaxy, there should be numerous pockets in which life might have been initiated as the enantiomeric amino acids were distributed around the binary star systems. Even though planets might lie in the Goldilocks zone, that is, within a temperature range that is neither too hot nor too cold for life to exist, they might not have amino acids that had received the necessary processing to make them enantiomeric.
However, there might also be systems, specifically remnants of binary massive star systems, that would be strong candidates for life. Indeed, if the SNAAP model is the correct description of amino acid enantiomeric excess production, remnants of such systems should provide good places for astronomers to search for chiral amino acids. \paragraph{Can the SNAAP model produce meteorites that can make it to Earth's surface?} Since the anti-neutrinos will have processed the entire meteoroid, no matter how large it was, the $ee$s established would prevail throughout its body. Thus, assuming that some of the resulting meteoroids would be large enough to suffer some ablation in passing through the Earth's atmosphere, whatever portion remained would carry the $ee$s it achieved prior to entering Earth's atmosphere. Dust grains would not be so fortunate; they would be likely to burn up before reaching the surface of the Earth. \section{Some Conclusions} \label{conclusion} Several effects that are beyond the scope of the current paper will be dealt with in future studies. These include more calculations of quantum molecular chemistry and the inclusion of time-changing magnetic fields. Although we cannot be sure how these will affect the $ee$s, those calculated in the simplified model assumed in \cite{famiano18a} were approaching the levels found in the meteorites. Thus the SNAAP model may require little, if any, outer space autocatalysis to produce the few percent $ee$s seen in meteorites. Perhaps the most troublesome aspect of the SNAAP model is that its $ee$ predictions are, at this stage, completely theoretical. Although the calculated $ee$s are the result of state-of-the-art quantum molecular codes, it would be helpful to the model if some experiments could be performed to demonstrate the viability of at least some of its predictions. Experiments do appear to be feasible, and are under consideration.
Nonetheless, there do seem to be several plausible sites that apparently could produce the necessary magnetic fields and anti-neutrino fluxes to convert the amino acids produced in the outer reaches of the accretion disk from racemic to the slightly enantiomeric values found in some of the meteorites that made it to the surface of the Earth. The predictions from this model are compelling. The enantiomeric levels achieved are approaching the levels seen in the meteorites, even without autocatalysis. And the possibility of a massive-star-neutron-star binary system being able to produce pockets of enantiomeric amino acids suggests that this might well be the origin of the molecules found in the meteorites, and perhaps even of those required to initiate life on early Earth. It might behoove astronomers, when they are able to detect amino acids in space, to direct their efforts to determine enantiomerism toward the regions around close massive-star-neutron-star binaries. We note that another scenario for producing enantiomeric amino acids in outer space, that of \cite{barron84} and \cite{rikken97}, would be facilitated by the sites we discuss above. This magneto-chiral dichroism model utilizes the light and a parallel magnetic field from a supernova to process the previously created amino acids. A single supernova would not suffice, for the same reason it does not suffice for the SNAAP model: the size of the Red Giant produced as one stage of the stellar evolution would extend beyond the region that could be served by the magnetic field of the nascent neutron star, a requirement of that model. However, the Wolf-Rayet star or a binary system obviates that issue, thus providing several sites described above for that model also. \acknowledgments The authors thank L. Nittler, F. Thielemann, H. Schatz, and L.D. Barron for helpful comments and suggestions.
MAF's work was supported by the National Astronomical Observatory of Japan, and by a WMU Faculty Research and Creative Activities Award (FRACAA). TK was supported partially by Grants-in-Aid for Scientific Research of JSPS (15H03665, 17K05459). \software{Gaussian \citep{g16}}
\section{Introduction} The Box-Ball System (BBS) is a one-dimensional cellular automaton in $\{0,1\}^{\mathbb{Z}}$ that was introduced by Takahashi and Satsuma in 1990 \cite{TS}, and has been extensively studied from the viewpoint of integrable systems. In particular, it is connected with the KdV equation \cite{KdV} \[\frac{\partial u}{\partial t}+6u\frac{\partial u}{\partial x}+\frac{\partial^3 u}{\partial x^3}=0,\ \ u=u(x,t),\ x,t\in\mathbb{R},\] which is a non-linear partial differential equation giving a mathematical model for waves on shallow water surfaces. The BBS equation of motion is obtained from the KdV equation by applying an appropriate discretization and transform \cite{TTMS}. The KdV equation has soliton solutions whose shape and speed are conserved after collision with other solitons, and such a phenomenon is also observed in the BBS. Now we present the original definition of the BBS from \cite{TS}. We denote a particle configuration by $(\eta_n)_{n\in\mathbb{Z}}\in \{0,1\}^{\mathbb{Z}}$ for the two-sided case or $(\eta_n)_{n\in\mathbb{N}}\in \{0,1\}^{\mathbb{N}}$ for the one-sided case. Specifically, we write $\eta_n = 1$ if there is a particle at site $n$, and $\eta_n = 0$ otherwise. On the condition that there is a finite number of particles, that is, $\sum_{n\in\mathbb{Z}}\eta_n<\infty$, the evolution of the BBS is described by an operator $T:\{0,1\}^{\mathbb{Z}}\rightarrow\{0,1\}^{\mathbb{Z}}$ that is characterized by the following BBS equation of motion, \[(T\eta)_{n}=\min\left\{1-\eta_{n},\sum_{m=-\infty}^{n-1}\left(\eta_m - (T\eta)_m\right)\right\},\] where we suppose $(T\eta)_n=0$ for $n\leq \inf\{l:\eta_l=1\}$, so the sums in the above definition are well-defined. In other words, the balls move sequentially from left to right, that is, from negative to positive, with each being transported to the leftmost unoccupied site to its right as follows. 
\vspace{10pt} \makebox[2.6cm][r]{$\eta=\ $}$(\cdots\ 0\ 1\ 1\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T\eta=\ $}$(\cdots\ 0\ 0\ 0\ 0\ 1\ 1\ 1\ 0\ 0\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T^2\eta=\ $}$(\cdots\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 1\ 1\ 1\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T^3\eta=\ $}$(\cdots\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 1\ 1\ 0\ 1\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T^4\eta=\ $}$(\cdots\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 1\ 0\ 0\ 1\ 1\ 1\ 0\ 0\ 0\ \cdots)$ \vspace{10pt} \hspace{-13pt}This example exhibits a string of $3$ consecutive balls, called a soliton, moving distance $3$ in each time step when there is no interaction, and recovering its shape and speed after a collision with another soliton (of length $1$). In this paper we consider a generalization of the BBS that incorporates multiple colors of balls, that is, we assume that there are $\kappa$-color balls (particles) for some $\kappa\in\mathbb{N}$. This model is called the multicolor BBS and was introduced in \cite{TaK}, as a generalization of the original $\kappa=1$ BBS first introduced in \cite{T}. In this model, particle configurations are given by $(\eta_n)_{n\in\mathbb{Z}}\in \{0,1,\cdots,\kappa\}^{\mathbb{Z}}$, where we suppose that the numbers $1,\cdots,\kappa$ represent the colors of the balls and $0$ represents the empty site. For each $i=1,\cdots,\kappa$, we define the operator $T_i$ under which the balls of color $i$ move from left to right, with each being transported to the leftmost unoccupied site to its right, with balls of other colors remaining static. The dynamics of the multicolor BBS are then defined by the operator $T=T_\kappa\circ\cdots\circ T_1$.
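The dynamics above can be checked numerically. The following sketch (an illustration, not part of the original formulation) implements one time step of the BBS via the standard carrier description, which is equivalent to moving the balls sequentially from left to right; the assertions reproduce the soliton example above and verify the BBS equation of motion on this finite configuration.

```python
def bbs_step(eta):
    """One BBS time step: a carrier sweeps left to right, picking up
    every ball it meets and dropping one ball at each empty site."""
    out, load = [], 0
    for x in eta:
        if x == 1:
            load += 1          # pick up the ball
            out.append(0)
        elif load > 0:
            load -= 1          # drop a ball at the empty site
            out.append(1)
        else:
            out.append(0)
    return out

# The length-3 soliton of the example above, together with a length-1 soliton.
eta = [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
t1 = bbs_step(eta)
assert t1 == [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# The carrier step satisfies the BBS equation of motion
# (T eta)_n = min{1 - eta_n, sum_{m<n} (eta_m - (T eta)_m)}.
for n in range(len(eta)):
    assert t1[n] == min(1 - eta[n], sum(eta[m] - t1[m] for m in range(n)))
```

Iterating four times reproduces the displayed configuration $T^4\eta$, with both solitons restored after the collision.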
For example, the evolution of the BBS with 3-color balls is as follows. \vspace{10pt} \makebox[2.6cm][r]{$\eta=\ $}$(\cdots\ 0\ 1\ 2\ 0\ 3\ 1\ 3\ 2\ 0\ 3\ 0\ 1\ 1\ 2\ 3\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T_1\eta=\ $}$(\cdots\ 0\ 0\ 2\ 1\ 3\ 0\ 3\ 2\ 1\ 3\ 0\ 0\ 0\ 2\ 3\ 1\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T_2\circ T_1\eta=\ $}$(\cdots\ 0\ 0\ 0\ 1\ 3\ 2\ 3\ 0\ 1\ 3\ 2\ 0\ 0\ 0\ 3\ 1\ 1\ 2\ 0\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T\eta=T_3\circ T_2\circ T_1\eta=\ $}$(\cdots\ 0\ 0\ 0\ 1\ 0\ 2\ 0\ 3\ 1\ 0\ 2\ 3\ 3\ 0\ 0\ 1\ 1\ 2\ 3\ 0\ 0\ 0\ 0\ 0\ \cdots)$ \makebox[2.6cm][r]{$T^2\eta=\ $}$(\cdots\ 0\ 0\ 0\ 0\ 1\ 0\ 2\ 0\ 3\ 1\ 0\ 0\ 0\ 2\ 3\ 3\ 0\ 0\ 0\ 1\ 1\ 2\ 3\ 0\ \cdots)$ \vspace{10pt} \hspace{-13pt}where $T=T_3\circ T_2\circ T_1$. In the multicolor case, a string of consecutive balls of non-decreasing colors is called a soliton and shows the same behavior as in the 1-color case. The multicolor BBS with a finite number of balls has been well studied, mostly in the context of integrable systems (see, e.g., the review article \cite{IKT} or the textbook on the BBS \cite{Tokihiro}). Recently, \cite{KLO} and \cite{AH}, \cite{LLPS} considered the multicolor BBS with one-sided random initial configuration and derived scaling limits of probability measures on the space of $\kappa$-tuples of Young diagrams induced by the random configuration. Later, we introduce the two-sided version of the multicolor BBS, which is one of the main contributions of this paper. The dynamics of the one-color BBS have been extended to two-sided infinite configurations and studied when the initial condition is random \cite{DS,F}. In the paper \cite{DS} for the one-color BBS, the particle configuration is encoded by a certain path $S=(S_n)_{n\in\mathbb{Z}}$ in $\mathbb{Z}$ and the action $T$ of the BBS is defined via an operation on the path space.
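The operators $T_i$ admit the same carrier description, applied one color at a time: a carrier for color $i$ sweeps left to right, treating balls of the other colors as fixed obstacles. The following sketch (an illustration under this description, not code from the paper) reproduces the 3-color example above.

```python
def move_color(eta, i):
    """Operator T_i: a carrier sweeps left to right, picking up every
    ball of color i and dropping one at each empty site; balls of
    other colors are left untouched."""
    out, load = [], 0
    for x in eta:
        if x == i:
            load += 1
            out.append(0)
        elif x == 0 and load > 0:
            load -= 1
            out.append(i)
        else:
            out.append(x)
    return out

def multicolor_step(eta, kappa):
    """One time step T = T_kappa o ... o T_1 of the multicolor BBS."""
    for i in range(1, kappa + 1):
        eta = move_color(eta, i)
    return eta

# The 3-color example from the text.
eta = [0, 1, 2, 0, 3, 1, 3, 2, 0, 3, 0, 1, 1, 2, 3] + [0] * 9
assert move_color(eta, 1) == [0, 0, 2, 1, 3, 0, 3, 2, 1, 3, 0, 0, 0, 2, 3, 1, 1] + [0] * 7
assert multicolor_step(eta, 3) == [0, 0, 0, 1, 0, 2, 0, 3, 1, 0, 2, 3, 3, 0, 0, 1, 1, 2, 3] + [0] * 5
```

A second application of `multicolor_step` gives the displayed configuration $T^2\eta$, showing the non-decreasing strings travelling as solitons.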
Moreover, a formal inverse $T^{-1}$ of $T$ is defined, and the class of configurations $S$ such that $TS$ and $T^{-1}S$ are well-defined and reversible for all times, i.e. \[\{S=(S_n)_{n\in\mathbb{Z}} \in \mathbb{R}^{\mathbb{Z}}\::\:T^{k}S \mbox{\ is well defined and }TT^{-1}(T^{k}S)=T^{-1}T(T^{k}S)=T^{k}S,\ \forall k\in\mathbb{Z}\} ,\] is precisely characterized. Within this framework, random initial conditions such that almost all paths are in this class are studied from the viewpoint of invariance under $T$, the current of particles crossing the origin, and the speed of a single tagged particle. Such an extended analysis was made possible thanks to the connection identified between the BBS dynamics and Pitman's transformation. Indeed, in \cite{DS}, the action $T$ on the path space is shown to correspond to the operation of reflection in the past maximum of the path, which is precisely the operation known as the Pitman transform. The Pitman transform was introduced in \cite{Pitman} and appears in the well-known Pitman's theorem, which states that if $(B_t)_{t\ge0}$ is a one-dimensional Brownian motion, then the stochastic process $(2\sup_{0\le s\le t}B_s-B_t)_{t\ge0}$ is a three-dimensional Bessel process, i.e. is distributed as the Euclidean norm of a three-dimensional Brownian motion. This transform has been generalized to the multidimensional case by Biane \cite{BBC}, and in this paper, we show that the actions of the multicolor BBS can be described by the multidimensional Pitman transform. We start by introducing the one-sided and two-sided Pitman transforms for the multicolor BBS theory (Sections \ref{one-sided pitman}, \ref{two-sided pitman and its inverse}).
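The correspondence from \cite{DS} can be illustrated on the finite example of the introduction. Under the encoding $S_n-S_{n-1}=1$ if $\eta_n=0$ and $S_n-S_{n-1}=-1$ if $\eta_n=1$ (recalled in Section \ref{pathsec}), reflecting $S$ in its past maximum and decoding recovers $T\eta$. The following sketch is only a numerical check of this on a finite window, not a general implementation.

```python
def encode(eta):
    # Path encoding of \cite{DS}: up-step at an empty site, down-step at a ball.
    s, path = 0, [0]
    for x in eta:
        s += 1 - 2 * x
        path.append(s)
    return path

def reflect_in_past_max(path):
    # (TS)_n = 2 max_{0<=m<=n} S_m - S_n: reflection in the past maximum.
    out, running_max = [], path[0]
    for s in path:
        running_max = max(running_max, s)
        out.append(2 * running_max - s)
    return out

def decode(path):
    # A down-step of the path corresponds to a ball.
    return [1 if path[n + 1] < path[n] else 0 for n in range(len(path) - 1)]

# The configuration of the introductory example, started at the first ball.
eta = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0]
t_eta = decode(reflect_in_past_max(encode(eta)))
assert t_eta == [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0]
```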
Next, as in the case of the one-color BBS, we show that particle configurations of the multicolor BBS can be encoded by a certain path in $\mathbb{R}^{\kappa}$ (Sections \ref{Vectors for path encodings}, \ref{one-sided configuration}) and that the action $T_i$ corresponds to the composition of the extended Pitman transform and a certain operator (Sections \ref{one-sided BBS}, \ref{two-sided BBS}). Moreover, we characterize the set of configurations for which the actions $T_1,T_2,\cdots,T_\kappa$ are well-defined and reversible for all times (Section \ref{invariant set}). Then, we give an example of a random initial condition that is invariant in distribution under the dynamics of the multicolor BBS (Section \ref{iid}). Finally, we consider a generalization of the multicolor BBS that is defined for continuous paths on $\mathbb{R}$ (Section \ref{onR}), and show that $\kappa$-dimensional Brownian motion with a certain drift is invariant under the action of the generalized multicolor BBS (Section \ref{BMD}). Regarding notational conventions, we distinguish $\mathbb{N}=\{1,2,\dots\}$ and $\mathbb{Z}_+=\{0,1,\dots\}$. \section{Pitman transform}\label{pitman} In this section, we introduce the Pitman transform and the extended versions of it that will be used for the path encoding of the particle configuration in the subsequent sections. We start by defining the one-sided Pitman transform and studying its properties (Section \ref{one-sided pitman}). Then, in Section \ref{two-sided pitman and its inverse}, we define the two-sided Pitman transform and examine its inverse on an appropriate set. \subsection{One-sided Pitman transform}\label{one-sided pitman} We first recall the definition of the multidimensional version of the Pitman transform introduced by Biane \cite{BBC}. {\df \label{original pitman} Suppose that ${\mathbb{R}}^k$ is $k$-dimensional Euclidean space with dual space $V$ and let $\alpha\in {\mathbb{R}}^k, \alpha^*\in V$ be such that $\alpha^*(\alpha)=2$.
The Pitman transform $P_{\alpha,\alpha^*}$ is defined on the set of continuous paths $\pi:[0,T]\to {\mathbb{R}}^k$, satisfying $\pi(0)=0$, by the formula, \[P_{\alpha,\alpha^*}\pi(t)=\pi(t)-\inf_{0\leq s\leq t}\alpha^*(\pi(s))\alpha,\ \ \ 0\leq t\leq T. \]} For the multicolor BBS theory, we take the domain of $\pi$ as $\mathbb{Z}_+$ and $\alpha^{*}$ as the inner product with $\frac{\alpha}{|\alpha|^2}$ in the above definition, and define the one-sided Pitman transform. {\df \label{oneside pitman} Let $\alpha\in{\mathbb{R}}^k$ be such that $\alpha\ne0$. The one-sided Pitman transform with respect to $\alpha$ is defined on the set of discrete paths $\pi:\mathbb{Z}_+\to {\mathbb{R}}^k$, satisfying $\pi(0)=0$, by the formula, \[P_\alpha\pi(n)=\pi(n)-2\inf_{0\leq m\leq n}\frac{\alpha\cdot\pi(m)}{|\alpha|^2}\alpha,\ \ \ n\geq 0, \] where $\alpha\cdot\pi(m)$ is the inner product of $\alpha$ and $\pi(m)$, and $|\alpha|^2=\alpha\cdot\alpha$.} {\exmp \label{1dimpitman} For any $\alpha\in\mathbb{R},\ \alpha\ne0$, the one-sided Pitman transform is given by\[P_\alpha\pi(n)=\pi(n)-2\inf_{0\leq m\leq n}\pi(m),\ \ \ n\geq 0, \]for $\pi:\mathbb{Z}_+\to {\mathbb{R}}$, satisfying $\pi(0)=0$. Therefore the one-sided Pitman transform $P_\alpha$ on 1-dimensional Euclidean space does not depend on $\alpha$. We write it as $P_1$. (See Figure 1.)} {\df \label{1dim oneside pitman}\[P_1:=P_\alpha\ \ \ \mbox{for}\ \alpha \in\mathbb{R},\ \alpha\ne0.\] That is, \begin{equation}\label{half P_1} P_1\pi(n)=\pi(n)-2\inf_{0\leq m\leq n}\pi(m),\ \ \ n\geq 0, \end{equation} for $\pi:\mathbb{Z}_+\to {\mathbb{R}}$, satisfying $\pi(0)=0$.} \begin{figure}[H] \centering \scalebox{0.40}{\includegraphics{pitman1.pdf}} \vspace{10pt} \caption{$P_1\pi(n)=\pi(n)-2\inf_{0\leq m\leq n}\pi(m).$} \end{figure} Next, we establish a useful property of the one-sided Pitman transform for considering the actions of the BBS.
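Before doing so, we note that the scalar transform \eqref{half P_1} is straightforward to compute by tracking the running infimum. The following sketch (illustrative only) evaluates it on a short path with increments in $\{0,\pm1\}$ and checks a hallmark of the transform, namely that the image path dominates $|\pi|$ pointwise and is in particular non-negative.

```python
def pitman_one_sided(pi):
    """(P_1 pi)(n) = pi(n) - 2 * inf_{0<=m<=n} pi(m), as in eq. (half P_1)."""
    out, running_min = [], pi[0]
    for x in pi:
        running_min = min(running_min, x)
        out.append(x - 2 * running_min)
    return out

pi = [0, -1, -2, -1, 0, -1, 0, 1]
out = pitman_one_sided(pi)
assert out == [0, 1, 2, 3, 4, 3, 4, 5]
# Since the running infimum is <= 0 and <= pi(n), the transformed path
# dominates |pi| pointwise, in the spirit of Pitman's theorem.
assert all(y >= abs(x) for x, y in zip(pi, out))
```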
{\prop\label{half orthogonal} Let $k\geq2$ and $\pi_\alpha(n):=\frac{\alpha\cdot\pi(n)}{|\alpha|^2}$ for $\alpha\in\mathbb{R}^k$, $\alpha\ne0$. A path $\pi:\mathbb{Z}_+\to {\mathbb{R}}^k$ is decomposed into the sum of the vector projection of $\pi(n)$ along $\alpha$ and the vector orthogonal to $\alpha$\:: \[\pi(n)=\pi_\alpha(n)\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}\] for any $n\geq0$. Then, it holds that \[P_\alpha\pi(n)=\left\{P_1\pi_\alpha(n)\right\}\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}.\](See Figure 2.)} \begin{proof} \begin{align*} P_\alpha\pi(n)&=\pi(n)-2\inf_{0\leq m\leq n}\frac{\alpha\cdot\pi(m)}{|\alpha|^2}\alpha\\ &=\pi_\alpha(n)\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}-2\inf_{0\leq m\leq n}\pi_\alpha(m)\alpha\\ &=\left\{\pi_\alpha(n)-2\inf_{0\leq m\leq n}\pi_\alpha(m)\right\}\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}\\ &=\left\{P_1\pi_\alpha(n)\right\}\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}. \end{align*} \end{proof} \begin{figure}[H] \centering \scalebox{0.40}{\includegraphics{pitman2.pdf}} \vspace{0pt} \caption{$P_\alpha\pi(n)=\left\{P_1\pi_\alpha(n)\right\}\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}.$} \end{figure} \subsection{Two-sided Pitman transform and its inverse}\label{two-sided pitman and its inverse} This section provides the two-sided Pitman transform and its inverse on an appropriate set. {\df \label{twoside pitman} Let $\alpha\in{\mathbb{R}}^k,\ \alpha\ne0$.
The two-sided Pitman transform with respect to $\alpha$ is defined on the set of discrete paths \[\{\pi:\mathbb{Z}\to {\mathbb{R}}^k,\ \pi(0)=0,\ \inf_{m\leq0}\alpha\cdot\pi(m)>-\infty\}\]by the formula, \[P_\alpha\pi(n)=\pi(n)-2\inf_{m\leq n}\frac{\alpha\cdot\pi(m)}{|\alpha|^2}\alpha+2\inf_{m\leq 0}\frac{\alpha\cdot\pi(m)}{|\alpha|^2}\alpha,\ \ \ n\in\mathbb{Z} .\] Similarly to Example \ref{1dimpitman}, it holds that \[P_\alpha\pi(n)=\pi(n)-2\inf_{m\leq n}\pi(m)+2\inf_{m\leq 0}\pi(m),\ \ \ n\in\mathbb{Z},\] for any $\alpha\in\mathbb{R},\ \alpha\ne0$, and it does not depend on $\alpha$. Then we define \[P_1:=P_\alpha\ \ \ \mbox{for}\ \alpha \in\mathbb{R},\ \alpha\ne0.\] That is, \begin{equation}\label{P_1} P_1\pi(n)=\pi(n)-2\inf_{m\leq n}\pi(m)+2\inf_{m\leq 0}\pi(m),\ \ \ n\in\mathbb{Z}, \ \end{equation} for $\pi:\mathbb{Z}\to {\mathbb{R}}$, satisfying $\pi(0)=0,\ \inf_{m\leq 0}\pi(m)>-\infty$. } Next, we introduce a new transform which will be the inverse of the two-sided Pitman transform on an appropriate set. {\df \label{inverse pitman} Let $\alpha\in{\mathbb{R}}^k,\ \alpha\ne0$. Define the transform ${P_\alpha}^{-1}$ on the set of discrete paths \[\{\pi:\mathbb{Z}\to {\mathbb{R}}^k,\ \pi(0)=0,\ \inf_{m\geq0}\alpha\cdot\pi(m)>-\infty\}\]by the formula, \[{P_\alpha}^{-1}\pi(n)=\pi(n)-2\inf_{m\geq n}\frac{\alpha\cdot\pi(m)}{|\alpha|^2}\alpha+2\inf_{m\geq 0}\frac{\alpha\cdot\pi(m)}{|\alpha|^2}\alpha,\ \ \ n\in\mathbb{Z} .\] In this case, it also holds that \[P^{-1}_\alpha\pi(n)=\pi(n)-2\inf_{m\geq n}\pi(m)+2\inf_{m\geq 0}\pi(m),\ \ \ n\in\mathbb{Z},\] for any $\alpha\in\mathbb{R},\ \alpha\ne0$, and it does not depend on $\alpha$.
Then we define \[{P_1}^{-1}:=P^{-1}_\alpha\ \ \ \mbox{for}\ \alpha \in\mathbb{R},\ \alpha\ne0.\] That is, \begin{equation}\label{P^-1_1} {P_1}^{-1}\pi(n)=\pi(n)-2\inf_{m\geq n}\pi(m)+2\inf_{m\geq 0}\pi(m),\ \ \ n\in\mathbb{Z}, \ \end{equation} for $\pi:\mathbb{Z}\to {\mathbb{R}}$, satisfying $\pi(0)=0,\ \inf_{m\geq0}\pi(m)>-\infty$.} {\rem \label{orthogonal}With the same notation as Proposition \ref{half orthogonal}, it holds that \[P_\alpha\pi(n)=\left\{P_1\pi_\alpha(n)\right\}\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\},\] \[P^{-1}_\alpha\pi(n)=\left\{P^{-1}_1\pi_\alpha(n)\right\}\alpha+\left\{\pi(n)-\pi_\alpha(n)\alpha\right\}.\] Therefore, $P_1 {P_1}^{-1}=\mathrm{id.}$ on some set $E_\alpha$ implies $P_\alpha {P_\alpha}^{-1}=\mathrm{id.}$ on $\{\pi :\mathbb{Z}\to {\mathbb{R}}^k\::\:\pi_\alpha\in\ E_\alpha\}$, and ${P_1}^{-1}P_1=\mathrm{id.}$ on some set $F_\alpha$ implies $ {P_\alpha}^{-1}P_\alpha=\mathrm{id.}$ on $\{\pi :\mathbb{Z}\to {\mathbb{R}}^k\::\:\pi_\alpha\in\ F_\alpha\}$. } {\df We define the domain of $P_1$ and ${P_1}^{-1}$, and their subsets, \begin{equation}\label{P} \mathcal{R}^{P_1}:=\{\pi:\mathbb{Z}\to {\mathbb{R}},\ \ \pi(0)=0,\ \inf_{m\leq0}\pi(m)>-\infty\}, \end{equation} \begin{equation}\label{P'} \mathcal{R}^{{P_1}^{-1}}:=\{\pi:\mathbb{Z}\to {\mathbb{R}},\ \ \pi(0)=0,\ \inf_{m\geq0}\pi(m)>-\infty\}, \end{equation} \begin{equation}\label{P'P} \mathcal{R}^{{P_1}^{-1}P_1}:=\{\pi\in\mathcal{R}^{P_1}\::\:|\pi(n+1)-\pi(n)|\in\{0,1\},\ \forall n,\ \inf_{m\leq n}\pi(m)=\pi(n)\ \ \mbox{i.o.\ as}\ \ n\rightarrow\infty\}, \end{equation} \begin{equation}\label{PP'} \mathcal{R}^{P_1{P_1}^{-1}}:=\{\pi\in\mathcal{R}^{{P_1}^{-1}}\::\:|\pi(n+1)-\pi(n)|\in\{0,1\},\ \forall n,\ \inf_{m\geq n}\pi(m)=\pi(n)\ \ \mbox{i.o.\ as}\ \ n\rightarrow-\infty\}. \end{equation}} We prepare the following proposition to guarantee that ${P_1}^{-1}P_1$ and $P_1{P_1}^{-1}$ are well-defined on $\mathcal{R}^{P_1}$ and $\mathcal{R}^{{P_1}^{-1}}$ respectively.
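The two transforms can be illustrated on a finite window. The sketch below is illustrative only (the genuine objects are defined on all of $\mathbb{Z}$): it computes \eqref{P_1} and \eqref{P^-1_1} using window infima, and the chosen path decreases to the left of the origin and ends at a running minimum, so all the relevant infima are attained inside the window and ${P_1}^{-1}P_1\pi=\pi$ holds exactly.

```python
def pitman_two_sided(pi, n0):
    """Finite-window version of eq. (P_1); pi[n0] plays the role of pi(0) = 0."""
    left_min = min(pi[:n0 + 1])  # inf_{m <= 0} pi(m), attained in the window
    out, running_min = [], pi[0]
    for x in pi:
        running_min = min(running_min, x)
        out.append(x - 2 * running_min + 2 * left_min)
    return out

def pitman_inverse(pi, n0):
    """Finite-window version of eq. (P^-1_1): future infima replace past infima."""
    right_min = min(pi[n0:])     # inf_{m >= 0} pi(m)
    out, future_min = [0] * len(pi), pi[-1]
    for n in range(len(pi) - 1, -1, -1):
        future_min = min(future_min, pi[n])
        out[n] = pi[n] - 2 * future_min + 2 * right_min
    return out

# Window n = -5, ..., 10, with the origin at index 5 and pi(0) = 0.
pi = [5, 4, 3, 2, 1, 0, -1, 0, -1, -2, -1, -2, -3, -2, -3, -4]
rho = pitman_two_sided(pi, 5)
assert rho == [-5, -4, -3, -2, -1, 0, 1, 2, 1, 2, 3, 2, 3, 4, 3, 4]
assert pitman_inverse(rho, 5) == pi   # P_1^{-1} P_1 = id on this path
```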
{\prop It holds that \[P_1\left(\mathcal{R}^{P_1}\right)\subseteq\mathcal{R}^{{P_1}^{-1}},\] \[{P_1}^{-1}\left(\mathcal{R}^{{P_1}^{-1}}\right)\subseteq\mathcal{R}^{P_1}.\]} \begin{proof} Suppose that $n\geq0$ and $\pi\in\mathcal{R}^{P_1}$. Since $\inf_{m\leq n}\pi(m)\leq \inf_{m\leq 0}\pi(m)$, we have \begin{align*} P_1\pi(n)&=\pi(n)-2\inf_{m\leq n}\pi(m)+2\inf_{m\leq 0}\pi(m)\\ &\geq \pi(n). \end{align*} On the other hand, since $\inf_{m\leq n}\pi(m)\leq \pi(n)$, we have \begin{align*} P_1\pi(n)&=\pi(n)-2\inf_{m\leq n}\pi(m)+2\inf_{m\leq 0}\pi(m)\\ &\geq -\pi(n)+2\inf_{m\leq 0}\pi(m). \end{align*} The above two inequalities show \[P_1\pi(n)-\inf_{m\leq 0}\pi(m)\geq \pm\left\{-\pi(n)+\inf_{m\leq 0}\pi(m)\right\},\] and hence \[P_1\pi(n)\geq\inf_{m\leq 0}\pi(m).\] This shows the first claim; the second can be proved in the same way. \end{proof} {\thm\label{inversemap} It holds that \[{P_1}^{-1}P_1 =\mathrm{id.}\ \ \mbox{on}\ \ \mathcal{R}^{{P_1}^{-1}P_1},\] \[P_1 {P_1}^{-1}=\mathrm{id.}\ \ \mbox{on}\ \ \mathcal{R}^{P_1{P_1}^{-1}}.\]} \begin{proof} Let $\pi\in\mathcal{R}^{{P_1}^{-1}P_1}$. Define the sequence\[\lambda_x=\inf_{m\in\mathbb{Z}}\{m\::\:\pi(m)=x\}\ \ \mbox{for}\ x\in\mathbb{Z},\] with the convention that $\inf \emptyset= \infty$. (See Figures 3 and 4.) Then, the sequence satisfies one of the following four conditions\:: \begin{align*} &(1)\ \ \cdots<\lambda_{x+1}<\lambda_{x}<\lambda_{x-1}<\cdots \\ &(2)\ \ -\infty=\lambda_{s}<\lambda_{s-1}<\cdots<\lambda_{x+1}<\lambda_{x}<\lambda_{x-1}<\cdots \\ &(3)\ \ \cdots<\lambda_{x+1}<\lambda_{x}<\lambda_{x-1}<\cdots<\lambda_{t+1}<\lambda_{t}<\lambda_{t-1}=\infty \\ &(4)\ \ -\infty=\lambda_{s}<\lambda_{s-1}<\cdots<\lambda_{x+1}<\lambda_{x}<\lambda_{x-1}<\cdots<\lambda_{t}<\lambda_{t-1}=\infty \end{align*} where $s=\liminf_{n\rightarrow-\infty}\pi(n)$ when it is bounded and $t=\liminf_{n\rightarrow\infty}\pi(n)$ when it is bounded.
The condition $\inf_{m\leq n}\pi(m)=\pi(n)\ \ i.o.\ as\ \ n\rightarrow\infty$ implies $s\le t$, and if $s=t$, it is the case that $-\infty=\lambda_{s}=\lambda_{t}<\lambda_{t-1}=\infty$. If (1)\::\:$n=\lambda_{x}$, for some $x$, it holds that \[P_1\pi(n)=P_1\pi(\lambda_{x})=-\pi(\lambda_{x})+2\inf_{m\leq0}\pi(m)=-\pi(n)+2\inf_{m\leq0}\pi(m)\] and also it holds that \[P_1\pi(\lambda_{x})>P_1\pi(\lambda_{x+1})\ \ \mbox{for any}\ -\infty<\lambda_{x+1}<\lambda_{x}<\infty.\] If (2)\::\:$-\infty<\lambda_{x+1}<n<\lambda_{x}<\infty$ for some $x$, it holds that \begin{align*} P_1\pi(n)=\pi(n)-2\pi(\lambda_{x})+2\inf_{m\leq0}\pi(m)&\geq P_1\pi(\lambda_{x})\\ &=-\pi(\lambda_{x})+2\inf_{m\leq0}\pi(m). \end{align*} If (3)\::\:$n<\lambda_{s-1}$, it holds that \begin{align*} P_1\pi(n)=\pi(n)-2s+2\inf_{m\leq0}\pi(m)&\geq P_1\pi(\lambda_{s-1})-1\\ &=-(s-1)+2\inf_{m\leq0}\pi(m)-1 \end{align*} If (4)\::\:$n>\lambda_{t}$, it holds that \[P_1\pi(n)=\pi(n)-2t+2\inf_{m\leq0}\pi(m).\] and also it holds that \[P_1\pi(n)=P_1\pi(\lambda_{t})\ \ i.o.\ as\ \ n\rightarrow\infty.\] From the above discussion, it holds that \[\inf_{m\ge n}P_1\pi(m)=\left\{\begin{array}{ll} -\pi(n)+2\inf_{m\leq0}\pi(m), & \mbox{if }(1),\\ -\pi(\lambda_{x})+2\inf_{m\leq0}\pi(m), & \mbox{if }(2),\\ -s+2\inf_{m\leq0}\pi(m), & \mbox{if }(3),\\ -t+2\inf_{m\leq0}\pi(m), & \mbox{if }(4). \end{array}\right.\] Therefore, if (1), \begin{align*} &{P_1}^{-1}P_1\pi(n)=P_1\pi(n)-2\inf_{m\geq n}P_1\pi(m)+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\left\{-\pi(n)+2\inf_{m\leq0}\pi(m)\right\}-2\left\{-\pi(n)+2\inf_{m\leq0}\pi(m)\right\}+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\pi(n)-2\inf_{m\leq0}\pi(m)+2\inf_{m\geq 0}P_1\pi(m). \end{align*} If (2), \begin{align*} &{P_1}^{-1}P_1\pi(n)=P_1\pi(n)-2\inf_{m\geq n}P_1\pi(m)+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\left\{\pi(n)-2\pi(\lambda_{x})+2\inf_{m\leq0}\pi(m)\right\}-2\left\{-\pi(\lambda_{x})+2\inf_{m\leq0}\pi(m)\right\}+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\pi(n)-2\inf_{m\leq0}\pi(m)+2\inf_{m\geq 0}P_1\pi(m). 
\end{align*} If (3), \begin{align*} &{P_1}^{-1}P_1\pi(n)=P_1\pi(n)-2\inf_{m\geq n}P_1\pi(m)+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\left\{\pi(n)-2s+2\inf_{m\leq0}\pi(m)\right\}-2\left\{-s+2\inf_{m\leq0}\pi(m)\right\}+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\pi(n)-2\inf_{m\leq0}\pi(m)+2\inf_{m\geq 0}P_1\pi(m). \end{align*} If (4), \begin{align*} &{P_1}^{-1}P_1\pi(n)=P_1\pi(n)-2\inf_{m\geq n}P_1\pi(m)+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\left\{\pi(n)-2t+2\inf_{m\leq0}\pi(m)\right\}-2\left\{-t+2\inf_{m\leq0}\pi(m)\right\}+2\inf_{m\geq 0}P_1\pi(m)\\ ={}&\pi(n)-2\inf_{m\leq0}\pi(m)+2\inf_{m\geq 0}P_1\pi(m). \end{align*} Therefore it is enough to show that \[\inf_{m\leq0}\pi(m)=\inf_{m\geq 0}P_1\pi(m),\] and it is obtained by the following inequalities\:: \begin{align*} \inf_{m\geq 0}P_1\pi(m)&=\inf_{m\geq 0}\left\{\pi(m)-2\inf_{l\leq m}\pi(l)+2\inf_{l\leq 0}\pi(l)\right\}\\ &\geq\inf_{m\geq 0}\left\{\pi(m)-\left(\inf_{l\leq 0}\pi(l)+\inf_{0\leq l\leq m}\pi(l)\right)+2\inf_{l\leq 0}\pi(l)\right\}\\ &=\inf_{m\geq 0}\left\{\pi(m)-\inf_{0\leq l\leq m}\pi(l)\right\}+\inf_{l\leq0}\pi(l)\\ &\geq\inf_{l\leq0}\pi(l). \end{align*} On the other hand, by the conditions on $\mathcal{R}^{{P_1}^{-1}P_1}$, there exists $m_1\geq0$ such that $\pi(m_1)=\inf_{l\leq m_1}\pi(l)=\inf_{l\leq 0}\pi(l)$, and then \begin{align*} \inf_{m\geq 0}P_1\pi(m)&=\inf_{m\geq 0}\left\{\pi(m)-2\inf_{l\leq m}\pi(l)+2\inf_{l\leq 0}\pi(l)\right\}\\ &\leq \pi(m_1)-2\inf_{l\leq m_1}\pi(l)+2\inf_{l\leq 0}\pi(l)\\ &=\inf_{l\leq 0}\pi(l). \end{align*} We can prove the second claim in the same way.
\end{proof} \begin{figure} \centering \scalebox{0.40}{\includegraphics{pitman3.pdf}} \vspace{-20pt} \caption{Example of the sequence $\{\lambda_x\}$ with $\pi(n),\ \inf_{m\leq n}\pi(m)$.} \end{figure} \begin{figure} \centering \scalebox{0.40}{\includegraphics{pitman4.pdf}} \vspace{-20pt} \caption{The sequence $\{\lambda_x\}$ in Figure 3 with $P_1\pi(n)$.} \end{figure} {\rem The condition $|\pi(n+1)-\pi(n)|\in\{0,1\}$ in $\mathcal{R}^{{P_1}^{-1}P_1}$ and $\mathcal{R}^{P_1{P_1}^{-1}}$ can be replaced by $|\pi(n+1)-\pi(n)|\in\{0,c\}$ with any positive constant $c$ for Theorem \ref{inversemap} to hold.} {\rem The condition \begin{equation}\label{pi} \inf_{m\leq n}\pi(m)=\pi(n)\ \ \mbox{i.o.\ as}\ \ n\rightarrow\infty \end{equation} in $\mathcal{R}^{{P_1}^{-1}P_1}$ is necessary for ${P_1}^{-1}P_1 =\mathrm{id}$. Indeed, one can check that if $\pi$ does not satisfy \eqref{pi}, the increment of $-\inf_{m\leq n}\pi(m)$ does not match that of $\inf_{m\geq n}P_1\pi(m)$. (See Figures 5 and 6.)} \begin{figure} \centering \scalebox{0.40}{\includegraphics{pitman5.pdf}} \vspace{-30pt} \caption{Example of $\pi$ not satisfying \eqref{pi} and $J_n:=\inf_{m\leq n}\pi(m)$.} \end{figure} \begin{figure} \centering \scalebox{0.40}{\includegraphics{pitman6.pdf}} \vspace{-10pt} \caption{Example of $\pi$ not satisfying \eqref{pi} and $J'_n:=\inf_{m\geq n}P_1\pi(m)$.} \end{figure} {\cor \label{k-dim inverse} By Remark \ref{orthogonal}, it holds that \[{P_\alpha}^{-1}P_\alpha \pi=\pi,\ \ \mbox{if}\ \ \pi_\alpha\in\mathcal{R}^{{P_1}^{-1}P_1},\] \[P_\alpha {P_\alpha}^{-1}\pi=\pi,\ \ \mbox{if}\ \ \pi_\alpha\in\mathcal{R}^{P_1{P_1}^{-1}},\] where $\pi_\alpha(n)=\frac{\alpha\cdot\pi(n)}{|\alpha|^2}$.} \section{Path encodings of the multicolor BBS}\label{pathsec} In the original paper \cite{DS}, the particle configuration is encoded by the nearest-neighbour walk path $S$ on $\mathbb{Z}$ in $\mathbb{R}$, satisfying $S_0=0$ and $S_n-S_{n-1}=1$ if $\eta_n=0$ and $S_n-S_{n-1}=-1$ if $\eta_n=1$.
In this section, we extend this concept to the multicolor BBS with $\kappa$-color balls by considering the path $S$ in $\mathbb{R}^{\kappa}$ (Section \ref{one-sided configuration}). In particular, $S$ satisfies $S_0=0$ and $S_n-S_{n-1}=e_i$ if $\eta_n=i\in\{0,1,\cdots,\kappa\}$, where the vectors $e_0,\cdots,e_\kappa\in\mathbb{R}^\kappa$ are defined in Section \ref{Vectors for path encodings}. Then we consider the dynamics of the one-sided multicolor BBS in terms of the ‘carrier’ processes, which pick up and drop balls of a given color while moving along $\mathbb{Z_+}$ (Section \ref{carrier}), and the Pitman transform on $S$, which describes the action $T_i$ (Section \ref{one-sided BBS}). In Section \ref{two-sided BBS}, we extend them to the case of the two-sided multicolor BBS. Also we describe the inverse $T^{-1}_i$ and define the reversible set of $S$ for color $i$ such that $T^{-1}_iT_iS=T_iT^{-1}_iS=S$ (Section \ref{inverse}). Moreover, we investigate the set of configurations for which the actions $T_1,T_2,\cdots,T_\kappa$ are well-defined and reversible for all times (Section \ref{invariant set}). From this section on, we fix the total number of colors $\kappa\in\mathbb{N}$ and define the set of numbers representing colors $\mathcal{C}:=\{1,\cdots,\kappa\}$. \subsection{Vectors for path encodings}\label{Vectors for path encodings} In this subsection, we introduce a set of vectors which will be used for the path encoding of the particle configuration. {\df\label{vectorsdef} Let vectors $e_0,e_1,\cdots,e_\kappa\in\mathbb{R}^{\kappa}$ represent the vertices of a regular $\kappa$-dimensional simplex centered at the origin, satisfying the following conditions\:: \begin{equation}\label{length} |e_i|=1\ \ \ \forall i\in\mathcal{C}\cup{\{0\}}. \end{equation} \begin{equation}\label{product} e_{i}\cdot e_{j}=-\frac{1}{\kappa}\ \ \ \forall i,j\in\mathcal{C}\cup{\{0\}},i\ne j.
\end{equation} } \begin{figure}[H] \centering \scalebox{0.40}{\includegraphics{vector.pdf}} \vspace{0pt} \caption{$e_0,\,e_1\in\mathbb{R}$,\ $e_0,\,e_1,\,e_2\in\mathbb{R}^2$,\ $e_0,\,e_1,\,e_2,\,e_3\in\mathbb{R}^3$} \end{figure} {\prop\label{vectorsproperty} The vectors $e_0,e_1,\cdots,e_\kappa$ have the following properties, which follow immediately from \eqref{length} and \eqref{product} and will be useful in subsequent sections for defining the path encodings of the particle configuration and studying the actions of the multicolor BBS. \begin{enumerate} \setlength{\itemsep}{0.3cm} \renewcommand{\labelenumi}{(\roman{enumi})} \item $e_0+e_1+\cdots+e_\kappa=0$ \item Let $a_i\in\mathbb{R}\ $for $i\in\mathcal{C}\cup{\{0\}}$. It holds that \[a_0e_0+a_1e_1+\cdots+a_\kappa e_\kappa=0 \Leftrightarrow a_0=a_1=\cdots=a_\kappa.\] \item Let $a_i,a'_i\in\mathbb{R}\ $for $i\in\mathcal{C}\cup{\{0\}}$. Suppose that \[a_0e_0+a_1e_1+\cdots+a_\kappa e_\kappa=a'_0e_0+a'_1e_1+\cdots+a'_\kappa e_\kappa.\] Then there is a constant $c$ such that $a_i=a'_i+c$ for any $i$. In addition, suppose that \[a_0+a_1+\cdots+a_\kappa=a'_0+a'_1+\cdots+a'_\kappa.\] Then it is the case that $a_i=a'_i$ for any $i$. \item Let $a_l\in\mathbb{R}$ for $l\in\mathcal{C}\cup{\{0\}}$, and $d_j\in\mathbb{R}$ for $j\in\mathcal{C}$. It holds that\begin{align*} &a_0e_0+a_1e_1+\cdots+a_\kappa e_\kappa=d_i(e_i-e_0)+\sum_{j\in\mathcal{C},j\ne i}d_je_j\\ \Leftrightarrow{}&d_j=a_j-\frac{a_0+a_i}{2}\ \ \forall j\in\mathcal{C} \end{align*} for any $i\in\mathcal{C}$. \item Any set of $\kappa$ vectors in $\{e_0,e_1,\cdots,e_\kappa\}$ is a basis of $\mathbb{R}^\kappa$.
\item For any $v\in\mathbb{R}^\kappa$, there is a $(\kappa+1)$-tuple $a_0,\cdots,a_\kappa$ of real numbers satisfying \[v=a_0e_0+\cdots+a_\kappa e_\kappa,\ \ a_0+\cdots+a_\kappa=0.\] \end{enumerate} } \subsection{Configuration of the one-sided multicolor BBS}\label{one-sided configuration} In this section, we consider the one-sided multicolor BBS, and denote the particle configuration by $\eta=(\eta_n)_{n\in\mathbb{N}}\in \{0,1,2,\cdots,\kappa\}^{\mathbb{N}}$. As in the introduction, we write $\eta_n=i$ if there is a particle of color $i\in\mathcal{C}$ at site $n$, and $\eta_n=0$ if there is no particle at site $n$. We define a nearest-neighbour path in $\mathbb{R}^{\kappa}$ as the path encoding of a particle configuration. {\df\label{half path encoding} Given a particle configuration $\eta=(\eta_n)_{n\in\mathbb{N}}\in \{0,1,2,\cdots,\kappa\}^{\mathbb{N}}$, we define $S=(S_n)_{n\in\mathbb{Z}_+}$ by setting \begin{equation}\label{increment} S_0=0,\ \ S_n-S_{n-1}=e_i\ \ \text{if}\ \ \eta_n=i. \end{equation} This $S$ is called the path encoding of $\eta$. We can describe it as \begin{equation}\label{path encoding} S_n=a_0(n)e_0+a_1(n)e_1+\cdots+a_\kappa(n)e_\kappa\ \ \end{equation} for $n\in\mathbb{Z_{+}}$, where $a_i(n)\in\mathbb{Z_+},\ i\in\mathcal{C}$ is the number of particles of color $i$ at the sites located from $1$ to $n$, $a_0(n)\in\mathbb{Z_+}$ is the number of empty sites located from $1$ to $n$, and $a_i(0)=0,\ i\in\mathcal{C}\cup{\{0\}}$.
Also we define the path space in $\mathbb{R}^\kappa$ as follows\:: \[\mathcal{S}_+:=\{S:\mathbb{Z_+}\rightarrow\mathbb{R}^{\kappa}\::S_0=0,\ S_{n+1}-S_{n}\in\{e_0,e_1,\cdots,e_\kappa\},\ \forall n\in\mathbb{Z_+}\}.\] } {\exmp For $\eta=(0,1,1,2,\cdots)$, the path encoding $S$ is given by\[S_0=0,\ S_1=e_0,\ S_2=e_0+e_1,\ S_3=e_0+2e_1,\ S_4=e_0+2e_1+e_2,\ \cdots\]} {\rem By the definition, it clearly holds that the map from $\eta=(\eta_n)_{n\in\mathbb{N}}\in \{0,1,2,\cdots,\kappa\}^{\mathbb{N}}$ to $S\in\mathcal{S}_+$ is one to one. Also it holds that\[a_0(n)+a_1(n)+\cdots+a_\kappa(n)=n\ \ \forall n.\]Therefore, from Proposition \ref{vectorsproperty} $($\hspace{-1pt}ⅲ\hspace{-1pt}$)$, the map from $(a_0(n),a_1(n),\cdots,a_\kappa(n))\in\mathbb{Z_+}^{\kappa+1}$ to $S_n\in\mathbb{R}^\kappa$ is one to one for any $n\in\mathbb{Z_+}$.} \vspace{10pt} For the subsequent sections, we introduce some operators on $S$. {\df \label{Adef1} For $i\in\mathcal{C}$, we define the function $A_i:\mathcal{S}_+\rightarrow\mathbb{Z}^{\mathbb{Z_+}}$ given by \begin{equation}\label{A1} A_iS_n=a_0(n)-a_i(n), \end{equation} for $S_n=a_0(n)e_0+a_1(n)e_1+\cdots+a_\kappa(n)e_\kappa,\ n\in\mathbb{Z_+}$. } {\rem\label{Adef2} For $S_n=a_0(n)e_0+a_1(n)e_1+\cdots+a_\kappa(n) e_\kappa$, Proposition \ref{vectorsproperty} $($\hspace{-1pt}ⅳ\hspace{-1pt}$)$ shows \[S_n=\frac{1}{2}\left\{a_i(n)-a_0(n)\right\}(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\] and \eqref{product} implies \[e_j\cdot(e_i-e_0)=0\ \ \ \forall i,j\in\mathcal{C},\ i\ne j.\] Therefore, the projection of $S_n$ along $(e_i-e_0)$ is equal to \[\frac{(e_i-e_0)\cdot S_n}{|e_i-e_0|^2}(e_i-e_0)=\frac{1}{2}\left\{a_i(n)-a_0(n)\right\}(e_i-e_0)=-\frac12A_iS_n(e_i-e_0),\] then it holds that \begin{equation}\label{A2} A_iS_n=-2\frac{(e_i-e_0)\cdot S_n}{|e_i-e_0|^2}.
\end{equation} } {\rem\label{projection} From Remark \ref{Adef2}, we can write $S_n$ as the sum of the vector projection on $(e_i-e_0)$ and the vector orthogonal to $(e_i-e_0)$ as follows\:: \[S_n=-\frac12A_iS_n(e_i-e_0)+\left(S_n+\frac12A_iS_n(e_i-e_0)\right).\] Then, by Proposition \ref{half orthogonal}, it holds that \begin{align*} P_{e_i-e_0}S_n&=P_{e_i-e_0}\left(-\frac{1}{2}A_iS_n\left(e_i-e_0\right)+\left(S_n+\frac12A_iS_n(e_i-e_0)\right)\right)\\ &=P_1\left(-\frac{1}{2}A_iS_n\right)\left(e_i-e_0\right)+\left(S_n+\frac12A_iS_n(e_i-e_0)\right). \end{align*} } {\df \label{taudef1} We define the permutation operator $\tau_{(0,i)}:\mathcal{S}_+\rightarrow \mathcal{S}_+$ given by \begin{equation}\label{tau1} \tau_{(0,i)}S_n=a_i(n)e_0+a_0(n)e_i+\sum_{j\ne0,i}a_j(n)e_j \end{equation} for $S_n=a_0(n)e_0+a_1(n)e_1+\cdots+a_\kappa(n)e_\kappa,\ n\in\mathbb{Z_+}$. } {\rem \label{taudef2} Comparing \begin{align*} S_n&=\frac{1}{2}\left\{a_i(n)-a_0(n)\right\}(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=-\frac12A_iS_n(e_i-e_0)+\left(S_n+\frac12A_iS_n(e_i-e_0)\right), \end{align*} and \begin{align*} \tau_{(0,i)}S_n&=\frac{1}{2}\left\{a_0(n)-a_i(n)\right\}(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac12A_iS_n(e_i-e_0)+\left(S_n+\frac12A_iS_n(e_i-e_0)\right), \end{align*} it is the case that $\tau_{(0,i)}$ is the operator which multiplies only the vector projection part of $S$ along $e_i-e_0$ by $-1$. Also it holds that \begin{equation}\label{tau2} \tau_{(0,i)}S_n=S_n+A_iS_n(e_i-e_0). \end{equation} } \subsection{Carrier process for the one-sided multicolor BBS}\label{carrier} We introduce the concept of a carrier with respect to particles of a certain color $i\in\mathcal{C}$. It moves along $\mathbb{Z_+}$ from left to right picking up a particle of color $i$ when it crosses one, and dropping off a particle of color $i$ when it is holding at least one particle and sees an empty site. The dynamic $T_i$ can be viewed in terms of this carrier. The carrier process is given as follows.
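Before the formal definition, the behaviour just described can be sketched in code (an illustrative implementation on a finite initial segment of $\eta$; the function names are ours and not part of the paper's formalism):

```python
def carrier(eta, i):
    """Load W^{(i)}_n of the color-i carrier after passing site n
    (W_0 = 0; eta is the finite initial segment eta_1, ..., eta_N)."""
    W = [0]
    for x in eta:
        if x == i:                  # pick up a ball of color i
            W.append(W[-1] + 1)
        elif x == 0 and W[-1] > 0:  # drop a ball at an empty site
            W.append(W[-1] - 1)
        else:                       # other colors, or empty carrier
            W.append(W[-1])
    return W

def action(eta, i):
    """T_i eta: balls of color i move, all other colors stay in place."""
    W = carrier(eta, i)
    return [x if x not in (0, i) else (i if W[n] == W[n - 1] - 1 else 0)
            for n, x in enumerate(eta, start=1)]

print(action([1, 1, 0, 0, 0], 1))  # [0, 0, 1, 1, 0]
print(action([2, 1, 0, 0], 1))     # [2, 0, 1, 0]  (color 2 untouched)
```

The first example reproduces the familiar 1-color dynamics (a block of two balls jumps over two empty sites); the second shows that $T_1$ leaves balls of other colors fixed.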
{\df The carrier process $W^{(i)}=\{W^{(i)}_n\}_{n\in\mathbb{Z_+}}$ of the color $i$ associated with $\eta\in \{0,1,2,\cdots,\kappa\}^{\mathbb{N}}$ is defined by $W^{(i)}_0=0$ and \begin{equation}\label{W} W^{(i)}_n=\left\{\begin{array}{ll} W^{(i)}_{n-1}+1, & \mbox{if }\eta_n=i,\\ W^{(i)}_{n-1}, & \mbox{if }\eta_n=j,\ j\ne 0,i,\\ W^{(i)}_{n-1}, & \mbox{if }\eta_n=0\mbox{ and }W^{(i)}_{n-1}=0,\\ W^{(i)}_{n-1}-1, & \mbox{if }\eta_n=0\mbox{ and }W^{(i)}_{n-1}>0. \end{array}\right. \end{equation}} $W$ is obtained from $S$ as in the following lemma. {\lem\label{W=M-S} It holds that\[W^{(i)}_n=\sup_{0\leq m\leq n}A_iS_m-A_iS_n,\ \ \forall n\in\mathbb{Z_+}.\]} \begin{proof} We prove it by induction. Clearly the result is true for $n=0$. Suppose that $W^{(i)}_{n-1}=\sup_{0\leq m\leq n-1}A_iS_m-A_iS_{n-1}$ for some $n\geq1$. Now, if $\eta_n=i$, then $A_iS_n=A_iS_{n-1}-1$ and $\sup_{0\leq m\leq n}A_iS_m=\sup_{0\leq m\leq n-1}A_iS_m$, and so \[\left\{\sup_{0\leq m\leq n}A_iS_m-A_iS_n\right\}-\left\{\sup_{0\leq m\leq n-1}A_iS_m-A_iS_{n-1}\right\}=1.\] If $\eta_n=j,\ j\ne 0,i$, then $A_iS_n=A_iS_{n-1}$ and $\sup_{0\leq m\leq n-1}A_iS_m=\sup_{0\leq m\leq n}A_iS_m$, and so \[\left\{\sup_{0\leq m\leq n}A_iS_m-A_iS_n\right\}-\left\{\sup_{0\leq m\leq n-1}A_iS_m-A_iS_{n-1}\right\}=0.\] Moreover, if $\eta_n=0$ and $W^{(i)}_{n-1}=0$, then it is the case that $\sup_{0\leq m\leq n}A_iS_m=A_iS_n$, and so \[\left\{\sup_{0\leq m\leq n}A_iS_m-A_iS_n\right\}-\left\{\sup_{0\leq m\leq n-1}A_iS_m-A_iS_{n-1}\right\}=0.\] Similarly, if $\eta_n=0$ and $W^{(i)}_{n-1}>0$, then $A_iS_n=A_iS_{n-1}+1$ and $\sup_{0\leq m\leq n}A_iS_m=\sup_{0\leq m\leq n-1}A_iS_m$, and so \[\left\{\sup_{0\leq m\leq n}A_iS_m-A_iS_n\right\}-\left\{\sup_{0\leq m\leq n-1}A_iS_m-A_iS_{n-1}\right\}=-1.\] Thus it holds that \[W^{(i)}_{n}-W^{(i)}_{n-1}=\left\{\sup_{0\leq m\leq n}A_iS_m-A_iS_n\right\}-\left\{\sup_{0\leq m\leq n-1}A_iS_m-A_iS_{n-1}\right\}\] which by the inductive hypothesis implies \[W^{(i)}_{n}=\sup_{0\leq m\leq n}A_iS_m-A_iS_n.\] \end{proof} \subsection{Action of the carrier for the one-sided multicolor BBS}\label{one-sided BBS} In this section, we consider the action $T_i$ on $S$ given by \eqref{path encoding}. We fix the color $i\in\mathcal{C}$. From the viewpoint of the carrier process, we can write $T_i$ as \[T_i(\eta)_n=i\,\mathbf{1}_{\{W^{(i)}_n=W^{(i)}_{n-1}-1\}}+\eta_n\mathbf{1}_{\{\eta_n\ne0,i\}},\ \ \forall n\in\mathbb{N}.\] For $j\ne i$, the numbers $\{a_j(n)\}_{n\in\mathbb{Z_{+}}}$ do not change under the action $T_i$, so the path encoding $T_iS=(T_iS_n)_{n\in\mathbb{Z_{+}}}$ of $T_i\eta$ can be described as follows\:: \[T_iS_n=a'_0(n)e_0+a'_i(n)e_i+\sum_{j\ne0,i}a_j(n)e_j\] for some $a'_0(n)$ and $a'_i(n)$. Then $T_i$ satisfies the following formula. {\lem \label{2M-S} It holds that \[a'_0(n)-a'_i(n)=2\sup_{0\leq m\leq n}\left\{a_0(m)-a_i(m)\right\}-\left\{a_0(n)-a_i(n)\right\}.\] That is, from Definition \ref{Adef1} \begin{align*} A_iT_iS_n&=2\sup_{0\leq m\leq n}A_iS_m-A_iS_n\\ &=P_1(-A_iS)_n \end{align*} by using the Pitman transform (\ref{P_1}) in Definition \ref{1dim oneside pitman}.} \begin{proof} It is easy to check that \[2\mathbf{1}_{\{A_iS_n-A_iS_{n-1}=1\}}=1+(A_iS_n-A_iS_{n-1})-\mathbf{1}_{\{\eta_n\ne0,i\}}.\] This equation and Lemma \ref{W=M-S} show that \begin{align*} & A_iT_iS_n-A_iT_iS_{n-1}\\ ={}&1-2\mathbf{1}_{\{W^{(i)}_n=W^{(i)}_{n-1}-1\}}-\mathbf{1}_{\{\eta_n\ne0,i\}}\\ ={}&1-2\mathbf{1}_{\{A_iS_{n-1}<\sup_{0\leq m\leq n-1}A_iS_m,\ A_iS_n-A_iS_{n-1}=1\}}-\mathbf{1}_{\{\eta_n\ne0,i\}}\\ ={}&1-\left(2\mathbf{1}_{\{A_iS_n-A_iS_{n-1}=1\}}-2\mathbf{1}_{\{A_iS_{n-1}=\sup_{0\leq m\leq n-1}A_iS_m,\ A_iS_n-A_iS_{n-1}=1\}}\right)-\mathbf{1}_{\{\eta_n\ne0,i\}}\\ ={}&-(A_iS_n-A_iS_{n-1})+2\mathbf{1}_{\{A_iS_{n-1}=\sup_{0\leq m\leq n-1}A_iS_m,\ A_iS_n-A_iS_{n-1}=1\}}.
\end{align*} Summing over the increments, we obtain \begin{align*} &A_iT_iS_n-A_iT_iS_0\\ ={}&\sum^n_{m=1}\left(A_iT_iS_m-A_iT_iS_{m-1}\right)\\ ={}&A_iS_0-A_iS_n+2\sum^n_{m=1}\mathbf{1}_{\{A_iS_{m-1}=\sup_{0\leq l\leq m-1}A_iS_l,\ A_iS_m-A_iS_{m-1}=1\}}\\ ={}&A_iS_0-A_iS_n+2\left(\sup_{0\leq m\leq n}A_iS_m-\sup_{0\leq m\leq 0}A_iS_m\right). \end{align*} Since $A_iS_0=A_iT_iS_0=\sup_{0\leq m\leq 0}A_iS_m=0$, the claim is proved. \end{proof} {\thm \label{oneside TP} It holds that \[T_iS=\tau_{(0,i)}P_{e_i-e_0}S,\ \forall S\in\mathcal{S}_+\] where $P$ is the one-sided Pitman transform defined in Definition \ref{oneside pitman}.} \begin{proof} By Remark \ref{projection} and Lemma \ref{2M-S}, it holds that \begin{align*} P_{e_i-e_0}S_n&=P_1\left(-\frac{1}{2}A_iS_n\right)\left(e_i-e_0\right)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac{1}{2}P_1\left(-A_iS_n\right)\left(e_i-e_0\right)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac{1}{2}A_iT_iS_n\left(e_i-e_0\right)+\sum_{j\ne0,i}d_j(n)e_j \end{align*} where $d_j(n)=a_j(n)-\frac{a_0(n)+a_i(n)}{2}=a_j(n)-\frac{a'_0(n)+a'_i(n)}{2},\ j\ne0,i$. On the other hand, by Definition \ref{taudef1}, \begin{align*} \tau_{(0,i)}T_iS_n&=a'_i(n)e_0+a'_0(n)e_i+\sum_{j\ne0,i}a_j(n)e_j\\ &=\frac{1}{2}(a'_0(n)-a'_i(n))\left(e_i-e_0\right)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac{1}{2}A_iT_iS_n\left(e_i-e_0\right)+\sum_{j\ne0,i}d_j(n)e_j. \end{align*} Therefore, we obtain the equation $\tau_{(0,i)}T_iS_n=P_{e_i-e_0}S_n$ for any $n\in\mathbb{Z_+}$, and applying the involution $\tau_{(0,i)}$ to both sides proves the claim. \end{proof} {\rem The dynamic $T$ for the 1-color BBS in the paper \cite{DS} is expressed as follows\:: \[TS_n=2\sup_{0\leq m\leq n}S_m-S_n,\] where $S_n=a_0(n)e_0+a_1(n)e_1=a_0(n)-a_1(n)$. This is also called the Pitman transform and corresponds to Lemma \ref{2M-S}.
For the multicolor case, however, the supremum expression is \begin{align*} &2\sup_{0\leq m\leq n}\frac{(e_0-e_i)\cdot S_m}{|e_0-e_i|^2}(e_0-e_i)-S_n\\ ={}&2\sup_{0\leq m\leq n}\frac{(e_0-e_i)\cdot S_m}{|e_0-e_i|^2}(e_0-e_i)-\left(a_0(n)e_0+a_1(n)e_1+\cdots+a_\kappa(n)e_\kappa\right)\\ ={}&2\sup_{0\leq m\leq n}\frac{a_0(m)-a_i(m)}{2}(e_0-e_i)-\left(\frac{a_0(n)-a_i(n)}{2}(e_0-e_i)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ ={}&\frac12\left(2\sup_{0\leq m\leq n}\left\{a_0(m)-a_i(m)\right\}-\left\{a_0(n)-a_i(n)\right\}\right)(e_0-e_i)-\sum_{j\ne0,i}d_j(n)e_j\\ ={}&\frac{a'_0(n)-a'_i(n)}{2}(e_0-e_i)-\sum_{j\ne0,i}d_j(n)e_j, \end{align*} where $d_j(n)=a_j(n)-\frac{a_0(n)+a_i(n)}{2}=a_j(n)-\frac{a'_0(n)+a'_i(n)}{2},\ j\ne0,i$. Then this does not correspond to $T_iS_n$ because the sign of $d_j(n)$ is negative. This is the reason why we use the infimum expression of the Pitman transform. } {\rem From Theorem \ref{oneside TP}, it holds that \begin{align*} T_2T_1&=\left(\tau_{(0,2)}P_{e_2-e_0}\right)\left(\tau_{(0,1)}P_{e_1-e_0}\right)\\ &=\tau_{(0,1)}\left(\tau_{(1,2)}P_{e_2-e_1}\right)P_{e_1-e_0}\\ &=\tau_{(0,1)}\tau_{(1,2)}P_{e_2-e_1}P_{e_1-e_0}. \end{align*} Similarly, the dynamic $T$ of the multicolor BBS is as follows\:: \[T=T_\kappa\cdots T_2T_1=\tau_{(0,1)}\tau_{(1,2)}\cdots\tau_{(\kappa-1,\kappa)}P_{e_{\kappa}-e_{\kappa-1}}\cdots P_{e_2-e_1}P_{e_1-e_0}.\] } \subsection{Two-sided multicolor BBS}\label{two-sided BBS} In this section, we extend the particle configuration to $\eta=(\eta_n)_{n\in\mathbb{Z}}\in \{0,1,2,\cdots,\kappa\}^{\mathbb{Z}}$. We can again obtain the path encoding $S=(S_n)_{n\in\mathbb{Z}}$ of $\eta$ given by (\ref{increment}) and (\ref{path encoding}). In this case, for $i\in\mathcal{C}$ and $n\geq1$, $a_i(n)$ means the number of particles of color $i$ at the sites located from $1$ to $n$, and, for $i\in\mathcal{C}$ and $n\leq-1$, $-a_i(n)$ means the same at the sites located from $n+1$ to $0$. The same is true for the number of empty sites.
Also we define that $a_i(0)=0$ for $i\in\mathcal{C}\cup{\{0\}}$. As in the case of the one-sided multicolor BBS, it obviously holds that $a_0(n)+a_1(n)+\cdots+a_\kappa(n)=n\ \ \forall n\in\mathbb{Z}$. Also we define the path space in $\mathbb{R}^\kappa$\:: \[\mathcal{S}^0:=\{S=(S_n)_{n\in\mathbb{Z}}\::\:S_0=0,\ S_{n+1}-S_{n}\in\{e_0,e_1,\cdots,e_\kappa\},\ \forall n\in\mathbb{Z}\}.\] Moreover, we define the function $A_i:\mathcal{S}^0\rightarrow\mathbb{R}^{\mathbb{Z}}$ and the operator $\tau_{(0,i)}:\mathcal{S}^0\rightarrow \mathcal{S}^0$ given by \eqref{A1} and \eqref{tau1}. Whilst in the one-sided case the carrier process $W$ and the actions $T_i,\ i=1,\cdots,\kappa$, are defined for any $S\in\mathcal{S}_+$ (that is, for any configuration $\eta\in\{0,1,2,\cdots,\kappa\}^{\mathbb{N}}$), in the two-sided case, the following restriction on $S$ is required to define the carrier and actions\:: \begin{equation}\label{twoside condition} \limsup_{n\rightarrow-\infty}A_iS_n<\infty. \end{equation} This condition can be transformed as follows\:: \begin{align*} \limsup_{n\rightarrow-\infty}A_iS_n<\infty&\Leftrightarrow \sup_{n\leq0}A_iS_n<\infty\\ &\Leftrightarrow \sup_{n\leq0}\left\{a_0(n)-a_i(n)\right\}<\infty\\ &\Leftrightarrow \inf_{n\leq0}\left\{\left(-a_0(n)\right)-\left(-a_i(n)\right)\right\}>-\infty\\ &\Leftrightarrow -A_iS\in\mathcal{R}^{P_1} \end{align*} and this means that the number of particles of color $i$ is not too large compared with the number of empty sites on the left side. Indeed, in Section 2.4 of the paper \cite{DS}, the two-sided multicolor BBS is understood via the two-sided carrier process \[W^{(i)}_n=\sup_{m\leq n}A_iS_m-A_iS_n\] under the condition (\ref{twoside condition}). Also the path encoding $T_iS_n=a'_0(n)e_0+a'_i(n)e_i+\sum_{j\ne0,i}a_j(n)e_j$ of $T_i\eta$ is obtained from the equation \begin{equation}\label{twoside action} A_iT_iS_n=2\sup_{m\leq n}A_iS_m-A_iS_n-2\sup_{m\leq 0}A_iS_m \end{equation} under the condition (\ref{twoside condition}).
Then, in the same way as in the proof of Theorem \ref{oneside TP}, it holds that \[T_iS=\tau_{(0,i)}P_{e_i-e_0}S\] where $P$ is the two-sided Pitman transform defined in Definition \ref{twoside pitman}. From the above discussion, the following set is obtained\:: \begin{align*} \mathcal{S}^{T_i}:&=\{S\in\mathcal{S}^0\::\:T_iS\mbox{ well-defined}\}\\ &=\{S\in\mathcal{S}^0\::\:\limsup_{n\rightarrow-\infty}A_iS_n<\infty\}. \end{align*} \subsection{Inverse of the action}\label{inverse} In the previous section, we found that $T_i=\tau_{(0,i)}P_{e_i-e_0}$ on $\mathcal{S}^{T_i}$. Then we can define $T^{-1}_i=P^{-1}_{e_i-e_0}\tau_{(0,i)}$ on an appropriate set, where $P^{-1}$ is defined by Definition \ref{inverse pitman}. As in the proof of Theorem \ref{oneside TP}, $T_i$ acts on $S_n$ as follows\:: \begin{align*} T_iS_n&=\tau_{(0,i)}P_{e_i-e_0}S_n\\ &=\tau_{(0,i)}\left(\frac{1}{2}P_{1}\left(-A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right). \end{align*} Therefore, Theorem \ref{inversemap} shows \begin{align*} T^{-1}_iT_iS_n&=T^{-1}_i\left(\frac{1}{2}P_{1}\left(-A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=P^{-1}_{e_i-e_0}\tau_{(0,i)}\tau_{(0,i)}\left(\frac{1}{2}P_{1}\left(-A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=P^{-1}_{e_i-e_0}\left(\frac{1}{2}P_{1}\left(-A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=P^{-1}_1\left(\frac{1}{2}P_{1}\left(-A_iS\right)\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac{1}{2}P^{-1}_1P_{1}\left(-A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac{1}{2}\left(-A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=S_n \end{align*} if and only if $-A_iS\in\mathcal{R}^{{P_1}^{-1}P_1}$.
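The identity ${P_1}^{-1}P_1=\mathrm{id}$ underlying this computation can also be checked numerically. The sketch below (ours) implements $P_1\pi(n)=\pi(n)-2\inf_{m\leq n}\pi(m)+2\inf_{m\leq 0}\pi(m)$ and ${P_1}^{-1}\pi(n)=\pi(n)-2\inf_{m\geq n}\pi(m)+2\inf_{m\geq 0}\pi(m)$ on a finite window; for the chosen oscillating $\pi$, which attains its running infimum infinitely often, the truncated infima agree with the true two-sided ones, so the check is exact:

```python
def P1(pi, z):
    """Two-sided Pitman transform on a finite window; pi[k] is the
    value at the integer index k - z (position z plays the role of 0)."""
    base = min(pi[:z + 1])              # inf_{m <= 0} pi(m)
    out, run = [], float('inf')
    for v in pi:
        run = min(run, v)               # inf_{m <= n} pi(m)
        out.append(v - 2 * run + 2 * base)
    return out

def P1_inv(pi, z):
    """Inverse transform: the same formula with infima over m >= n, m >= 0."""
    base = min(pi[z:])                  # inf_{m >= 0} pi(m)
    out, run = [0] * len(pi), float('inf')
    for n in range(len(pi) - 1, -1, -1):
        run = min(run, pi[n])           # inf_{m >= n} pi(m)
        out[n] = pi[n] - 2 * run + 2 * base
    return out

# pi(n) = |n| mod 2 on {-6, ..., 6}: it returns to its running infimum
# infinitely often, so it lies in the reversible class.
z = 6
pi = [abs(n - z) % 2 for n in range(13)]
assert P1_inv(P1(pi, z), z) == pi
```

For a sequence violating the infinitely-often condition (for example $\pi(n)=|n|$), the composition fails to be the identity on the right half, matching the remark after Theorem \ref{inversemap}.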
On the other hand, by Remark \ref{taudef2}, $T^{-1}_i$ acts on $S_n$ as follows\:: \begin{align*} T^{-1}_iS_n&=P^{-1}_{e_i-e_0}\tau_{(0,i)}S_n\\ &=P^{-1}_{e_i-e_0}\tau_{(0,i)}\left(-\frac12\left(A_iS_n\right)(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=P^{-1}_{e_i-e_0}\left(\frac12\left(A_iS_n\right)(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=P^{-1}_{1}\left(\frac12A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=\frac12P^{-1}_{1}\left(A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j. \end{align*} Therefore, Theorem \ref{inversemap} and Remark \ref{taudef2} show \begin{align*} T_iT^{-1}_iS_n&=\tau_{(0,i)}P_{e_i-e_0}\left(\frac12P^{-1}_{1}\left(A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=\tau_{(0,i)}\left(P_1\left(\frac12P^{-1}_{1}\left(A_iS\right)\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=\tau_{(0,i)}\left(\frac12P_1P^{-1}_{1}\left(A_iS\right)_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=\tau_{(0,i)}\left(\frac12A_iS_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\right)\\ &=-\frac12A_iS_n(e_i-e_0)+\sum_{j\ne0,i}d_j(n)e_j\\ &=S_n \end{align*} if and only if $A_iS\in\mathcal{R}^{P_1{P_1}^{-1}}$. The above discussion gives the following theorem characterizing the sets\:: \begin{align*} \mathcal{S}^{T^{-1}_iT_i}:=&\{S\in\mathcal{S}^0\::\:T_iS,T^{-1}_iT_iS\mbox{ well-defined},\ T^{-1}_iT_iS=S\}\\ \mathcal{S}^{T_iT^{-1}_i}:=&\{S\in\mathcal{S}^0\::\:T^{-1}_iS,T_iT^{-1}_iS\mbox{ well-defined},\ T_iT^{-1}_iS=S\}.
\end{align*} {\thm It holds that \begin{align*} \mathcal{S}^{T^{-1}_iT_i}&=\{S\in\mathcal{S}^0\::\:-A_iS\in\mathcal{R}^{{P_1}^{-1}P_1}\}\\ &=\{S\in\mathcal{S}^0\::\:\inf_{m\leq0}\left(-A_iS_m\right)>-\infty,\ \inf_{m\leq n}\left(-A_iS_m\right)=-A_iS_n,\ i.o.\ as\ n\rightarrow\infty\}\\ &=\{S\in\mathcal{S}^0\::\:\sup_{m\leq0}A_iS_m<\infty,\ \sup_{m\leq n}A_iS_m=A_iS_n,\ i.o.\ as\ n\rightarrow\infty\}, \end{align*} and \begin{align*} \mathcal{S}^{T_iT^{-1}_i}&=\{S\in\mathcal{S}^0\::\:A_iS\in\mathcal{R}^{P_1{P_1}^{-1}}\}\\ &=\{S\in\mathcal{S}^0\::\:\inf_{m\geq0}A_iS_m>-\infty,\ \inf_{m\geq n}A_iS_m=A_iS_n,\ i.o.\ as\ n\rightarrow-\infty\}. \end{align*} } {\rem The above conditions can be transformed as follows\:: \[\sup_{m\leq n}A_iS_m=A_iS_n,\ i.o.\ as\ n\rightarrow\infty\ \Leftrightarrow\ \sup_{n\in\mathbb{Z}}A_iS_n=\limsup_{n\rightarrow\infty}A_iS_n,\] \[\inf_{m\geq n}A_iS_m=A_iS_n,\ i.o.\ as\ n\rightarrow-\infty\ \Leftrightarrow\ \inf_{n\in\mathbb{Z}}A_iS_n=\liminf_{n\rightarrow-\infty}A_iS_n.\] Then it holds that \[\mathcal{S}^{T^{-1}_iT_i}=\{S\in\mathcal{S}^0\::\:M^{(i)}_0<\infty,\ \limsup_{n\rightarrow\infty}A_iS_n=M^{(i)}_\infty\},\] \[\mathcal{S}^{T_iT^{-1}_i}=\{S\in\mathcal{S}^0\::\:I^{(i)}_0>-\infty,\ \liminf_{n\rightarrow-\infty}A_iS_n=I^{(i)}_{-\infty}\},\] where we define \[M^{(i)}_0:=\sup_{n\leq 0}A_iS_n,\ M^{(i)}_\infty:=\sup_{n\in\mathbb{Z}}A_iS_n,\] \[I^{(i)}_0:=\inf_{n\geq 0}A_iS_n,\ I^{(i)}_{-\infty}:=\inf_{n\in\mathbb{Z}}A_iS_n.\] Also we obtain the following set\:: \begin{align*} \mathcal{S}^{rev}_i:&=\{S\in\mathcal{S}^0\::\:T_iS,T^{-1}_iS,T^{-1}_iT_iS,T_iT^{-1}_iS\mbox{ well-defined},\ T^{-1}_iT_iS=T_iT^{-1}_iS=S\}\\ &=\{S\in\mathcal{S}^0\::\:M^{(i)}_0<\infty,\ I^{(i)}_0>-\infty,\ \limsup_{n\rightarrow\infty}A_iS_n=M^{(i)}_\infty,\ \liminf_{n\rightarrow-\infty}A_iS_n=I^{(i)}_{-\infty}\}.
\end{align*} } \subsection{Set of configurations}\label{invariant set} Even if $S\in\mathcal{S}^{rev}_i$ holds, it does not necessarily hold that $T_iS\in\mathcal{S}^{rev}_i$. In the paper \cite{DS} for the 1-color BBS, the set \[\mathcal{S}^{inv}_i:=\{S\in\mathcal{S}^0\::\:T^{k}_iS\in\mathcal{S}^{rev}_i,\ \forall k\in\mathbb{Z}\}\] is characterized as in the following lemma. {\lem \label{Sinv} For any $i\in\mathcal{C}$, it holds that \[\mathcal{S}^{inv}_i=\bigcup_{*_1,*_2\in\{sub-critical(i),critical(i)\}}\left(\mathcal{S}_{*_1}^-\cap\mathcal{S}_{*_2}^+\right),\] where \begin{align*} \mathcal{S}_{sub-critical(i)}^{\pm}&:=\left\{S\in\mathcal{S}^0\::\:\lim_{n\rightarrow\pm\infty}\frac{A_iS_n}{F_i(n)}=1,\ \exists F_i\in\mathcal{F}\right\},\\ \mathcal{S}_{critical(i)}^{\pm}&:=\left\{S\in\mathcal{S}^0\::\: \sup_{n\in\mathbb{Z}}W^{(i)}_n<\infty,\:\limsup_{n \to \pm \infty}A_iS_n= \liminf_{n \to \pm \infty}A_iS_n + \sup_{n}W^{(i)}_n \in \mathbb{R}\right\},\\ \mathcal{F}&:=\{F:\mathbb{Z}\rightarrow\mathbb{R}\::\:\mbox{increasing function, }\lim_{n\rightarrow\infty} F(n)=\infty,\ \lim_{n\rightarrow-\infty} F(n)=-\infty\}. \end{align*} Moreover, it holds that \begin{equation}\label{Fnochange} \lim_{n\rightarrow\pm\infty}\frac{A_iS_n}{F_i(n)}=1,\ \exists F_i\in\mathcal{F}\ \Rightarrow\ \lim_{n\rightarrow\pm\infty}\frac{A_iT_iS_n}{F_i(n)}=1 \end{equation} and \begin{equation}\label{FisM} \lim_{n\rightarrow\pm\infty}\frac{A_iS_n}{F_i(n)}=1,\ \exists F_i\in\mathcal{F}\ \Leftrightarrow\ \lim_{n\rightarrow\pm\infty}\frac{A_iS_n}{\sup_{m\leq n}A_iS_m}=1. \end{equation} } In the study of the multicolor BBS, it is natural to ask when $T^{-1}TS=TT^{-1}S=S$ is true where $T$ is any composition of $T_i,\ i\in\mathcal{C}$ such as $T=T_\kappa\cdots T_2T_1$, $T=T_2T_1T^2_2$, etc. In other words, what is the condition for $S$ to be in the following set?
\[\mathcal{S}^{inv}_{\mathcal{C}}:=\{S\in\mathcal{S}^0\::\:TS\in\bigcap_{i\in\mathcal{C}}\mathcal{S}^{rev}_i\mbox{\ for any composition }T\mbox{ of }T_i,\ i\in\mathcal{C}\}.\] One might expect that \[\mathcal{S}^{inv}_{\mathcal{C}}\supseteq\bigcap_{i\in\mathcal{C}}\mathcal{S}^{inv}_i\] but this is not true. (See Remark \ref{noinvariant}.) The main result of this section is the following theorem which gives a sufficient condition for $S$ to be in the set $\mathcal{S}^{inv}_{\mathcal{C}}$. {\thm \label{invariantthm} Define the subset of $\bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{sub-critical(i)}^-\cap\mathcal{S}_{sub-critical(i)}^+\right)$ such that $F_i$ and $F_j$ have comparable asymptotic behavior as $n\rightarrow\pm\infty$ for any $i,j\in\mathcal{C}$ as follows\:: \[\mathcal{S}^{good}_{\mathcal{C}}:=\left\{S\in\mathcal{S}^0\::\:\forall i\in\mathcal{C}\ \exists F_i\in\mathcal{F},\ \lim_{n\rightarrow\pm\infty}\frac{A_iS_n}{F_i(n)}=1\mbox{ and }\limsup_{n\rightarrow\pm\infty}\frac{F_j(n)}{F_i(n)}<\infty\ \forall i,j\in\mathcal{C}\right\}. \] It holds that \[\mathcal{S}^{inv}_{\mathcal{C}}\supseteq\mathcal{S}^{good}_{\mathcal{C}}.\] } To prove the above result, we prepare a simple lemma. {\lem For any $i, j\in\mathcal{C},\ i\ne j$, and $S\in\mathcal{S}^{T_i}$ it holds that \begin{equation}\label{a'1} A_jT_iS_n=A_jS_n+ W^{(i)}_n-M^{(i)}_0 \end{equation} and \begin{equation}\label{a'2} A_jT_iS_n=A_jS_n+\frac12\left(A_iT_iS_n-A_iS_n\right) \end{equation} for any $n\in\mathbb{Z}$.} \begin{proof} Let $S_n=a_0(n)e_0+a_1(n)e_1+\cdots+a_\kappa(n)e_\kappa$ and $T_iS_n=a'_0(n)e_0+a'_i(n)e_i+\sum_{k\ne0,i}a_k(n)e_k$.
Then (\ref{twoside action}) shows \[a'_0(n)-a'_i(n)=2\sup_{m\leq n}\left\{a_0(m)-a_i(m)\right\}-\left\{a_0(n)-a_i(n)\right\}-2M^{(i)}_0.\] By adding $a'_0(n)+a'_i(n)=a_0(n)+a_i(n)$ to the above equation, we have \[2a'_0(n)=2\sup_{m\leq n}\left\{a_0(m)-a_i(m)\right\}+2a_i(n)-2M^{(i)}_0.\] Then it follows that \[a'_0(n)=a_0(n)+\sup_{m\leq n}\left\{a_0(m)-a_i(m)\right\}-\left\{a_0(n)-a_i(n)\right\}-M^{(i)}_0.\] Since $A_jT_iS_n=a'_0(n)-a_j(n)$ and $\sup_{m\leq n}A_iS_m-A_iS_n=W^{(i)}_n$, the first claim is proved. Also $a'_0(n)+a'_i(n)=a_0(n)+a_i(n)$ shows \[2a'_0(n)-\left\{a'_0(n)-a'_i(n)\right\}=2a_0(n)-\left\{a_0(n)-a_i(n)\right\},\] then \[a'_0(n)=a_0(n)+\frac12\left(A_iT_iS_n-A_iS_n\right)\] and this proves the second claim. \end{proof} \begin{proof}[Proof of Theorem \ref{invariantthm}] Suppose that $S\in\mathcal{S}^{good}_{\mathcal{C}}$. It is enough to show that $T_iS\in\mathcal{S}^{good}_{\mathcal{C}}$ for any $i\in\mathcal{C}$, and to that end we show \[\lim_{n\rightarrow\pm\infty}\frac{A_jT_iS_n}{F_j(n)}=1\] for any $i,j\in\mathcal{C}$. From \eqref{a'2}, we can write \[\frac{A_jT_iS_n}{F_j(n)}=\frac{A_jS_n}{F_j(n)}+\frac12\frac{F_i(n)}{F_j(n)}\left(\frac{A_iT_iS_n}{F_i(n)}-\frac{A_iS_n}{F_i(n)}\right).\] By the assumption and \eqref{Fnochange}, it holds that \[\lim_{n\rightarrow\pm\infty}\frac{A_jS_n}{F_j(n)}=1,\ \lim_{n\rightarrow\pm\infty}\frac{A_iS_n}{F_i(n)}=1,\ \lim_{n\rightarrow\pm\infty}\frac{A_iT_iS_n}{F_i(n)}=1.\] Then the condition $\limsup_{n\rightarrow\pm\infty}\frac{F_i(n)}{F_j(n)}<\infty$ shows the conclusion. \end{proof} {\rem\label{noinvariant} Now we consider three examples of configurations with $\mathcal{C}=\{1,2\}$. Each example shows one of the following three claims.
\begin{align*} (a)\ \ \mathcal{S}^{inv}_{\mathcal{C}}&\not\supseteq \bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{sub-critical(i)}^-\cap\mathcal{S}_{sub-critical(i)}^+\right),\\ (b)\ \ \mathcal{S}^{inv}_{\mathcal{C}}&\not\supseteq \bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{critical(i)}^-\cap\mathcal{S}_{critical(i)}^+\right),\\ (c)\ \ \mathcal{S}^{inv}_{\mathcal{C}}&\varsupsetneq\mathcal{S}^{good}_{\mathcal{C}}. \end{align*} \vspace{10pt} $(a)$\ We give an example of $\eta$ whose path encoding $S$ satisfies \[S\in\bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{sub-critical(i)}^-\cap\mathcal{S}_{sub-critical(i)}^+\right),\ T_2S\notin\mathcal{S}_{sub-critical(1)}^+,\ T_2S\notin\mathcal{S}_{critical(1)}^+.\] Let $\eta$ be as follows\:: \vspace{5pt} \makebox[0.37cm][r]{$\eta=\ $}$(\cdots\ 0\ \eta_0=0\ 0\ 2_{(1)}\ (0\ 1)_{(1)}\ 0\ (0\ 1)_{(2)}\ 0\ 2_{(3)}\ (0\ 1)_{(3)}\ 0\ (0\ 1)_{(4)}\ 0\ 2_{(5)}\ (0\ 1)_{(5)}\ 0\ (0\ 1)_{(6)}\cdots$ \vspace{5pt} \makebox[2.6cm][r]{}$\cdots\cdots\ 0\ 2_{(2m-1)}\ (0\ 1)_{(2m-1)}\ 0\ (0\ 1)_{(2m)}\ \cdots),$ \vspace{5pt} \noindent where $i_{(k)}:=i\ i\ \cdots i$ means $k$ consecutive $i$'s, and $(i\ j)_{(k)}:=i\ j\ i\ j\ \cdots\ i\ j$ means that $i$ and $j$ alternately appear $k$ times. For simplicity, Figures 8 and 9 show the graphs of $A_2S_n$ and $A_1S_n$, where $S$ is the path encoding of $\eta$, skipping places where there is no increase or decrease. As seen in Figure 8, it holds that \begin{align*} 1&\geq\limsup_{n\rightarrow\infty}\frac{A_2S_n}{\sup_{m\leq n}A_2S_m}\\ &\geq\liminf_{n\rightarrow\infty}\frac{A_2S_n}{\sup_{m\leq n}A_2S_m}\\ &=\lim_{k\rightarrow\infty}\frac{1+(1+3)+(1+5)+\cdots+(1+2k-1)-(2k-1)}{1+(1+3)+(1+5)+\cdots+(1+2k-1)}\\ &=1. \end{align*} As seen in Figure 9, it holds that $\left|\sup_{m\leq n}A_1S_m-A_1S_n\right|\leq1,\ \forall n$ and $\lim_{n\rightarrow\infty}A_1S_n=\infty$, and then $\lim_{n\rightarrow\infty}\frac{A_1S_n}{\sup_{m\leq n}A_1S_m}=1$.
Also it clearly holds that $\lim_{n\rightarrow-\infty}\frac{A_1S_n}{\sup_{m\leq n}A_1S_m}=\lim_{n\rightarrow-\infty}\frac{A_2S_n}{\sup_{m\leq n}A_2S_m}=1$. Therefore, by Lemma \ref{Sinv}, $S\in\bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{sub-critical(i)}^-\cap\mathcal{S}_{sub-critical(i)}^+\right)$. However, the configuration of $T_2\eta$ is as follows\:: \vspace{5pt} \makebox[0.37cm][r]{$T_2\eta=\ $}$(\cdots\ 0\ \eta_0=0\ 0\ 0_{(1)}\ (2\ 1)_{(1)}\ 0\ (0\ 1)_{(2)}\ 0\ 0_{(3)}\ (2\ 1)_{(3)}\ 0\ (0\ 1)_{(4)}\ 0\ 0_{(5)}\ (2\ 1)_{(5)}\ 0\ (0\ 1)_{(6)}\cdots$ \vspace{5pt} \makebox[2.6cm][r]{}$\cdots\cdots\ 0\ 0_{(2m-1)}\ (2\ 1)_{(2m-1)}\ 0\ (0\ 1)_{(2m)}\ \cdots).$ \vspace{5pt} \noindent Figure 10, the graph of $A_1T_2S_n$, shows that $\liminf_{n\rightarrow\infty}\frac{A_1T_2S_n}{\sup_{m\leq n}A_1T_2S_m}=\frac12$. Therefore, $T_2S\notin\mathcal{S}_{sub-critical(1)}^+$. Also $T_2S\notin\mathcal{S}_{critical(1)}^+$ is obvious. Then, from Lemma \ref{Sinv}, $T_2S\notin\mathcal{S}^{inv}_1$. Such a phenomenon occurs because $W^{(2)}_n$ can be arbitrarily large and it causes a gap between the asymptotic behavior of $A_1T_2S_n$ and that of $A_1S_n$ as $n\rightarrow\infty$ from the equation $A_1T_2S_n=A_1S_n+ W^{(2)}_n-M^{(2)}_0$ by \eqref{a'1}.
\vspace{5pt} \begin{figure}[h] \centering \scalebox{0.30}{\includegraphics{A2S.pdf}} \vspace{4pt} \caption{The graph of $A_2S_n$ skipping places where there is no increase or decrease.} \end{figure} \vspace{6pt} \begin{figure}[h] \centering \scalebox{0.30}{\includegraphics{A1S.pdf}} \caption{The graph of $A_1S_n$ skipping places where there is no increase or decrease.} \end{figure} \vspace{6pt} \begin{figure}[h] \centering \scalebox{0.30}{\includegraphics{A1T2S.pdf}} \vspace{4pt} \caption{The graph of $A_1T_2S_n$ skipping places where there is no increase or decrease.} \end{figure} $(b)$\ We give an example of $\xi$ whose path encoding $S^{(\xi)}$ satisfies \[S^{(\xi)}\in\bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{critical(i)}^-\cap\mathcal{S}_{critical(i)}^+\right),\ T_2S^{(\xi)}\notin\mathcal{S}_{sub-critical(1)}^+,\ T_2S^{(\xi)}\notin\mathcal{S}_{critical(1)}^+.\] Let $\xi$ be as follows\:: \vspace{5pt} \makebox[1.85cm][r]{$\xi=\ $}$(\cdots\ 0\ 1\ 2\ 0\ 1\ 2\ 0\ 2\ 1\ 0\ 1\ 2\ 0\ 1\ 2\ 0\ 1\ 2\ \cdots).$ \vspace{5pt} \noindent Then, \vspace{5pt} \makebox[1.85cm][r]{$T_2\xi=\ $}$(\cdots\ 2\ 1\ 0\ 2\ 1\ 0\ 2\ 0\ 1\ 2\ 1\ 0\ 2\ 1\ 0\ 2\ 1\ 0\ \cdots)$ \vspace{5pt} \noindent and these satisfy the above conditions. $(c)$\ We give an example of $\zeta$ whose path encoding $S^{(\zeta)}$ satisfies \[S^{(\zeta)}\in\bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{critical(i)}^-\cap\mathcal{S}_{critical(i)}^+\right),\ S^{(\zeta)}\in\mathcal{S}^{inv}_\mathcal{C}.\] Let $\zeta$ be as follows\:: \vspace{5pt} \makebox[1.85cm][r]{$\zeta=\ $}$(\cdots\ 0\ 1\ 2\ 0\ 1\ 2\ 0\ 1\ 2\ 0\ 1\ 2\ 0\ 1\ 2\ 0\ 1\ 2\ \cdots).$ \vspace{5pt} \noindent$TS^{(\zeta)}\in\bigcap_{i\in\mathcal{C}}\left(\mathcal{S}_{critical(i)}^-\cap\mathcal{S}_{critical(i)}^+\right)$, where $T$ is any composition of $T_1$ and $T_2$, because the configuration of $T\zeta$ is always repeating $(012)$ or $(021)$. Therefore, it holds that $S^{(\zeta)}\in\mathcal{S}^{inv}_{\mathcal{C}}$.
} \section{Random initial configurations} In this section, we consider the case when the initial configuration is random. Suppose that $\eta=(\eta_n)_{n\in\mathbb{Z}}$ is an ergodic sequence which is stationary with respect to the space shift. In particular, if we assume that the densities of the balls satisfy \begin{equation}\label{density} p_i=\mathbf{P}(\eta_0=i)<p_0=\mathbf{P}(\eta_0=0),\ \ \ \forall i\in\mathcal{C}, \end{equation} then ergodicity implies that $A_iS$ satisfies \[\frac{A_iS_n}{n}=\frac{a_0(n)-a_i(n)}{n}\rightarrow p_0-p_i>0,\ \ \ \mathbf{P}\mbox{-a.s.}\] as $n\rightarrow\pm\infty$. Thus we obtain the following result, which, combined with Theorem \ref{invariantthm}, yields that $(T^kS)_{k\in\mathbb{Z}}$ is well-defined and reversible. {\lem \label{ergodic}If $\eta=(\eta_n)_{n\in\mathbb{Z}}$ is a stationary, ergodic sequence satisfying \eqref{density}, then it holds that\[\frac{A_iS_n}{(p_0-p_i)n}\rightarrow 1,\ \ \ \mathbf{P}\mbox{-a.s.}\] as $n\rightarrow\pm\infty$ for any $i\in\mathcal{C}$. In particular, $S\in\mathcal{S}^{good}_{\mathcal{C}},\ \mathbf{P}\mbox{-a.s.}$.} Next, for a random initial configuration, it is natural to ask whether the law of $\eta$ is preserved by $T_i$, that is, whether $T_i\eta\buildrel{d}\over{=}\eta$. We give an example of an invariant measure in Section \ref{iid}. Moreover, we consider a generalized multicolor BBS whose dynamics are defined for continuous paths in $\mathbb{R}^\kappa$, and generalize each object appearing in the discrete case (Section \ref{onR}). In Section \ref{BMD}, we check that $\kappa$-dimensional Brownian motion with a certain drift is invariant under the action of the generalized multicolor BBS, and that it arises as an appropriate scaling limit of the asymmetric random walk with increment distribution $\mathbf{P}(S_m-S_{m-1}=e_j)=\frac1{\kappa+1}+\frac{c_j}{\sqrt{n\kappa}},\ j\in\mathcal{C}\cup\{0\}$, where $c_0>c_i$ for all $i\in\mathcal{C}$, which represents a high-density particle configuration.
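Since $A_iS_n=a_0(n)-a_i(n)$, the almost sure convergence in Lemma \ref{ergodic} is easy to probe numerically in the i.i.d. case of the next subsection. The following is a minimal Python sketch; the number of colors, the densities $p_j$ and the random seed are illustrative choices, not taken from the text.

```python
import random

# Numerical sketch of Lemma [ergodic] for an i.i.d. configuration:
# A_i S_n = a_0(n) - a_i(n), the number of empty boxes minus the number
# of balls of color i among eta_1, ..., eta_n, should satisfy
# A_i S_n / n -> p_0 - p_i  as n grows.

def sample_configuration(probs, n, seed=0):
    """Sample eta_1, ..., eta_n i.i.d. with P(eta_k = j) = probs[j]."""
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=n)

def A(eta, i, n):
    """A_i S_n = #{k <= n : eta_k = 0} - #{k <= n : eta_k = i}."""
    head = eta[:n]
    return head.count(0) - head.count(i)

probs = [0.5, 0.3, 0.2]      # p_0 = 0.5 > p_1 = 0.3 > p_2 = 0.2 (kappa = 2)
n = 200_000
eta = sample_configuration(probs, n)

for i in (1, 2):
    print(f"A_{i}S_n/n = {A(eta, i, n) / n:.4f}  (p_0 - p_{i} = {probs[0] - probs[i]:.1f})")
```

With $n=2\cdot10^5$ the empirical ratios agree with $p_0-p_i$ to about two decimal places, in line with the $O(n^{-1/2})$ fluctuations of the i.i.d. partial sums.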
\subsection{Independent and identically distributed initial configuration}\label{iid} Suppose that $\eta=(\eta_n)_{n\in\mathbb{Z}}$ is given by a sequence of i.i.d. random variables with the following distribution \begin{equation}\label{iidprob} p_i=\mathbf{P}(\eta_0=i)<p_0=\mathbf{P}(\eta_0=0),\ \ \ \forall i\in\mathcal{C}. \end{equation} Then $\eta$ satisfies \eqref{density} and the conditions in Lemma \ref{ergodic}. Furthermore, $S$ is a random walk path in $\mathbb{R}^{\kappa}$ satisfying $S_0=0$ and \[\mathbf{P}(S_n-S_{n-1}=e_j)=p_j,\ \ \ \forall j\in\mathcal{C}\cup\{0\},\] where the increments of $S$ are independent. {\thm \label{iidinvariant} If $\eta=(\eta_n)_{n\in\mathbb{Z}}$ is given by a sequence of i.i.d. random variables with \eqref{iidprob}, it holds that \[T_i\eta\buildrel{d}\over{=}\eta\] for any $i\in\mathcal{C}$.} \begin{proof} We introduce some notation. For each $n\in\mathbb{Z}$, define the transform $f_n:\{0,1,\cdots,\kappa\}^{\mathbb{Z}}\rightarrow\{1,\cdots,\kappa\}$ given by \[f_n(\eta)=\left\{\begin{array}{ll} \eta_n, & \mbox{if }\eta_n\not\in\{0,i\},\\ i, & \mbox{if }\eta_n\in\{0,i\}.\\ \end{array}\right.\] Then it holds that $\left(f_n(T_i\eta)\right)_{n\in\mathbb{Z}}=\left(f_n(\eta)\right)_{n\in\mathbb{Z}}$ for each $\eta\in\{0,1,\cdots,\kappa\}^{\mathbb{Z}}$. For each $n\in\mathbb{Z}$ and $\eta\in\{0,1,\cdots,\kappa\}^{\mathbb{Z}}$, define a subsequence $\{k_n(\eta)\}_{n\in\mathbb{Z}}$ of $\mathbb{Z}$ given by \[k_n(\eta)=\left\{\begin{array}{ll} \min\left\{m\in\mathbb{Z}\::\:m>0,\ \eta_m\in\{0,i\}\right\}, & \mbox{if }n=0,\\ \min\left\{m\in\mathbb{Z}\::\:m> k_{n-1}(\eta),\ \eta_m\in\{0,i\}\right\}, & \mbox{if }n\geq1,\\ \max\left\{m\in\mathbb{Z}\::\:m< k_{n+1}(\eta),\ \eta_m\in\{0,i\}\right\}, & \mbox{if }n\leq-1.\\ \end{array}\right.\] This $\{k_n(\eta)\}_{n\in\mathbb{Z}}$ is well-defined for $\eta$ almost everywhere.
For each $n\in\mathbb{Z}$, define the transform $g_n:\{0,1,\cdots,\kappa\}^{\mathbb{Z}}\rightarrow\{0,i\}$ given by \[g_n(\eta)=\eta_{k_n(\eta)}.\] Then it holds that $\left(g_n(T_i\eta)\right)_{n\in\mathbb{Z}}=T_i\left(g_n(\eta)\right)_{n\in\mathbb{Z}}$ for each $\eta\in\{0,1,\cdots,\kappa\}^{\mathbb{Z}}$. Moreover, \cite{DS} shows that \[T_i\left(g_n(\eta)\right)_{n\in\mathbb{Z}}\buildrel{d}\over{=}\left(g_n(\eta)\right)_{n\in\mathbb{Z}}.\] Denote the $\sigma$-fields \[\mathcal{F}:=\sigma\left(f_n,\ n\in\mathbb{Z}\right),\ \ \mathcal{G}:=\sigma\left(g_n,\ n\in\mathbb{Z}\right).\] It is obvious that $f_n(\eta)$ and $g_m(\eta)$ are independent for any $n,m\in\mathbb{Z}$, and hence so are $\mathcal{F}$ and $\mathcal{G}$. Also, $f_n(\eta)$ and $g_m(T_i\eta)$ are independent. The configuration $\eta$ is determined by $\left(f_n(\eta)\right)_{n\in\mathbb{Z}}$ and $\left(g_n(\eta)\right)_{n\in\mathbb{Z}}$, so there is a transform $\varphi$ such that \[\varphi\left(\left(f_n(\eta)\right)_{n\in\mathbb{Z}},\ \left(g_n(\eta)\right)_{n\in\mathbb{Z}}\right)=\eta,\ \ \forall \eta\in\{0,1,\cdots,\kappa\}^{\mathbb{Z}},\] which is measurable with respect to the product $\sigma$-field $\mathcal{F}\times\mathcal{G}$. Then it holds that \begin{align*} T_i\eta&=\varphi\left(\left(f_n(T_i\eta)\right)_{n\in\mathbb{Z}},\ \left(g_n(T_i\eta)\right)_{n\in\mathbb{Z}}\right)\\ &=\varphi\left(\left(f_n(\eta)\right)_{n\in\mathbb{Z}},\ T_i\left(\left(g_n(\eta)\right)_{n\in\mathbb{Z}}\right)\right)\\ &\buildrel{d}\over{=}\varphi\left(\left(f_n(\eta)\right)_{n\in\mathbb{Z}},\ \left(g_n(\eta)\right)_{n\in\mathbb{Z}}\right)\\ &=\eta. \end{align*} Here the distributional equality follows from the independence noted above together with $T_i\left(g_n(\eta)\right)_{n\in\mathbb{Z}}\buildrel{d}\over{=}\left(g_n(\eta)\right)_{n\in\mathbb{Z}}$. \end{proof} {\cor In the same setting as Theorem \ref{iidinvariant}, it holds that \[T\eta\buildrel{d}\over{=}\eta,\] where $T=T_\kappa\circ\cdots\circ T_1$. } \subsection{Multicolor BBS on $\mathbb{R}$}\label{onR} In this section, we consider a generalization of the multicolor BBS, whose dynamics are defined for continuous paths in $\mathbb{R}^\kappa$.
First, we define the Pitman transform for continuous paths. {\df \label{twoside pitman conti} Let $\alpha\in{\mathbb{R}}^k,\ \alpha\ne0$. The two-sided Pitman transform $P_\alpha$ with respect to $\alpha$ is defined on the set \[\{\pi:\mathbb{R}\to {\mathbb{R}}^k,\ \pi(0)=0,\ \inf_{y\leq0}\alpha\cdot\pi(y)>-\infty\}\] by the formula \[P_\alpha\pi(x)=\pi(x)-2\inf_{y\leq x}\frac{\alpha\cdot\pi(y)}{|\alpha|^2}\alpha+2\inf_{y\leq 0}\frac{\alpha\cdot\pi(y)}{|\alpha|^2}\alpha,\ \ \ x\in\mathbb{R}. \] Similarly to the discrete case, for $k=1$, it holds that \[P_\alpha\pi(x)=\pi(x)-2\inf_{y\leq x}\pi(y)+2\inf_{y\leq 0}\pi(y),\ \ \ x\in\mathbb{R}\] for any $\alpha\in\mathbb{R},\ \alpha\ne0$, and it does not depend on $\alpha$. Then we define \[P_1:=P_\alpha\ \ \ \mbox{for }\alpha \in\mathbb{R},\ \alpha\ne0.\] Also, we define the transform ${P_\alpha}^{-1}$ on the set \[\{\pi:\mathbb{R}\to {\mathbb{R}}^k,\ \pi(0)=0,\ \inf_{y\geq0}\alpha\cdot\pi(y)>-\infty\}\] by the formula \[{P_\alpha}^{-1}\pi(x)=\pi(x)-2\inf_{y\geq x}\frac{\alpha\cdot\pi(y)}{|\alpha|^2}\alpha+2\inf_{y\geq 0}\frac{\alpha\cdot\pi(y)}{|\alpha|^2}\alpha,\ \ \ x\in\mathbb{R}, \] and, for $k=1$, \[{P_1}^{-1}\pi(x)=\pi(x)-2\inf_{y\geq x}\pi(y)+2\inf_{y\geq 0}\pi(y),\ \ \ x\in\mathbb{R}.\] } Unlike the discrete case, we cannot describe the particle configuration $\eta$ directly, so we consider the dynamics for the path encoding $S$ only. By analogy with the relevant discrete objects, define the path space \[\mathcal{S}^0_c=\{S:\mathbb{R}\rightarrow\mathbb{R}^\kappa\::\:S_0=0,\ S \mbox{ is continuous} \}.\] As extensions of \eqref{A2} in Remark \ref{Adef2} and \eqref{tau2} in Remark \ref{taudef2}, we define $A_i$ and $\tau$ as follows.
{\df Define $A_i:\mathcal{S}^0_c\rightarrow C(\mathbb{R},\mathbb{R})$ and $\tau_{(0,i)}:\mathcal{S}^0_c\rightarrow\mathcal{S}^0_c$ as follows\:: \[A_iS_x=-2\frac{(e_i-e_0)\cdot S_x}{|e_i-e_0|^2},\] \[\tau_{(0,i)}S_x=S_x+A_iS_x(e_i-e_0)\] for $x\in\mathbb{R}$.} It is the case that the projection of $S_x$ along $e_i-e_0$ is $-\frac{1}{2}A_iS_x\left(e_i-e_0\right)$, and $S_x$ is decomposed into the sum as follows\:: \begin{equation}\label{projectionexpression} S_x=-\frac12A_iS_x(e_i-e_0)+\left\{S_x+\frac12A_iS_x(e_i-e_0)\right\}, \end{equation} and also it holds that \begin{equation}\label{tauconti} \tau_{(0,i)}S_x=\frac12A_iS_x(e_i-e_0)+\left\{S_x+\frac12A_iS_x(e_i-e_0)\right\}. \end{equation} \vspace{5pt} Then we can define the dynamics of the generalized multicolor BBS, given by \[T_i=\tau_{(0,i)}P_{e_i-e_0},\ \ \mbox{on}\ \{S\in\mathcal{S}^0_c\::\:\limsup_{x\rightarrow-\infty}A_iS_x<\infty\},\] \[T^{-1}_i=P^{-1}_{e_i-e_0}\tau_{(0,i)},\ \ \mbox{on}\ \{S\in\mathcal{S}^0_c\::\:\liminf_{x\rightarrow\infty}A_iS_x>-\infty\}\] for each $i\in\mathcal{C}$. Moreover, the previous definitions of $A_i$ and $\tau_{(0,i)}$ yield the following alternative expression for $T_i$. {\thm It holds that \begin{equation}\label{Texpression} T_iS_x=S_x+\left(A_iS_x-\sup_{y\leq x}A_iS_y+\sup_{y\leq 0}A_iS_y\right)(e_i-e_0),\ \ x\in\mathbb{R}, \end{equation} for any $i\in\mathcal{C}$.
} \begin{proof} From \eqref{projectionexpression} and \eqref{tauconti}, it holds that \begin{align*} T_iS_x &=\tau_{(0,i)}P_{e_i-e_0}S_x\\ &=\tau_{(0,i)}P_{e_i-e_0}\left(-\frac12A_iS(e_i-e_0)+\left\{S+\frac12A_iS(e_i-e_0)\right\}\right)_x\\ &=\tau_{(0,i)}\left(P_{1}\left(-\frac12A_iS\right)_x(e_i-e_0)+\left\{S_x+\frac12A_iS_x(e_i-e_0)\right\}\right)\\ &=\tau_{(0,i)}\left(\frac12P_{1}\left(-A_iS\right)_x(e_i-e_0)+\left\{S_x+\frac12A_iS_x(e_i-e_0)\right\}\right)\\ &=-\frac12P_{1}\left(-A_iS\right)_x(e_i-e_0)+\left\{S_x+\frac12A_iS_x(e_i-e_0)\right\}\\ &=-\frac12\left\{-A_iS_x-2\inf_{y\leq x}(-A_iS_y)+2\inf_{y\leq 0}(-A_iS_y)\right\}(e_i-e_0)+\left\{S_x+\frac12A_iS_x(e_i-e_0)\right\}\\ &=S_x+\left(A_iS_x-\sup_{y\leq x}A_iS_y+\sup_{y\leq 0}A_iS_y\right)(e_i-e_0). \end{align*} \end{proof} As in the discrete case, it is natural to seek to characterize the set \[\mathcal{S}^{inv}_{\mathcal{C},c}:=\{S\in\mathcal{S}^0_c\::\:TS\in\bigcap_{i\in\mathcal{C}}\mathcal{S}^{rev}_{i,c}\mbox{\ for any composition }T\mbox{ of }T_i,\ i\in\mathcal{C}\},\] where \[\mathcal{S}^{rev}_{i,c}=\{S\in\mathcal{S}^0_c\::\:T_iS,T^{-1}_iS,T^{-1}_iT_iS,T_iT^{-1}_iS\mbox{ are well-defined},\ T^{-1}_iT_iS=T_iT^{-1}_iS=S\}.\] The following result is obtained by an argument similar to that in the discrete case.
{\thm It holds that \[\mathcal{S}^{inv}_{\mathcal{C},c}\supseteq\mathcal{S}^{good}_{\mathcal{C},c},\] where \[\mathcal{S}^{good}_{\mathcal{C},c}:=\left\{S\in \mathcal{S}^0_c\::\:\forall i\in\mathcal{C}\ \exists F_i\in\mathcal{F}_c,\ \lim_{x\rightarrow\pm\infty}\frac{A_iS_x}{F_i(x)}=1\mbox{ and }\limsup_{x\rightarrow\pm\infty}\frac{F_j(x)}{F_i(x)}<\infty\ \forall i,j\in\mathcal{C}\right\},\] and \[\mathcal{F}_c=\{F:\mathbb{R}\rightarrow\mathbb{R}\::\:\mbox{increasing function, }\lim_{x\rightarrow\infty} F(x)=\infty,\ \lim_{x\rightarrow-\infty} F(x)=-\infty\}.\] } \subsection{Brownian motion with drift}\label{BMD} Next, we consider a stochastic process whose path belongs to $\mathcal{S}^{good}_{\mathcal{C},c}$ almost surely. As an example, let $S=(S_x)_{x \in \mathbb{R}}$ be a two-sided $\kappa$-dimensional standard Brownian motion with drift ${D}\in\mathbb{R}^\kappa$. Namely, for $x\geq 0$, we define $S_x=B^1_x+x{D}$, $S_{-x}=-\left(B^2_x+x{D}\right)$, where $B^1,B^2$ are independent standard Brownian motions in $\mathbb{R}^\kappa$. Since \[A_iS_x=-2\frac{(e_i-e_0)\cdot B^1_x}{|e_i-e_0|^2}-2x\frac{(e_i-e_0)\cdot {D}}{|e_i-e_0|^2}\] for $x\ge0$, the condition that $\lim_{x\rightarrow\infty}\frac{A_iS_x}{F_i(x)}=1$ a.s. for some $F_i\in\mathcal{F}_c$ is satisfied if and only if $(e_i-e_0)\cdot {D}<0$, and we can take $F_i(x)=-2x\frac{(e_i-e_0)\cdot {D}}{|e_i-e_0|^2},\ x\ge0$. Similarly, $\lim_{x\rightarrow-\infty}\frac{A_iS_x}{F_i(x)}=1$ a.s. for some $F_i\in\mathcal{F}_c$ if and only if $(e_i-e_0)\cdot {D}<0$.
Therefore, it holds that \[S\in\mathcal{S}^{good}_{\mathcal{C},c},\ a.s.\ \Leftrightarrow\ (e_i-e_0)\cdot {D}<0,\ \ \forall i\in\mathcal{C}.\] On the other hand, from Proposition \ref{vectorsproperty} $($\hspace{-1pt}ⅵ\hspace{-1pt}$)$, for each ${D}\in\mathbb{R}^\kappa$ there is a $(\kappa+1)$-tuple $c_0,\cdots,c_\kappa$ of real numbers such that \[{D}=c_0e_0+\cdots+c_\kappa e_\kappa,\ \ c_0+\cdots+c_\kappa=0,\] and, by Proposition \ref{vectorsproperty} $($\hspace{-1pt}ⅳ\hspace{-1pt}$)$, it holds that \begin{align*} (e_i-e_0)\cdot{D}&=(e_i-e_0)\cdot\left(\frac12(c_i-c_0)(e_i-e_0)+\sum_{j\ne 0,i}\left(c_j-\frac{c_i+c_0}{2}\right)e_j\right)\\ &=(c_i-c_0)\frac{|e_i-e_0|^2}{2}. \end{align*} Thus we obtain the following set\:: \begin{align*} \mathcal{D}:=&\{D\in\mathbb{R}^\kappa\::\:(e_i-e_0)\cdot {D}<0,\ \ \forall i\in\mathcal{C}\}\\ =&\{{D}\in\mathbb{R}^\kappa\::\:{D}=c_0e_0+\cdots+c_\kappa e_\kappa,\ c_0>c_i,\ \forall i\in\mathcal{C},\ c_0+\cdots+c_\kappa=0\}, \end{align*} and it is the case that \[S\in\mathcal{S}^{good}_{\mathcal{C},c},\ a.s.\ \Leftrightarrow\ D\in\mathcal{D}.\] The main theorem of this subsection is the following, which implies that any Brownian motion with drift whose path belongs to $\mathcal{S}^{good}_{\mathcal{C},c}$ is invariant under the action of the generalized multicolor BBS. {\thm\label{BMinvariant} If $S$ is the two-sided $\kappa$-dimensional standard Brownian motion with drift ${D}\in\mathcal{D}$, then $T_iS \buildrel{d}\over{=}S$ for each $i\in\mathcal{C}$.} {\cor In the same setting as Theorem \ref{BMinvariant}, it holds that \[TS\buildrel{d}\over{=}S,\] where $T=T_\kappa\circ\cdots\circ T_1$. } \vspace{10pt} Before proving this main theorem, we show that Brownian motion with drift is obtained as a scaling limit of a simple asymmetric random walk.
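The identity $(e_i-e_0)\cdot D=(c_i-c_0)\frac{|e_i-e_0|^2}{2}$, and hence the description of $\mathcal{D}$, can be checked numerically. The sketch below realizes $e_0,\cdots,e_\kappa$ as vertices of a regular simplex with $|e_j|=1$, $e_j\cdot e_k=-1/\kappa$ for $j\ne k$ and $e_0+\cdots+e_\kappa=0$, which appears to be the normalization used in \eqref{length} and \eqref{product}; the concrete embedding in $\mathbb{R}^{\kappa+1}$ and the coefficients $c_j$ are illustrative assumptions, and only the inner products matter for the check.

```python
import math

# Numerical check of the identity (e_i - e_0)·D = (c_i - c_0)|e_i - e_0|^2 / 2
# for D = c_0 e_0 + ... + c_kappa e_kappa with sum c_j = 0.  The vectors
# e_0, ..., e_kappa are realized as centered, normalized standard basis
# vectors of R^{kappa+1} (an assumed concrete embedding).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def simplex_vectors(kappa):
    """Return e_0, ..., e_kappa with |e_j| = 1 and e_j·e_k = -1/kappa (j != k)."""
    d = kappa + 1
    scale = math.sqrt(d / kappa)          # normalizes |f_j - mean| to 1
    return [[scale * ((1.0 if j == m else 0.0) - 1.0 / d) for m in range(d)]
            for j in range(d)]

kappa = 3
e = simplex_vectors(kappa)
c = [0.6, -0.1, -0.2, -0.3]               # c_0 > c_i, sum c_j = 0 (illustrative)
D = [sum(c[j] * e[j][m] for j in range(kappa + 1)) for m in range(kappa + 1)]

for i in range(1, kappa + 1):
    diff = [e[i][m] - e[0][m] for m in range(kappa + 1)]
    lhs = dot(diff, D)
    rhs = (c[i] - c[0]) * dot(diff, diff) / 2
    print(f"i={i}: (e_i-e_0)·D = {lhs:.6f}, (c_i-c_0)|e_i-e_0|^2/2 = {rhs:.6f}")
```

Since $c_0>c_i$ for every $i\in\mathcal{C}$ in this example, each printed pair is equal and negative, so the chosen $D$ lies in $\mathcal{D}$.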
From now on, fix $c_0,\cdots,c_\kappa$ satisfying $c_0>c_i\ \forall i\in\mathcal{C},\ c_0+\cdots+c_\kappa=0$, and define \[{D}=c_0e_0+\cdots+c_\kappa e_\kappa\] and \[p^{(n)}_i=\frac1{\kappa+1}+\frac{c_i}{\sqrt {n\kappa}},\ \ i\in\mathcal{C}\cup\{0\},\] for $n$ large enough that $0<p^{(n)}_i<1$ for all $i\in\mathcal{C}\cup\{0\}$. Then we introduce a vector-valued random variable $\xi^{(n)}$ with distribution \begin{equation}\label{xidist} \mathbf{P}(\xi^{(n)}=e_i)=p^{(n)}_i,\ \ i\in\mathcal{C}\cup\{0\}. \end{equation} Moreover, let $\left\{\zeta^{(n)}_j\right\}_{j\in\mathbb{Z}}$ be a sequence of independent, identically distributed vector-valued random variables, each with the same distribution as $\xi^{(n)}$. Also, we define the sequence of partial sums \[S^{(n)}_{[x]}=\left\{\begin{array}{ll} \zeta^{(n)}_1+\cdots+\zeta^{(n)}_{[x]}, & \mbox{if }[x]\ge1,\\ 0, & \mbox{if }[x]=0,\\ -\left(\zeta^{(n)}_{-1}+\cdots+\zeta^{(n)}_{[x]}\right), & \mbox{if }[x]\le-1, \end{array}\right.\] and its linear interpolation \begin{equation}\label{Y} Y^{(n)}_x=S^{(n)}_{[x]}+\left(x-[x]\right)\zeta^{(n)}_{[x]+1},\ \ x\in\mathbb{R}. \end{equation} We introduce the notation $\mu^{p^{(n)}}$ to represent the probability measure on $\mathcal{S}^0_c$ induced by the stochastic process $(Y^{(n)}_x)_{x\in\mathbb{R}}$. As shown in Theorem \ref{iidinvariant}, $\mu^{p^{(n)}}$ is invariant under $T_i$ for any $i\in\mathcal{C}$. As explained above, let $S=(S_x)_{x \in \mathbb{R}}$ be the two-sided $\kappa$-dimensional Brownian motion with drift ${D}\in\mathbb{R}^\kappa$, and denote by $\nu_D$ the probability measure on $\mathcal{S}^0_c$ induced by $S=(S_x)_{x \in \mathbb{R}}$. Also, we write $\mu_{a,b}$ for the scaled measure such that \[\mu_{a,b}\left(S \in A\right)= \mu\left( aS_{b \cdot} \in A\right),\] for a probability measure $\mu$ on $\mathcal{S}^0_c$ and $a,b>0$. The following theorem is a version of Donsker's invariance principle.
{\thm\label{donsker} $\nu_n:=\mu^{p^{(n)}}_{\frac{\sqrt\kappa}{\sqrt n},n}$ converges weakly to $\nu_{D}$.} To prove this theorem, we prepare some lemmas. {\lem \label{property5} For any $u\in\mathbb{R}^\kappa$ satisfying $|u|=1$, it holds that \[(e_0\cdot u)^2+\cdots+(e_\kappa\cdot u)^2=\frac{\kappa+1}{\kappa}.\] } \begin{proof} By Proposition \ref{vectorsproperty} $($\hspace{-1pt}ⅴ\hspace{-1pt}$)$, there are $a_0,\cdots,a_\kappa\in\mathbb{R}$ such that \[u=a_0e_0+\cdots+a_\kappa e_\kappa.\] The condition $|u|=1$, together with \eqref{length} and \eqref{product}, shows that \[\sum^{\kappa}_{i=0}a^2_i-\frac2{\kappa}\sum_{i\ne j}a_ia_j=1.\] Then it holds that \begin{align*} \sum^{\kappa}_{i=0}(e_i\cdot u)^2&=\sum^{\kappa}_{i=0}\left(a_i-\frac1{\kappa}\sum_{j\ne i}a_j\right)^2\\ &=\sum^{\kappa}_{i=0}\left\{a^2_i-\frac{2a_i}{\kappa}\sum_{j\ne i}a_j+\frac1{\kappa^2}\left(\sum_{j\ne i}a_j\right)^2\right\}\\ &=\frac{\kappa+1}{\kappa}\sum^{\kappa}_{i=0}a^2_i-\frac{2(\kappa+1)}{\kappa^2}\sum_{i\ne j}a_ia_j\\ &=\frac{\kappa+1}{\kappa}. \end{align*} \end{proof} {\lem \label{property6} For any $u,\,v\in\mathbb{R}^\kappa$ satisfying $u\cdot v=0$, it holds that \[(e_0\cdot u)(e_0\cdot v)+\cdots+(e_\kappa\cdot u)(e_\kappa\cdot v)=0.\] } \begin{proof} By Proposition \ref{vectorsproperty} $($\hspace{-1pt}ⅴ\hspace{-1pt}$)$, there are $a_0,\cdots,a_\kappa,b_0,\cdots,b_\kappa\in\mathbb{R}$ such that \[u=a_0e_0+\cdots+a_\kappa e_\kappa,\] \[v=b_0e_0+\cdots+b_\kappa e_\kappa.\] The condition $u\cdot v=0$, together with \eqref{length} and \eqref{product}, shows that \[\sum^{\kappa}_{i=0}a_ib_i-\frac1{\kappa}\sum_{i\ne j}a_ib_j=0.\] Then it holds that \begin{align*} \sum^{\kappa}_{i=0}(e_i\cdot u)(e_i\cdot v)&=\sum^{\kappa}_{i=0}\left(a_i-\frac1{\kappa}\sum_{j\ne i}a_j\right)\left(b_i-\frac1{\kappa}\sum_{j\ne i}b_j\right)\\ &=\frac{\kappa+1}{\kappa}\sum^{\kappa}_{i=0}a_ib_i-\frac{\kappa+1}{\kappa^2}\sum_{i\ne j}a_ib_j=0.
\end{align*} \end{proof} {\lem \label{property8} $(a)$\ For each $i\in\mathcal{C}\cup\{0\}$, we denote the components of $e_i$ as follows\:: \[e_0=\left(\begin{array}{c}e_{0,1}\\e_{0,2}\\e_{0,3}\\ \vdots\\ e_{0,\kappa-1}\\e_{0,\kappa}\end{array}\right),\ \ e_1=\left(\begin{array}{c}e_{1,1}\\e_{1,2}\\e_{1,3}\\ \vdots\\ e_{1,\kappa-1}\\e_{1,\kappa}\end{array}\right),\ \cdots,\ e_\kappa=\left(\begin{array}{c}e_{\kappa,1}\\e_{\kappa,2}\\e_{\kappa,3}\\ \vdots\\ e_{\kappa,\kappa-1}\\e_{\kappa,\kappa}\end{array}\right).\] For any $s,t\in\mathcal{C},\ s\ne t$, it holds that \[e^2_{0,s}+\cdots+e^2_{\kappa,s}=\frac{\kappa+1}{\kappa},\] \[e_{0,s}e_{0,t}+\cdots+e_{\kappa,s}e_{\kappa,t}=0.\] $(b)$\ For each $n$, we denote the components of $\xi^{(n)}$ by $\xi^{(n)}=(\xi^{(n)}_{1},\cdots,\xi^{(n)}_{\kappa})$. For any $s,t\in\mathcal{C},\ s\ne t$, it holds that \[\lim_{n\rightarrow\infty}\mathbf{E}(\xi^{(n)}_s)=0,\ \lim_{n\rightarrow\infty}\mathbf{V}(\xi^{(n)}_s)=\frac1{\kappa},\ \lim_{n\rightarrow\infty}\mathbf{E}(\xi^{(n)}_s\xi^{(n)}_t)=0.\] } \begin{proof} In Lemmas \ref{property5} and \ref{property6}, let $u=(\delta_{s\,1},\cdots,\delta_{s\,\kappa})$ and $v=(\delta_{t\,1},\cdots,\delta_{t\,\kappa})$ for $s,t\in\mathcal{C},\ s\ne t$, where $\delta$ is the Kronecker delta. Then the two equations in (a) follow directly. Let $\xi$ be a vector-valued random variable with distribution \begin{equation}\label{uniformprob} \mathbf{P}(\xi=e_i)=\frac{1}{\kappa+1},\ \ i\in\mathcal{C}\cup\{0\}, \end{equation} and denote its components by $\xi=(\xi_1,\cdots,\xi_\kappa)$. Then, by Proposition \ref{vectorsproperty}, \[\mathbf{E}(\xi_s)=0,\ \ s\in\mathcal{C},\] where $\mathbf{E}$ is the expectation with respect to $\mathbf{P}$.
Also, the equations in (a) show that \[\mathbf{V}(\xi_s)=\frac1{\kappa},\ \ s\in\mathcal{C},\] where $\mathbf{V}$ is the variance with respect to $\mathbf{P}$, and \[\mathbf{E}(\xi_s\xi_t)=0,\ \ s,t\in\mathcal{C},s\ne t.\] The distributions \eqref{xidist} and \eqref{uniformprob} imply that $\xi^{(n)}$ converges in distribution to $\xi$ as $n\rightarrow\infty$; since these random variables are uniformly bounded, the moments converge as well, which shows claim (b). \end{proof} {\rem \label{Dcomponent} Denote the components of $D=c_0e_0+\cdots+c_\kappa e_\kappa$ by $D=(D_1,\cdots,D_\kappa)$. Then it holds that \[\mathbf{E}\left(\xi^{(n)}_j\right)=\frac{1}{\sqrt {n\kappa}}D_j,\ \ j\in\mathcal{C}.\] This follows directly from \eqref{xidist}. } \vspace{10pt} To prove Theorem \ref{donsker}, it is enough to show the following two claims.\\ (1) The finite-dimensional distributions of $\nu_n$ converge weakly to those of $\nu_{D}$.\\ (2) $\left\{\nu_n\right\}_n$ is tight. \vspace{5pt} We prove (1) as Proposition \ref{finitedist}, and establish a condition equivalent to (2) as Proposition \ref{tight}. In the proofs of Propositions \ref{finitedist} and \ref{tight}, we write $|\cdot|$ for the Euclidean norm. {\prop\label{finitedist} Define the stochastic process \[X^{(n)}_x=\frac{\sqrt\kappa}{\sqrt n}Y^{(n)}_{nx},\] where $Y^{(n)}$ is given by \eqref{Y}. Then, for any $0\leq x_1<\cdots<x_d<\infty$, \[\left(X^{(n)}_{x_1},\cdots,X^{(n)}_{x_d}\right)\buildrel{d}\over{\rightarrow}\left(B_{x_1}+x_1{D},\cdots,B_{x_d}+x_d {D}\right)\ \ \ \ \mbox{as}\ n\rightarrow\infty,\] where $\{B_x\}_{x\ge0}$ is a $\kappa$-dimensional Brownian motion. The same is true for $x\le0$.} \begin{proof} We prove the case $d=2$, that is, \[\left(X^{(n)}_{s},X^{(n)}_{t}\right)\buildrel{d}\over{\rightarrow}\left(B_{s}+s{D},B_{t}+t{D}\right)\ \ \mbox{for}\ 0<s<t;\] the general case is proved in the same way.
Since \[\left|X^{(n)}_x-\frac{\sqrt\kappa}{\sqrt n}S^{(n)}_{[nx]}\right| \leq \frac{\sqrt\kappa}{\sqrt n}\left|\zeta^{(n)}_{[nx]+1}\right|=\frac{\sqrt\kappa}{\sqrt n},\] we have, by the Chebyshev inequality, \[\mathbf{P}\left(\left|X^{(n)}_x-\frac{\sqrt\kappa}{\sqrt n}S^{(n)}_{[nx]}\right|>\varepsilon\right) \leq \frac{\kappa}{\varepsilon^2n}\rightarrow0\] as $n\rightarrow\infty$. Then it is clear that \[\left|\left(X^{(n)}_{s},X^{(n)}_{t}\right)-\frac{\sqrt\kappa}{\sqrt n}\left(S^{(n)}_{[ns]},S^{(n)}_{[nt]}\right)\right|\rightarrow0\ \ \ \mbox{in probability.}\] Therefore, it is enough to show that \[\frac{\sqrt\kappa}{\sqrt n}\left(S^{(n)}_{[ns]},S^{(n)}_{[nt]}\right)\buildrel{d}\over{\rightarrow}\left(B_{s}+s{D},B_{t}+t{D}\right), \] which is equivalent to \[\frac{\sqrt\kappa}{\sqrt n}\left(\sum^{[ns]}_{m=1}\zeta^{(n)}_m,\sum^{[nt]}_{m=[ns]+1}\zeta^{(n)}_m\right)\buildrel{d}\over{\rightarrow}\left(B_{s}+s{D},B_{t}-B_{s}+(t-s) {D}\right).\] The independence of the random variables $\{\zeta^{(n)}_m\}^{\infty}_{m=1}$ implies \begin{align*} &\mathbf{E}\left(\exp\left\{\frac{\sqrt\kappa}{\sqrt n}i\left(\sum^{[ns]}_{m=1}\zeta^{(n)}_m\cdot u+\sum^{[nt]}_{m=[ns]+1}\zeta^{(n)}_m\cdot v\right)\right\}\right)\\ ={}&\mathbf{E}\left(\exp\left\{\frac{\sqrt\kappa}{\sqrt n}i\sum^{[ns]}_{m=1}\zeta^{(n)}_m\cdot u\right\}\right)\mathbf{E}\left(\exp\left\{\frac{\sqrt\kappa}{\sqrt n}i\sum^{[nt]}_{m=[ns]+1}\zeta^{(n)}_m\cdot v\right\}\right), \end{align*} for any $u=(u_1,\cdots,u_\kappa),v=(v_1,\cdots,v_\kappa)\in\mathbb{R}^\kappa$, and also it holds that \[\mathbf{E}\left(\exp\left\{\frac{\sqrt\kappa}{\sqrt n}i\sum^{[ns]}_{m=1}\zeta^{(n)}_m\cdot u\right\}\right)= \left\{\varphi_n\left(\frac{\sqrt\kappa}{\sqrt n}u\right)\right\}^{[ns]},\] where $\varphi_n(\theta)$ is the characteristic function of $\xi^{(n)}$ given by \[\varphi_n(\theta)=\mathbf{E}\left(\exp\left\{i\xi^{(n)}\cdot\theta\right\}\right)\] for $\theta=(\theta_j)_{1\le j\le\kappa}\in\mathbb{R}^\kappa$.
The function $\varphi_n$ satisfies \begin{align*} \frac{\partial \varphi_n}{\partial \theta_j}(\theta)&=\mathbf{E}\left(i\xi^{(n)}_j\exp\left\{i\xi^{(n)}\cdot\theta\right\}\right)\ \ j\in\mathcal{C},\\ \frac{\partial^2 \varphi_n}{\partial \theta_j^2}(\theta)&=\mathbf{E}\left(-{\xi^{(n)}_j}^2\exp\left\{i\xi^{(n)}\cdot\theta\right\}\right)\ \ j\in\mathcal{C},\\ \frac{\partial^2 \varphi_n}{\partial \theta_j\partial \theta_k}(\theta)&=\mathbf{E}\left(-\xi^{(n)}_j\xi^{(n)}_k\exp\left\{i\xi^{(n)}\cdot\theta\right\}\right)\ \ j,k\in\mathcal{C}, j\ne k. \end{align*} Remark \ref{Dcomponent} implies \[\frac{\partial \varphi_n}{\partial \theta_j}(0)=\frac{iD_j}{\sqrt {n\kappa}},\] and Lemma \ref{property8} shows that, if $n\rightarrow\infty$ and $\theta\rightarrow0$ (that is, $\theta_j\rightarrow0$ for every $j\in\mathcal{C}$), then \[\frac{\partial^2 \varphi_n}{\partial \theta_j^2}(\theta)\rightarrow-\frac1{\kappa},\ \ \ \frac{\partial^2 \varphi_n}{\partial \theta_j\partial \theta_k}(\theta)\rightarrow0.\] By Taylor's theorem, for fixed $u\in\mathbb{R}^\kappa$, there is a vector $u'=(u'_1,\cdots,u'_\kappa)\in\mathbb{R}^\kappa$ such that $0\le u'_j\le \frac{\sqrt\kappa}{\sqrt n}u_j$ for any $j\in\mathcal{C}$ and \begin{align*} &\varphi_n\left(\frac{\sqrt\kappa}{\sqrt n}u\right)\\ ={}&\varphi_n(0)+\sum_{j\in\mathcal{C}}\frac{\sqrt\kappa}{\sqrt n}u_j\frac{\partial \varphi_n}{\partial \theta_j}(0)+\sum_{j\in\mathcal{C}}\frac12\left(\frac{\sqrt\kappa}{\sqrt n}u_j\right)^2\frac{\partial^2 \varphi_n}{\partial \theta_j^2}(u')+\sum_{j\ne k}\frac12\left(\frac{\kappa}{n}u_ju_k\right)\frac{\partial^2 \varphi_n}{\partial \theta_j\partial \theta_k}(u')\\ ={}&1+\sum_{j\in\mathcal{C}}\frac{\sqrt\kappa}{\sqrt n}u_j\frac{iD_j}{\sqrt {n\kappa}}+\sum_{j\in\mathcal{C}}\frac12\left(\frac{\sqrt\kappa}{\sqrt n}u_j\right)^2\frac{\partial^2 \varphi_n}{\partial \theta_j^2}(u')+\sum_{j\ne k}\frac12\left(\frac{\kappa}{n}u_ju_k\right)\frac{\partial^2 \varphi_n}{\partial \theta_j\partial \theta_k}(u').
\end{align*} Since $\log(1+x)=x+o(x)$ as $x\rightarrow0$, it holds that \begin{align*} &\log\left\{\varphi_n\left(\frac{\sqrt\kappa}{\sqrt n}u\right)\right\}^{[ns]}\\ ={}&[ns]\log\varphi_n\left(\frac{\sqrt\kappa}{\sqrt n}u\right)\\ ={}&i\frac{[ns]}{n}\sum_{j\in\mathcal{C}}D_ju_j+\frac{[ns]}{2n}\sum_{j\in\mathcal{C}}u^2_j\kappa\frac{\partial^2 \varphi_n}{\partial \theta_j^2}(u')+\frac{[ns]\kappa}{n}\sum_{j\ne k}\frac12\left(u_ju_k\right)\frac{\partial^2 \varphi_n}{\partial \theta_j\partial \theta_k}(u')\\ \rightarrow{}&is {D}\cdot u-\sum_{j\in\mathcal{C}}\frac{su^2_j}{2} \end{align*} as $n\rightarrow\infty$. Thus, \[\lim_{n\rightarrow\infty}\mathbf{E}\left(\exp\left\{\frac{\sqrt\kappa}{\sqrt n}i\sum^{[ns]}_{m=1}\zeta^{(n)}_m\cdot u\right\}\right)=\exp\left\{is {D}\cdot u-\sum_{j\in\mathcal{C}}\frac{su^2_j}{2}\right\}.\] Similarly, \[\lim_{n\rightarrow\infty}\mathbf{E}\left(\exp\left\{\frac{\sqrt\kappa}{\sqrt n}i\sum^{[nt]}_{m=[ns]+1}\zeta^{(n)}_m\cdot v\right\}\right)=\exp\left\{i(t-s) {D}\cdot v-\sum_{j\in\mathcal{C}}\frac{(t-s)v^2_j}{2}\right\},\] and the proof is complete. \end{proof} The tightness of $\{\nu_n\}_n$ is known to be equivalent to the following proposition \cite[Theorems 2.4.10 and 2.4.15]{KS}. {\prop\label{tight} With the same setting as in Proposition \ref{finitedist}, it holds that \begin{equation}\label{tight1} \lim_{\lambda\uparrow\infty}\sup_{n\ge1}\mathbf{P}\left(\left|X^{(n)}_0\right|>\lambda\right)=0, \end{equation} and, for any $T>0,\varepsilon>0$, \begin{equation}\label{tight2} \lim_{\delta\downarrow0}\sup_{n\ge1}\mathbf{P}\left(\sup_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_t-X^{(n)}_s\right|>\varepsilon\right)=0. \end{equation} } \begin{proof} Since $X^{(n)}_0=0$ for every $n$, \eqref{tight1} is obvious. We may replace $\sup_{n\ge1}$ in \eqref{tight2} by $\limsup_{n\rightarrow\infty}$ because, for any finite number of integers $n$, we can make the probability appearing in \eqref{tight2} as small as we choose by reducing $\delta$.
Writing $X^{(n)}_t=\left(X^{(n)}_{t,1},\cdots,X^{(n)}_{t,\kappa}\right)$ for $t\geq0$, it holds that \begin{align*} \max_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_t-X^{(n)}_s\right|&=\max_{|t-s|\le \delta,\ 0\le s,t\le T}\sqrt{\sum^{\kappa}_{j=1}\left|X^{(n)}_{t,j}-X^{(n)}_{s,j}\right|^2}\\ &\leq \sum^{\kappa}_{j=1}\max_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_{t,j}-X^{(n)}_{s,j}\right|. \end{align*} Thus, \begin{align*} &\lim_{\delta\downarrow0}\limsup_{n\rightarrow\infty}\mathbf{P}\left(\max_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_t-X^{(n)}_s\right|>\varepsilon\right)\\ \le{}&\lim_{\delta\downarrow0}\limsup_{n\rightarrow\infty}\mathbf{P}\left(\bigcup_{j\in\mathcal{C}}\left\{\max_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_{t,j}-X^{(n)}_{s,j}\right|>\frac{\varepsilon}{\kappa}\right\}\right)\\ \le{}&\sum^{\kappa}_{j=1}\lim_{\delta\downarrow0}\limsup_{n\rightarrow\infty}\mathbf{P}\left(\max_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_{t,j}-X^{(n)}_{s,j}\right|>\frac{\varepsilon}{\kappa}\right). \end{align*} By the definition of $X^{(n)}$, $Y^{(n)}$ and $S^{(n)}$, it holds that \[\mathbf{P}\left(\max_{|t-s|\le \delta,\ 0\le s,t\le T}\left|X^{(n)}_{t,j}-X^{(n)}_{s,j}\right|>\frac{\varepsilon}{\kappa}\right) =\mathbf{P}\left(\max_{|t-s|\le n\delta,\ 0\le s,t\le nT}\left|Y^{(n)}_{t,j}-Y^{(n)}_{s,j}\right|>\frac{\varepsilon\sqrt n}{\kappa\sqrt\kappa}\right),\] and \begin{align*} \max_{|t-s|\le n\delta,\ 0\le s,t\le nT}\left|Y^{(n)}_{t,j}-Y^{(n)}_{s,j}\right|&\le\max_{|t-s|\le [n\delta]+1,\ 0\le s,t\le [nT]+1}\left|Y^{(n)}_{t,j}-Y^{(n)}_{s,j}\right|\\ &\le\max_{1\le m\le[n\delta]+1,\ 0\le k\le [nT]+1}\left|S^{(n)}_{m+k,j}-S^{(n)}_{k,j}\right|, \end{align*} where $Y^{(n)}_x=(Y^{(n)}_{x,1},\cdots,Y^{(n)}_{x,\kappa})$ and $S^{(n)}_m=(S^{(n)}_{m,1},\cdots,S^{(n)}_{m,\kappa})$.
Therefore it is enough to show that \[\lim_{\delta\downarrow0}\limsup_{n\rightarrow\infty}\mathbf{P}\left(\max_{1\le m\le[n\delta]+1,\ 0\le k\le [nT]+1}\left|S^{(n)}_{m+k,j}-S^{(n)}_{k,j}\right|>\frac{\varepsilon\sqrt n}{\kappa\sqrt\kappa}\right)=0\] for each $j\in\mathcal{C}$. Recalling the definition of $S^{(n)}$, its $j$-th component is given by \[S^{(n)}_{0,j}=0,\ S^{(n)}_{m,j}=\zeta^{(n)}_{1,j}+\cdots+\zeta^{(n)}_{m,j},\ \ m\ge1,\] where $\{\zeta^{(n)}_{\ell,j}\}_{\ell\ge1}$ are independent and \begin{align*} \mathbf{P}\left(\zeta^{(n)}_{\ell,j}=e_{i,j}\right)&=\frac{1}{\kappa+1}+\frac{c_i}{\sqrt {n\kappa}},\ \ i\in\mathcal{C}\cup\{0\},\\ \mathbf{E}\left(\zeta^{(n)}_{\ell,j}\right)&=\frac{D_j}{\sqrt {n\kappa}}, \end{align*} from Remark \ref{Dcomponent}. Now we define a new stochastic process, \[R^{(n)}_{0,j}=0,\ R^{(n)}_{m,j}=\left(\zeta^{(n)}_{1,j}-\frac{D_j}{\sqrt {n\kappa}}\right)+\cdots+\left(\zeta^{(n)}_{m,j}-\frac{D_j}{\sqrt {n\kappa}}\right),\ \ m\ge1.\] Since \begin{align*} &\left|S^{(n)}_{m+k,j}-S^{(n)}_{k,j}\right|\\ ={}&\left|\left(R^{(n)}_{m+k,j}+(m+k)\frac{D_j}{\sqrt {n\kappa}}\right)-\left(R^{(n)}_{k,j}+k\frac{D_j}{\sqrt {n\kappa}}\right)\right|\\ \le{}&\left|R^{(n)}_{m+k,j}-R^{(n)}_{k,j}\right|+\left|m\frac{D_j}{\sqrt {n\kappa}}\right|, \end{align*} it is enough to show \[\lim_{\delta\downarrow0}\limsup_{n\rightarrow\infty}\mathbf{P}\left(\max_{1\le m\le[n\delta]+1,\ 0\le k\le [nT]+1}\left|R^{(n)}_{m+k,j}-R^{(n)}_{k,j}\right|>\frac{\varepsilon\sqrt n}{2\kappa\sqrt\kappa}\right)=0\] and \[\lim_{\delta\downarrow0}\limsup_{n\rightarrow\infty}\mathbf{P}\left(\max_{1\le m\le[n\delta]+1}\left|\frac{m}{\sqrt n}D_j\right|>\frac{\varepsilon\sqrt {n\kappa}}{2\kappa\sqrt\kappa}\right)=0.\] The first one is shown in \cite[Lemma 2.4.19]{KS}.
Also it holds that \begin{align*} &\limsup_{n\rightarrow\infty}\mathbf{P}\left(\max_{1\le m\le[n\delta]+1}\left|\frac{m}{\sqrt {n\kappa}}D_j\right|>\frac{\varepsilon\sqrt n}{2\kappa\sqrt\kappa}\right)\\ ={}&\limsup_{n\rightarrow\infty}\mathbf{P}\left(\left|\frac{[n\delta]+1}{\sqrt {n\kappa}}D_j\right|>\frac{\varepsilon\sqrt n}{2\kappa\sqrt\kappa}\right)=0,\ \ \mbox{if}\ \delta<\frac{\varepsilon}{2\kappa|D_j|}, \end{align*} and this shows the second one. \end{proof} Propositions \ref{finitedist} and \ref{tight} prove Theorem \ref{donsker}. \vspace{10pt} Next, to prove Theorem \ref{BMinvariant}, we prove the following lemmas. {\lem \label{ab}Let $a, b>0,\ i\in\mathcal{C}$. If $\mu$ is invariant under $T_i$, then $\mu_{a,b}$ is also invariant under $T_i$.} \begin{proof} Let $S^{a,b}_x=aS_{bx}$ for $a,b >0$ and $x \in \mathbb{R}$. Since $A_iS^{a,b}_x=aA_iS_{bx}$, the expression \eqref{Texpression} yields \begin{align*} T_iS^{a,b}_x&=aS_{bx}+\left(aA_iS_{bx}-\sup_{y\leq x}aA_iS_{by}+\sup_{y\leq 0}aA_iS_{by}\right)(e_i-e_0)\\ &=a\left(T_iS\right)_{bx}=(T_iS)^{a,b}_x, \end{align*} and the claim follows. \end{proof} {\lem\label{generaltheory} Suppose $\{\mu_n\}$ is a sequence of probability measures on $\mathcal{S}^0_c$, each of which is invariant under $T_i$, and $\mu_n$ converges weakly to $\mu$. Moreover, suppose that $\mu_n$ satisfies, for any $z \in \mathbb{R}$, \[\lim_{x \to -\infty} \limsup_{n \to \infty} \mu_n\left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right)=0,\] and $\mu$ satisfies, for any $z \in \mathbb{R}$, \[\lim_{x \to -\infty} \mu\left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right)=0.\] It then holds that $\mu$ is also invariant under $T_i$.
} \begin{proof} It is enough to show that for any $L>0$ and continuous bounded function $f: C([-L,L],\mathbb{R}^\kappa) \to \mathbb{R}$, \[\mu\left(f \left(S|_{[-L,L]}\right)\right) = \mu\left(f \left(T_iS|_{[-L,L]}\right)\right).\] Let \[ M^{L'}_x:=\left\{\begin{array}{ll} A_iS_{-L'}, & \mbox{if }x < -L', \\ \sup_{-L' \le y \le x} A_iS_y, & \mbox{if }-L' \le x \le L',\\ \sup_{-L' \le y \le L'} A_iS_y, & \mbox{otherwise.} \end{array}\right.\] Also, define \begin{equation}\label{413} (T^{L'}_iS)_x:=S_x+\left(A_iS_x-M^{L'}_x+M^{L'}_0\right)(e_i-e_0),\ \ x\in\mathbb{R}. \end{equation} Then, $T^{L'}_i :\mathcal{S}^0_c \to \mathcal{S}^0_c $ is continuous, and so \begin{equation}\label{413a} \lim_{n \to \infty} \mu_n \left(f \left((T^{L'}_iS)|_{[-L,L]}\right)\right) = \mu\left(f \left((T^{L'}_iS)|_{[-L,L]}\right)\right), \end{equation} for any $L,L'$. It is easy to verify that $(T^{L'}_iS)|_{[-L,L]}=(T_iS)|_{[-L,L]}$ if $L < L'$ and $\sup_{y\leq -L'}A_iS_{y} \leq A_iS_{-L}$, by comparing \eqref{Texpression} and \eqref{413}. Therefore, for any $L' >L$, \[\left|\mu_n\left(f \left((T^{L'}_iS)|_{[-L,L]}\right)\right) - \mu_n\left(f \left((T_iS)|_{[-L,L]}\right)\right)\right| \le 2 \|f\|_{\infty} \mu_n \left(\sup_{y\leq -L'}A_iS_{y} > A_iS_{-L}\right).\] Hence, by assumption, we have that \[\lim_{L' \to \infty} \limsup_{n \to \infty}\left|\mu_n\left(f \left((T^{L'}_iS)|_{[-L,L]}\right)\right) - \mu_n\left(f \left((T_iS)|_{[-L,L]}\right)\right)\right|=0,\] which implies, with \eqref{413a}, \begin{equation}\label{413aa} \limsup_{n \to \infty}\mu_n\left(f \left((T_iS)|_{[-L,L]}\right)\right)=\lim_{L' \to \infty} \mu(f (T^{L'}_iS|_{[-L,L]})).
\end{equation} Similarly it holds that \[\left|\mu\left(f \left((T^{L'}_iS)|_{[-L,L]}\right)\right) - \mu\left(f \left((T_iS)|_{[-L,L]}\right)\right)\right| \le 2 \|f\|_{\infty} \mu \left(\sup_{y\leq -L'}A_iS_{y} > A_iS_{-L}\right),\] and the assumption $\lim_{x \to -\infty} \mu\left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right)=0$ for any $z$ implies that \[\lim_{L' \to \infty} \mu(f (T^{L'}_iS|_{[-L,L]}))=\mu\left(f \left((T_iS)|_{[-L,L]}\right)\right).\] This is the right-hand side of \eqref{413aa}. Also, the left-hand side of \eqref{413aa} is equal to \[\limsup_{n \to \infty}\mu_n\left(f \left(S|_{[-L,L]}\right)\right)=\mu\left(f \left(S|_{[-L,L]}\right)\right),\] and the claim is proved. \end{proof} Finally, we check the assumptions of Lemma \ref{generaltheory} for $\nu_n=\mu^{p^{(n)}}_{\frac{\sqrt\kappa}{\sqrt n},n}$ and $\nu_D$. {\lem\label{checkassumption} For any $z \in \mathbb{R}$, \[\lim_{x \to -\infty} \limsup_{n \to \infty} \nu_n\left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right)=0\] and \[\lim_{x \to -\infty} \nu_D\left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right)=0.\] } \begin{proof} For $S_x=B_x+(c_0e_0+\cdots+c_\kappa e_\kappa)x$ it holds that \begin{align*} A_iS_x&=-2\frac{(e_i-e_0)\cdot S_x}{|e_i-e_0|^2}\\ &=-2\frac{(e_i-e_0)\cdot B_x}{|e_i-e_0|^2}+(c_0-c_i)x\\ &\rightarrow-\infty \end{align*} almost surely as $x\rightarrow-\infty$. Therefore, the second claim of the lemma is obvious.
To estimate the probability $\nu_n\left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right)$, first note that, for any $x<z$, \begin{align*} \nu_n \left(\sup_{y\leq x}A_iS_{y} > A_iS_z\right) & =\mu^{p^{(n)}}\left(\sup_{y\leq nx}A_iS_{y} > A_iS_{nz}\right)\\ & \leq \mu^{p^{(n)}} \left(\sup_{y\leq [nx]+1}A_iS_{y}> \min\left\{A_iS_{[nz]},A_iS_{[nz]+1} \right\}\right)\\ & = \mu^{p^{(n)}} \left(\sup_{y\leq [nx]+1-[nz]}A_iS_{y}> \min\left\{A_iS_{0},A_iS_{1} \right\}\right)\\ & \le \mu^{p^{(n)}} \left(\sup_{y\leq [nx]+1-[nz]}A_iS_{y}\ge 0 \right), \end{align*} where $[w]$ is the maximum integer not greater than $w$. Thus we only need to show that \[\lim_{x \to -\infty} \limsup_{n \to \infty} \mu^{p^{(n)}} \left(\sup_{y\leq [nx]}A_iS_{y} \ge 0 \right)=0.\] For any $\ell \ge 1$, we have \begin{align*} \mu^{p^{(n)}} \left(\sup_{y\leq -\ell}A_iS_{y} \ge 0 \right) &\leq \mu^{p^{(n)}} \left(A_iS_{-\ell} \ge 0\right) + \sum_{k \le -1}\mu^{p^{(n)}} \left(A_iS_{-\ell}=k\right)\left(\frac{p_i}{ p_0}\right)^{-k}. \end{align*} Now, since $A_iS_{-\ell}\buildrel{d}\over{=}-A_iS_\ell=-\sum_{k=1}^{\ell}\left(\mathbf{1}_{\{\eta_k=0\}}-\mathbf{1}_{\{\eta_k=i\}}\right)$, we have \begin{align*} \mu^{p^{(n)}} \left(A_iS_{-\ell} \ge 0\right) &= \mu^{p^{(n)}}\left(\sum_{k=1}^{\ell}\left(\mathbf{1}_{\{\eta_k=0\}}-\mathbf{1}_{\{\eta_k=i\}}\right) \le 0\right)\\ & \le \mu^{p^{(n)}}\left(\frac{1}{\ell}\left|\sum_{k=1}^{\ell}\left(\mathbf{1}_{\{\eta_k=0\}}-\mathbf{1}_{\{\eta_k=i\}}\right) - \frac{\ell (c_0-c_i)}{\sqrt {n\kappa}}\right| \ge \frac{c_0-c_i}{\sqrt {n\kappa}}\right)\\ & \le \mu^{p^{(n)}}\left(\frac{1}{\ell}\left|\sum_{k=1}^{\ell}\left\{\mathbf{1}_{\{\eta_k=0\}}-\mathbf{1}_{\{\eta_k=i\}}- \frac{c_0-c_i}{\sqrt {n\kappa}}\right\} \right| \ge \frac{c_0-c_i}{\sqrt {n\kappa}}\right)\\ & \le \frac{n\kappa}{\ell (c_0-c_i)^2} E^{p^{(n)}}\left(\left\{\mathbf{1}_{\{\eta_k=0\}}-\mathbf{1}_{\{\eta_k=i\}}-\frac{c_0-c_i}{\sqrt {n\kappa}}\right\}^2\right)\\ &= \frac{n\kappa}{\ell (c_0-c_i)^2} 
V^{p^{(n)}}\left(\mathbf{1}_{\{\eta_k=0\}}-\mathbf{1}_{\{\eta_k=i\}}\right)\\ &\le\frac{n\kappa}{\ell (c_0-c_i)^2}\left(\frac{2}{\kappa+1}+\frac{c_0+c_i}{\sqrt {n\kappa}}\right), \end{align*} where $E^{p^{(n)}}$ is the expectation and $V^{p^{(n)}}$ is the variance with respect to $\mu^{p^{(n)}}$. Moreover, \begin{align*} &\sum_{k \le -1}\mu^{p^{(n)}} \left(A_iS_{-\ell}=k\right)\left(\frac{p_i}{ p_0}\right)^{-k}\\ ={}&\sum_{-\ell\leq k \le -1}\mu^{p^{(n)}} \left(A_iS_{-\ell}=k\right)\left(\frac{p_i}{ p_0}\right)^{-k}\\ ={}&\sum_{-\ell\leq k \le -1}\sum_{0\le j\le \ell+k}\binom{\ell}{j}(1-p_0-p_i)^{j}\binom{\ell-j}{\frac{\ell-j-k}{2}}p^{\frac{\ell-j-k}{2}}_0p^{\frac{\ell-j+k}{2}}_i\left(\frac{p_i}{ p_0}\right)^{-k}\\ ={}&\sum_{-\ell\leq k \le -1}\sum_{0\le j\le \ell+k}\binom{\ell}{j}(1-p_0-p_i)^{j}\binom{\ell-j}{\frac{\ell-j-k}{2}}p^{\frac{\ell-j+k}{2}}_0p^{\frac{\ell-j-k}{2}}_i\\ ={}&\sum_{-\ell\leq k \le -1}\mu^{p^{(n)}} \left(A_iS_{-\ell}=-k\right)\\ ={}&\mu^{p^{(n)}} (A_iS_{-\ell} \ge 1)\\ \le{}&\mu^{p^{(n)}} (A_iS_{-\ell} \ge 0), \end{align*} where $\binom{\ell}{q} \equiv 0$ for $q \notin \mathbb{N}$. Therefore, we have \begin{align*} \lim_{x \to -\infty} \limsup_{n \to \infty} \mu^{p^{(n)}} \left(\sup_{y\leq [nx]}A_iS_{y} \ge 0 \right) &\le \lim_{x \to -\infty} \limsup_{n \to \infty} \frac{2n\kappa}{(-[nx])(c_0-c_i)^2}\left(\frac{2}{\kappa+1}+\frac{c_0+c_i}{\sqrt {n\kappa}}\right) \\ ={}&\lim_{x \to -\infty} \frac{2\kappa}{(-x)(c_0-c_i)^2}\frac{2}{\kappa+1}=0. \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{BMinvariant}] Since $\mu^{p^{(n)}}$ is invariant under $T_i$, so is $\nu_n=\mu^{p^{(n)}}_{\frac{\sqrt\kappa}{\sqrt n},n}$ by Lemma \ref{ab}. Then Theorem \ref{donsker} and Lemmas \ref{generaltheory} and \ref{checkassumption} show that $\nu_{D}$ is also invariant under $T_i$.
In other words, the two-sided standard $\kappa$-dimensional Brownian motion with drift $D\in\mathcal{D}$, given by \[D=c_0e_0+\cdots+c_\kappa e_\kappa,\ c_0>c_i,\ \forall i\in\mathcal{C},\ c_0+\cdots+c_\kappa=0,\] is invariant under $T_i$. \end{proof} \section*{Acknowledgements} The author is grateful to M.~Sasada for her guidance and constructive comments from beginning to end. He also thanks D.~Croydon for useful advice. \vspace{10pt}
\section{Introduction} Throughout nearly all areas of mathematics one can find certain canonical objects that are uniquely determined by their homogeneity-like properties. Historically, the first known example of this sort was the set of rational numbers $\Qyu$, characterized by Cantor as the unique dense countable linearly ordered set with no end-points. Another example is Urysohn's universal metric space, the unique separable complete metric space $\U$ containing isometric copies of all separable metric spaces and with the property that every isometry between finite subsets of $\U$ extends to a bijective isometry of $\U$. In 1954, Roland \fra\ developed a general theory in the language of first-order structures, currently known as \emph{\fra\ theory}. After his work, several universal homogeneous structures (called \emph{\fra\ limits}) were identified and studied; they are important objects in various areas of mathematics, computer science and even mathematical physics~\cite{Droste}. Admittedly, the Urysohn space was almost forgotten for many years and was not linked to \fra\ theory until a relatively recent line of research dealing with topological dynamics of automorphism groups. A notable work in this area is~\cite{KPT}. For more information on the current status of \fra\ theory we refer to a recent survey article by Macpherson~\cite{Macpherson}. \separator Recall that a \emph{\fra\ class} is a countable class $\Ef$ of finitely generated models of a fixed first-order language, satisfying the following conditions: \begin{enumerate} \item[(i)] Given $a,b\in \Ef$ there exists $d\in \Ef$ such that both $a$ and $b$ embed into $d$ (Joint Embedding Property). \item[(ii)] Given $a,b\in \Ef$ and embeddings $\map ica$, $\map jcb$, there exist $w\in\Ef$ and embeddings $\map kaw$ and $\map \ell bw$ such that $k \cmp i = \ell \cmp j$ (Amalgamation Property). \item[(iii)] Given $a\in \Ef$, every finitely generated substructure of $a$ is isomorphic to an element of $\Ef$.
\end{enumerate} The \emph{\fra\ limit} of $\Ef$ is a countable model $U$ such that, up to isomorphism, $$\Ef = \setof{a \subs U}{a \text{ is a finitely generated substructure of }U}$$ and for every isomorphism $\map hab$ between finitely generated substructures of $U$ there exists an automorphism $\map HUU$ such that $H \sups h$. The latter property is called \emph{ultrahomogeneity}. It is a classical theorem of \fra~\cite{F1} that the \fra\ limit exists and is unique, up to isomorphism. Uncountable versions of \fra\ limits were studied by \jon~\cite{jon56, jon60}, supplemented by Morley and Vaught~\cite{MorleyVaught}. A recent result of Dolinka~\cite{Dolinka} characterizes the countable models that are embeddable as retracts into the \fra\ limit. Namely, he proves that, under certain conditions on the \fra\ class, retracts of the \fra\ limit are precisely the (countable) algebraically closed models. Further study, in the context of transformation semigroups and permutation group theory, has been carried out in the recent PhD thesis of McPhee~\cite{McPhee}. The aim of this note is to extend Dolinka's characterization to the case of category-theoretic \fra\ limits, at the same time weakening the assumptions on the class of objects. In particular, Dolinka's result assumes that models are finite and that for each natural number $n$ there exist only finitely many isomorphic types of models generated by a set of cardinality $n$. We do not make any of these assumptions. Our result relates retracts of \fra\ limits to a natural variant of injectivity. Among new applications, we characterize non-expansive retracts of the universal metric space of Urysohn. This metric space is formally not a \fra\ limit, because the category of finite metric spaces is uncountable. However, it can be ``approximated" by \fra\ limits of countable subcategories (e.g. by considering rational distances only).
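To keep a concrete instance in mind, the prototypical example is Cantor's theorem recast in this language; it is recalled here purely as an illustration and is not used later.

```latex
% The class of all finite linear orders is a Fraisse class:
% joint embedding: place one finite order after the other;
% amalgamation: merge two extensions of a common suborder;
% heredity: every suborder of a finite linear order is finite.
% Its Fraisse limit is the ordered set of rational numbers,
\[
\Ef \;=\; \{\text{finite linear orders}\}
\qquad\Longrightarrow\qquad
U \;\cong\; (\Qyu,<),
\]
% and ultrahomogeneity amounts to the classical back-and-forth
% fact that every order-isomorphism between finite subsets of
% the rationals extends to an order-automorphism of the rationals.
```

The Urysohn space plays the analogous role for separable metric spaces, with the caveat on uncountability mentioned above.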
The category-theoretic approach to \fra\ limits comes from the author's paper \cite{Kubfra}, motivated by a much earlier work of Droste and G\"obel \cite{DrGoe92} and by a recent work of Irwin and Solecki~\cite{IrSol} on projective \fra\ limits. In~\cite{Kubfra} the key notion is a \emph{\fra\ sequence} rather than a \fra\ limit. This turns out to be convenient, allowing one to work in a single category (corresponding to finitely generated models), forgetting about the existence or non-existence of colimits. In order to speak about retractions, we need to work with a pair of categories, both with the same objects; the first one allows ``embeddings" only, while the second one allows all possible homomorphisms. \subsection{Categories of sequences} Fix a category $\fK$. We shall treat $\fK$ as the class of arrows; the class of objects will be denoted by $\ob\fK$ and the set of $\fK$-arrows with domain $x$ and codomain $y$ will be denoted by $\fK(x,y)$. A \emph{sequence} in $\fK$ is simply a covariant functor from $\omega$ into $\fK$. One can think of the objects of $\fK$ as ``small" structures (e.g. finitely generated models of a fixed language). Sequences in $\fK$ form a bigger category of ``large" structures. For category-theoretic notions we refer to \cite{MacLane}. We shall use the following convention: sequences in $\fK$ will be denoted by capital letters $X,Y,Z,\dots$ and the objects of $\fK$ will be denoted by small letters $x,y,z,\dots$. Fix a sequence $\map X\omega\fK$. Recall that, formally, $X$ assigns an object $X(n)$ of $\fK$ to each natural number $n$, and a $\fK$-arrow $\map {X(n,m)}{X(n)}{X(m)}$ to each pair $\pair nm$ of natural numbers such that $n\loe m$. We shall always write $x_n$ instead of $X(n)$ and $x_n^m$ instead of $X(n,m)$. Note that being a functor imposes the conditions $x_n^n = \id{x_n}$ and $x_k^m = x_\ell^m \cmp x_k^\ell$ for $k\loe \ell \loe m$.
An arrow from a sequence $X$ to a sequence $Y$ is, by definition, a natural transformation from the functor $X$ into the functor $Y \cmp \psi$, where $\map \psi \omega\omega$ is increasing (i.e. $\psi$ is a covariant functor from $\omega$ to $\omega$). We identify arrows that ``potentially converge" to the same limit. More precisely, given natural transformations $\tau_0$ and $\tau_1$ from the sequence $X$ to $Y \cmp \psi_0$ and $Y \cmp \psi_1$, respectively, we say that $\tau_0$ is \emph{equivalent} to $\tau_1$ if the diagram consisting of both sequences $X$, $Y$ together with all arrows induced by $\tau_0$ and $\tau_1$ is commutative. This is indeed an equivalence relation and it commutes with the composition; therefore the sequences in $\fK$, with equivalence classes of such natural transformations as arrows, form a category, which we denote by $\ciagi\fK$. In order to illustrate this idea, observe that every sequence is isomorphic to its cofinal subsequence. Indeed, if $X$ is a sequence and $k = \ciag k$ is a strictly increasing sequence of natural numbers, then the $\ciagi\fK$-arrow $\map I{X\cmp k}X$ defined by $I=\ciag i$, where $i_n = \id{x_{k_n}}$, is an isomorphism. Its inverse is $J = \ciag j$, where $j_n = x_n^{k_m}$ and $m=\min\setof{s}{k_s \goe n}$. The composition $I \cmp J$ is formally $\ciag j$ regarded as an arrow from $X$ to $X$. Clearly, $I\cmp J$ is equivalent to the identity $\sett{\id{x_n}}{\ntr}$. Similarly, $J\cmp I$ is equivalent to the identity of $X\cmp k$. The original category $\fK$ may be regarded as a subcategory of $\ciagi\fK$, identifying an object $x$ with a sequence $$\xymatrix{ x \ar[r]^{\id x} & x \ar[r]^{\id x} & x \ar[r]^{\id x} & \dots }$$ Thus, we shall always assume that $\fK \subs \ciagi\fK$. Given a sequence $X$ and $\ntr$, we shall denote by $x_n^\infty$ the arrow from $x_n$ to $X$ induced by the $n$th object of $X$. Formally, $x_n^\infty$ is the equivalence class of $\sett{x_n^m}{m\goe n}$. \subsection{\fra\ sequences} \fra\ classes and limits can be described using categories. Let $\fK$ be a fixed category.
A \emph{\fra\ sequence} in $\fK$ is a sequence $U$ satisfying the following two conditions. \begin{enumerate} \item[(F1)] For every object $x$ in $\fK$ there exist $\ntr$ and a $\fK$-arrow $x \to u_n$. \item[(F2)] For every $\ntr$ and for every $\fK$-arrow $\map f{u_n}y$ there exist $m \goe n$ and a $\fK$-arrow $\map gy{u_m}$ such that $g \cmp f = u_n^m$. \end{enumerate} Recall that $\fK$ has the \emph{amalgamation property} if for all $\fK$-arrows $\map fca$, $\map gcb$ there exist $\fK$-arrows $\map {f'}aw$, $\map {g'}bw$ satisfying $f' \cmp f = g' \cmp g$. A \fra\ sequence exists whenever $\fK$ has the amalgamation property, the joint embedding property, and countably many isomorphic types of arrows. A \fra\ sequence is unique up to isomorphism. We refer to \cite{Kubfra} for the details. A standard induction shows that the amalgamation property partially extends to the category of sequences. Namely: \begin{prop}\label{prmknoteghb} Assume $\fK$ has the amalgamation property. Then for all $\ciagi\fK$-arrows $\map fcA$, $\map gcB$ with $c\in \ob\fK$, there exist $\ciagi\fK$-arrows $\map {f'}AW$, $\map {g'}BW$ satisfying $f' \cmp f = g' \cmp g$. \end{prop} However, it is shown in~\cite{Kubfra} that in general the amalgamation property of $\fK$ does not imply the same property of $\ciagi\fK$. Now, let $\fK\subs \fL$ be a pair of categories such that $\fK$ has the same objects as $\fL$. For instance, $\fK$ may be a category of finitely generated models of a fixed language with embeddings, while $\fL$ allows all homomorphisms. Note that $\ciagi\fK$ is a subcategory of $\ciagi\fL$. We shall need to deal with the category $\fR = \ciagi{(\fK,\fL)}$ whose objects are $\omega$-sequences in $\fK$ and whose arrows come from $\fL$, i.e., $\ob\fR = \ob{\ciagi\fK}$ and $\fR(X,Y) = \ciagi\fL(X,Y)$ for $X,Y\in \ob{\fR}$.
For example, if $\fK$, $\fL$ are as above, $\ciagi{(\fK,\fL)}$ is the category of countable models with all possible homomorphisms, while $\ciagi\fK$ is the category of countable models with embeddings. \section{Main result} Let $\fK\subs \fL$ be two fixed categories with the same objects. We say that $\pair \fK\fL$ has the \emph{mixed amalgamation property} if for all arrows $\map fca$ and $\map gcb$ such that $f\in \fK$ and $g\in \fL$, there exist arrows $\map {f'}aw$, $\map {g'}bw$ satisfying $f' \cmp f = g' \cmp g$ and such that $g' \in \fK$ and $f' \in \fL$. The mixed amalgamation is described in the following diagram, where $\xymatrix{\ar@{ >->}[r] &}$ denotes an arrow in $\fK$. $$\xymatrix{ a \ar[r]^{f'}& w\\ c \ar@{ >->}[u]^f \ar[r]_g & b \ar@{ >->}[u]_{g'} }$$ We say that $\pair \fK\fL$ has the \emph{amalgamated extension property} if for every commutative $\fL$-diagram $$\xymatrix{ a \ar[r]^f & x \\ c \ar@{ >->}[u]^i \ar@{ >->}[r]_j & b \ar[u]_g }$$ with $i,j \in \fK$, there exist $\fK$-arrows $\map exy$, $\map kaw$, $\map \ell bw$ and an $\fL$-arrow $\map hwy$ such that $e \cmp f = h \cmp k$, $e \cmp g = h \cmp \ell$ and $k \cmp i = \ell \cmp j$. That is, the following diagram is commutative. $$\xymatrix{ & & & y \\ & & x \ar@{ >->}[ur]_e & \\ a \ar[urr]^(.40)f \ar@{ >->}[r]_k & w \ar@/^1pc/[uurr]^{h} & & \\ c \ar@{ >->}[u]^i \ar@{ >->}[r]_j & b \ar[uur]_g \ar@{ >->}[u]^\ell & & }$$ We now define the following axioms for a pair of categories $\pair \fK \fL$, needed for our main result. \begin{enumerate} \item[\haha 0] $\fK \subs \fL$ and $\ob \fK = \ob \fL$. \item[\haha 1] $\fK$ has both the amalgamation property and the joint embedding property. \item[\haha 2] $\pair \fK \fL$ has the mixed amalgamation property. \item[\haha 3] $\pair \fK \fL$ has the amalgamated extension property. \end{enumerate} \begin{df} A pair of categories $\pair \fK \fL$ \emph{has property \hahah}\ if it satisfies conditions \haha0 -- \haha3.
\end{df} Some comments on the properties described above are in order. Namely, the condition $\ob\fK=\ob\fL$ can be removed from \haha0; it appears there for the sake of convenience only. The role of $\fL$ is offering more arrows than $\fK$, some of which will be needed for constructing retractions. One can think of the $\fK$-arrows as ``embeddings". In most cases, these will be indeed monics. Condition \haha1 is needed mainly for the existence and good properties of a \fra\ sequence in $\fK$. Recall that the joint embedding property follows from the amalgamation property whenever $\fK$ has an initial object (or at least a weakly initial object). Condition \haha2 will be crucial for proving that the \fra\ sequence and its retracts are $\fK$-injective (see the definition below). Finally, the somewhat technical condition \haha3 will be needed for the argument in the main lemma relating $\fK$-injective objects with the \fra\ sequence. If $\fL$ has a terminal object, then \haha3 implies that $\fK$ has the amalgamation property. Summarizing, if $\fK$ has a weakly initial object and $\fL$ has a terminal object, then we may ignore condition \haha1. Condition \haha3 becomes trivial if $\fK$ has pushouts in $\fL$. We say that $\fK$ has \emph{pushouts in $\fL$} if for every pair of $\fK$-arrows $\map ica$, $\map jcb$, there exist $\fK$-arrows $\map kaw$, $\map \ell bw$ such that $$\xymatrix{ a \ar@{ >->}[r]^k & w \\ c \ar@{ >->}[u]^i \ar@{ >->}[r]_j & b \ar@{ >->}[u]_\ell }$$ is a pushout square in $\fL$. It is obvious from the definition of a pushout that $\pair\fK\fL$ has the amalgamated extension property (with $y=x$ and $e = \id x$) whenever $\fK$ has pushouts in $\fL$. Let us remark that for all examples with property \hahah\ appearing in this note, the amalgamated extension property holds with $x=y$ and $e=\id x$ (see the definition and diagram above).
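For a concrete picture of pushouts in this sense, consider the illustrative pair (chosen here only for this sketch, not one of the applications discussed later) in which the objects are sets, $\fK$ consists of injections and $\fL$ of all maps; then pushouts of injections are amalgamated unions.

```latex
% Given injections i: c -> a and j: c -> b, glue a and b along c:
\[
w \;=\; (a \sqcup b)\,/\!\sim\,,
\qquad
i(t)\sim j(t)\ \ \text{for every } t\in c,
\]
% with k: a -> w and l: b -> w the canonical maps into the quotient.
% Both k and l are again injective and satisfy k . i = l . j, and the
% square is a pushout in the category of all maps; consequently the
% amalgamated extension property holds here with x = y and e = id_x.
```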
Below is the crucial notion, whose variations appear often in the literature (see, e.g., \cite{AHRT}, where a definition similar to ours can be found). \begin{df} Let $\fK \subs \fL$ be two categories with the same objects. We say that $A\in \ob {\ciagi\fK}$ is \emph{$\fK$-injective in} $\ciagi{(\fK,\fL)}$ if for every $\fK$-arrow $\map i a b$, for every $\ciagi{(\fK,\fL)}$-arrow $\map f a A$, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map {\ovr f} b A$ such that ${\ovr f} \cmp i = f$. $$\xymatrix{ a \ar@{ >->}[d]_i \ar[rr]^f & & A \\ b \ar@{-->}[rru]_{\ovr f} }$$ \end{df} This definition obviously generalizes to an arbitrary pair of categories $\fK \subs \fR$. We restrict attention to the special case $\fR = \ciagi{(\fK,\fL)}$, since more general versions will not be needed. Following is a useful criterion for injectivity. \begin{prop}\label{pinkarerg} Assume $\pair \fK\fL$ has the mixed amalgamation property and $X \in \ob{\ciagi\fK}$. Then $X$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$ if and only if for every $\ntr$, for every $\fK$-arrow $\map f{x_n}y$, there exist $m\goe n$ and an $\fL$-arrow $\map gy{x_m}$ satisfying $$g \cmp f = x_n^m.$$ \end{prop} \begin{pf} Suppose $X$ is $\fK$-injective and fix a $\fK$-arrow $\map f{x_n}y$. Applying $\fK$-injectivity for $\map {x_n^\infty}{x_n}X$, we find $\map GyX$ such that $G \cmp f = x_n^\infty$. The arrow $G$ factors through some $\fL$-arrow $\map gy{x_m}$ for some $m\goe n$, that is, $G = x_m^\infty \cmp g$. Finally, $g \cmp f = x_n^m$. Suppose now that $X$ satisfies the condition above and fix a $\fK$-arrow $\map jab$ and a $\ciagi{(\fK,\fL)}$-arrow $\map Fa{X}$. Then $F = x_n^\infty \cmp f$ for some $\fL$-arrow $f$, where $\ntr$. Applying the mixed amalgamation property, find a $\fK$-arrow $\map h{x_n}y$ and an $\fL$-arrow $\map gby$ such that $g \cmp j = h \cmp f$. By assumption, there exist $m \goe n$ and an $\fL$-arrow $\map ky{x_m}$ such that the following diagram commutes. 
$$\xymatrix{ a \ar@{ >->}[d]_j \ar[r]^f & x_n \ar@{ >->}[d]_h \ar@{ >->}[dr]^{x_n^m} & \\ b \ar[r]_g & y \ar[r]_k & x_m }$$ Finally, taking $G = x_m^\infty \cmp k \cmp g$, we get $G \cmp j = F$. \end{pf} Our interest in $\fK$-injectivity comes from the following fact, which is an immediate consequence of the criterion above. \begin{prop}\label{pwkindrzektif} Assume $\pair \fK\fL$ has the mixed amalgamation property and $U$ is a \fra\ sequence in $\fK$. Then $U$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$. \end{prop} We shall need the following ``injective" version of amalgamated extension property. \begin{lm}\label{lekstaminja} Assume $\pair \fK\fL$ satisfies \hahah\ and $X\in \ob{\ciagi\fK}$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$. Then for every $\fK$-arrows $\map ica$, $\map jcb$ and for every $\ciagi{(\fK,\fL)}$-arrows $\map FaX$, $\map GbX$ such that $F \cmp i = G \cmp j$, there exist $\fK$-arrows $\map kaw$, $\map \ell bw$ and a $\ciagi{(\fK,\fL)}$-arrow $\map HwX$ such that the diagram $$\xymatrix{ & a \ar@{ >->}[rd]_k \ar[rrrd]^F & & & \\ c \ar@{ >->}[ru]^i \ar@{ >->}[rd]_j & & w \ar[rr]^(.30)H & & X\\ & b \ar@{ >->}[ru]^\ell \ar[rrru]_G & & & }$$ commutes. \end{lm} \begin{pf} Find $n$ such that $F = x_n^\infty \cmp f$ and $G = x_n^\infty \cmp g$ for some $\fL$-arrows $f,g$, where $\map {x_n^\infty}{x_n}X$ is the canonical arrow induced by the $n$th object of the sequence $X$. Using property \haha3, we find $\fK$-arrows $\map kaw$, $\map \ell bw$, $\map e{x_n}y$ and an $\fL$-arrow $\map hwy$ such that $h \cmp k = e \cmp f$ and $h \cmp \ell = e \cmp g$. Using the $\fK$-injectivity of $X$ we can find a $\ciagi{(\fK,\fL)}$-arrow $\map PyX$ such that $P \cmp e = x_n^\infty$. Let $H = P \cmp h$. Then $$H \cmp k = P \cmp h \cmp k = P \cmp e \cmp f = x_n^\infty \cmp f = F.$$ Similarly, $H \cmp \ell = G$. \end{pf} The following lemma is crucial. 
\begin{lm}\label{lkrusall} Assume $\pair \fK\fL$ has property \hahah\ and $A$ is a $\fK$-injective object in $\ciagi{(\fK,\fL)}$. Furthermore, assume $U$ is a \fra\ sequence in $\fK$ and $\map FXA$ is an arbitrary $\ciagi{(\fK,\fL)}$-arrow. Then there exist a $\ciagi\fK$-arrow $\map JXU$ and a $\ciagi{(\fK,\fL)}$-arrow $\map GUA$ such that $G \cmp J = F$. $$\xymatrix{ X \ar@{ >->}[d]_J \ar[rr]^F & & A \\ U \ar@{-->}[rru]_{G} }$$ \end{lm} \begin{pf} Recall that we use the usual convention for objects $x_n = X(n)$, $u_n = U(n)$, and for arrows $x^m_n = X(n,m)$, $u^m_n = U(n,m)$. We shall construct inductively the following ``triangular matrix" in $\fK$, together with commuting $\ciagi{(\fK,\fL)}$-arrows $\map {F_{i,j}}{w_{i,j}}A$ for $j\loe i+1$, where we agree that $w_{i,0} = x_i$ and $w_{i,i+1} = u_{\ell_i}$. $$\xymatrix{ x_0\ar@{ >->}[d]\ar@{ >->}[r] & u_{\ell_0}\ar@{ >->}[d]\ar@{ >->}[dr] & & & & \\ x_1\ar@{ >->}[d]\ar@{ >->}[r] & w_{1,1}\ar@{ >->}[d]\ar@{ >->}[r] & u_{\ell_1}\ar@{ >->}[d]\ar@{ >->}[dr] & & & \\ x_2\ar@{ >->}[d]\ar@{ >->}[r] & w_{2,1}\ar@{ >->}[d]\ar@{ >->}[r] & w_{2,2}\ar@{ >->}[d]\ar@{ >->}[r] & u_{\ell_2}\ar@{ >->}[d]\ar@{ >->}[dr] & & \\ x_3\ar@{ >->}[d]\ar@{ >->}[r] & w_{3,1}\ar@{ >->}[d]\ar@{ >->}[r] & w_{3,2}\ar@{ >->}[d]\ar@{ >->}[r] & w_{3,3}\ar@{ >->}[d]\ar@{ >->}[r] & u_{\ell_3}\ar@{ >->}[d]\ar@{ >->}[dr] & \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\ }$$ The first column in the diagram above is the sequence $X$, while the diagonal is a cofinal subsequence of $U$. Our initial assumption on $F_{i,j}$ is that $\sett{F_{n,0}}{\ntr} = F$. It is clear how to start the construction: Using the \fra\ property of $U$, we find $\ell_0$ and a $\fK$-arrow $\map {e_0}{x_0}{u_{\ell_0}}$. Next, using the $\fK$-injectivity of $A$, we find $\map{F_{0,1}}{u_{\ell_0}}A$ satisfying $F_{0,1} \cmp e_0 = F_{0,0}$. Suppose the $n$th row has already been constructed, together with arrows $F_{i,j}$ for $i\loe n$, $j\loe n+1$. 
Starting from $\fK$-arrows $\map {x_n^{n+1}}{x_n}{x_{n+1}}$ and $x_n \to w_{n,1}$, using Lemma~\ref{lekstaminja}, we find $w_{n+1,1}\in \ob\fK$ and $\fK$-arrows $w_{n,1} \to w_{n+1,1}$, $x_{n+1} \to w_{n+1,1}$, and a $\ciagi{(\fK,\fL)}$-arrow $\map {F_{n+1,1}}{w_{n+1,1}}A$ such that the diagram $$\xymatrix{ & w_{n,1}\ar@{ >->}[rd] \ar@/^/[rrrd]^{F_{n,1}} & & & \\ x_n\ar@{ >->}[ru]\ar@{ >->}[rd]_{x_n^{n+1}} & & w_{n+1,1} \ar[rr]^(.4){F_{n+1,1}} & & A\\ & x_{n+1}\ar@{ >->}[ru] \ar@/_/[rrru]_{F_{n+1,0}} & & & }$$ commutes. Continuing this way, using Lemma~\ref{lekstaminja}, we obtain the $(n+1)$st row and $\ciagi{(\fK,\fL)}$-arrows $F_{n+1,i}$ for $i\loe n+1$ which commute together with the following diagram. $$\xymatrix{ x_n\ar@{ >->}[d]\ar@{ >->}[r] & w_{n,1}\ar@{ >->}[d]\ar@{ >->}[r] & w_{n,2}\ar@{ >->}[d]\ar@{ >->}[r] & \dots\ar@{ >->}[r] & w_{n,n-1}\ar@{ >->}[d]\ar@{ >->}[r] & u_{\ell_n}\ar@{ >->}[d] \\ x_{n+1}\ar@{ >->}[r] & w_{n+1,1}\ar@{ >->}[r] & w_{n+1,2}\ar@{ >->}[r] & \dots\ar@{ >->}[r] & w_{n+1,n-1}\ar@{ >->}[r] & w_{n+1,n} }$$ Now, using the \fra\ property of $U$ we find $\ell_{n+1} > \ell_n$ and a $\fK$-arrow $w_{n+1,n} \to u_{\ell_{n+1}}$ making the triangle $$\xymatrix{ u_{\ell_n}\ar@{ >->}[d]\ar@{ >->}[dr] & \\ w_{n+1,n}\ar@{ >->}[r] & u_{\ell_{n+1}} }$$ commutative. Using Lemma~\ref{lekstaminja} again, we get an arrow $\map{F_{n+1,n+2}}{u_{\ell_{n+1}}}A$ commuting with $F_{n+1,n}$, $F_{n,n+1}$ and the triangle above. Finally, the compositions of the horizontal arrows in the triangular ``matrix" constructed above induce an arrow of sequences $\map JXU$ in $\ciagi\fK$. The inductive construction also gives a sequence of arrows $\sett{F_{n,n+1}}{\ntr}$ that turns into a $\ciagi{(\fK,\fL)}$-arrow $\map GUA$ satisfying $G \cmp J = F$. This completes the proof. \end{pf} \begin{tw}\label{tmejnn} Let $\pair \fK \fL$ be a pair of categories with property \hahah. Assume $\fK$ has a \fra\ sequence $U$ and let $X$ be an arbitrary sequence in $\fK$. 
The following properties are equivalent. \begin{enumerate} \item[(a)] $X$ is $\fK$-injective in $\ciagi{(\fK, \fL)}$. \item[(b)] There exist a $\ciagi{\fK}$-arrow $\map JXU$ and a $\ciagi{(\fK,\fL)}$-arrow $\map RUX$ such that $R \cmp J = \id X$. \item[(c)] $X$ is a retract of $U$ in $\ciagi{(\fK, \fL)}$. \end{enumerate} \end{tw} Note that condition (c) is formally weaker than (b), since it is not required in (c) that the right inverse of a retraction $\map RUX$ be a $\ciagi\fK$-arrow. \begin{pf} (a) $\implies$ (b) Applying Lemma~\ref{lkrusall} to the identity $\map {\id X}XX$, we get a $\ciagi\fK$-arrow $\map JXU$ and a $\ciagi{(\fK,\fL)}$-arrow $\map RUX$ such that $R \cmp J = \id X$. (b) $\implies$ (c) This is obvious. (c) $\implies$ (a) Let $\map JXU$ and $\map RUX$ be $\ciagi{(\fK,\fL)}$-arrows such that $R \cmp J = \id X$. Fix a $\fK$-arrow $\map iab$ and a $\ciagi{(\fK,\fL)}$-arrow $\map FaX$. By Proposition~\ref{pwkindrzektif}, $U$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$, so there exists $\map GbU$ such that $G \cmp i = J \cmp F$. Finally, we have $R \cmp G \cmp i = R \cmp J \cmp F = F$. \end{pf} \subsection{Remarks on absolute retracts} After reading our result characterizing retracts of a \fra\ sequence, one might get the false impression that \emph{every} embedding (i.e. a $\ciagi\fK$-arrow) of a $\fK$-injective $\ciagi \fK$-object into the \fra\ sequence admits a left inverse in $\ciagi(\fK,\fL)$. This is not true in general; we briefly discuss this problem below. To be more concrete, we assume $\pair \fK \fL$ has property \hahah\ and let $U$ be a \fra\ sequence in $\fK$. The problem stated above is closely related to the following well known concept: \begin{df} Let $\fK \subs \fL$ be as above. We say that $W \in \ob{\ciagi \fK}$ is an \emph{absolute retract} in $\ciagi(\fK,\fL)$ if for every $\ciagi \fK$-arrow $\map J W Y$ there exists a $\ciagi(\fK,\fL)$-arrow $\map R Y W$ such that $R \cmp J = \id W$.
\end{df} This notion is well known especially in topology. In particular, there is a rich theory of absolute retracts in geometric topology (see \cite{Borsuk}). One of the main aspects is the existence of some ``canonical" objects which can be used for checking whether a given object is an absolute retract or not. For instance, in the category of compact topological spaces, an absolute retract is simply a retract of a Tikhonov cube. In the category of metric spaces with continuous maps, absolute retracts are retracts of convex sets in normed linear spaces. In the category of metric spaces with non-expansive maps, absolute retracts are hyperconvex metric spaces \cite{AroPan}. Notice that our definition is relative to a fixed category of ``small" objects, e.g. spaces of weight less than a fixed cardinal number. In the case of compact topological (or metric) spaces, the ``canonical" objects (e.g. Tikhonov cubes) turn out to be absolute retracts without restrictions on the weight of the spaces and therefore being an absolute retract in a category of objects of restricted ``size" is equivalent to being an absolute retract in the big category, with no ``size" restrictions. The following fact is rather standard. \begin{prop}\label{Pierfbrw} Assume $\fK \subs \fL$ is a pair of categories such that $\pair{\ciagi \fK}{\ciagi(\fK,\fL)}$ has the mixed amalgamation property. Given $W \in \ob{\ciagi\fK}$ the following two properties are equivalent. \begin{enumerate} \item[(a)] $W$ is an absolute retract in $\ciagi(\fK,\fL)$. \item[(b)] $W$ is $\ciagi\fK$-injective in $\ciagi(\fK,\fL)$. \end{enumerate} \end{prop} \begin{pf} Only (a)$\implies$(b) requires an argument. Fix a $\ciagi\fK$-arrow $\map I X Y$ and a $\ciagi(\fK,\fL)$-arrow $\map F X W$. 
Using the mixed amalgamation property we find a $\ciagi\fK$-arrow $\map J W V$ and a $\ciagi(\fK,\fL)$-arrow $\map G Y V$ for which the diagram $$\xymatrix{ W \ar@{ >->}[r]^J & V \\ X \ar[u]^F \ar@{ >->}[r]_I & Y \ar[u]_G }$$ is commutative. Let $\map R V W$ be such that $R \cmp J = \id W$. Then $R \cmp G \cmp I = F$. \end{pf} The question now arises whether $\fK$-injectivity implies $\ciagi\fK$-injectivity. The next result characterizes this property using a \fra\ sequence. \begin{tw} Let $\fK \subs \fL$ be a pair of categories with property \hahah, and let $U\in \ob {\ciagi \fK}$ be a \fra\ sequence in $\fK$. Assume further that $\pair{\ciagi \fK}{\ciagi(\fK,\fL)}$ has the mixed amalgamation property. Then the following statements are equivalent. \begin{enumerate} \item[(a)] $\fK$-injectivity implies $\ciagi\fK$-injectivity in $\ciagi(\fK,\fL)$. \item[(b)] $U$ is $\ciagi\fK$-injective in $\ciagi(\fK,\fL)$. \item[(c)] For every $\ciagi\fK$-arrow $\map J U U$ there exists a $\ciagi(\fK,\fL)$-arrow $\map R U U$ such that $R \cmp J = \id U$. \end{enumerate} \end{tw} \begin{pf} Implication (a)$\implies$(b) follows from the fact that $U$ is $\fK$-injective (Proposition~\ref{pwkindrzektif}). Implication (b)$\implies$(c) is trivial. Implication (b)$\implies$(a) follows directly from Theorem~\ref{tmejnn}, because a retract of a $\ciagi\fK$-injective object is obviously $\ciagi\fK$-injective. It remains to show that (c)$\implies$(b). Suppose $U$ is not $\ciagi\fK$-injective. By Proposition~\ref{Pierfbrw}, there exists a $\ciagi\fK$-arrow $\map I U Y$ which is not left-invertible in $\ciagi(\fK,\fL)$. Here we have used the mixed amalgamation property for $\pair {\ciagi\fK}{\ciagi(\fK,\fL)}$. Since $U$ is \fra, there exists a $\ciagi\fK$-arrow $\map J Y U$. Using (c), we find a $\ciagi(\fK,\fL)$-arrow $\map R U U$ such that $R \cmp J \cmp I = \id U$. But now $R \cmp J$ is a left inverse to $I$, a contradiction.
\end{pf} An interesting consequence of the result above is that whenever $\fK$-injectivity is different from $\ciagi\fK$-injectivity, it is witnessed by some $\ciagi\fK$-arrow $\map J U U$ with no left inverse in $\ciagi(\fK,\fL)$. In other words, $U$ carries all the information about $\ciagi\fK$-injectivity. \subsection{Extensions of the main result} Theorem~\ref{tmejnn} has a natural generalization to uncountable \fra\ sequences. More precisely, let $\kappa$ be an uncountable regular cardinal and assume that all sequences in $\fK$ of length $ < \kappa$ have colimits in $\fL$, where the colimiting cocones are $\fK$-arrows. In this case we say that $\fK$ is \emph{$\kappa$-continuous} in $\fL$. Under this assumption, a version of Lemma~\ref{lkrusall} for $\kappa$-sequences is true, with almost the same proof---usual induction is replaced by transfinite induction. Proposition~\ref{pwkindrzektif} is valid for arbitrary \fra\ sequences; the countable length of the sequence was never used in the proof. Let $\uzuple \kappa\fK$ denote the category of all sequences in $\fK$ of length $\loe \kappa$, with arrows induced by natural transformations (like in the countable case). Let $\uzuple \kappa{\fK,\fL}$ denote the category with the same objects as $\uzuple \kappa\fK$, and with arrows taken from $\uzuple \kappa\fL$. We can now formulate an ``uncountable" version of our main result. \begin{tw}\label{tmejnnunct} Let $\kappa$ be an uncountable regular cardinal and let $\pair \fK \fL$ be a pair of categories with property \hahah, such that $\fK$ is $\kappa$-continuous in $\fL$. Assume $\fK$ has a \fra\ sequence $U$ of length $\kappa$. Given a sequence $X$ in $\fK$ of length $\loe \kappa$, the following properties are equivalent. \begin{enumerate} \item[(a)] $X$ is $\fK$-injective in $\uzuple \kappa{\fK, \fL}$. \item[(b)] There exist a $\uzuple \kappa{\fK}$-arrow $\map JXU$ and a $\uzuple \kappa{\fK,\fL}$-arrow $\map RUX$ such that $R \cmp J = \id X$.
\item[(c)] $X$ is a retract of $U$ in $\uzuple \kappa{\fK, \fL}$. \end{enumerate} \end{tw} Let us now come back to the countable case. Assume $\pair \fK\fL$ has property \hahah\ and moreover $\fK$ has pushouts in $\fL$. Let us look at the proof of Lemma~\ref{lkrusall}. We can assume that all squares in the infinite ``triangular matrix" constructed there are pushouts in $\fL$. Using the notation from the proof of Lemma~\ref{lkrusall}, let $W^n$ denote the sequence coming from the $n$th column. Observe that the arrow from $W^n$ to $W^{n+1}$ is determined by the ``horizontal" $\fK$-arrow $w_{n+1,n} \to w_{n+1,n+1}$. In other words, all other $\fK$-arrows come as a result of the corresponding pushout square. An arrow of sequences $\map FVW$ determined by pushouts from a single $\fK$-arrow will be called \emph{pushout generated} from $\fK$. Denote by $\ciagipo \fK$ the category whose objects are $\omega$-sequences in $\fK$, while arrows are pushout generated from $\fK$. A deeper analysis of the proof of Lemma~\ref{lkrusall} gives the following observation, which may be of independent interest. \begin{prop} Assume $\pair \fK \fL$ is a pair of categories with property \hahah\ and $\fK$ has pushouts in $\fL$. Let $X \in \ob{\ciagi\fK}$ be $\fK$-injective in $\ciagi{(\fK,\fL)}$. Then: \begin{enumerate} \item[(1)] $X$ is $\ciagipo \fK$-injective in $\ciagi{(\fK,\fL)}$. \item[(2)] Let $U$ be a \fra\ sequence in $\fK$. There exists a sequence $$X_0 \to X_1 \to X_2 \to \dots$$ in $\ciagipo \fK$ such that $X_0 = X$ and $U$ is the colimit of this sequence in $\ciagi\fK$. \end{enumerate} \end{prop} Clearly, (1) and (2) imply immediately that $X$ is a retract of $U$. \section{Applications} We start with some more comments on property \hahah. In many cases (especially in model-theoretic categories), it is much easier to prove the (mixed) amalgamation property for special ``primitive" arrows rather than for arbitrary arrows. 
In order to formalize this idea, fix a pair of categories $\pair\fK\fL$ satisfying condition \haha0 and fix a collection $\Ef \subs \fK$ (actually $\Ef$ might be a proper class). We say that $\fK$ is \emph{generated} by $\Ef$ if for every $f\in \fK$ there exist $\ntr$ and $g_0,\dots, g_{n-1} \in \Ef$ such that $f = g_{n-1} \cmp \dots \cmp g_0$. For example, if $\fK$ is the category of embeddings of finite models of a fixed first-order language, $\Ef$ may be the class of embeddings $\map fST$ such that $T$ is generated by $\img fS \cup \sn b$ for some $b\in T$. We define the amalgamation property for $\Ef$ and the mixed amalgamation property for $\pair\Ef\fL$, as before. \begin{prop}\label{pprimitivv} Let $\fK \subs \fL$ be two categories with the same objects, where $\fK$ has the joint embedding property. Assume further that $\fK$ is generated by a family $\Ef$ such that $\Ef$ has the amalgamation property and $\pair \Ef\fL$ has both the mixed amalgamation property and the amalgamated extension property. Then $\pair\fK\fL$ has property \hahah. \end{prop} \begin{pf} Given an arrow $f\in \fK$, we say that $f$ \emph{has length} $\loe n$ if $f = g_{n-1} \cmp \dots \cmp g_0$, where $g_0, \dots, g_{n-1} \in \Ef$. In particular, all arrows in $\Ef$ have length $1$. Easy induction shows that if $\map ica$, $\map jcb$ are $\fK$-arrows such that the length of $i$ is $\loe m$ and the length of $j$ is $\loe n$, then there exist $\fK$-arrows $\map kaw$, $\map \ell bw$ such that $k \cmp i = \ell \cmp j$ and $k$ has length $\loe n$, while $\ell$ has length $\loe m$. Since every $\fK$-arrow has a finite length, this shows that $\fK$ has the amalgamation property. A similar induction on the length of $\fK$-arrows shows that $\pair \fK\fL$ has the amalgamated extension property. 
Finally, using the fact that $\pair\Ef\fL$ has the mixed amalgamation property, we prove by induction that for every $\fK$-arrow $\map ica$ of length $\loe n$, and for every $\fL$-arrow $\map fcb$, there exist an $\fL$-arrow $\map gaw$ and a $\fK$-arrow $\map \ell bw$ of length $\loe n$ such that $g \cmp i = \ell \cmp f$. This shows that $\pair \fK\fL$ has the mixed amalgamation property. \end{pf} Another simplification for proving property \hahah\ is the concept of mixed pushouts. Let $\fK\subs \fL$ be two categories with the same objects. We say that $\pair\fK\fL$ has the \emph{mixed pushout property} if for all arrows $\map fca$ and $\map gcb$ such that $f\in \fK$ and $g\in \fL$, there exist arrows $\map {f'}aw$ and $\map {g'}bw$ such that $f'\in \fL$, $g'\in \fK$ and $$\xymatrix{ a \ar[r]^{f'} & w \\ c \ar@{ >->}[u]^f \ar[r]_g & b \ar@{ >->}[u]_{g'} }$$ is a pushout square in $\fL$. Note that if both $f,g$ are $\fK$-arrows in the definition above, then so are $f',g'$, by uniqueness of the pushout. The definition above makes sense (and is applicable) in the case where $\fK$ is an arbitrary family of arrows, not necessarily a subcategory. This is presented in the next statement. \begin{prop}\label{pjetghsd} Let $\fK\subs \fL$ be two categories with the same objects. Assume that $\fK$ has the joint embedding property and $\Ef\subs \fK$ is such that $\pair \Ef\fL$ has the mixed pushout property and $\Ef$ generates $\fK$. Then $\pair \fK\fL$ has property \hahah. \end{prop} \begin{pf} Suppose first that $\Ef = \fK$. The amalgamation property (condition \haha1) follows from the remark above, namely that the pushout of two $\fK$-arrows consists of $\fK$-arrows. The mixed amalgamation property (condition \haha2) is just a weaker version of the mixed pushout property. Finally, the amalgamated extension property (condition \haha3) follows immediately from the definition of a pushout. Suppose now that $\Ef \ne \fK$.
It suffices to prove that $\pair\fK\fL$ has the mixed pushout property. As in the proof of Proposition~\ref{pprimitivv}, we use induction on the length of $\fK$-arrows, bearing in mind that the composition of two pushout squares is again a pushout square. More precisely, the inductive hypothesis says: Given a $\fK$-arrow $\map ica$ of length $< n$, and an $\fL$-arrow $\map fcb$, there exist an $\fL$-arrow $\map gaw$ and a $\fK$-arrow $\map \ell bw$ of length $< n$ such that $$\xymatrix{ b \ar@{ >->}[r]^\ell & w \\ c \ar[u]^f \ar@{ >->}[r]_i & a \ar[u]_g }$$ is a pushout square in $\fL$. \end{pf} Many natural pairs of categories, in particular ones coming from model theory, have the mixed pushout property. Concrete well-known examples are finite graphs, partially ordered sets, and semilattices. Each of these classes is considered as a pair of two categories, the first one with embeddings and the second one with all homomorphisms. These examples are mentioned in \cite{Dolinka}. A typical example of a pair $\pair \fK\fL$ with property \hahah\ that fails the mixed pushout property is obtained by taking for $\fL$ the category of all finite linear orders with increasing (i.e. order-preserving) functions and for $\fK$ the category of all finite linear orders with embeddings. In contrast to the above results, it is worth mentioning a \fra\ class that does not fit into our framework. Namely, the \fra\ class of finite $K_n$-free graphs (where $K_n$ denotes the complete graph with $n$ vertices and $n > 2$) has the pushout property (formally, the class of embeddings has pushouts in the class of all homomorphisms), yet the corresponding pair of categories fails to have the mixed amalgamation property. Specifically, a graph is meant to be a structure with one symmetric irreflexive binary relation, so a homomorphism of graphs cannot identify vertices connected by edges. In other words, every graph homomorphism restricted to a complete subgraph becomes an embedding.
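The last observation is easy to check mechanically for small graphs. The following sketch (our own illustration, not part of the argument; all names are invented) enumerates by brute force every homomorphism from the triangle $K_3$ into a small $K_4$-free graph and confirms that each one is injective, i.e. an embedding:

```python
from itertools import product

def is_hom(f, edges_src, edges_dst):
    """f maps source vertices to target vertices; since graphs are simple
    (irreflexive), every source edge must be sent to a target edge."""
    sym = edges_dst | {(v, u) for (u, v) in edges_dst}
    return all((f[u], f[v]) in sym for (u, v) in edges_src)

# the triangle K_3 on vertices 0, 1, 2
K3_vertices = [0, 1, 2]
K3_edges = {(0, 1), (0, 2), (1, 2)}

# a small K_4-free target: a triangle 0-1-2 with a pendant vertex 3
H_vertices = [0, 1, 2, 3]
H_edges = {(0, 1), (0, 2), (1, 2), (2, 3)}

homs = [dict(zip(K3_vertices, img))
        for img in product(H_vertices, repeat=3)
        if is_hom(dict(zip(K3_vertices, img)), K3_edges, H_edges)]

# every homomorphism defined on a complete graph is injective,
# because adjacent vertices cannot share an image (loops are forbidden)
assert homs and all(len(set(f.values())) == 3 for f in homs)
```

A map that identifies two adjacent vertices fails the edge condition immediately, since the image of that edge would have to be a loop.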
It has been proved by Mudrinski~\cite{Mudrinski} that for $n > 2$, the \fra\ limit of $K_n$-free graphs (called the \emph{Henson graph} $H_n$) is retract rigid, i.e. identity is the only retraction of $H_n$. On the other hand, we have the following easy fact (stated in a different form in \cite[Example 3.3]{DolinkaBerg}). \begin{prop} No $K_n$-free graph with $n > 2$ is injective for finite $K_n$-free graphs. \end{prop} \begin{pf} Suppose $X$ is such a graph. Using injectivity for $S=\emptyset$ and $T = K_{n-1}$, we see that $X$ contains an isomorphic copy $K$ of $K_{n-1}$. Now let $S$ be a graph with $n-1$ vertices and no edges and let $\map fSX$ be a bijection onto $K$. Let $T = S \cup \sn v$, where $v$ is connected to all the vertices of $S$. By injectivity, there exists a homomorphism $\map gTX$ extending $f$. But now $K \cup \sn{g(v)} \subs X$ is a copy of $K_n$, a contradiction. \end{pf} Before discussing concrete examples of pairs with property \hahah, we make one more remark on injectivity. Recall that an arrow $\map jxy$ is \emph{left-invertible} in $\fL$ if there exists $f\in \fL$ such that $f \cmp j = \id x$. The following is an easy consequence of our main result. \begin{wn}\label{wnrhojht} Let $\pair \fK\fL$ be a pair of categories such that every $\fK$-arrow is left-invertible in $\fL$. Assume that $\pair \fK\fL$ has property \hahah\ and $U$ is a \fra\ sequence in $\fK$. Then for every sequence $X \in \ob {\ciagi\fK}$ there exist a $\ciagi\fK$-arrow $\map JXU$ and a $\ciagi{(\fK,\fL)}$-arrow $\map RUX$ such that $R \cmp J = \id X$. \end{wn} \begin{pf} In view of Theorem~\ref{tmejnn}, it suffices to show that every sequence is $\fK$-injective in $\ciagi{(\fK,\fL)}$. Fix $X \in \ob {\ciagi\fK}$, a $\fK$-arrow $\map jab$, and a $\ciagi{(\fK,\fL)}$-arrow $\map faX$. Choose an $\fL$-arrow $\map rba$ such that $r \cmp j = \id a$. Then $g = f \cmp r$ has the property that $g \cmp j = f$. This shows that $X$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$. 
\end{pf} This corollary applies to finite Boolean algebras (also noted in \cite{Dolinka}) and, as we shall see later, to finite linear orderings. \subsection{\fra\ classes and algebraically closed models} Let $\Emm$ be a class of finitely generated models of a fixed first-order language $L$. It is natural to consider the category $\homos\Emm$ whose objects are all elements of $\Emm$ and arrows are all homomorphisms (i.e. maps that preserve all relations, functions and constants). It is also natural to consider the category $\embes\Emm$ whose objects are again all elements of $\Emm$, while arrows are embeddings only. In many cases, $\pair{\embes\Emm}{\homos\Emm}$ has property \hahah. Simplifying the notation, we shall say that $\Emm$ has the {\em pushout property} or {\em mixed amalgamation property} if $\pair{\embes\Emm}{\homos\Emm}$ has such a property. Denote by $\ovr \Emm$ the class of all (countable) models that are unions of $\omega$-chains of models from $\Emm$. It is clear that $\ciagi{\embes\Emm}$ is equivalent to $\ovr\Emm$ with embeddings and $\ciagi{(\embes\Emm,\homos\Emm)}$ is equivalent to $\ovr\Emm$ with all homomorphisms. Recall that a model $X\in\ovr\Emm$ is {\em algebraically closed} if for every formula $$\phi(x_0,\dots,x_{k-1}, y_0,\dots,y_{\ell-1})$$ that is a finite conjunction of atomic formulae, for every $a_0,\dots, a_{k-1} \in X$, if there exists an extension $X'\sups X$ in $\ovr\Emm$ satisfying $$X'\models (\exists\;y_0,\dots,y_{\ell-1})\; \phi(a_0,\dots,a_{k-1},y_0,\dots,y_{\ell-1})$$ then there exist $b_0, \dots, b_{\ell-1} \in X$ such that $X\models \phi(a_0,\dots,a_{k-1},b_0,\dots,b_{\ell-1})$. \begin{prop}\label{ppanfgaju} Let $\Emm$ be a class of finitely generated models of a fixed first-order language. Every $\Emm$-injective model in $\ovr\Emm$ is algebraically closed. \end{prop} \begin{pf} Fix an $\Emm$-injective model $X \in \ovr\Emm$. 
Fix $X'\sups X$ and assume $X' \models (\exists\; \vec y)\; \phi(\vec a, \vec y)$ for some $k$-tuple $\vec a$ of elements of $X$, where $\phi(\vec x,\vec y)$ is a finite conjunction of atomic formulae and $\vec x, \vec y$ are shorthand for $(x_0,\dots, x_{k-1})$ and $(y_0,\dots,y_{\ell-1})$, respectively. Let $S\in \Emm$ be a submodel of $X$ that contains $\vec a$. Let $T\in \Emm$ be a submodel of $X'$ containing $S$ and a fixed tuple $\vec b$ such that $X' \models \phi(\vec a, \vec b)$. Then also $T \models \phi(\vec a, \vec b)$, because this property is absolute for $\phi$. Using the $\Emm$-injectivity of $X$, find a homomorphism $\map fTX$ satisfying $f\rest S = \id S$. Finally, let $\vec c = (f(b_0),\dots,f(b_{\ell-1}))$, where $\vec b = (b_0,\dots,b_{\ell-1})$. Since $f$ is a homomorphism and $\phi$ is a conjunction of atomic formulae, we have that $X \models \phi(\vec a, \vec c)$. \end{pf} We shall say that a structure $M$ is \emph{$n$-generated} if there exists $S\subs M$ such that $|S|\loe n$ and $S$ generates $M$, that is, no proper submodel of $M$ contains $S$. Recall that a first-order language is \emph{finite} if it contains finitely many symbols (constant, relation, and function symbols). \begin{prop}\label{pnfgju} Let $\Emm$ be a class of finite models of a fixed first-order language $L$. Assume that either $L$ is finite or for every $\ntr$ there exist finitely many isomorphic types of $n$-generated models in $\Emm$. Assume furthermore that $\Emm$ has the mixed amalgamation property. Then every algebraically closed $L$-model $X \in \ovr\Emm$ is $\Emm$-injective. \end{prop} \begin{pf} Fix $S,T \in \Emm$ such that $S$ is a submodel of $T$. Fix a homomorphism $\map fSX$. Using the mixed amalgamation property, we can find an extension $X'\in \ovr\Emm$ of $X$ and a homomorphism $\map {f'}T{X'}$ such that $f'\rest S = f$. Let $\Gee$ be the set of all functions $\map gTX$ satisfying $g \rest S = f$. We need to show that some $g\in\Gee$ is a homomorphism.
Suppose first that there exist only finitely many $|T|$-generated structures in $\Emm$ and let $\En \subs \Emm$ be a finite set that contains isomorphic types of all of them. Given $g\in \Gee$, denote by $g'$ a fixed isomorphism from the submodel generated by $\img gT$ onto a fixed model from the collection $\En$. Note that $g$ is a homomorphism if and only if $g' \cmp g$ is a homomorphism. Now observe that the set $\Ha = \setof{g' \cmp g}{g\in \Gee}$ is finite. Let $S = \sett{s_i}{i < k}$ and $T\setminus S = \sett{t_j}{j < \ell}$. Fix $g\in \Gee$ and suppose it is not a homomorphism. There exists either a relation $R$ or a function $F$ and a finite sequence of elements of $T$ that witness this fact. Let $\psi_g$ be an atomic formula describing this fact. We may assume that $\psi_g$ has $k+\ell$ free variables, where the first $k$ denote $s_0,\dots,s_{k-1}$ and the remaining ones denote $t_0,\dots,t_{\ell-1}$. Since $g$ is a homomorphism if and only if $g' \cmp g$ is, we may also choose $\psi_g$ so that it depends only on $g' \cmp g$. Let $\phi$ be the conjunction of all formulae $\psi_g$, where $g\in\Gee$; as $\Ha$ is finite, only finitely many formulae $\psi_g$ occur, so $\phi$ is indeed a finite conjunction. Then $T\models \phi(s_0, \dots, s_{k-1}, t_0, \dots, t_{\ell-1})$ and, since $f'$ is a homomorphism, \begin{equation} X' \models \phi \Bigl(f(s_0), \dots, f(s_{k-1}), f'(t_0), \dots, f'(t_{\ell-1})\Bigr). \tag{1}\label{eqmfutione} \end{equation} Using the fact that $X$ is algebraically closed, find $\vec u = (u_0,\dots,u_{\ell-1})$ in $X$ such that \begin{equation} X \models \phi \Bigl(f(s_0), \dots, f(s_{k-1}), u_0, \dots, u_{\ell-1} \Bigr). \tag{2}\label{eqmfugotwo} \end{equation} Let $g\in \Gee$ be such that $g(t_j) = u_j$ for $j < \ell$. Then $g$ is a homomorphism. Indeed, otherwise there would be a witness (a relation or a function, plus some elements of $T$) saying that $g' \cmp g$ is not a homomorphism; however, $\phi$ ``knows" all these witnesses, which gives rise to a contradiction. Suppose now that $L$ is finite and consider again the set $\Gee$. For each $g\in \Gee$, if $g$ is not a homomorphism, this is witnessed by an atomic formula $\psi_g$ and some elements of $T$.
Now, even though the set $\Gee$ may be infinite, the number of atomic formulae with parameters in $T$ is finite. As before, let $\phi(\vec x, \vec y)$ collect all of them. Again, $T \models \phi(\vec s, \vec t)$ and consequently (\ref{eqmfutione}) holds. Since $X$ is algebraically closed, we can find $\vec u$ such that (\ref{eqmfugotwo}) holds. Finally, $g\in\Gee$ satisfying $g(t_j) = u_j$ ($j < \ell$) is the desired homomorphism. \end{pf} Following Dolinka~\cite{Dolinka}, we say that a class of models $\Emm$ has the \emph{1-point homomorphism extension property} (briefly: 1PHEP) if for every embedding $\map iAB$ and for every surjective homomorphism $\map fAC$, where $A,B,C \in \Emm$ and $B$ is generated by $A\cup \sn b$ for some $b\in B$, there exist an embedding $\map jCD$ and a homomorphism $\map gBD$ for which the diagram $$\xymatrix{ C \ar@{ >->}[r]^j & D \\ A \ar@{->>}[u]^f \ar@{ >->}[r]_i & B \ar@{->>}[u]_g }$$ commutes. Let us say that an embedding $\map iAB$ is \emph{primitive} if $B$ is generated by $\img iA$ together with a single element. Clearly, every embedding is a composition of primitive embeddings. Furthermore, every homomorphism is the composition of a surjective homomorphism and an embedding. These facts, together with an easy induction (see Proposition~\ref{pprimitivv}), show that 1PHEP is equivalent to the mixed amalgamation property of $\pair{\embes \Emm}{\homos \Emm}$. Combining Theorem~\ref{tmejnn}, Propositions~\ref{ppanfgaju},~\ref{pnfgju} and the remarks above, we obtain a strengthening of Dolinka's result \cite{Dolinka}: \begin{wn} Let $\Emm$ be a \fra\ class of finite models of a fixed first-order language $L$. Assume that $L$ is finite or for every $\ntr$ the number of isomorphism types of $n$-generated structures in $\Emm$ is finite. Assume further that $\Emm$ has the pushout property and the 1PHEP. Let $U\in\ovr\Emm$ be the \fra\ limit of $\Emm$. For a model $X\in \ovr\Emm$ the following conditions are equivalent.
\begin{enumerate} \item[(a)] $X$ is a retract of $U$. \item[(b)] $X$ is algebraically closed. \end{enumerate} \end{wn} The ``pushout property" in the statement above means that $\embes\Emm$ has pushouts in $\homos\Emm$. This assumption may of course be replaced by a weaker one, namely, that $\pair{\embes\Emm}{\homos\Emm}$ has the amalgamated extension property. Note that a \fra\ class of finite models of a finite language may fail the condition concerning the number of $n$-generated models. For example, let $L$ consist of a single unary function symbol $P$ and let $\Emm$ be the class of all finite $L$-models. That is, every model $S \in \Emm$ is endowed with a function $\map {P^S}SS$ and $\map fST$ is a homomorphism iff $f(P^S(x)) = P^T(f(x))$ for every $x\in S$. It is an easy exercise to check that $\Emm$ is a \fra\ class with the mixed pushout property, therefore the corollary above applies. On the other hand, for each $\ntr$ there exists a $1$-generated structure $S_n \in \Emm$ of cardinality $n$. Namely, $S_n = \{0,\dots,n-1\}$ with the function $P$ defined by $P(n-1)=0$ and $P(i) = i+1$ for $i < n-1$. Thus, there are infinitely many $1$-generated structures in $\Emm$. Note that a countable $L$-structure $\pair XP$ belongs to $\ovr \Emm$ if and only if for every finite set $A\subs X$ there exists a finite set $S\subs X$ such that $A\subs S$ and $\img PS \subs S$. There are some natural \fra\ classes of finite models of infinite languages and with infinitely many $n$-generated structures, for which Proposition~\ref{pnfgju} (and consequently the corollary above) still hold. In Section~\ref{sstegbjer} below, we shall investigate \fra\ classes of metric spaces, showing that the possibility of characterizing injectivity by ``being algebraically closed" depends on the language specifying the objects.
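The claim that each cycle $S_n$ is $1$-generated can be confirmed directly by computing closures under $P$. The snippet below is a small illustration of ours (the helper names are not from the text):

```python
def generated_submodel(P, gens):
    """Smallest subset containing gens and closed under the unary function P."""
    closure, frontier = set(gens), list(gens)
    while frontier:
        y = P[frontier.pop()]
        if y not in closure:
            closure.add(y)
            frontier.append(y)
    return closure

def S(n):
    """The cycle S_n: P(i) = i+1 for i < n-1 and P(n-1) = 0."""
    return {i: (i + 1) % n for i in range(n)}

# each S_n is generated by the single element 0 (in fact by any element),
# so the class contains infinitely many pairwise non-isomorphic
# 1-generated structures
for n in range(1, 10):
    assert generated_submodel(S(n), {0}) == set(range(n))
```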
\subsection{A note on homomorphism-homogeneous structures} In connection with (classical model-theoretic) \fra\ limits, there is an interesting notion of homomorphism-homogeneous structures, recently introduced by Cameron and Ne\v set\v ril~\cite{CamNes} and already studied by several authors (see \cite{Mas07}, \cite{Ilic}, \cite{CamLock}, \cite{RusSch}, \cite{MasPech}). Namely, a (usually countable) structure $M$ is \emph{homomorphism-homogeneous} if every homomorphism between its finitely generated substructures extends to an endomorphism of $M$. It is clear that this notion can be defined in category-theoretic language, using a pair of categories $\pair \fK\fL$ as before, where $\fL$-arrows mean ``homomorphisms" and $\fK$-arrows mean ``embeddings". It turns out that homomorphism-homogeneity is closely related to injectivity, as we show below. \begin{df} Fix two categories $\fK\subs \fL$ with the same objects. We say that an object $X \in \ob{\ciagi\fK}$ is \emph{$\fL$-homogeneous in $\ciagi{(\fK,\fL)}$} if for every $\ciagi\fK$-arrow $\map jaX$ such that $a\in \ob\fK$, for every $\ciagi{(\fK,\fL)}$-arrow $\map faX$, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map FXX$ satisfying $F \cmp j = f$. This is described in the diagram below. $$\xymatrix{ a \ar@{ >->}[rr]^j \ar[rrd]_f & & X \ar@{-->}[d]^F \\ & & X }$$ \end{df} Note that the arrow $f$ is of the form $x_n^\infty \cmp f'$ for some $f' \in \fL$. That is why the definition above really speaks about $\fL$-homogeneity, not $\ciagi{(\fK,\fL)}$-homogeneity. It can actually be viewed as a variation on the mixed amalgamation property, as witnessed by the results below. \begin{lm}\label{lnesetrilrtn} Let $\pair \fK\fL$ be a pair of categories such that $\fK \subs \fL$ and let $X,Y \in \ob{\ciagi\fK}$ be such that $X$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$.
Then for every $\ciagi\fK$-arrow $\map jaY$ with $a\in \ob\fK$, for every $\ciagi{(\fK,\fL)}$-arrow $\map faX$, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map FYX$ for which the diagram $$\xymatrix{ a \ar@{ >->}[rr]^j \ar[rrd]_f & & Y \ar[d]^F \\ & & X }$$ commutes. \end{lm} \begin{pf} The arrow $j$ factorizes through some $y_k$, that is, $j = y_k^\infty \cmp i$ for some $\fK$-arrow $\map ia{y_k}$. Using $\fK$-injectivity, we construct inductively $\fL$-arrows $\map {f_n}{y_n}X$ for $n \goe k$ so that $f_k \cmp i = f$ and $f_{n+1} \cmp y_n^{n+1} = f_n$ for $n \goe k$. This gives rise to an arrow of sequences $F = \sett{f_n}{n \goe k}$ satisfying $F \cmp j = f$. \end{pf} Letting $X = Y$ in the lemma above, we obtain: \begin{wn}\label{wnrgtboh} Let $\fK \subs \fL$ be a pair of categories. Every $\fK$-injective object is $\fL$-homogeneous in $\ciagi{(\fK,\fL)}$. \end{wn} The equivalence (b)$\iff$(c) in the next statement, in the context of model theory, has been noticed by Dolinka~\cite[Prop. 3.8]{DolinkaBerg}. \begin{prop} Let $\fK \subs \fL$ be a pair of categories and let $\fK$ have a \fra\ sequence $U \in \ob{\ciagi\fK}$. The following properties are equivalent: \begin{enumerate} \item[(a)] $U$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$. \item[(b)] $U$ is $\fL$-homogeneous in $\ciagi{(\fK,\fL)}$. \item[(c)] $\pair \fK\fL$ has the mixed amalgamation property. \end{enumerate} \end{prop} \begin{pf} Implication (c)$\implies$(a) has been proved in Proposition~\ref{pwkindrzektif}. Implication (a)$\implies$(b) is a consequence of Corollary~\ref{wnrgtboh}. It remains to show that (b)$\implies$(c). Suppose $U$ is $\fL$-homogeneous in $\ciagi{(\fK,\fL)}$ and fix a $\fK$-arrow $\map jca$ and an $\fL$-arrow $\map fcb$. Using the property of being a \fra\ sequence, find $\fK$-arrows $\map ia{u_k}$ and $\map eb{u_\ell}$ for some $k,\ell < \nat$.
Since $U$ is $\fL$-homogeneous, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map FUU$ satisfying $F \cmp u_k^\infty \cmp i \cmp j = u_\ell^\infty \cmp e \cmp f$. Finally, find an $\fL$-arrow $\map g{u_k}{u_m}$ with $m > \ell$, such that $u_m^\infty \cmp g = F \cmp u_k^\infty$. The situation is described in the following diagram. $$\xymatrix{ c \ar@{ >->}[r]^j \ar[d]_f & a \ar@{ >->}[r]^i & u_k \ar[rd]^g \ar@{ >->}[rrr] & & & U \ar[d]^F \\ b \ar@{ >->}[rr]_e & & u_\ell \ar@{ >->}[r]_{u_\ell^m} & u_m \ar@{ >->}[rr] & & U }$$ Thus, $j$ and $f$ are amalgamated by a $\fK$-arrow $u_\ell^m \cmp e$ and an $\fL$-arrow $g \cmp i$. \end{pf} Under certain natural assumptions, we are able to characterize homo\-mor\-phism-homo\-gen\-eous objects. In the next statement we deal with countable categories, but what we really have in mind is the existence of countably many isomorphic types of arrows. For example, the category of finite sets is a proper class, yet it is obviously equivalent to a countable category. \begin{tw}\label{tmejndwa} Let $\fK \subs \fL$ be a pair of categories such that $\pair \fK\fL$ has the mixed pushout property, $\fL$ is countable, and $\fK$ has the initial object $0$. For a sequence $X \in \ob{\ciagi\fK}$, the following properties are equivalent. \begin{enumerate} \item[(a)] $X$ is $\fL$-homogeneous in $\ciagi{(\fK,\fL)}$. \item[(b)] There exists a subcategory $\fK_0$ of $\fK$ such that $0$ is initial in $\fK_0$, $X \in \ob{\ciagi{\fK_0}}$, $\pair{\fK_0}\fL$ has the mixed pushout property, and $X$ is $\fK_0$-injective in $\ciagi{(\fK_0,\fL)}$. \item[(c)] There exists a subcategory $\fK_0$ of $\fK$ such that $0$ is initial in $\fK_0$, $X \in \ob{\ciagi{\fK_0}}$, $\pair{\fK_0}\fL$ has the mixed pushout property, and $X$ is a retract of a \fra\ sequence in $\fK_0$. \end{enumerate} \end{tw} The existence of the initial object in $\fK$ is not essential, but to remove it we would have to make more technical assumptions involving the joint embedding property. 
\begin{pf} The equivalence (b)$\iff$(c) is contained in Theorem~\ref{tmejnn}. The fact that $\fK_0$ is countable has been used here for the existence of a \fra\ sequence. Implication (b)$\implies$(a) is contained in Corollary~\ref{wnrgtboh}. It remains to show that (a)$\implies$(b). We may assume that $x_0 = 0$ in the sequence $X$. Let $\Es = \setof{x_n^m}{n \loe m, \; n,m\in\nat}$. Then $\Es$ is a subcategory of $\fK$ that contains the initial object $0$. We first check that $X$ is $\Es$-injective. Fix a $\ciagi{(\fK,\fL)}$-arrow $\map f{x_n}X$ and fix $m > n$. Since $X$ is $\fL$-homogeneous, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map FXX$ satisfying $F \cmp x_n^\infty = f$. Note that $x_n^\infty = x_m^\infty \cmp x_n^m$, therefore $(F \cmp x_m^\infty) \cmp x_n^m = f$, which shows the $\Es$-injectivity of $X$. Now let $\fK_0$ consist of all $\fK$-arrows $\map jca$ such that $X$ is $j$-injective in $\ciagi{(\fK,\fL)}$ and there exists at least one $\ciagi{(\fK,\fL)}$-arrow from $c$ to $X$. That is, for every $\ciagi{(\fK,\fL)}$-arrow $\map fcX$, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map gaX$ satisfying $g \cmp j = f$. The second assumption is needed for keeping $0$ initial in $\fK_0$, namely, $X$ should also be injective for the (unique) arrow $0\to a$. It is clear that $\fK_0$ is a subcategory of $\fK$ containing $\Es$. In particular, $X\in \ob{\ciagi {\fK_0}}$. It remains to show that $\pair{\fK_0}\fL$ has the mixed pushout property. To this end, fix a $\fK_0$-arrow $\map jcb$, an $\fL$-arrow $\map pca$, and let $\map kaw$, $\map qbw$ be such that $k\in\fK$, $q\in\fL$ and $$\xymatrix{ b \ar[r]^q & w \\ c \ar@{ >->}[u]^j \ar[r]_p & a \ar@{ >->}[u]_k }$$ is a pushout square in $\fL$. Fix a $\ciagi{(\fK,\fL)}$-arrow $\map faX$. Since $X$ is $j$-injective, there exists a $\ciagi{(\fK,\fL)}$-arrow $\map gbX$ satisfying $g \cmp j = f \cmp p$.
Both arrows $f$ and $g$ factor through some $x_n$, namely, $f = x_n^\infty \cmp f'$ and $g = x_n^\infty \cmp g'$ for some $\fL$-arrows $f', g'$. Using the universal property of the pushout, we find a unique $\fL$-arrow $\map hw{x_n}$ satisfying $h \cmp k = f'$ and $h \cmp q = g'$. In particular, $\ovr f = x_n^\infty \cmp h$ has the property that $\ovr f \cmp k = f$. This shows that $X$ is $k$-injective in $\ciagi{(\fK,\fL)}$ and completes the proof. \end{pf} Unfortunately, the result above is not fully applicable to \fra\ classes. Namely, in the case where $\fK$ is a countable \fra\ class, $\fK_0$ may not be a full subcategory of $\fK$. This is demonstrated below, for the class of finite graphs. \begin{ex} Let $X$ be the two-element complete graph. It is clear that $X$ is homomorphism-homogeneous (and also ultrahomogeneous). We consider graphs without loops, therefore every endomorphism of $X$ is an automorphism. More precisely, we consider the pair $\pair\fK\fL$, where $\ob\fK = \ob\fL$ are all finite simple graphs, the $\fK$-arrows are embeddings and the $\fL$-arrows are graph homomorphisms. Let $\fK_0$ be any subcategory of $\fK$ that has pushouts in $\fL$ and contains all the embeddings of subgraphs of $X$. So, $\ob{\fK_0}$ contains the empty graph and complete subgraphs of size $\loe 2$. The pushout of two embeddings of the empty graph is just the coproduct (disjoint sum), therefore $\ob{\fK_0}$ contains the 2-element graph $D$ with no edges. Furthermore, $\ob{\fK_0}$ contains the graph $G$ whose set of vertices is $\{-1,0,1\}$ and the edges are $\dn{-1}0$ and $\dn01$. Such a graph comes from the pushout of two embeddings of the one-element graph into $X$. Now consider an embedding $\map jDG$ such that $\img jD = \dn{-1}1$. Let $\map fDX$ be one-to-one. Clearly, $f$ is a homomorphism and no homomorphism $\map gGX$ satisfies $g\cmp j = f$. This shows that $X$ is not $j$-injective. In particular, $\fK_0$ is not a full subcategory of $\fK$.
\end{ex} \subsection{Metric spaces}\label{sstegbjer} We shall now discuss a concrete model-theoretic application of our result: retracts of the universal metric space of Urysohn. Let $\metrics$ be the category of finite metric spaces with isometric embeddings. The objects of $\metrics$ are models of a first-order language: for each $r>0$ we can define the binary relation $D_r(x,y) \iff d(x,y) < r$, where $d$ denotes the metric on a fixed set $X$. The axioms of a metric can be rephrased in terms of the relations $D_r$. For example, the triangle inequality follows from the following (infinitely many) formulae: $$D_r(x,z) \land D_s(z,y) \implies D_{r+s}(x,y).$$ Note that it suffices to consider the relations $D_r$ with $r$ positive rational: the metric is then recovered as $d(x,y) = \inf \setof{r \in \Qyu^+}{D_r(x,y)}$, where $\Qyu^+$ denotes the set of all positive rationals. In other words, metric spaces can be described in a countable language. It is clear that, in this language, a homomorphism of metric spaces is a non-expansive map. Recall that $\map fXY$ is \emph{non-expansive} if $d_Y(f(p),f(q)) \loe d_X(p,q)$ for every $p,q \in X$, where $d_X$, $d_Y$ denote the metrics on $X$ and $Y$ respectively. It is also possible to describe a metric space by similar relations $D_r$, now meaning that the distance is $\loe r$. We shall see later that, even though both languages describe the same objects, the notion of being algebraically closed is completely different. Clearly, the language of metric spaces is infinite and there exist infinitely many types of $2$-element metric spaces (even when restricting to rational distances), therefore one cannot apply Dolinka's result here. Moreover, $\metrics$ is formally not a \fra\ class, because it contains continuum many pairwise non-isomorphic objects. It becomes a \fra\ class when restricting to spaces with rational distances. However, in that case we cannot speak about complete metric spaces.
In any case, our main result is applicable to the complete metric space of Urysohn, as we show below. The following lemma, in a slightly different form, can be found in \cite[Lemma 3.5]{DolMas}. \begin{lm}\label{lnjgetrgg} Let $\map fXY$ be a non-expansive map of nonempty finite metric spaces. Assume $X\cup\sn a$ is a metric extension of $X$. Then there exists a metric extension $Y\cup \sn b$ of $Y$ such that $$\xymatrix{ Y \ar@{ >->}[r] & Y\cup\sn b \\ X \ar[u]^f \ar@{ >->}[r] & X\cup\sn a \ar[u]_g }$$ where $g\rest X = f$ and $g(a) = b$, is a pushout square in the category of metric spaces with non-expansive maps. Furthermore \begin{equation} d(y,b) = \min_{x\in X} \Bigl( d(y,f(x)) + d(x,a) \Bigr) \tag{M}\label{eqemfwoef} \end{equation} for every $y\in Y$. \end{lm} The statement obviously fails when $X = \emptyset$ and $Y \nnempty$. \begin{pf} We first need to show that (\ref{eqemfwoef}) defines a metric on $Y\cup \sn b$. Of course, only the triangle inequality requires an argument. Fix $y,y_1 \in Y$. Find $x_1\in X$ such that $d(y_1,b) = d(y_1,f(x_1)) + d(x_1,a)$. Using the triangle inequality in $Y$, we get $$d(y,b) \loe d(y,f(x_1)) + d(x_1,a) \loe d(y,y_1) + d(y_1,f(x_1)) + d(x_1,a) = d(y,y_1) + d(y_1,b).$$ Now find $x\in X$ such that $d(y,b) = d(y,f(x)) + d(x,a)$. Using the triangle inequalities in $X$ and $Y$, and the fact that $d(f(x),f(x_1)) \loe d(x,x_1)$, we obtain \begin{align*} d(y,b) + d(y_1,b) &= d(y,f(x)) + d(x,a) + d(y_1,f(x_1)) + d(x_1,a) \\ &\goe d(y,f(x)) + d(x,x_1) + d(y_1,f(x_1)) \\ &\goe d(y,f(x)) + d(f(x),f(x_1)) + d(y_1,f(x_1)) \\ &\goe d(y,y_1). \end{align*} Thus, $d$ defined by (\ref{eqemfwoef}) fulfills the triangle inequality. Given $x\in X$, we have $d(g(x),g(a)) = d(f(x),b) \loe d(f(x),f(x)) + d(x,a) = d(x,a)$. This shows that $g$ is non-expansive. Finally, assume $\map p{X\cup\sn a}W$ and $\map qYW$ are non-expansive maps such that $p\rest X = q \cmp f$. 
We need to show that there exists a unique non-expansive map $\map h{Y\cup\sn b}W$ satisfying $h \cmp g = p$ and $h \rest Y = q$. The uniqueness of $h$ is clear, namely $h(b) = h(g(a)) = p(a)$. It remains to verify that $h$ is non-expansive. Suppose otherwise and fix $y\in Y$ such that $d(h(y),h(b)) > d(y,b)$. Find $x\in X$ such that $d(y,b) = d(y,f(x)) + d(x,a)$. So we have \begin{equation} d(h(y),p(a)) > d(y,f(x)) + d(x,a). \tag{*}\label{eqgwiazz} \end{equation} Knowing that $p$ and $q$ are non-expansive, we get \begin{equation} d(p(x), p(a)) \loe d(x,a) \oraz d(q(y),q(f(x))) \loe d(y,f(x)). \tag{**}\label{eqgwizdd} \end{equation} Note that $q(f(x)) = p(x)$ and $q(y) = h(y)$. Finally, (\ref{eqgwiazz}) and (\ref{eqgwizdd}) give $$d(h(y),p(a)) > d(p(x),p(a)) + d(h(y),p(x))$$ which contradicts the triangle inequality in $W$. This completes the proof. \end{pf} We say that a metric space $\pair Xd$ is \emph{finitely hyperconvex} if for every finite family of closed balls $$\Aaa = \left\{ \clbal(x_0,r_0), \clbal(x_1,r_1), \dots, \clbal(x_{n-1},r_{n-1}) \right\}$$ such that $\bigcap \Aaa = \emptyset$, there exist $i,j < n$ such that $$d(x_i,x_j) > r_i + r_j.$$ This is a weakening of the notion of a \emph{hyperconvex metric space}, due to Aronszajn \& Panitchpakdi~\cite{AroPan}, where the family above may be of arbitrary cardinality. Actually, the authors of \cite{AroPan} had already considered $\kappa$-hyperconvex metric spaces; finite hyperconvexity corresponds to $\aleph_0$-hyperconvexity. A variant of finite hyperconvexity (with closed balls replaced by open balls) has been recently studied by Niemiec~\cite{NiemiecANR} in the context of topological absolute retracts. The following facts relate this definition to our main topic. The first one should be well known to readers familiar with hyperconvexity, namely, every metric space embeds isometrically into a hyperconvex one.
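The one-point extension formula (M) of Lemma~\ref{lnjgetrgg} lends itself to a quick numerical sanity check. The following Python sketch is an illustration only: it realizes $X$ and $Y$ as finite sets of points in the Euclidean plane, takes the (non-expansive) halving map as $f$, and the particular point configurations are arbitrary choices. It computes $d(y,b)$ by formula (M) and verifies the triangle inequalities on $Y\cup\{b\}$ together with non-expansiveness of $g$.

```python
import math
import random

random.seed(0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical finite metric spaces realized as plane points; f halves every
# coordinate, so d(f(x), f(x')) <= d(x, x') and f is non-expansive.
X = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(5)]
a = (random.uniform(0, 10), random.uniform(0, 10))   # the extension point of X
f = {x: (x[0] / 2, x[1] / 2) for x in X}
Y = list(f.values()) + [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(3)]

def d_b(y):
    """Formula (M): the distance from the new point b to y in Y."""
    return min(dist(y, f[x]) + dist(x, a) for x in X)

# Triangle inequalities on Y ∪ {b} involving the new point b.
for y in Y:
    for y1 in Y:
        assert d_b(y) <= dist(y, y1) + d_b(y1) + 1e-9
        assert dist(y, y1) <= d_b(y) + d_b(y1) + 1e-9

# The map g (sending a to b) is non-expansive: d(b, f(x)) <= d(a, x).
for x in X:
    assert d_b(f[x]) <= dist(a, x) + 1e-9
print("formula (M) checks passed")
```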
\begin{lm}\label{lhtrgoo} Let $X$ be a finite metric space and let $\Aaa = \sett{\clbal(x_i,r_i)}{i<N}$ be a family of closed balls such that $N\in\nat$ and $d(x_i,x_j) \loe r_i + r_j$ for every $i,j<N$. Then there exists a metric extension $X\cup \sn a$ of $X$ such that $d(a,x_i) \loe r_i$ for every $i<N$. \end{lm} \begin{pf} Fix $a\notin X$ and define \begin{equation} d(a,x) = \min_{i < N}\Bigl( d(x,x_i) + r_i \Bigr). \tag{*}\label{eqnstearr} \end{equation} Obviously, $d(a,x_i)\loe r_i$. It remains to check that (\ref{eqnstearr}) indeed defines a metric on $X\cup\sn a$. It is the triangle inequality that requires a proof. Fix $x,y\in X$ and fix $k < N$ such that $d(a,y) = d(y,x_k) + r_k$. Then $$d(a,x) \loe d(x,x_k) + r_k \loe d(x,y) + d(y,x_k) + r_k = d(x,y) + d(y,a).$$ This shows that $d(x,a) \loe d(x,y) + d(y,a)$. Now fix $i < N$ such that $d(a,x) = d(x,x_i) + r_i$. We have that $d(x_i,x_k) \loe r_i + r_k$, therefore \begin{align*} d(a,x) + d(a,y) &= d(x,x_i) + r_i + d(y,x_k) + r_k \\ &\goe d(x,x_i) + d(y,x_k) + d(x_i,x_k) \goe d(x,y). \end{align*} This shows that $d$ defined by (\ref{eqnstearr}) satisfies the triangle inequality. \end{pf} The next lemma is a special case of two results of Aronszajn \& Panitchpakdi, namely, Theorem 2 on page 413 and Theorem 3 on page 415 in \cite{AroPan}. We present the proof for the sake of completeness. \begin{lm}\label{ltebhr} A metric space is finitely hyperconvex if and only if it is injective with respect to isometric embeddings of finite metric spaces. \end{lm} \begin{pf} Let $X$ be a finitely hyperconvex metric space and fix a non-expansive map $\map fSX$, where $S$ is a finite metric space. It suffices to show that $f$ can be extended to a non-expansive map $\map {f'}TX$ whenever $T$ is a metric extension of $S$ and $T\setminus S = \sn a$. Fix $T = S \cup \sn a$ and let $$\Aaa = \setof {\clbal(f(s), r_s)}{s\in S},$$ where $r_s = d(s,a)$. 
Given $s,s_1\in S$, we have that $d(f(s),f(s_1)) \loe d(s,s_1) \loe r_s + r_{s_1}$. Since $X$ is finitely hyperconvex, there exists $b\in \bigcap\Aaa$. This means that $d(b,f(s)) \loe d(s,a)$ for every $s\in S$. Thus, setting $f'(a) = b$ and $f'\rest S = f$, we obtain a non-expansive extension of $f$. This shows the ``only if" part. For the ``if" part, fix a family $\Aaa = \sett{\clbal(x_i, r_i)}{i < N}$ in $X$, so that $d(x_i, x_j) \loe r_i + r_j$ for $i,j < N$. Let $S = \{ x_0,x_1, \dots, x_{N-1} \}$ and endow $S$ with the metric inherited from $X$. Let $T = S\cup \sn a$ be a metric extension of $S$ such that $d(a,x_i) \loe r_i$ for $i < N$. It exists by Lemma~\ref{lhtrgoo}. Applying the injectivity of $X$, we can find a non-expansive extension $\map gTX$ of the inclusion $S\subs X$. Let $b = g(a)$. Then $d(b,x_i) \loe d(a,x_i) \loe r_i$ for $i < N$. This shows that $\bigcap \Aaa \nnempty$. \end{pf} \begin{tw}\label{tfndgbojf} Given a Polish space $\pair Xd$, the following properties are equivalent: \begin{enumerate} \item[(a)] $\pair Xd$ is a non-expansive retract of the universal Urysohn space $\U$. \item[(b)] $\pair Xd$ is finitely hyperconvex. \item[(b')] $\pair Xd$ is injective with respect to isometric embeddings of finite metric spaces. \end{enumerate} \end{tw} \begin{pf} The equivalence (b)$\iff$(b') is contained in Lemma~\ref{ltebhr}. (a)$\implies$(b') Assume $X\subs \U$ and $\map r\U X$ is a non-expansive retraction. Fix finite metric spaces $S \subs T$ and a non-expansive map $\map fSX$. Using the mixed pushout property (a consequence of Lemma~\ref{lnjgetrgg} and Proposition~\ref{pjetghsd}), we can find an isometric embedding $\map j{\img fS}W$ and a non-expansive map $\map gTW$ such that $W$ is a finite metric space and $g\rest S = j \cmp f$. Using the ultrahomogeneity of $\U$, we can find an isometric embedding $\map hW\U$ such that $h \cmp j$ is the inclusion $\img fS \subs \U$. Finally, let $p = r \cmp h \cmp g$. 
Then $\map pTX$ is a non-expansive map and $p\rest S = f$. (b')$\implies$(a) Fix a Polish space $X$ satisfying (b'). Fix a countable dense set $D\subs X$. Let $K_0 = \Qyu \cup \setof{d(x,y)}{x,y\in D}$ and let $K$ be the subsemigroup of $\pair\Err +$ generated by $K_0$. Consider the category of nonempty finite metric spaces with distances in $K$ (we call them \emph{$K$-metric spaces}). This category is countable, therefore it has a \fra\ sequence. This \fra\ sequence defines a countable metric space $E$ whose completion is, by uniqueness, the Urysohn space. Enlarging $D$ to a countable set, we may assume that it is injective with respect to isometric embeddings of finite $K$-metric spaces. By Theorem~\ref{tmejnn}, $D$ is a non-expansive retract of $E$ and consequently $X$ is a non-expansive retract of $\U$. \end{pf} Let us note that in the statement above only the implication (b)$\implies$(a) appears to be new, the other ones are standard arguments easily adapted from \cite{AroPan}. The main ingredient needed here is the fact that Urysohn's space is finitely hyperconvex, which follows directly from Lemma~\ref{ltebhr} above. It is easy to see that the results above remain valid for the bounded version of the Urysohn space, called the \emph{Urysohn sphere}. Denote by $\metrics_1$ the class of all finite metric spaces of diameter $\loe 1$. There is an obvious functor mapping $\pair Xd \in \metrics$ to $\pair X{d_C} \in \metrics_1$, where $$d_C(x,y) = \min\{d(x,y), C\}.$$ Applying this functor, we can easily conclude that Lemmata~\ref{lnjgetrgg}, \ref{lhtrgoo} and~\ref{ltebhr} hold for arbitrary classes of the form $\metrics_1$. However, we need to specify the more general version of hyperconvexity. Namely, we say that $\pair Xd$ is \emph{finitely $1$-hyperconvex} if for every finite family of closed balls $\Bee = \sett{\clbal(x_i,r_i)}{i<n}$ with $r_i \loe 1$ for $i<n$, it holds that $\bigcap \Bee \nnempty$ whenever $d(x_i,x_j) \loe r_i + r_j$ for every $i,j < n$. 
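Formula (*) of Lemma~\ref{lhtrgoo}, which underlies the hyperconvexity arguments above, also admits a direct numerical check. In the Python sketch below, the finite metric space is a random set of plane points and the common radius is an arbitrary choice guaranteeing $d(x_i,x_j)\loe r_i+r_j$; the code verifies that the extended function is a metric and that the new point $a$ lies in every prescribed ball.

```python
import math
import random

random.seed(1)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(6)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Radii with d(x_i, x_j) <= r_i + r_j for every pair: half of the largest
# pairwise distance always works.
R = max(dist(p, q) for p in pts for q in pts) / 2
radii = {p: R for p in pts}

def d_a(x):
    """Formula (*): the distance from the new point a to x in X."""
    return min(dist(x, xi) + radii[xi] for xi in pts)

# a lands in every ball: d(a, x_i) <= r_i ...
for xi in pts:
    assert d_a(xi) <= radii[xi] + 1e-9

# ... and (*) satisfies the triangle inequality on X ∪ {a}.
for x in pts:
    for y in pts:
        assert d_a(x) <= dist(x, y) + d_a(y) + 1e-9
        assert dist(x, y) <= d_a(x) + d_a(y) + 1e-9
print("formula (*) checks passed")
```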
The bounded version of Theorem~\ref{tfndgbojf} is as follows. \begin{tw} Given a separable complete metric space $X$ of diameter $\loe 1$, the following conditions are equivalent. \begin{enumerate} \item[(a)] $X$ is a non-expansive retract of the Urysohn sphere. \item[(b)] $X$ is finitely $1$-hyperconvex. \item[(c)] $X$ is injective with respect to isometric embeddings of finite metric spaces of diameter $\loe 1$. \end{enumerate} \end{tw} The theorem above speaks about complete metric spaces, however we can also formulate a version by restricting distances to a countable subsemigroup $S$ of $[0,+\infty)$. In that case, the class $\Emm_S$ of finite metric spaces with distances in $S$ is countable and we can consider its \fra\ limit $U_S\in \ovr \Emm_S$, a (possibly non-complete) countable ultrahomogeneous $S$-metric space. By the remarks above, we conclude that $X \in \ovr \Emm_S$ is a non-expansive retract of $U_S$ if and only if it is finitely $S$-hyperconvex (with the obvious meaning of $S$-hyperconvexity). This gives rise to the announced example showing that ``being algebraically closed" for metric spaces may or may not be equivalent to injectivity. \begin{ex} Consider the class $\Emm_\Qyu$ of finite rational metric spaces. Assume that the language consists of relations $D_r$ ($r\in\Qyu$), where $D_r(x,y)$ means ``$d(x,y) < r$". Using finite conjunctions of atomic formulae, there is no way to say that $X$ is finitely $\Qyu$-hyperconvex. Indeed, consider $\Qyu$ as a metric space with the usual distance and take $X = \Qyu \setminus \sn 1$. Clearly, $\Qyu$ is finitely $\Qyu$-hyperconvex, hence algebraically closed (see Proposition~\ref{ppanfgaju}). Thus, $X$ is algebraically closed too, because of the strict inequalities in the relations $D_r$. On the other hand, $X$ is obviously not $\Emm_\Qyu$-injective: The inclusion $\dn02 \subs X$ has no non-expansive extension onto $\{0,1,2\}$. 
Finally, consider the same language for $\Emm_\Qyu$, but with a different interpretation. Namely, let $D_r(x,y)$ mean ``$d(x,y) \loe r$" ($r\in \Qyu$). Now it is clear that ``being algebraically closed" implies ``being finitely $\Qyu$-hyperconvex", because of a version of Lemma~\ref{lhtrgoo} for rational metric spaces. Thus, the two properties are equivalent and now it is true that a countable rational metric space is $\Emm_\Qyu$-injective if and only if it is algebraically closed. \end{ex} \subsection{Banach spaces} Let $\banach$ denote the category of finite-dimensional Banach spaces (over the field of real or complex numbers) with linear transformations of norm $\loe 1$. Let $\banachi$ denote the category of finite-dimensional Banach spaces with linear isometric embeddings. The following lemma is well known. For the proof we refer to~\cite{ACCGMud}. \begin{lm} $\pair\banachi\banach$ has the mixed pushout property. \end{lm} A Banach space $X$ is \emph{1-complemented} in $Y$ if $X \subs Y$ and there exists a projection $\map P Y Y$ (i.e. a linear operator satisfying $P\cmp P = P$) of norm $1$ and $\img P Y = X$. A Banach space $E$ is \emph{almost $1$-injective} for finite-dimensional spaces if, given finite-dimensional spaces $X \subs Y$, given a linear operator $\map T X E$ with $\norm T \loe 1$, given $\eps > 0$, there exists a linear operator $\map {\til T} Y E$ such that $\til T \rest X = T$ and $\norm{\til T} \loe 1 + \eps$. The \emph{Gurarii space}~\cite{Gurarii} is a separable Banach space $\gur$ satisfying the following condition: Given $\eps > 0$ and finite-dimensional spaces $X \subs Y$, every isometric embedding $\map e X \gur$ extends to an $\eps$-isometric embedding $\map {\til e} Y \gur$ (that is, $\til e$ is one-to-one and $\norm {\til e} \loe 1 + \eps$, $\norm {\til e^{-1}} \loe 1 + \eps$). The fact that $\gur$ is unique up to a linear isometry was proved by Lusky~\cite{Lusky}; an elementary argument has been found recently, see~\cite{KubSol}. 
We now would like to apply Theorem~\ref{tmejnn}. The obstacle is that the category $\banachi$ is too big, it does not have a \fra\ sequence. On the other hand, given a countable $S\subs \banachi$ there exists a countable $\fK\subs \banachi$ such that $S\subs \fK$ and $\fK$ has pushouts in $\banach$. The category $\fK$ has a \fra\ sequence. If $S$ is ``rich enough" then this \fra\ sequence induces the Gurarii space $\gur$. This way we obtain the following result, originally due to Wojtaszczyk~\cite{Wojtaszczyk}. \begin{tw} Let $E$ be a separable Banach space. The following properties are equivalent. \begin{enumerate} \item[(a)] $E$ is linearly isometric to a $1$-complemented subspace of the Gurarii space. \item[(b)] $E$ is almost $1$-injective for finite-dimensional Banach spaces. \item[(c)] $E$ is an isometric $L^1$ predual. \end{enumerate} \end{tw} \begin{pf} (a)$\implies$(b) By the mixed pushout property, it is straightforward to see that the Gurarii space is almost $1$-injective. Clearly, this property is preserved by $1$-complemented subspaces. (b)$\implies$(c) This is part of the main result of Lindenstrauss \cite{Lin64}. In fact, it is proved in \cite[Thm. 6.1]{Lin64} that (c) is equivalent to almost $1$-injectivity for Banach spaces of dimension $\loe 4$. (c)$\implies$(a) A result of Lazar \& Lindenstrauss~\cite{LazLin66} says that there exists a chain $E_0\subs E_1 \subs E_2 \subs \dots$ of finite-dimensional subspaces of $E$ whose union is dense in $E$ and each $E_n$ is isometric to some $\ell^\infty_{k(n)}$. In fact, due to Michael \& Pe{\l}czy\'nski~\cite{MicPel}, one may assume that $k(n) = n$ for $\ntr$, although this is not needed here. Let $\fL$ be a countable subcategory of $\banach$ that contains all inclusions $E_n\subs E_{n+1}$ and a fixed chain defining the Gurarii space. 
Enlarging $\fL$ by adding countably many arrows, we may assume that it is closed under mixed pushouts, that is, the pair $\pair \fK \fL$ has the mixed pushout property, where $\fK = \fL \cap \banachi$. Let $\gur$ denote the Gurarii space. By the assumptions on $\fL$, we have that both $\gur$ and $E$ are objects of $\ciagi\fK$. Now observe that $E$ is $\fK$-injective in $\ciagi{(\fK,\fL)}$. Indeed, if $\map fAE$ is an arrow in $\ciagi{(\fK,\fL)}$, where $A, B \in\ob\fK$ are such that $A\subs B$, then $f$ is an isometric embedding of $A$ into some $E_n$ (by the definition of arrows between sequences). It is easy and well known that every space isometric to $\ell^\infty_m$ is $1$-injective for all Banach spaces. Thus, $f$ can be extended to a linear isometry $\map {\ovr f}B{E_n}$. We actually need one more assumption on $\fK$: namely that $\ovr f\in \fK$ whenever $f\in \fK$. This can be achieved by a standard closing-off argument. Finally, Theorem~\ref{tmejnn} implies that $E$ is isometric to a $1$-complemented subspace of $\gur$. \end{pf} A non-separable version of the above result is actually much simpler and comes exactly as a particular case of the uncountable version of Theorem~\ref{tmejnn}: \begin{tw}\label{Tetnrw} Assume the continuum hypothesis. Let $\Ve$ be the unique Banach space of density $\aleph_1$ that is of universal disposition for separable spaces. A Banach space of density $\loe \aleph_1$ is isometric to a $1$-complemented subspace of $\Ve$ if and only if it is $1$-separably injective. \end{tw} Some explanations are needed here. Namely, a Banach space $V$ is \emph{of universal disposition} for separable spaces if for every separable Banach spaces $X \subs Y$, every isometric embedding of $X$ into $V$ extends to an isometric embedding of $Y$ into $V$. Our result from~\cite{Kubfra} says that, under the continuum hypothesis, there exists a unique Banach space $\Ve$ of density $\aleph_1$ and of universal disposition for separable spaces. 
Extensions of this result can be found in~\cite{ACCGMud}, where more general constructions of spaces of universal disposition are presented. It is shown there that $2^{\aleph_0}$ is the minimal density of a Banach space of universal disposition for separable spaces. Finally, assuming the continuum hypothesis, the space $\bV$ is the \fra\ limit of separable Banach spaces with linear isometric embeddings. The notion of being ``$1$-separably injective" has obvious meaning; it has been recently studied in~\cite{ACCGMsi}. In this context, Theorem~\ref{Tetnrw} complements the results of~\cite{ACCGMsi}. \subsection{Linear orders} Let $\kappa$ be an infinite cardinal and let $\los\kappa$ denote the class of all linearly ordered sets of cardinality $< \kappa$. A homomorphism of linearly ordered sets will be called an \emph{increasing map}. As mentioned before, $\flos$ gives a natural example of a pair $\pair {\embes {\flos}}{\homos {\flos}}$ failing the pushout property. However, we have the following \begin{prop} For every infinite cardinal $\kappa$, the pair $\pair {\embes {\los\kappa}}{\homos {\los\kappa}}$ has property \hahah. \end{prop} \begin{pf} Condition \haha1 follows from \haha3, because $\embes{\los\kappa}$ has an initial object (the empty set) and $\homos{\los\kappa}$ has a terminal object, the $1$-element linearly ordered set. It remains to show \haha2 and \haha3. Call an embedding $\map jAB$ \emph{primitive} if $|B\setminus \img jA| \loe1$. It is clear that every increasing embedding is the colimit of a transfinite sequence of primitive embeddings. We shall use an uncountable version of Proposition~\ref{pprimitivv}, which can be easily proved by transfinite induction, using the fact that the category $\embes{\los\kappa}$ is $\kappa$-continuous in $\homos{\los\kappa}$. Denote by $\Pee$ the class of all primitive embeddings in $\embes{\los\kappa}$. Let us prove first that $\pair \Pee{\homos{\los\kappa}}$ has the amalgamated extension property (condition \haha3). 
Fix linearly ordered sets $C,A,B$ such that $A = C \cup \sn a$ and $B = C \cup \sn b$. Fix increasing maps $\map fAL$ and $\map gBL$ such that $f\rest C = g\rest C$. Formally, we have to assume that $a\ne b$. Let $W = A \cup B$. We let $a < b$ if $f(a) < g(b)$; we let $a > b$ otherwise. It is clear, using the compatibility of $f$ and $g$, that this defines a linear order on $W$, extending the orders of $A$ and $B$. The unique map $\map hWL$ satisfying $h\rest A = f$ and $h\rest B = g$ is increasing. This shows \haha3. Now fix linearly ordered sets $C,A,B$ such that $A = C \cup \sn a$ with $a\notin C$, and fix an increasing map $\map fCB$. Let $$L = \bigcup_{c < a}(\leftarrow, f(c)] \oraz R = \bigcup_{c > a}[f(c), \rightarrow),$$ where $(\leftarrow, x] = \setof{p}{p \loe x}$ and $[x,\rightarrow) = \setof{p}{p \goe x}$. Note that $B = L \cup R$ and $L\cap R$ is either empty or a singleton. Let $W = B \cup \sn w$, where either $w \in L\cap R$ or $w\notin B$ in case where $L\cap R = \emptyset$. In the latter case, define $x < w$ and $w < y$ for $x\in L$, $y\in R$. Define $\map gAW$ by setting $g(a) = w$ and $g \rest C = f$. Clearly, $g$ is increasing and the inclusion $B\subs W$ is primitive. This shows \haha2 and completes the proof. \end{pf} Note that every increasing embedding of finite linear orders is left-invertible. Thus, we immediately obtain the following result. \begin{wn} Every countable linear order is order-isomorphic to an increasing retract of the set of rational numbers. \end{wn} Of course, this result can be proved directly, realizing that $X \cdot \Qyu$ with the lexicographic ordering is isomorphic to $\Qyu$, whenever $X$ is a countable linear order. Note that this completely answers Question~10.6 from~\cite{McPhee}. Passing to the uncountable case, let us note that $\los{\omega_1}$ has the \fra\ limit if and only if the Continuum Hypothesis holds. Denote this \fra\ limit by $\Qyu_{\omega_1}$. 
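The amalgamated-extension construction used in the proof of \haha3\ above (ordering $a$ against $b$ according to $f(a)$ and $g(b)$) can be illustrated on a concrete finite instance. In the Python sketch below, the chains and the increasing maps are arbitrary illustrative choices; the code verifies that the combined relation on $W = A \cup B$ is a linear order extending both $A$ and $B$, and that the common extension $h$ is increasing.

```python
# C is a common subchain of A = C ∪ {a} and B = C ∪ {b}; f and g are
# increasing maps into L (here the integers) agreeing on C.  All concrete
# values are arbitrary illustrative choices.
C = [0, 10, 20]
A = [0, "a", 10, 20]      # a inserted between 0 and 10
B = [0, 10, "b", 20]      # b inserted between 10 and 20
f = {0: 0, 10: 10, 20: 20, "a": 7}
g = {0: 0, 10: 10, 20: 20, "b": 15}
h = {**f, **g}            # the unique common extension of f and g

def less(u, v):
    """The order on W = A ∪ B from the proof: extend the orders of A and B,
    and put a < b exactly when f(a) < g(b)."""
    if u in A and v in A:
        return A.index(u) < A.index(v)
    if u in B and v in B:
        return B.index(u) < B.index(v)
    if (u, v) == ("a", "b"):
        return f["a"] < g["b"]
    return not (f["a"] < g["b"])   # the remaining case (u, v) == ("b", "a")

W = [0, "a", 10, "b", 20]
for u in W:
    for v in W:
        if u != v:
            assert less(u, v) != less(v, u)   # total and antisymmetric
        for t in W:
            if less(u, v) and less(v, t):
                assert less(u, t)             # transitive
for u in W:
    for v in W:
        if less(u, v):
            assert h[u] <= h[v]               # h is increasing
print("amalgamated extension verified")
```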
It is easy to check that a linearly ordered set $X$ of cardinality $\omega_1$ is injective for countable linear orders (isomorphic to $\Qyu_{\omega_1}$) if and only if for all countable sets $A,B \subs X$ such that $a < b$ for $a\in A$, $b\in B$, there exists $x\in X$ such that $a \loe x \loe b$ ($a < x < b$) whenever $a\in A$, $b\in B$ (one of the sets $A$, $B$ may be empty). For example, the closed unit interval $[0,1]$ satisfies this condition, therefore it can be embedded as an increasing retract of $\Qyu_{\omega_1}$. \separator We finish with some remarks on reversed \fra\ sequences. The general theory of reversed \fra\ limits of finite models (of a first-order language) was developed in~\cite{IrSol}. The idea comes simply from considering the opposite category. More specifically, fix a class $\Emm$ of finite models and consider the pair $\pair{\quos\Emm}{\homos\Emm}$, where $\quos\Emm$ is the category whose objects are elements of $\Emm$ and arrows are quotient maps. Now property \hahah\ is defined by reversing the arrows in all the diagrams. For example, amalgamation is replaced by ``reversed amalgamation" and pushouts are replaced by pullbacks. Sequences are now contravariant functors and it is natural to consider their limits endowed with the topology, inherited from the product of finite sets. It is not hard to see that precisely the continuous homomorphisms are induced by arrows between sequences. It is worth noting that if $\Emm$ is closed under finite products and substructures then $\quos\Emm$ has pullbacks in $\homos\Emm$. The pullback of two quotient maps $\map f X Z$, $\map g Y Z$ is provided by the structure $$w = \setof{\pair st \in X \times Y}{f(s) = g(t)}.$$ Coming back to finite linear orders, consider the pair $\pair{\quos\flos}{\homos\flos}$. It is straightforward to see that $\quos\flos$ has no pullbacks in $\homos\flos$. On the other hand, it is easy and standard to check that this pair has (the reversed variant of) property \hahah.
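The pullback formula $w = \setof{\pair st \in X \times Y}{f(s) = g(t)}$ can be checked by brute force on small finite sets. The Python sketch below uses plain sets with arbitrary surjections (ignoring any model structure); it verifies that the square commutes and that every compatible pair of maps from a small test object factors uniquely through $w$.

```python
from itertools import product

# Two surjective maps f: X -> Z and g: Y -> Z between small finite sets;
# the sets and maps are arbitrary illustrative choices.
X, Y, Z = range(4), range(6), range(2)
f = {x: x % 2 for x in X}
g = {y: y % 2 for y in Y}

# The pullback from the text: w = {(s, t) in X x Y : f(s) = g(t)}.
w = [(s, t) for s in X for t in Y if f[s] == g[t]]
p1 = {st: st[0] for st in w}
p2 = {st: st[1] for st in w}

# The square commutes: f . p1 = g . p2.
assert all(f[p1[st]] == g[p2[st]] for st in w)

# Universal property, brute-forced over a two-element test object Q:
# every compatible pair (u, v) factors uniquely through w.
Q = range(2)
for u in product(X, repeat=len(Q)):
    for v in product(Y, repeat=len(Q)):
        if all(f[u[q]] == g[v[q]] for q in Q):
            factorings = [m for m in product(w, repeat=len(Q))
                          if all(p1[m[q]] == u[q] and p2[m[q]] == v[q] for q in Q)]
            assert len(factorings) == 1
print("pullback verified")
```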
Note that every increasing quotient of finite linearly ordered sets is right-invertible. Thus, all sequences in $\quos\flos$ are ``finitely projective". It is clear that the inverse \fra\ limit of $\flos$ is the Cantor set endowed with the standard linear order. Thus, using Theorem~\ref{tmejnn} (or, more precisely, Corollary~\ref{wnrhojht}), we obtain the following well known fact which belongs to the folklore. \begin{wn}\label{wpveryastonn} Every compact metric totally disconnected linearly ordered space is a continuous increasing retract of the standard Cantor set. \end{wn} Again, it is not hard to prove this fact directly, by showing that a metric compact totally disconnected linearly ordered space $K$ can be isomorphically embedded into the Cantor set and constructing the retraction ``manually". Note that the reversed \fra\ theory would only say that $K$ is a continuous increasing quotient of the Cantor set, however not all continuous increasing quotient maps of the Cantor set are right-invertible. \subsection*{Acknowledgments} The author is indebted to the anonymous referee for several helpful remarks, in particular for pointing out the reference~\cite{McPhee}.
\section{Introduction} Quantum field theories in 1+1 dimensional commutative spacetime exhibit the novel phenomenon of bosonization. Bosonization maps the bosonic composite operators to the fermionic ones and vice versa. Bosonization is expected in two dimensional quantum field theories as the phase change under exchange of fermions can be absorbed in the bosonic field operators. The equivalence between the fermionic and the bosonic systems in two dimensions was discovered long ago by Jordan and Wigner \cite{wig}. This was later demonstrated in continuum quantum field theory through the equivalence between the massive Thirring model and the sine-Gordon model \cite{Sid,man}. The fermion-boson equivalence could be established as follows: The canonical quantization of a bosonic field operator in 1+1 dimensions: $$\phi(x) = \int{\frac{dp^1}{\sqrt{2\pi}2p^0}[a(p)e^{-ip.x}+a^{\dagger}(p)e^{ip.x}]};~~~~~p^{\mu} = (p^0, p^1)$$ demands that the bosonic operators $a(p)$ and $a^{\dagger}(p)$ must satisfy the canonical commutation relations: $$[a(p), a^{\dagger}(p') ] = 2p^0\delta(p-p');~~[a(p), a(p') ] = 0 = [a^{\dagger}(p), a^{\dagger}(p') ]$$ One can construct operators $b(p)$ and $b^{\dagger}(p)$ in terms of $a(p)$ and $a^{\dagger}(p)$ defined by, \begin{eqnarray} b^{\dagger}(p) &=& a^{\dagger}(p) e^{-i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\\ b(p) &=& e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }a(p) \end{eqnarray} which are fermionic operators that satisfy the canonical anti-commutation relations: $$\{b(p), b^{\dagger}(p')\} = 2p^0\delta(p-p');~~\{b(p), b(p') \} = \{b^{\dagger}(p), b^{\dagger}(p') \} = 0$$ The physical import of bosonization in a quantum field theory is understood in terms of the duality between the strong and weak coupling limits.
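The continuum construction above has a well-known finite-dimensional counterpart: the lattice Jordan--Wigner transformation, in which string-dressed spin operators realize canonical fermions. The Python sketch below is a lattice illustration (not the continuum operators $b(p)$ of the text); it verifies the canonical anticommutation relations for three sites using only plain nested-list matrices.

```python
# Minimal matrix helpers (matrices as nested lists of numbers).
def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[r][s] for j in range(len(A[0])) for s in range(len(B[0]))]
            for i in range(len(A)) for r in range(len(B))]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def anticomm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(len(AB[0]))] for i in range(len(AB))]

I2 = [[1, 0], [0, 1]]
Zg = [[1, 0], [0, -1]]     # the string (Klein) factor
Sm = [[0, 0], [1, 0]]      # spin lowering operator

n = 3                      # number of lattice sites

def c(j):
    """Jordan-Wigner fermion: Z-string on the sites before j, lowering
    operator at site j, identity afterwards."""
    M = [[1]]
    for site in range(n):
        M = kron(M, Zg if site < j else (Sm if site == j else I2))
    return M

dim = 2 ** n
for i in range(n):
    for j in range(n):
        ac = anticomm(c(i), dagger(c(j)))          # {c_i, c_j^dagger} = delta_ij
        for r in range(dim):
            for s in range(dim):
                want = 1 if (i == j and r == s) else 0
                assert abs(ac[r][s] - want) < 1e-12
        ac0 = anticomm(c(i), c(j))                 # {c_i, c_j} = 0
        assert all(abs(ac0[r][s]) < 1e-12 for r in range(dim) for s in range(dim))
print("canonical anticommutation relations hold")
```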
Consider the sine-Gordon model defined by the action: \begin{equation}S_{SG}=\int{d^2x} (\frac{1}{2} \partial_{\mu}\phi\partial^{\mu}\phi + \frac{\alpha_0}{\beta^2}\cos\beta\phi+\gamma_0)\end{equation} and the massive Thirring model by \begin{equation} S_{MT} =\int{d^2x}(\bar \psi(i\not\partial -m_0)\psi -\frac{1}{2}g j^{\mu}j_{\mu})\end{equation} By the bosonization rules, these are dual to each other when the coupling constants are related by $\frac{4\pi}{\beta^2} = 1+ \frac{g}{\pi}$. Thus strong-weak duality implies that a weak bosonic coupling could be found for strong fermionic interactions. In this paper we wish to address the following question: Can two dimensional quantum field theories in Moyal spacetime, in particular the quantum massive Thirring model, be bosonized to obtain the sine-Gordon model in the same spacetime? \vspace{0.1cm} Moyal spacetime in 1+1 dimensions is defined by \begin{equation} [\hat x_{\mu},\hat x_{\nu}] = i\theta_{\mu\nu}\equiv i\epsilon_{\mu\nu}\theta ~~~~\mu,\nu = 0, 1 \label{nc1}. \end{equation} In the case of two dimensional spacetime, every antisymmetric matrix equals a number times the constant antisymmetric second-rank tensor $\epsilon ^{\mu\nu}$ with $\epsilon ^{01}=1$, which is Lorentz invariant. Therefore, we can write $\theta^{\mu\nu} = \theta \epsilon^{\mu\nu}$, where $\theta$ is a real parameter. The commutator (\ref{nc1}) is invariant under translation:~$ x^\mu \to x'^\mu=x^\mu+a^\mu,~ a^\mu \in \mathbb{R}$. While the Poincare group can be implemented in 1+1 dimensions due to the covariance of $\epsilon_{\mu\nu}$, the implementation of the twisted Poincare group has been an interesting development in the study of noncommutative quantum field theories \cite{mas0,bal1,bal4}. Bosonization on noncommutative spacetime in the context of conventional canonical quantization has been studied in the past in 1+1 dimensions \cite{bos0,bos1} as well as in 2+1 dimensions \cite{bos2}.
The integrable sine-Gordon model in Moyal spacetime using conventional quantization has been studied in \cite{tr0,tr1,tr2}. Moreover, ref.~\cite{tr1} also discusses the bosonized NC massive Thirring model. However, noncommutative quantum field theories under conventional quantization, unlike twisted quantized QFTs, suffer from IR-UV mixing and unitarity difficulties. Hence our program in this paper assumes importance. Interestingly, the quantum sinh-Gordon model is shown to be integrable using the twisted quantization program on noncommutative space \cite{sac0}. \vspace{0.1cm} The twisted quantization program is required for implementing twisted Poincare symmetry in quantum field theory on noncommutative spacetime \cite{mas,sac1,sac2}. By the twisted Poincare group one means the Poincare group implemented on multiparticle states through a Drinfeld twist \cite{mas}. Now the following question naturally emerges: does there exist a twisted bosonic representation of the twisted fermionic operators, in the framework of field operators quantized in terms of the twisted creation and annihilation operators? The twisted quantization program on Moyal spacetime is obtained through a deformation of the canonical quantization. To this end we consider the twisted bosonic operators $a^{\theta}(p)$ and $a^{\theta\dagger}(p)$ that satisfy the commutation relations: \begin{eqnarray*} && a^{\theta}(p) a^{\theta\dagger}(p') = e^{-ip\wedge p'}a^{\theta\dagger}(p')a^{\theta}(p) + 2p^0\delta(p-p');\\ &&a^{\theta}(p) a^{\theta}(p') = e^{ip\wedge p'}a^{\theta}(p')a^{\theta}(p);~~a^{\theta\dagger}(p) a^{\theta\dagger}(p') = e^{ip\wedge p'}a^{\theta\dagger}(p')a^{\theta\dagger}(p)\end{eqnarray*} where $p\wedge p' = p_{\mu}\theta^{\mu\nu}p'_{\nu} = p_{\mu}p'_{\nu}\epsilon^{\mu\nu}\theta$.
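These exchange relations can be verified exactly on a truncated Fock space, using the dressed-oscillator realization $a^{\theta}(p)=a(p)\,e^{-\frac{i}{2} p\wedge P}$ discussed later in the paper. In the Python sketch below (two discrete modes, with arbitrary test momenta and $\theta$, and $p\wedge k = \theta(p_0 k_1 - p_1 k_0)$ as a convention), the dressing phases are diagonal in the occupation-number basis, so the relation $a^{\theta}(p)a^{\theta}(k)=e^{ip\wedge k}a^{\theta}(k)a^{\theta}(p)$ holds exactly within the truncation.

```python
import cmath

CUT = 4                      # Fock-space cutoff per mode
theta = 0.7                  # arbitrary test value
p = (1.0, 0.3)               # two fixed test momenta
k = (0.4, -1.1)

def wedge(u, v):
    """u ∧ v = u_mu theta^{mu nu} v_nu with the convention theta^{01} = theta."""
    return theta * (u[0] * v[1] - u[1] * v[0])

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[r][s] for j in range(len(A[0])) for s in range(len(B[0]))]
            for i in range(len(A)) for r in range(len(B))]

# Single-mode annihilation operator on the truncated Fock space.
a1 = [[(m) ** 0.5 if m == n + 1 else 0 for m in range(CUT)] for n in range(CUT)]
I1 = [[1 if m == n else 0 for m in range(CUT)] for n in range(CUT)]

ap, ak = kron(a1, I1), kron(I1, a1)   # untwisted oscillators for modes p and k

def dressing(q):
    """exp(-(i/2) q ∧ P): diagonal, since |n1, n2> carries momentum n1*p + n2*k."""
    dim = CUT * CUT
    D = [[0] * dim for _ in range(dim)]
    for n1 in range(CUT):
        for n2 in range(CUT):
            i = n1 * CUT + n2
            P = (n1 * p[0] + n2 * k[0], n1 * p[1] + n2 * k[1])
            D[i][i] = cmath.exp(-0.5j * wedge(q, P))
    return D

Ap = matmul(ap, dressing(p))          # a^theta(p)
Ak = matmul(ak, dressing(k))          # a^theta(k)

# Twisted exchange relation: a^theta(p) a^theta(k) = exp(i p∧k) a^theta(k) a^theta(p).
lhs = matmul(Ap, Ak)
rhs = matmul(Ak, Ap)
ph = cmath.exp(1j * wedge(p, k))
assert all(abs(lhs[i][j] - ph * rhs[i][j]) < 1e-12
           for i in range(CUT * CUT) for j in range(CUT * CUT))
print("twisted exchange relation verified")
```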
One can construct the twisted operators $b^{\theta}(p)$ and $b^{\theta\dagger}(p)$ defined by, \begin{eqnarray} b^{\theta\dagger}(p) &=& a^{\theta\dagger}(p) e^{-i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\theta\dagger}(k)a^{\theta}(k) } = a^{\theta\dagger}(p) e^{-i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\\ b^{\theta}(p) &=& e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\theta\dagger}(k)a^{\theta}(k) }a^{\theta}(p) = e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }a^{\theta}(p) \end{eqnarray} which are twisted fermionic operators that obey the anticommutation relations (see Appendix A): \begin{eqnarray*} && b^{\theta}(p) b^{\theta\dagger}(p') =- e^{-ip\wedge p'}b^{\theta\dagger}(p')b^{\theta}(p) + 2p^0\delta(p-p');\\ &&b^{\theta}(p) b^{\theta}(p') = -e^{ip\wedge p'}b^{\theta}(p')b^{\theta}(p);~~b^{\theta\dagger}(p) b^{\theta\dagger}(p') = -e^{ip\wedge p'}b^{\theta\dagger}(p')b^{\theta\dagger}(p)\end{eqnarray*} Thus the bosonic representation for the fermionic operator holds true even in the case of twisted quantization.\vspace{0.1cm} In section \ref{sec2}, we briefly review the equivalence of the massive Thirring model and the sine-Gordon model in 1+1 dimensional commutative spacetime. Section \ref{sec3} deals with the twisted scalar and the twisted Dirac fields, which are quantized in terms of the twisted creation and twisted annihilation operators. Section \ref{sec4} constructs the noncommutative S-operator for both twisted quantum field theories. In sections \ref{sec5} and \ref{sec6}, we analyze n-point correlation functions pertaining to the noncommutative sine-Gordon model and the massive Thirring model, respectively. The twisted bosonization rules that establish equivalence between these two models are summarized in section \ref{sec7}.
\section{Equivalence between sine-Gordon model and massive Thirring model in 1+1 dimensional commutative spacetime\label{sec2}} Coleman \cite{Sid} showed that the Massive Thirring Model and the Sine-Gordon Model are equivalent to each other in the following sense: correlation functions of any order in the two models have the same structure, provided specific identifications of the coupling constants and mass parameters of the two models are made. The sine-Gordon model (SG) is a renormalizable field theory of a single scalar boson field $\phi$ in 1+1 dimensions. The Lagrangian for SG model reads: \begin{equation} \mathcal{L_{SG}}= \frac{1}{2} \partial_{\mu}\phi\partial^{\mu}\phi + \frac{\alpha_0}{\beta^2}\cos\beta\phi+\gamma_0\end{equation} where $\alpha_0$, $\beta$ and $\gamma_0$ are real parameters. In order to establish the equivalence between the two models, we shall display their n-point functions. We define: \begin{equation}A_{\pm}(x) = :e^{\pm i\beta\phi(x)}:\end{equation} where :: is the normal-ordering operation defined by the mass $m$ (see \cite{Sid}). Now the free field vacuum expectation value of $\prod_{i=1}^{n}A_{+}(x_i)A_{-}(y_i)$ is evaluated as: \begin{equation} \left\langle 0\left|T\prod_{i=1}^{n}A_{+}(x_i)A_{-}(y_i)\right|0\right\rangle = \frac{\prod_{i>j}[(x_i-x_j)^2(y_i-y_j)^2C^2m^4]^{\frac{\beta^2}{4\pi}}} {\prod_{i,j} [(x_i-y_i)^2Cm^2]^{\frac{\beta^2}{4\pi}}} \label{sg} \end{equation} where $C$ is a numerical constant and $m$ is a mass parameter. The massive Thirring model (MT) is a field theory of a single spin-1/2 fermion field with a current-current interaction in 1+1 dimensions.
The Lagrangian for the MT model reads: \begin{equation} \mathcal{L_{MT}} =\bar \psi(i\not\partial -m_0)\psi -\frac{1}{2}g j^{\mu}j_{\mu} \end{equation} Now we define a renormalized scalar density: \begin{equation} \sigma_{\pm} = \frac{1}{2}Z \bar\psi(x)(1\pm \gamma^5)\psi(x)\end{equation} where $Z$ is the cutoff dependent multiplicative renormalization constant. The vacuum expectation value of $\prod_{i=1}^{n}\sigma_{+}(x_i)\sigma_{-}(y_i)$, evaluated using Klaiber's formula \cite{kal}, yields: \begin{equation} \left\langle 0\left|T\prod_{i=1}^{n}\sigma_{+}(x_i)\sigma_{-}(y_i)\right|0\right\rangle = \frac{\prod_{i>j}[(x_i-x_j)^2(y_i-y_j)^2 M^4]^{\frac{1}{1+g/\pi}}} {\prod_{i,j} [(x_i-y_j)^2 M^2]^{\frac{1}{1+g/\pi}}}\label{mt} \end{equation} where $g$ is a coupling constant and $M$ a mass parameter which absorbs all renormalization constants and numerical factors. Comparison of equations (\ref{sg}) and (\ref{mt}) shows that the two perturbative field theories are identical provided the following identifications are made: \begin{eqnarray} \sigma_{\pm} &=& \frac{1}{2}A_{\pm} \Rightarrow \frac{1}{2}Z \bar\psi(x)(1\pm \gamma^5)\psi(x) = \frac{1}{2} :e^{\pm i\beta\phi(x)}:\nonumber\\ M^2 &=& Cm^2\nonumber\\ \frac{1}{1+g/\pi} &=& \frac{\beta^2}{4\pi} \end{eqnarray} Thus the sine-Gordon and massive Thirring models are equivalent to each other as long as their perturbation expansions converge. \section{Noncommutative quantum field theories in 1+1 dimensional spacetime \label{sec3}} The effect of spacetime noncommutativity in QFT is incorporated using the Moyal star product: a local QFT on a commutative spacetime is generalized to a noncommutative spacetime by replacing the ordinary local product with the Moyal star product.
The Moyal star product is a mapping from operators (say $\hat A(\hat x)$, $\hat B(\hat x)$) to functions ($A( x)$, $B( x)$) defined by: \begin{equation} \hat A(\hat x)\hat B(\hat x) \Leftrightarrow (A*B)(x) = e^{\frac{i}{2}\theta^{\mu\nu}\partial_{\mu}^x\partial_{\nu}^y}A(x)B(y)|_{y=x} \end{equation} Quantum field theory on noncommutative spacetime with $\theta^{0i}\ne0$ is shown to be manifestly unitary \cite{un1}. Unitary quantum mechanics with time-space noncommutativity has been developed in \cite{un2}. \subsection{Twisted scalar field} Consider a free scalar field $$\phi(x) = \int{\frac{dk^1}{\sqrt{2\pi}2k^0}[a_k^{\theta}e^{-ik.x}+a_k^{\theta \dagger}e^{ik.x}]},~~~~~k^0=\sqrt{{k^1}^2 + m^2} $$ quantized via twisted commutation relations \cite{sac1} given as: \begin{eqnarray} a_p^{\theta} a_k^{\theta} &=& e^{ip\wedge k} a_k^{\theta} a_p^{\theta},\nonumber\\ a_p^{\theta} a_k^{\theta\dagger} &=& e^{-ip\wedge k} a_k^{\theta\dagger} a_p^{\theta} + 2p_0\delta(\vec p-\vec k) \end{eqnarray} The twisted creation and annihilation operators $a_p^{\theta}$ and $a_p^{\theta\dagger}$ are related to the untwisted ones $a_p^{\dagger}$ and $a_p$ via ``dressing transformations'' [which were first discussed in \cite{dr1,dr2,dr3}]: \begin{equation} a_p^{\theta} = a_pe^{-\frac{i}{2}p\wedge P},~~~ a_p^{\theta\dagger} = a_p^{\dagger}e^{\frac{i}{2}p\wedge P}. \end{equation} Thus the twisted creation and annihilation operators $a_p^{\theta}$ and $a_p^{\theta\dagger}$ act on the same Fock space as that of the untwisted ones $a_p^{\dagger}$ and $a_p$. The momentum operator of the scalar field is given by: $$P^{\mu} = \int{\frac{dp^1}{2\pi2p^0}p^{\mu}a_p^{\theta\dagger}a_p^{\theta}} = \int{\frac{dp^1}{2\pi2p^0}p^{\mu}a_p^{\dagger}a_p};~~~[P^{\mu}, \phi] = -i\partial_{\mu}\phi$$ The twisted and untwisted n-particle states are related to each other as follows: \begin{equation} |p_1, p_2, ...p_n\rangle_{\theta} = a_{p_{1}}^{\theta\dagger}a_{p_{2}}^{\theta\dagger} .
...a_{p_{n}}^{\theta\dagger}|0\rangle = e^{\frac{i}{2}\sum_{i<j}p_i\wedge p_j}|p_1, p_2, ...p_n\rangle \label{bs} \end{equation} The commutator of $\phi(x)$ and $\phi(y)$ turns out to be: \begin{eqnarray} [\phi(x), \phi(y)] &=& \int{\frac{dp^1dk^1}{2\pi2p_02k_0}}\left[e^{-i(p.x+k.y)}(e^{-ip\wedge k}-1)a_k^{\theta}a_p^{\theta}\right. \nonumber\\ &+&e^{i(p.x+k.y)}(e^{ip\wedge k}-1)a_k^{\theta\dagger}a_p^{\theta\dagger}\nonumber\\ &+&e^{-i(p.x-k.y)}\{(e^{ip\wedge k}-1)a_k^{\theta\dagger}a_p^{\theta}+2p_0\delta(\vec p-\vec k)\}\nonumber\\ &+&\left.e^{i(p.x-k.y)}\{(1-e^{-ip\wedge k})a_p^{\theta\dagger}a_k^{\theta}-2p_0\delta(\vec p-\vec k)\}\right] \end{eqnarray} The twisted scalar fields at any two spacetime points with space-like separation do not commute; however, $\langle 0|[\phi(x), \phi(y)]|0\rangle$ is zero for space-like separation. \subsection{Twisted Dirac field} A free Dirac field can be Fourier decomposed as $$\psi(x) = \int{\frac{dk^1}{\sqrt{2\pi}2k^0}[b_k^{\theta}u(k)e^{-ik.x}+d_k^{\theta \dagger}v(k)e^{ik.x}]},~~~~~k^0=\sqrt{{k^1}^2 + m^2} $$ where $b_k^{\theta}$ and $d_k^{\theta \dagger}$ obey the following twisted anticommutation relations \cite{sac1,sac2} given as: \begin{eqnarray} b_p^{\theta} b_k^{\theta} &=&- e^{ip\wedge k} b_k^{\theta} b_p^{\theta},~~~ b_p^{\theta} b_k^{\theta\dagger} =- e^{-ip\wedge k} b_k^{\theta\dagger} b_p^{\theta} + 2p_0\delta(\vec p-\vec k),\nonumber\\ d_p^{\theta} d_k^{\theta} &=&- e^{ip\wedge k} d_k^{\theta} d_p^{\theta},~~~ d_p^{\theta} d_k^{\theta\dagger} =- e^{-ip\wedge k} d_k^{\theta\dagger} d_p^{\theta} + 2p_0\delta(\vec p-\vec k) \end{eqnarray} The twisted creation and annihilation operators $b_p^{\theta\dagger},d_p^{\theta\dagger}$ and $b_p^{\theta}, d_p^{\theta}$ are related to the untwisted ones $b_p^{\dagger},d_p^{\dagger}$ and $b_p, d_p$ as: \begin{eqnarray} b_p^{\theta} &=& b_pe^{-\frac{i}{2}p\wedge P} ; d_p^{\theta} = d_pe^{-\frac{i}{2}p\wedge P},\nonumber\\ b_p^{\theta\dagger} &=& b_p^{\dagger}e^{\frac{i}{2}p\wedge
P} ; d_p^{\theta\dagger} = d_p^{\dagger}e^{\frac{i}{2}p\wedge P} \end{eqnarray} The twisted creation and annihilation operators act on the same Fock space as that of their untwisted counterparts. The total momentum operator of the Dirac field can be written as: $$P^{\mu} = \int{\frac{dp^1}{2\pi2p^0}p^{\mu}[ b_p^{\theta\dagger}b_p^{\theta}} + d_p^{\theta\dagger}d_p^{\theta} ]= \int{\frac{dp^1}{2\pi2p^0}p^{\mu}[ b_p^{\dagger}b_p} + d_p^{\dagger}d_p ];~~~[P^{\mu}, \psi] = -i\partial_{\mu}\psi$$ The twisted and untwisted n-particle fermionic states are related to each other as: $$|p_1,s_1; p_2,s_2; ...p_n,s_n\rangle_{\theta} = b_{p_{1}}^{s_1\theta\dagger}b_{p_{2}}^{s_2\theta\dagger} .... b_{p_{n}}^{s_n\theta\dagger}|0\rangle = e^{\frac{i}{2}\sum_{i<j}p_i\wedge p_j}|p_1,s_1; p_2,s_2; ...p_n,s_n\rangle $$ $$|p_1,s_1; p_2,s_2; ...p_n,s_n\rangle_{\theta} = d_{p_{1}}^{s_1\theta\dagger}d_{p_{2}}^{s_2\theta\dagger} .... d_{p_{n}}^{s_n\theta\dagger}|0\rangle = e^{\frac{i}{2}\sum_{i<j}p_i\wedge p_j}|p_1,s_1; p_2,s_2; ...p_n,s_n\rangle $$ The anticommutator of the twisted Dirac fields $\psi(x)$ and $\bar\psi(y)$ is: \begin{eqnarray} \{\psi(x), \bar\psi(y)\} &=& \int{\frac{dp^1dk^1}{2\pi2p_02k_0}}\left[e^{-i(p.x-k.y)}\{(1-e^{-ip\wedge k}) b_k^{\theta\dagger}b_p^{\theta}+2p^0\delta(\vec p-\vec k)\}u(p)\bar u(k)\right.\nonumber\\ &+&e^{-i(p.x+k.y)}(1-e^{-ip\wedge k})d_k^{\theta}b_p^{\theta}u(p)\bar v(k)+e^{i(p.x+k.y)}(1-e^{-ip\wedge k})b_k^{\theta\dagger}d_p^{\theta\dagger}v(p)\bar u(k)\nonumber\\ &+&\left.e^{i(p.x-k.y)}\{(1-e^{-ip\wedge k})d_p^{\theta\dagger}d_k^{\theta}+2p_0\delta(\vec p-\vec k)\}v(p)\bar v(k)\right] \end{eqnarray} The twisted Dirac fields at any two spacetime points with space-like separation do not anticommute, however, $\langle 0|\{\psi(x), \bar\psi(y)\}|0\rangle$ is zero for space-like separation. 
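Since the dressing factors are diagonal in the occupation-number basis, both the twisted commutation relations and the n-particle phase of (\ref{bs}) can be checked numerically in a truncated two-mode Fock space. The following Python sketch is only an illustration: the cutoff, $\theta$ and the toy two-momenta are arbitrary choices, not quantities taken from the text. It verifies $a_p^{\theta}a_k^{\theta}=e^{ip\wedge k}a_k^{\theta}a_p^{\theta}$ and the two-particle case of (\ref{bs}):

```python
import numpy as np

d = 6                                        # Fock cutoff per mode (toy value)
A = np.diag(np.sqrt(np.arange(1, d)), 1)     # single-mode annihilation operator
I = np.eye(d)
a1, a2 = np.kron(A, I), np.kron(I, A)
n1, n2 = a1.conj().T @ a1, a2.conj().T @ a2

theta = 0.7                                               # toy value
p1, p2 = np.array([1.3, 0.4]), np.array([0.9, -1.1])      # toy (p^0, p^1) per mode

# Diagonal momentum operators and the dressing e^{-(i/2) p ^ P},
# with the wedge p ^ q = theta (p^0 q^1 - p^1 q^0).
P0, P1 = p1[0] * n1 + p2[0] * n2, p1[1] * n1 + p2[1] * n2
def dress(p):
    return np.diag(np.exp(-0.5j * theta * np.diag(p[0] * P1 - p[1] * P0)))

a1t, a2t = a1 @ dress(p1), a2 @ dress(p2)    # a_p^theta = a_p e^{-(i/2) p ^ P}
w12 = theta * (p1[0] * p2[1] - p1[1] * p2[0])

# twisted commutation relation: a_p^th a_k^th = e^{i p^k} a_k^th a_p^th
assert np.allclose(a1t @ a2t, np.exp(1j * w12) * (a2t @ a1t))

# two-particle phase: |p1, p2>_theta = e^{(i/2) p1 ^ p2} |p1, p2>
vac = np.zeros(d * d); vac[0] = 1.0
twisted = a1t.conj().T @ (a2t.conj().T @ vac)
untwisted = a1.conj().T @ (a2.conj().T @ vac)
assert np.allclose(twisted, np.exp(0.5j * w12) * untwisted)
```

Both relations hold to machine precision, since the dressing only involves diagonal functions of the number operators.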
\section{Noncommutative S-operator \label{sec4}} The S-operator $ S_{\theta}$ for twisted quantum field theory reads: \begin{equation} S_{\theta} = \mathcal{T}e^{-i\int_{-\infty}^{+\infty}H_I^{\theta}dx}\end{equation} In order to study scattering theory on noncommutative spacetime, we shall analyze the S-operator pertaining to the following interaction Hamiltonians of noncommutative QFT's: \begin{eqnarray} H_{I}^{\theta}& =&\frac{\lambda}{4!}\phi_*^n(x)= \frac{\lambda}{4!}\phi(x)*\phi(x)*\phi(x)*......*\phi(x) \\ H_{I}^{'\theta} &= & g\bar{\psi}(x)*\gamma^{\mu}\psi(x)*\bar{\psi}(x)*\gamma_{\mu}\psi(x). \end{eqnarray} Let us consider the leading order terms corresponding to $H_{I}^{\theta}$ and $H_{I}^{'\theta}$: \begin{eqnarray} S_{\theta }^{(1)} &=& \frac{-i\lambda}{4!}\int{d^2x} (\phi*\phi*...*\phi)(x)\\ S_{\theta }^{'(1)} &=& -ig\int{d^2x} (\bar{\psi}*\gamma^{\mu}\psi*\bar{\psi}*\gamma_{\mu}\psi)(x) \end{eqnarray} We shall now focus on a typical term arising from the Fourier decomposition of the field $\phi$ in $\hat S_{\theta }^{(1)}$: \begin{eqnarray} && \frac{-i\lambda}{4!}\int{d^2x} a_{p_1}^{\theta}a_{p_2}^{\theta\dagger}.....a_{p_{n-1}}^{\theta}a_{p_{n}}^{\theta\dagger} e^{-ip_1.x}*e^{ip_2.x}*.....*e^{-ip_{n-1}.x}*e^{ip_n.x} \nonumber\\ && = \frac{-i\lambda}{4!}\int{d^2x} a_{p_1}a_{p_2}^{\dagger}.....a_{p_{n-1}}a_{p_{n}}^{\dagger}e^{\frac{i}{2} (\sum_i(-1)^ip_i)\wedge P}e^{\frac{i}{2} \sum_{i<j}(-1)^{i+j}p_i\wedge p_j} e^{(\sum_{i=1}^{n}(-1)^ip_i).x}\nonumber\\ &&\times~e^{-\frac{i}{2} \sum_{i<j}(-1)^{i+j}p_i\wedge p_j }\nonumber \\ && = \frac{-i\lambda}{4!}\int{d^2x} a_{p_1}a_{p_2}^{\dagger}.....a_{p_{n-1}}a_{p_{n}}^{\dagger}e^{\frac{i}{2} (\sum_i(-1)^ip_i)\wedge P} e^{(\sum_{i=1}^{n}(-1)^ip_i).x}\nonumber\\ &&= \frac{-i\lambda}{4!}\int{d^2x} a_{p_1}a_{p_2}^{\dagger}.....a_{p_{n-1}}a_{p_{n}}^{\dagger}e^{(\sum_{i=1}^{n}(-1)^ip_i).x} \end{eqnarray} The last step could be obtained either by using $$\int{d^2x} e^{(\sum_{i=1}^{n}(-1)^ip_i).x} =
\delta(\sum_{i=1}^{n}(-1)^ip_i )$$ which converts $e^{\frac{i}{2} (\sum_i(-1)^ip_i)\wedge P}$ to unity, or using $$\frac{-i\lambda}{4!}\int{d^2x} a_{p_1}a_{p_2}^{\dagger}.....a_{p_{n-1}}a_{p_{n}}^{\dagger}e^{(\sum_{i=1}^{n}(-1)^ip_i).x} e^{\frac{i}{2} \overleftarrow{\partial}\wedge P}$$ which may be expressed as a sum of the corresponding commutative counterpart and surface terms (which may be discarded). In fact, such a correspondence of the noncommutative S-operator with the commutative S-operator holds to all orders \cite{sac3}.\vspace{.1cm} We now examine a typical term stemming from $\hat S_{\theta }^{'(1)}$: \begin{eqnarray} && -ig\int{d^2x} b_{p_1}^{\theta\dagger}b_{p_2}^{\theta}b_{p_3}^{\theta\dagger}b_{p_{4}}^{\theta} e^{ip_1.x}*e^{-ip_2.x}*e^{ip_3.x}*e^{-ip_4.x}\nonumber\\ && = -ig\int{d^2x} b_{p_1}^{\dagger}b_{p_2}b_{p_3}^{\dagger}b_{p_{4}}e^{\frac{i}{2} (p_1-p_2+p_3-p_4) \wedge P}e^{-\frac{i}{2}[p_1\wedge(p_2-p_3-p_4)+p_2\wedge (p_3-p_4)+ p_3\wedge p_4 ]} \nonumber\\ && \times~e^{i(p_1-p_2+p_3-p_4).x}e^{\frac{i}{2}[p_1\wedge(p_2-p_3-p_4)+p_2\wedge (p_3-p_4)+ p_3\wedge p_4 ]}\nonumber\\ && = -ig\int{d^2x} b_{p_1}^{\dagger}b_{p_2}b_{p_3}^{\dagger}b_{p_{4}}e^{\frac{i}{2} (p_1-p_2+p_3-p_4)\wedge P}e^{i(p_1-p_2+p_3-p_4).x}\nonumber\\ && = -ig\int{d^2x} b_{p_1}^{\dagger}b_{p_2}b_{p_3}^{\dagger}b_{p_{4}}e^{i(p_1-p_2+p_3-p_4).x}e^{\frac{i}{2} \overleftarrow{\partial}\wedge P}\nonumber\\ && = -ig\int{d^2x} b_{p_1}^{\dagger}b_{p_2}b_{p_3}^{\dagger}b_{p_{4}}e^{i(p_1-p_2+p_3-p_4).x} \end{eqnarray} where in the last step we have discarded the surface terms. Thus to the leading order: $\hat S_{\theta }^{'(1)} = \hat S_0^{'(1)}$.
This might be shown to be true for all orders.\vspace{0.1cm} The S-matrix element ($S_{\theta}[p_4,p_3;p_1,p_2]$) of a noncommutative field theory, for instance, for the process $\phi(p_1) + \phi(p_2) \to \phi(p_3) + \phi(p_4)$, could be expressed in terms of its commutative counterpart ($S_0[p_4,p_3;p_1,p_2]$) using (\ref{bs}) as: \begin{equation}S_{\theta}[p_4,p_3;p_1,p_2] = e^{\frac{i}{2}[p_1\wedge p_2-p_3\wedge p_4]}S_0[p_4,p_3;p_1,p_2] \end{equation} The phase factor stems solely from the incoming and outgoing multi-particle twisted states, as $\hat S_{\theta} = \hat S_0$. At this juncture it is worth noting that noncommutative field theories [involving a scalar field and/or a matter field alone] are renormalizable to the same extent as their commutative counterparts, since divergences in the S-matrix elements of the former spring solely from those of the latter. \section{Noncommutative sine-Gordon model \label{sec5}} The Lagrangian density for the noncommutative sine-Gordon model (NCSG) reads: \begin{equation} \mathcal{L_{NCSG}}= \frac{1}{2} \partial_{\mu}\phi*\partial^{\mu}\phi + \frac{\alpha_0}{\beta^2}\cos_*\beta\phi+\gamma_0\end{equation} where $\cos_*\beta\phi \equiv \frac{1}{2}(e_*^{i\beta\phi}+e_*^{-i\beta\phi}) \equiv 1-\frac{\beta^2}{2!}\phi*\phi + \frac{\beta^4}{4!}\phi*\phi*\phi*\phi+.....$.\\ The Euler-Lagrange equation [after rescaling $\beta\phi \to \phi$] gives: \begin{equation} (\frac{\partial^2}{\partial t^2}- \frac{\partial^2}{\partial x^2})\phi +\alpha_0\sin_*\phi= 0\label{el} \end{equation} The solitonic solution of the classical field equation of the sine-Gordon model on commutative spacetime is known to be: \begin{equation} \phi(x,t) = 4\tan^{-1}[e^{\frac{\sqrt{\alpha_0}(x-vt)}{\sqrt{1-v^2}}}] \end{equation} which turns out to be a solution of equation (\ref{el}) as well.
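As a quick numerical sanity check of the kink profile above (with sample values $\alpha_0=2$, $v=1/2$ chosen purely for illustration), one can differentiate it symbolically and confirm that the residual of the commutative equation $\phi_{tt}-\phi_{xx}+\alpha_0\sin\phi=0$ vanishes pointwise:

```python
import random
import sympy as sp

x, t = sp.symbols('x t', real=True)
alpha0, v = sp.Rational(2), sp.Rational(1, 2)    # illustrative sample parameters

u = sp.sqrt(alpha0) * (x - v * t) / sp.sqrt(1 - v ** 2)
phi = 4 * sp.atan(sp.exp(u))                     # the kink profile

# residual of phi_tt - phi_xx + alpha0*sin(phi); derivatives are exact (symbolic)
res = sp.lambdify((x, t),
                  sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + alpha0 * sp.sin(phi))

for _ in range(5):
    # evaluate the exact residual expression at random points; it is ~0 up to floats
    assert abs(res(random.uniform(-2, 2), random.uniform(-2, 2))) < 1e-9
```

The residual is zero to floating-point precision at every sample point, consistent with the boosted static kink solving the equation.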
To see this, let us consider the following term appearing in the Euler-Lagrange equation (\ref{el}): \begin{eqnarray} \sin_*\phi &= &\phi -\frac{ \phi_*^3}{3!} + \frac{ \phi_*^5}{5!}-.....\nonumber\\ &=& \phi -\frac{1}{3!}\,4\tan^{-1}[e^{\frac{\sqrt{\alpha_0}( x-vt)}{\sqrt{1-v^2}}}]*4\tan^{-1}[e^{\frac{\sqrt{\alpha_0}( x-vt)} {\sqrt{1-v^2}}}]*4\tan^{-1}[e^{\frac{\sqrt{\alpha_0}( x-vt)}{\sqrt{1-v^2}}}]+...\label{si} \end{eqnarray} The general form corresponding to the second term on the RHS of equation (\ref{si}) can be simplified to: \begin{equation} e^{n_1\frac{\sqrt{\alpha_0}(x-vt)}{\sqrt{1-v^2}}}*e^{n_2\frac{\sqrt{\alpha_0}(x-vt)}{\sqrt{1-v^2}}}*e^{n_3\frac{\sqrt{\alpha_0}(x-vt)}{\sqrt{1-v^2}}} = e^{(n_1+n_2+n_3)\frac{\sqrt{\alpha_0}(x-vt)}{\sqrt{1-v^2}}} \end{equation} where $n_1, n_2$ and $n_3$ are positive integers. In fact, we have \begin{equation} \sin_*\phi = \sin\phi \end{equation} Thus the solution $\phi(x,t)$ of the classical field equation on commutative spacetime is also a solution of the corresponding noncommutative theory. \subsection{n-point correlation function in NCSG model} We are now ready to establish the correspondence between the NCSG and NCMT models; to this end, we evaluate the following n-point function: $$\left\langle 0\left|T\prod_{i=1}^{n}:e_*^{i\beta\phi(x_i)}::e_*^{i\beta\phi(y_i)}:\right|0\right\rangle$$ where $ e_*^{i\beta\phi(x)} \equiv 1 + i\beta\phi + \frac{(i\beta)^2}{2!}\phi*\phi + \frac{(i\beta)^3}{3!}\phi*\phi*\phi .....
$\\ Now, \begin{eqnarray} &&\left\langle 0\left|T\prod_{i=1}^{n}:e_*^{i\beta\phi(x_i)}::e_*^{i\beta\phi(y_i)}:\right|0\right\rangle \nonumber \\ &&= \left\langle 0\left|T:e_*^{i\beta\phi(x_1)}::e_*^{i\beta\phi(y_1)}:...:e_*^{i\beta\phi(x_n)}::e_*^{i\beta\phi(y_n)}:\right|0\right\rangle\nonumber\\ &&= \left\langle 0\left|T:\sum_{n_1=0}^{\infty}\frac{({i\beta\phi_*(x_1)})^{n_1}}{n_1!}::\sum_{m_1=0}^{\infty} \frac{({i\beta\phi_*(y_1)})^{m_1}}{m_1!}:.....\right.\right.\nonumber\\ &&\times \left.\left.:\sum_{n_n=0}^{\infty}\frac{({i\beta\phi_*(x_n)})^{n_n}}{n_n!}::\sum_{m_n=0}^{\infty}\frac{({i\beta\phi_*(y_n)})^{m_n}}{m_n!}:\right|0\right\rangle \end{eqnarray} For our purpose, we shall consider the specific non-vanishing term (containing equal numbers of creation and annihilation operators) arising from: \begin{equation} \left\langle 0\left|T:\phi_*^n(x_1)::\phi_*^n(y_1):... :\phi_*^n(x_n)::\phi_*^n(y_n):\right|0\right\rangle \label{npoint}\end{equation} We shall use the following notation: \begin{equation} \phi(x_1) = \int{\frac{dp_{11}}{2\pi\sqrt{2\omega_{p_{11}}}}}[a_{p_{11}}e^{-ip_{11}.x_1} +a_{p_{11}}^{\dagger}e^{ip_{11}.x_1}] \end{equation} so that \begin{equation} \phi(x_1)\phi(x_1) = \int{\frac{dp_{11}dp_{12}}{(2\pi)^2\sqrt{2\omega_{p_{11}}2\omega_{p_{12}}}}}[a_{p_{11}}e^{-ip_{11}.x_1} +a_{p_{11}}^{\dagger}e^{ip_{11}.x_1}] [a_{p_{12}}e^{-ip_{12}.x_1} +a_{p_{12}}^{\dagger}e^{ip_{12}.x_1}] \end{equation} Now the specific non-vanishing term of the n-point function\footnote{Here, for a nonzero n-point function, n has to be even so that the numbers of creation and annihilation operators are equal. However, we can always choose an even number of field operators for a nonzero n-point function (with n either even or odd).
It has been checked that the conclusions remain unaltered for either even or odd n.} \begin{eqnarray*} && \left \langle 0\left| {a_{p_{11}}^{\theta}...a_{p_{n1}}^{\theta} e^{-ip_{11}.x_1}*...*e^{-ip_{n1}.x_1}} {a_{k_{11}}^{\theta}...a_{k_{n1}}^{\theta} e^{-ik_{11}.y_1}*...*e^{-ik_{n1}.y_1}}\right.\right.\\ &&\times.....{a_{p_{1(n/2)}}^{\theta}...a_{p_{n(n/2)}}^{\theta} e^{-ip_{1(n/2)}.x_{n/2}}*...*e^{-ip_{n(n/2)}.x_{n/2}}}\\ &&\times{a_{k_{1(n/2)}}^{\theta}... a_{k_{n(n/2)}}^{\theta} e^{-ik_{1(n/2)}.y_{n/2}}*...*e^{-ik_{n(n/2)}.y_{n/2}}}\\ &&\times{a_{p_{1(n/2+1)}}^{\theta\dagger}...a_{p_{n(n/2+1)}}^{\theta\dagger} e^{ip_{1(n/2+1)}.x_{n/2+1}}*...*e^{ip_{n (n/2+1)}.x_{n/2+1}}}\\ &&\times{a_{k_{1(n/2+1)}}^{\theta\dagger}... a_{k_{n(n/2+1)}}^{\theta\dagger} e^{ik_{1(n/2+1)}.y_{n/2+1}}*...*e^{ik_{n(n/2+1)}.y_{n/2+1}}}.....\\ &&\left.\left.\times{a_{p_{1 n}}^{\theta\dagger}...a_{p_{n n}}^{\theta\dagger} e^{ip_{1 n}.x_n}*...*e^{ip_{n n}.x_n}} {a_{k_{1 n}}^{\theta\dagger}... a_{k_{n n}}^{\theta\dagger} e^{ik_{1 n}.y_n}*...*e^{ik_{n n}.y_n}}\right|0\right\rangle \end{eqnarray*} \begin{eqnarray*} &&= \left \langle 0\left| {a_{p_{11}}...a_{p_{n1}} e^{-\frac{i}{2} (\sum_ip_{i1})\wedge P} e^{\frac{i}{2}\sum_{i<j}p_{i1}\wedge p_{j1}} e^{-i\sum_ip_{i1}.x_1}} e^{-\frac{i}{2}\sum_{i<j}p_{i1}\wedge p_{j1}}\right.\right. 
\\ &&\times {a_{k_{11}}...a_{k_{n1}}}e^{-\frac{i}{2} (\sum_ik_{i1})\wedge P} e^{\frac{i}{2}\sum_{i<j}k_{i1}\wedge k_{j1}} e^{-i\sum_ik_{i1}.y_1} e^{-\frac{i}{2}\sum_{i<j}k_{i1}\wedge k_{j1}}.....\\ &&\times{ {a_{p_{1n}}^{\dagger}a_{p_{2n}}^{\dagger}...a_{p_{nn}}^{\dagger} }} e^{\frac{i}{2} (\sum_ip_{in})\wedge P} e^{\frac{i}{2}\sum_{i<j}p_{in}\wedge p_{jn}} e^{i\sum_ip_{in}.x_n} e^{-\frac{i}{2}\sum_{i<j}p_{in}\wedge p_{jn}}\\ &&\left.\left.\times ~{ {a_{k_{1n}}^{\dagger}a_{k_{2n}}^{\dagger}...a_{k_{nn}}^{\dagger} }} e^{\frac{i}{2} (\sum_ik_{in})\wedge P} e^{\frac{i}{2}\sum_{i<j}k_{in}\wedge k_{jn}} e^{i\sum_ik_{in}.y_n} e^{-\frac{i}{2}\sum_{i<j}k_{in}\wedge k_{jn}}\right|0\right\rangle\\ &&= \left \langle 0\left| {a_{p_{11}}...a_{p_{n1}} e^{-i\sum_ip_{i1}.x_1}}{a_{k_{11}}...a_{k_{n1}}} e^{-i\sum_ik_{i1}.y_1} .....\right.\right. \\ &&\times ~{ { a_{p_{1n}}^{\dagger}...a_{p_{nn}}^{\dagger} }} e^{i\sum_ip_{in}.x_n} a_{k_{1n}}^{\dagger}...a_{k_{nn}}^{\dagger} e^{i\sum_ik_{in}.y_n} \\ &&\left.\left.\times e^{-\frac{i}{2}(\sum_{i}[p_{i1}+...+p_{i(n/2)}+ k_{i1}+...+k_{i(n/2)}])\wedge (\sum_{i}[p_{i(n/2+1)}+...+p_{in}+ k_{i(n/2+1)}+...+k_{in}])}\right|0\right\rangle \end{eqnarray*} The following expression \begin{eqnarray*} &&= \left \langle 0\left| {a_{p_{11}}a_{p_{21}}...a_{p_{n1}} }{a_{k_{11}}a_{k_{21}}... a_{k_{(n-1) 1}}a_{k_{n1}}} \right.\right. \\ &&\left.\left.....{a_{p_{1(n-1)}}^{\dagger}a_{p_{2(n-1)}}^{\dagger}...a_{p_{n(n-1)}}^{\dagger} }... { {a_{k_{1n}}^{\dagger}a_{k_{2n}}^{\dagger}...a_{k_{nn}}^{\dagger} }} \right|0\right\rangle \end{eqnarray*} turns out to be a sum of terms, each of which contains a product of several Dirac delta functions.
A typical term looks like \begin{eqnarray} \delta(k_{n(n/2)}-p_{1(n/2+1)})\delta(k_{(n-1)(n/2)}-p_{2(n/2+1)})......\delta(p_{21}-k_{(n-1)n})\delta(p_{11}-k_{nn}) \end{eqnarray} It is evident that the domain of support of each such product of Dirac delta functions enforces $$ p_{i1}=k_{(n+1-i) n},~ p_{i2}= k_{(n+1-i)(n-1)},...........,p_{in}=k_{(n+1-i)1};~~i=1,2,....,n$$ Therefore $e^{-\frac{i}{2}(\sum_{i}[p_{i1}+...+p_{i(n/2)}+ k_{i1}+...+k_{i(n/2)}])\wedge (\sum_{i}[p_{i(n/2+1)}+...+p_{in}+ k_{i(n/2+1)}+...+k_{in}])}$ actually reduces to unity. Thus \begin{eqnarray} &&= \left \langle 0\left| {a_{p_{11}}...a_{p_{n1}} e^{-i\sum_ip_{i1}.x_1}}{a_{k_{11}}...a_{k_{n1}}} e^{-i\sum_ik_{i1}.y_1}..... \right.\right. \nonumber\\ &&\times ~{ { a_{p_{1n}}^{\dagger}...a_{p_{nn}}^{\dagger} }} e^{i\sum_ip_{in}.x_n} a_{k_{1n}}^{\dagger}...a_{k_{nn}}^{\dagger} e^{i\sum_ik_{in}.y_n} \nonumber\\ &&\left.\left.\times e^{-\frac{i}{2}(\sum_{i}[p_{i1}+...+p_{i(n/2)}+ k_{i1}+...+k_{i(n/2)}])\wedge (\sum_{i}[p_{i(n/2+1)}+...+p_{in}+ k_{i(n/2+1)}+...+k_{in}])}\right|0\right\rangle \nonumber\\ &&= \left \langle 0\left| {a_{p_{11}}...a_{p_{n1}} e^{-i\sum_ip_{i1}.x_1}}{a_{k_{11}}...a_{k_{n1}}} e^{-i\sum_ik_{i1}.y_1} .....\right.\right. \nonumber\\ &&\left.\left.\times ~{ { a_{p_{1n}}^{\dagger}...a_{p_{nn}}^{\dagger} }} e^{i\sum_ip_{in}.x_n} a_{k_{1n}}^{\dagger}...a_{k_{nn}}^{\dagger} e^{i\sum_ik_{in}.y_n} \right|0\right\rangle \end{eqnarray} Such a correspondence between the noncommutative and commutative matrix elements may be extended to all non-vanishing contributions of the n-point functions.
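The cancellation just used is purely kinematical: the delta functions identify the annihilation momenta with a permutation of the creation momenta, so the two group sums coincide and the antisymmetric wedge of a vector with itself vanishes. A toy numerical illustration (the momenta and $\theta$ below are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3                                   # toy noncommutativity parameter

def wedge(p, q):
    # p ^ q = theta (p^0 q^1 - p^1 q^0), antisymmetric in p and q
    return theta * (p[0] * q[1] - p[1] * q[0])

# Creation-side momenta; the Dirac deltas force the annihilation-side
# momenta to be a permutation of them.
outgoing = rng.normal(size=(8, 2))
incoming = outgoing[rng.permutation(8)]

X, Y = incoming.sum(axis=0), outgoing.sum(axis=0)   # equal vectors on the support
phase = np.exp(-0.5j * wedge(X, Y))
assert np.isclose(phase, 1.0)                 # wedge(X, X) = 0 by antisymmetry
```

The same argument applies term by term, which is why the noncommutative and commutative matrix elements agree.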
Therefore \begin{eqnarray} \left\langle 0\left|T\prod_{i=1}^{n}:e_*^{i\beta\phi(x_i)}::e_*^{i\beta\phi(y_i)}:\right|0\right\rangle &= &\left\langle 0\left|T\prod_{i=1}^{n}:e^{i\beta\phi(x_i)}::e^{i\beta\phi(y_i)}:\right|0\right\rangle\nonumber\\ &=& \frac{\prod_{i>j}[(x_i-x_j)^2(y_i-y_j)^2C^2m^4]^{\frac{\beta^2}{4\pi}}} {\prod_{i,j} [(x_i-y_j)^2Cm^2]^{\frac{\beta^2}{4\pi}}}\label{ncsg} \end{eqnarray} \section{Noncommutative massive Thirring model \label{sec6}} The noncommutative massive Thirring model (NCMT) is described by the following Lagrangian density: \begin{equation} \mathcal{L_{NCMT}} =\bar \psi(x)*(i\not\partial -m_0)\psi(x) -\frac{1}{2} gj_*^{\mu}*j_{*\mu} \end{equation} We shall now calculate the following quantity: $$ \left \langle 0\left| T\prod_{i=1}^{n}\frac{1}{2}Z \bar\psi(x_i)*(1+ \gamma^5)\psi(x_i) \frac{1}{2}Z \bar\psi(y_i)*(1- \gamma^5)\psi(y_i)\right|0\right\rangle$$ A typical non-vanishing term of the above expression is: \begin{eqnarray*} && \left \langle 0\left| d_{p_1}^{\theta}b_{p_2}^{\theta}d_{k_1}^{\theta}b_{k_2}^{\theta}...d_{p_{n-1}}^{\theta}b_{p_{n}}^{\theta}d_{k_{n-1}}^{\theta}b_{k_n}^{\theta}\right.\right.\\ &&\times \prod_{j=1}^{n-1} [e^{-ip_{j}.x_i}*e^{-ip_{j+1}.x_i}e^{-ik_{j}.y_i}*e^{-ik_{j+1}.y_i}]\\ && \times d_{p_{n+1}}^{\theta\dagger}b_{p_{n+2}}^{\theta\dagger}b_{k_{n+1}}^{\theta\dagger} d_{k_{n+2}}^{\theta\dagger}...b_{p_{2(n-1)}}^{\theta\dagger}d_{p_{2n}}^{\theta\dagger}b_{k_{2(n-1)}}^{\theta\dagger}d_{k_{2n}}^{\theta\dagger}\\ && \left.\left.\times \prod_{j=n+1}^{2(n-1)} [e^{ip_{j}.x_i}*e^{ip_{j+1}.x_i}e^{ik_{j}.y_i}*e^{ik_{j+1}.y_i}] \right|0\right\rangle\\ &&= \left \langle 0\left| {d_{p_1}b_{p_2}...d_{k_{n-1}}b_{k_{n}}e^{\frac{i}{2} \sum_{j=1}^{n-1}[p_j\wedge p_{j+1}+k_j\wedge k_{j+1}] }}\right.\right.\\ &&\times e^{\frac{i}{2} \sum_{j=1}^{n}p_j\wedge\sum_{j=1}^{n} k_j }e^{-\frac{i}{2} \sum_{j=1}^{n}(p_j+k_j)\wedge P }b_{p_{n+1}}^{\dagger}
d_{p_{n+2}}^{\dagger}...b_{k_{2(n-1)}}^{\dagger}d_{k_{2n}}^{\dagger} \\ && \times e^{\frac{i}{2} \sum_{j=n+1}^{2n-1}[p_j\wedge p_{j+1}+k_j\wedge k_{j+1}] } e^{\frac{i}{2} \sum_{j=n+1}^{2n}p_j\wedge\sum_{j=n+1}^{2n} k_j }e^{\frac{i}{2} \sum_{j=n+1}^{2n-1}(p_j+k_j)\wedge P }\\ && \times{ e^{-i\sum_{j=1}^{n-1}[(p_{j}+p_{j+1}).x_j+(k_j+k_{j+1}).y_j]}e^{-\frac{i}{2}\sum_{j=1}^{n-1}[p_j\wedge p_{j+1}+k_j\wedge k_{j+1}] }}\\ &&\left.\left.\times{ e^{i\sum_{j=n+1}^{2n-1}[(p_{j}+p_{j+1}).x_j+(k_j+k_{j+1}).y_j]}e^{-\frac{i}{2}\sum_{j=n+1}^{2n-1}[p_j\wedge p_{j+1}+k_j\wedge k_{j+1}] }} \right|0\right\rangle\\ &&= \left \langle 0\left| {d_{p_1}b_{p_2}...d_{k_{n-1}}b_{k_{n}} e^{\frac{i}{2} \sum_{j=1}^{n}p_j\wedge\sum_{j=1}^{n} k_j } b_{p_{n+1}}^{\dagger} d_{p_{n+2}}^{\dagger}...b_{k_{2(n-1)}}^{\dagger}d_{k_{2n}}^{\dagger} }\right.\right.\\ && \times e^{\frac{i}{2} \sum_{j=n+1}^{2(n-1)}p_j\wedge\sum_{j=n+1}^{2n} k_j }e^{-\frac{i}{2}[\sum_{j=1}^{n}(p_j+ k_j)\wedge \sum_{j=n+1}^{2n}(p_j+k_j)] }\\ && \left.\left. \times{ e^{-i\sum_{j=1}^{n-1}[(p_{j}+p_{j+1}).x_i+(k_j+k_{j+1}).y_j]}} { e^{i\sum_{j=n+1}^{2(n-1)}[(p_{j}+p_{j+1}).x_i+(k_j+k_{j+1}).y_j]}} \right|0\right\rangle \end{eqnarray*} The object $\left \langle 0\left| {d_{p_1}b_{p_2}...d_{k_{n-1}}b_{k_{n}} b_{p_{n+1}}^{\dagger} d_{p_{n+2}}^{\dagger}...b_{k_{2(n-1)}}^{\dagger}d_{k_{2n}}^{\dagger}}\right|0\right\rangle$ is proportional to: $$\delta(k_{n}-p_{n+1})\delta(k_{n-1}-p_{n+2})....\delta(k_{2n-1}-p_{2})\delta(k_{2n}-p_{1})$$ which leads to the constraints: $k_i = p_{2n+1-i}; i=1,2,...n$. It is straightforward to notice that all the following exponential terms $$e^{\frac{i}{2} \sum_{j=1}^{n}p_j\wedge\sum_{j=1}^{n} k_j } e^{\frac{i}{2} \sum_{j=n+1}^{2n-1}p_j \wedge\sum_{j=n+1}^{2n} k_j }e^{-\frac{i}{2}[\sum_{j=1}^{n}(p_j+ k_j)\wedge \sum_{j=n+1}^{2n}(p_j+k_j)] }$$ could be set to unity over the domain of support of the above product of Dirac delta functions. 
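Taking the exponents in the last display at face value, the collapse to unity on the support $k_i=p_{2n+1-i}$ can be checked numerically; the sketch below does so for $n=2$ with random momenta (the wedge convention and the sample value of $\theta$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 0.5, 2                                # toy values

def wedge(p, q):
    # p ^ q = theta (p^0 q^1 - p^1 q^0)
    return theta * (p[0] * q[1] - p[1] * q[0])

p = rng.normal(size=(2 * n, 2))                  # p_1, ..., p_{2n}
k = p[::-1].copy()                               # support: k_i = p_{2n+1-i}

# the three exponents: sum_{j<=n} p ^ sum_{j<=n} k, sum_{j>n} p ^ sum_{j>n} k,
# and -(sum_{j<=n}(p_j+k_j)) ^ (sum_{j>n}(p_j+k_j))
total = (0.5 * wedge(p[:n].sum(0), k[:n].sum(0))
         + 0.5 * wedge(p[n:].sum(0), k[n:].sum(0))
         - 0.5 * wedge((p[:n] + k[:n]).sum(0), (p[n:] + k[n:]).sum(0)))
assert np.isclose(np.exp(1j * total), 1.0)       # the combined phase is unity
```

On the constrained momenta the first two exponents are equal and opposite, while the third is the wedge of a vector with itself, so the total phase is exactly one.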
Thus \begin{eqnarray} &&= \left \langle 0\left| {d_{p_1}b_{p_2}...d_{k_{n-1}}b_{k_{n}} e^{\frac{i}{2} \sum_{j=1}^{n}p_j\wedge\sum_{j=1}^{n} k_j } b_{p_{n+1}}^{\dagger} d_{p_{n+2}}^{\dagger}...b_{k_{2(n-1)}}^{\dagger}d_{k_{2n}}^{\dagger} }\right.\right.\nonumber\\ && \times e^{\frac{i}{2} \sum_{j=n+1}^{2(n-1)}p_j\wedge\sum_{j=n+1}^{2n} k_j }e^{-\frac{i}{2}[\sum_{j=1}^{n}(p_j+ k_j)\wedge\sum_{j=n+1}^{2n}(p_j+k_j)] }\nonumber\\ && \left.\left. \times{ e^{-i\sum_{j=1}^{n-1}[(p_{j}+p_{j+1}).x_i+(k_j+k_{j+1}).y_j]}} { e^{i\sum_{j=n+1}^{2(n-1)}[(p_{j}+p_{j+1}).x_i+(k_j+k_{j+1}).y_j]}} \right|0\right\rangle \nonumber\\ &&= \left \langle 0\left| {d_{p_1}b_{p_2}...d_{k_{n-1}}b_{k_{n}} b_{p_{n+1}}^{\dagger} d_{p_{n+2}}^{\dagger}...b_{k_{2(n-1)}}^{\dagger}d_{k_{2n}}^{\dagger} }\right.\right.\nonumber\\ && \left.\left. \times{ e^{-i\sum_{j=1}^{n-1}[(p_{j}+p_{j+1}).x_i+(k_j+k_{j+1}).y_j]}} { e^{i\sum_{j=n+1}^{2(n-1)}[(p_{j}+p_{j+1}).x_i+(k_j+k_{j+1}).y_j]}} \right|0\right\rangle \end{eqnarray} This noncommutative-commutative equivalence extends to all non-vanishing terms of the n-point function.
Thus \begin{eqnarray} &&\left \langle 0\left| T\prod_{i=1}^{n}\frac{1}{2} Z\bar\psi(x_i)*(1+ \gamma^5)\psi(x_i) \frac{1}{2}Z \bar\psi(y_i)*(1- \gamma^5)\psi(y_i)\right|0\right\rangle \nonumber\\ &&=\left \langle 0\left| T\prod_{i=1}^{n}\frac{1}{2}Z \bar\psi(x_i)(1+ \gamma^5)\psi(x_i) \frac{1}{2} Z\bar\psi(y_i)(1- \gamma^5)\psi(y_i)\right|0\right\rangle\nonumber\\ &&= \frac{\prod_{i>j}[(x_i-x_j)^2(y_i-y_j)^2 M^4]^{\frac{1}{1+g/\pi}}} {\prod_{i,j} [(x_i-y_j)^2 M^2]^{\frac{1}{1+g/\pi}}} \label{nctm} \end{eqnarray} \section{Equivalence between NCSG and NCMT models \label{sec7}} On comparing equations (\ref{ncsg}) and (\ref{nctm}), the two perturbative noncommutative field theories turn out to be identical provided we make the following identifications: \begin{eqnarray} \frac{1}{2}Z \bar\psi(x)*(1\pm \gamma^5)\psi(x)&=& \frac{1}{2}:e_*^{\pm i\beta\phi(x)}:\nonumber\\ M^2 &=& Cm^2\nonumber\\ \frac{1}{1+g/\pi} &=& \frac{\beta^2}{4\pi} \end{eqnarray} We shall now evaluate the vacuum expectation value of the commutator $[\partial_{\nu}\phi(x), e_*^{\pm i\beta\phi(y)}]$.
\begin{eqnarray}\langle0|[\partial_{\nu}\phi(x),: e_*^{\pm i\beta\phi(y)}:]|0\rangle& =& \langle 0|[\partial_{\nu}\phi(x), :e^{\pm i\beta\phi(y)}:]|0\rangle \nonumber\\ & =& \pm\beta g_{\nu 0}\delta(\vec x-\vec y)\langle0|:e^{\pm i\beta\phi}:|0\rangle\end{eqnarray} The matrix element of $[\bar\psi(x)*\gamma_{\mu}\psi(x),\frac{1}{2}Z\bar\psi(y)*(1\pm\gamma_{5})\psi(y)]$ between the vacuum states reads: \begin{eqnarray} &&\langle0|[\bar\psi(x)*\gamma_{\mu}\psi(x),\frac{1}{2}Z\bar\psi(y)*(1\pm\gamma_{5})\psi(y)]|0\rangle \nonumber\\ && = \langle 0|[\bar\psi(x)\gamma_{\mu}\psi(x),\frac{1}{2}Z\bar\psi(y)(1\pm\gamma_{5})\psi(y)]|0\rangle \nonumber\\ && = \pm2(1+g/\pi)^{-1}\epsilon^{\mu\nu}\beta g_{\nu 0}\delta(\vec x-\vec y)\langle0|\frac{1}{2}Z\bar\psi(1\pm\gamma_{5})\psi|0\rangle\end{eqnarray} From $\frac{1}{2}Z \bar\psi(x)*(1\pm \gamma^5)\psi(x)= \frac{1}{2}:e_*^{\pm i\beta\phi(x)}:$, we obtain \begin{equation}\bar\psi(x)*\gamma^{\mu}\psi(x) = - \frac{g}{2\pi}\epsilon^{\mu\nu}\partial_{\nu}\phi(x)\end{equation} which holds true for the matrix elements of the above commutators (see Appendix B). \section{Conclusions \label{sec8}} We have extended the rules for bosonization in two-dimensional spacetime to noncommutative spacetime with twisted quantization conditions. While past research concentrated on issues of classical integrability, we have focused on bosonization. We have also checked, using our bosonization rules, the duality between the massive Thirring model and the sine-Gordon model.\vspace{0.1cm} We have shown that the static finite-energy soliton solution of the classical field equation of the commutative 1+1 dimensional sine-Gordon model is also a solution of the classical field equation of its noncommutative counterpart.
Therefore, the composite-field versus fundamental-field correspondence between the two models persists even in noncommutative spacetime, since the quantum soliton of the sine-Gordon model (the static solutions quantized by semi-classical methods) can be recognized as the fundamental fermion of the noncommutative massive Thirring model.\vspace{0.1cm} The sine-Gordon field in NC spacetime requires twisted bosonic quantization rules, while the dual massive Thirring model requires twisted fermionic quantization; our bosonization rules are consistent with these requirements. \vspace{0.1cm} The bosonization considered in this paper is Abelian. Non-Abelian bosonization has further interesting structures \cite{wit}; this is anticipated to be even more interesting and subtle in NC spacetime, and will be explored in future work. \section*{Appendix A} \begin{eqnarray} b^{\theta}(p) b^{\theta\dagger}(p') + e^{-ip\wedge p'}b^{\theta\dagger}(p')b^{\theta}(p) &=& e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }a^{\theta}(p)a^{\theta\dagger}(p')e^{-i\pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\nonumber\\ &+& e^{-ip\wedge p'}a^{\theta\dagger}(p') e^{-i\pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }a^{\theta}(p);\nonumber\\ &=& e^{-ip\wedge p'} e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }a^{\theta\dagger}(p')a^{\theta}(p)e^{-i\pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\nonumber\\ &+&e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }2p^0\delta(p-p')e^{-i\pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\nonumber\\ &+& e^{-ip\wedge p'}a^{\theta\dagger}(p') e^{-i\pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }a^{\theta}(p);\nonumber\\ &=& e^{-ip\wedge p'} a^{\theta\dagger}(p')a^{\theta}(p)e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k)
}e^{-i\pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\nonumber\\ &+&2p^0\delta(p-p')\nonumber\\ &+& e^{-ip\wedge p'} a^{\theta\dagger}(p') e^{i \pi }a^{\theta}(p)e^{-i \pi\int_{p^{'1}}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }e^{i\pi\int_{p^1}^\infty\frac{dk^1}{2k^0}a^{\dagger}(k)a(k) }\nonumber\\ &=&2p^0\delta(p-p') \end{eqnarray} where we have used: \begin{eqnarray*} && a^{\theta}(p) a^{\theta\dagger}(p') = e^{-ip\wedge p'}a^{\theta\dagger}(p')a^{\theta}(p) + 2p^0\delta(p-p');\\ && e^{i\pi\int_{p^{'1}}^\infty dk^1\delta(k-p) }e^{-i\pi\int_{p^1}^\infty dk^1\delta(k-p) } = e^{i\pi\int_{p^{'1}}^{p^1} dk^1\delta(k-p) } = e^{i \pi } = -1 \end{eqnarray*} \section*{Appendix B} Let us evaluate the commutator: $[\partial_{\nu}\phi(x), :e_*^{\pm i\beta\phi(y)}:]$. For the twisted field $\phi(x)$, we have: \begin{eqnarray} \phi(x) &=&\phi_0(x)e^{\frac{1}{2} \overleftarrow{\partial} \wedge P}\nonumber\\ e_*^{\pm i\beta\phi(x)}& =& 1\pm i\beta\phi+\frac{(i\beta)^2}{2!}\phi(x)*\phi(x) +.....= e^{\pm i\beta\phi_0(x)}e^{\frac{1}{2} \overleftarrow{\partial} \wedge P} \end{eqnarray} Therefore the commutator can be expanded in a series in the noncommutative parameter $\theta$, $$[\partial_{\nu}\phi(x), e_*^{\pm i\beta\phi(y)}] = [\partial_{\nu}\phi_0(x)e^{\frac{1}{2} \partial \wedge P}, e^{\pm i\beta\phi_0(y)}e^{\frac{1}{2} \partial \wedge P}] = a_0(x,y) + \theta a_1(x,y) + O(\theta^2)$$ To the order $O(\theta)$, the commutator reads: \begin{eqnarray} [\partial_{\nu}\phi_0(x)e^{\frac{1}{2} \partial \wedge P},: e^{\pm i\beta\phi_0(y)}e^{\frac{1}{2} \partial \wedge P}:]& =& [\partial_{\nu}\phi_0(x), :e^{\pm i\beta\phi_0(y)}:]\nonumber\\ &+&\frac{1}{2} [\partial_{\nu}\phi_0(x),:e^{\pm i\beta\phi_0(y)}:] (\overleftarrow{\partial^x}+\overleftarrow{\partial^y})\wedge P\nonumber\\ &-&\frac{i}{2} [\partial_{\nu}\phi_0(x),:e^{\pm i\beta\phi_0(y)}:] (\overleftarrow{\partial^x}\wedge\overleftarrow{\partial^y}) \end{eqnarray} Now we shall compute the commutator involving twisted fermionic
field operators up to order $\theta$ using: \begin{eqnarray} \psi(x) &=&\psi_0(x)e^{\frac{1}{2} \overleftarrow{\partial} \wedge P}\nonumber\\ \bar\psi(x)*\gamma_{\mu}\psi(x) & =& \bar\psi_0(x)\gamma_{\mu}\psi_0(x)e^{\frac{1}{2} \overleftarrow{\partial} \wedge P} \end{eqnarray} Here, \begin{eqnarray*} [\bar\psi(x)*\gamma_{\mu}\psi(x),\frac{1}{2}Z\bar\psi(y)*(1\pm\gamma_{5})\psi(y)]& =& [\bar\psi_0(x)\gamma_{\mu}\psi_0(x),\frac{1}{2}Z\bar\psi_0(y)(1\pm\gamma_{5})\psi_0(y)]\\ &+&\frac{1}{2}[\bar\psi_0(x)\gamma_{\mu}\psi_0(x),\frac{1}{2}Z\bar\psi_0(y)(1\pm\gamma_{5})\psi_0(y)](\overleftarrow{\partial^x}+\overleftarrow{\partial^y})\wedge P\\ &-&\frac{i}{2} [\bar\psi_0(x)\gamma_{\mu}\psi_0(x),\frac{1}{2}Z\bar\psi_0(y)(1\pm\gamma_{5})\psi_0(y)] (\overleftarrow{\partial^x}\wedge\overleftarrow{\partial^y}) \end{eqnarray*} The commutators \cite{Sid} on commutative spacetime read: $$ [\partial_{\nu}\phi_0(x), :e^{\pm i\beta\phi_0(y)}:] = \pm\beta g_{\nu 0}\delta(\vec x-\vec y):e^{\pm i\beta\phi_0}: $$ $$ [\bar\psi_0 (x)\gamma_{\mu}\psi_0 (x),\frac{1}{2}Z\bar\psi_0 (y)(1\pm\gamma_{5})\psi_0 (y)]= \pm2(1+g/\pi)^{-1}\epsilon^{\mu\nu}\beta g_{\nu 0}\delta(\vec x-\vec y)\frac{1}{2}Z\bar\psi_0(1\pm\gamma_{5})\psi_0 $$ We notice that the equality $$\bar\psi(x)*\gamma^{\mu}\psi(x) = - \frac{g}{2\pi}\epsilon^{\mu\nu}\partial_{\nu}\phi(x)$$ holds to order $\theta$ provided the fermionic and the bosonic momentum field operators are the same, since $\frac{1}{2}Z \bar\psi(x)*(1\pm \gamma_{5})\psi(x)= \frac{1}{2}:e_*^{\pm i\beta\phi(x)}:$.
\section{Introduction} \label{Introd} In this note we present a continuation of the work by the authors \cite{JL2}--\cite{JL5} on self-similar regularizations of the Riemann problem associated with a non-linear strictly hyperbolic system in one-space dimension: \be \del_t u + A(u) \, \del_x u = 0, \qquad u=u(t,x) \in \Bzero, \quad t>0, \, x \in \RR, \label{1.1} \ee with piecewise constant initial data \be u(0,x) = u_l \, \mbox{ for } x < 0; \quad u_r \, \mbox{ for } x >0, \label{1.2} \ee where $u_l, u_r$ are constant states in $\Bzero$. Here, $\Bzero \subset \RN$ denotes the ball centered at the origin and with radius $\delta_0>0$, and, for all $u \in \Bzero$, $A(u)$ is assumed to have distinct and real eigenvalues $\lam_1(u) < \ldots < \lam_N(u)$ and a basis of left- and right-eigenvectors $l_j(u), r_j(u)$, ($1 \leq j \leq N$). Following Dafermos \cite{Dafermos1,Dafermos2} and Slemrod \cite{Slemrod}, who advocate the use of self-similar regularizations to capture the whole wave fan structure of the Riemann problem, we consider solutions constructed by self-similar vanishing diffusion associated with a {\sl general} diffusion matrix $B= B(u)$, that is, we search for solutions of \be \del_t u_\eps + A(u_\eps) \, \del_x u_\eps = \eps \, t \, \del_x \bigl( B(u_\eps) \, \del_x u_\eps\bigr), \quad \eps >0. \label{1.3} \ee Due to the choice of the scaling $\eps \, t$, this system admits solutions $u_\eps = u_\eps(x/t)$, and, therefore, we refer to (\ref{1.2})-(\ref{1.3}) as the {\sl self-similar diffusive Riemann problem.} The matrix $B=B(u)$ is assumed to depend smoothly upon $u$ and to remain sufficiently close to the identity matrix, that is, for some given matrix norm and for $\eta>0$ sufficiently small \be \sup_{u \in \Bzero} |B(u) - \Id | \leq \eta. \label{1.4} \ee The method of analysis introduced below is not a~priori restricted to (\ref{1.3})-(\ref{1.4}), and generalizations are discussed at the end of this note and in \cite{JL5,JL6}.
The techniques developed so far for general hyperbolic systems (see \cite{Tzavaras,LeFlochTzavaras} and \cite{JL2}--\cite{JL4}) were restricted to regularizations based on the identity diffusion matrix. The new approach introduced here allows us to cover classes of approximations based on general diffusion matrices (or relaxation terms, see below). This degree of generality is especially important for non-conservative systems \cite{LeFloch1,LeFloch2} and for the boundary-value problem \cite{JL2}, whose solutions are known to {\sl strongly depend} upon the specific regularization. \section{Main results} By the property of propagation at finite speed, a self-similar solution $u=u(y)$ of the Riemann problem is constant outside a sufficiently large, compact interval $[-L,L]$, i.e., $u(y) = u_l$ for $y <-L$ and $u(y) = u_r$ for $y >L$. As is customary, we assume that $\delta_0$ is sufficiently small so that the wave speeds $\lam_j(u)$ remain close to the constant speeds $\lam_j(0)$ and are uniformly separated in the sense that $$ \bLam_j \leq \lam_j(u) \leq \Lamb_j, \quad u \in \Bzero, $$ for some constants $-L < \bLam_1 < \Lamb_1 < \bLam_2 < \ldots < \bLam_N < \Lamb_N <L$. \ We establish the following theorem. \ \begin{theorem} \label{1-1} Consider the non-linear, strictly hyperbolic system (\ref{1.1}) together with its parabolic regularization (\ref{1.3})-(\ref{1.4}). There exist (sufficiently small) constants $\delta_1, \eta>0$ and a (sufficiently large) constant $C_0 > 0$ such that for any initial data $u_l, u_r \in \Bone$ the self-similar diffusive Riemann problem (\ref{1.2})-(\ref{1.3}) admits a smooth solution $u^\eps=u^\eps(x/t) \in \Bzero$ defined for all $y=x/t \in [-L,L]$, which has uniformly bounded variation, $$ TV_{-L}^L(u^\eps) \leq C_0 \, |u_r - u_l|, $$ and converges strongly to some limit $u:[-L,L] \to \Bzero$: $$ u^\eps \to u \, \mbox{ in the $L^1$ norm, as } \eps \to 0. $$ The limit function satisfies the following properties.
The function $y \mapsto u(y)$ has bounded total variation, that is, $TV_{-L}^L(u) \leq C_0 \, |u_r - u_l|$, and is constant on each interval $[\Lamb_j, \bLam_{j+1}]$. If (\ref{1.1}) is a system of conservation laws, i.e.~$A=Df$ for some flux $f:\Bzero \to \RN$, then the limit is a distributional solution of \be \del_t u + \del_x f(u) = 0. \label{1.5} \ee If $(U,F): \Bone \to \RR \times\RN$ is an entropy/entropy-flux pair associated with (\ref{1.5}) and the diffusion matrix satisfies the convexity-like condition $\nabla^2 U \cdot B \geq 0$, then the solution $u$ satisfies the entropy inequality \be \del_t U(u) + \del_x F(u) \leq 0. \label{1.6} \ee \end{theorem} \ We also have the following description of the wave curves. \ \begin{theorem} \label{1-2} With the notation and assumptions in Theorem~\ref{1-1}, to each $j$-characteristic family and each left-hand state $u_l$ one can associate a {\rm $j$-wave curve} $$ \WW_j(u_l) := \bigl\{ u_r = \psi_j(m; u_l) \, / \, m \in (\bm_j, \mb_j) \bigr\}, $$ issuing from $u_l$, which, by definition, is made of all right-hand states $u_r$ attainable by a Riemann solution $u=u(y)$, with left-hand state $u_l$, by using only $j$-waves, that is, such that $$ u(y) = u_l \mbox{ for } y < \bLam_j; \qquad u_r \mbox{ for } y > \Lamb_j. $$ Moreover, the mapping $\psi_j : (\bm_j, \mb_j) \times \Bone \to \Bzero$ is Lipschitz continuous with respect to both arguments, and for some small constant $c>0$ $$ \del_m \psi_j(m; u_l) \in \CC_j := \bigl\{ w \in \RN \, / \, \bigl| w \cdot l_j(0) \bigr| \geq (1-c) \, |w | \bigr\}. $$ Finally, the characteristic component $y \mapsto \alpha_j(y) := l_j(0) \cdot u'(y)$ is a non-negative measure in the interval $[\bLam_j, \Lamb_j]$. \end{theorem} \ For the proof of these results, as well as a characterization of the limit when (\ref{1.1}) is a general non-conservative system, we refer to \cite{JL5,JL6}. We will here sketch the proof of Theorem~\ref{1-1}.
To handle a general diffusion matrix $B(u)$, the following {\sl generalized eigenvalue problem} is introduced: $$ \bigl( - y + A(u) \bigr) \, \hatr_j(u,y) = \mu_j(u,y) \, B(u) \, \hatr_j(u,y), $$ $$ \hatl_j(u,y) \cdot \bigl( - y + A(u) \bigr) = \mu_j(u,y) \, \hatl_j(u,y) \cdot B(u). $$ In view of (\ref{1.4}), one has $\hatr_j(u,y) = r_j(u) + O(\eta)$ and $\hatl_j(u,y) = l_j(u) + O(\eta)$. The proof relies on a suitable {\sl asymptotic expansion} of the solution $u_\eps = u_\eps(x/t)$, of the form $$ u_\eps' = \sum_j \, a_j^\eps \, \hatr_j(u_\eps,\cdot) \quad \mbox{ with } \, a_j^\eps := \hatl_j(u_\eps, \cdot) \cdot u_\eps'. $$ Omitting $\eps$, we deduce that the components $a_j$ satisfy a {\sl coupled system of $N$ differential equations:} $$ a_i' - {\mu_i(u, \cdot) \over \eps} \, a_i + \sum_j \pi_{ij}(u, \cdot)\, a_j = Q_i(u, \cdot) := \sum_{j, k} \kappa_{ijk}(u, \cdot) \, a_j \, a_k, $$ where $$ \pi_{ij}(u, \cdot) : = \hatl_i(u, \cdot) \cdot B(u) \, \del_y \hatr_j(u,\cdot), $$ $$ \kappa_{ijk}(u, \cdot) : = -\hatl_i(u,\cdot) \cdot D_u\bigl( B \, \hatr_k \bigr)\bigl(u, \cdot\bigr) \, \hatr_j(u, \cdot). $$ The system under study has the form $$ a_i' - {\mu_i(u, \cdot) \over \eps} \, a_i + O(\eta) \, \sum_j |a_j| = O(1) \, \sum_{j,k} |a_j| \, |a_k|. $$ In a central part of our argument we study the homogeneous system \be \varphi_i ' - {\mu_i(u, \cdot) \over \eps} \, \varphi_i + \sum_j \pi_{ij}(u, \cdot)\, \varphi_j = 0, \qquad \varphi = \bigl( \varphi_1, \ldots, \varphi_N \bigr), \label{homogeneous} \ee and establish that it has solutions $\varphi_i$, referred to as the {\sl linearized $i$-wave measures} associated with the function $u$, which are ``close'' (in a sense to be specified) to the following normalized solutions of the corresponding uncoupled system (obtained by setting $\eta=0$) $$ \varphi_i^\star := {e^{ - g_i/\eps } \over I_i}, \qquad I_i := \int_{-L}^{L}e^{ - g_i / \eps} \, dy, \quad g_i(y) := - \int_{\rho_i}^y \mu_i(u(x), x) \, dx.
$$ Here, the constants $\rho_i$ are determined so that the functions $g_i$ are non-negative. \ \begin{theorem} The system (\ref{homogeneous}) admits a solution $\varphi$ such that for all $i=1, \ldots, N$ and $y \in [-L,L]$ $$ \bigl(1 - O(\eta) \bigr) \, \varphi_i^\star(y) - \eps \, O(\eta) \, \sum_j \varphi_j^\star(y) \leq \varphi_i(y) \leq \bigl( 1 + O(\eta) \bigr) \, \varphi_i^\star(y) + \eps \, O(\eta) \, \sum_j \varphi_j^\star(y). $$ \end{theorem} \ In contrast with the functions $\varphi_i^\star$, the functions $\varphi_i$ need not be positive. Next, to control the total variation of the solutions of (\ref{1.3}), we derive Glimm-like estimates on the {\sl wave interaction coefficients} $$ F_{ijk}^\star(y) : = \varphi_i^\star(y) \int_{c_i}^y {\varphi_j^\star \, \varphi_k^\star \over \varphi_i^\star} \, dx $$ for some constants $c_i \in [\bLam_i, \Lamb_i]$. In this way, we gain useful information on the possible growth of the total variation of solutions. Roughly speaking, the coefficient $F_{ijk}^\star$ bounds the contribution to the $i$-th family due to interactions between waves of the $j$-th and $k$-th characteristic families. \section{Generalizations} \label{3-0} The results above have also been extended to relaxation approximations and boundary-value problems. In particular, we can handle relaxation approximations associated with the conservative system (\ref{1.5}) \be \del_t u^\eps + \del_x v^\eps = 0, \qquad \del_t v^\eps + a^2 \, B(u) \del_x u^\eps = {1 \over \eps \, t} \, \bigl( f(u^\eps) - v^\eps \bigr), \label{1.9} \ee where $u^\eps = u^\eps(x,t)$ and $v^\eps = v^\eps(x,t)$ are the unknowns, and $\eps >0$ is a relaxation parameter. We also study (\ref{1.3}) and (\ref{1.9}) in the presence of a boundary, when there exists an index $p$ such that $0 < \Lamb_p$ and at most one wave family is characteristic, that is, $0 \in (\bLam_p, \Lamb_p)$.
We consider (\ref{1.1}) on the interval $y \in [0, L]$, and prove the existence of a solution with uniformly bounded variation. To handle the boundary layer, we modify the previous definition of the functions $\varphi_j^\star$, $j\leq p$, and carefully estimate the coefficients $F_{ijk}^\star(y)$ as $\eps \rightarrow 0$. In addition, following pioneering work by Fan and Slemrod \cite{FanSlemrod}, who studied the effect of artificial viscosity terms, we consider a system arising in liquid-vapor phase dynamics with {\sl physical} viscosity and capillarity effects taken into account. We establish uniform total variation bounds, allowing us to deduce new existence results. Our analysis covers both the hyperbolic and the hyperbolic-elliptic regimes and applies to arbitrarily large Riemann data. The proofs rely on a new technique of reduction to two coupled scalar equations associated with the two wave fans of the system. Strong $L^1$ convergence to a weak solution of bounded variation is established in the hyperbolic regime, while in the hyperbolic-elliptic regime a stationary singularity near the axis separating the two wave fans, or more generally an almost-stationary oscillating wave pattern (of thickness depending upon the capillarity-viscosity ratio), is observed, which prevents the solution from having globally bounded variation. \section*{Acknowledgements} PLF was partially supported by the A.N.R. (Agence Nationale de la Recherche) grant 06-2-134423 (MATH-GR). KTJ and PGL were partially supported by a grant (Number 2601-2) from the Indo-French Centre for the Promotion of Advanced Research, IFCPAR (CEFIPRA).
\section*{Self-Organized Criticality} The statistics concerning earthquakes in a particular extended region of the Earth during a given period of time obey a power law known as the Gutenberg-Richter law \cite{newman2005power}: the logarithm of the energy of an earthquake is a linear function of the logarithm of the frequency of earthquakes of such energy. An earthquake is an example of a dynamical system presenting both temporal and spatial degrees of freedom. The following definition generalizes these properties. \begin{definition} A dynamical system is a \emph{Self-Organized Critical} (SOC) system if it is slowly driven by an external force, exhibits avalanches, and has power-law correlated interactions (cf. \cite{watkins201625}, section 7). \end{definition} By an avalanche, we mean a sudden recurrent modification of the internal energy of the system. In SOC systems, avalanches display scale invariance over energy and over time \cite{watkins201625}\footnote{P.W. Anderson describes SOC as having ``paradigmatic value'' characterizing ``the next stage of Physics''. He writes: ``In the 21$^\textit{st}$ century, one revolution which can take place is the construction of generalization which jumps and jumbles the hierarchies or generalizations which allow scale-free or scale transcending phenomena. The paradigm for the first is broken symmetry; for the second, self-organized criticality'' \cite{anderson2011more}.}. Many ordinary critical phenomena near a continuous phase transition (which requires fine-tuning) display non-trivial power law correlations with a cut-off associated with the cluster size \cite{aharony2003introduction}. For example, the Ising model shows power law correlations only for specific parameters, while a self-organized critical system, behaving statistically in an analogous manner, achieves the critical state merely by means of a small external force, without fine-tuning.
\section*{The Sandpile Model} The concept of SOC was introduced in the seminal papers of Per Bak, Chao Tang, and Kurt Wiesenfeld \cite{BTW,bak1988self} where they put forward the archetypical example of a SOC system: the Sandpile Model. It is a highly idealized cellular automaton designed to display spatio-temporal scale invariance. Unlike other toy models in physics, e.g. the Ising model, a sandpile automaton doesn't attempt to apprehend the actual interactions of a physical system exhibiting SOC (such as a real sandpile). The sandpile cellular automaton is rather a mathematical model \footnote{A holistic model in the sense of \cite{downey2012think}.} that reproduces the statistical behavior of very diverse dynamical systems: it is descriptive at the level of global emergent phenomena without necessarily corresponding to such systems at the local reductionist level. \begin{figure}[h] \begin{center} \begin{tikzpicture} \begin{scope}[scale = 0.8] \draw [very thick, dashed] (1,1) grid (4,4); \draw [red] (3,3)node{$\bullet$}; \draw (3,3) node[above right]{4}; \draw (2,3) node[above right]{3}; \draw (3,2) node[above right]{3}; \draw (4,3) node[above right]{3}; \draw (3,4) node[above right]{0}; \draw (2,2) node[above right]{1}; \draw (5.7,2.7) node {$\text{toppling}$}; \draw[->, very thick] (5,2)--(6.5,2); \end{scope} \begin{scope}[xshift=140, scale = 0.8] \draw [very thick, dashed] (1,1) grid (4,4); \draw [red] (2,3)node{$\bullet$}; \draw (3,3) node[above right]{0}; \draw (2,3) node[above right]{4}; \draw (3,2) node[above right]{4}; \draw (4,3) node[above right]{4}; \draw (3,4) node[above right]{1}; \draw (2,2) node[above right]{1}; \draw [red] (3,2)node{$\bullet$}; \draw [red] (4,3)node{$\bullet$}; \end{scope} \end{tikzpicture} \end{center} \caption{Numbers represent the number of sand grains in the vertices of the grid, and a toppling is performed. 
Red points are unstable vertices.} \label{pic_sand} \end{figure} Imagine a large domain on a checkered plane; each vertex of this grid is determined by two integer coordinates $(i,j)$. We will write $\mathbb{Z}^2$ to denote the integral square lattice formed by all pairs of integers $(i,j)$ in the Euclidean plane $\mathbb{R}^2$. Our dynamical system will evolve inside $\Omega$, which is the intersection of $\mathbb{Z}^2$ with a large convex figure in the plane. Throughout this paper, we use the word \emph{graph} to indicate a collection of line segments (called edges) that connect points (called vertices) to each other. \begin{definition} A sandpile model consists of a grid inside a convex domain $\Omega$ on which we place grains of sand at each vertex; the number of grains on the vertex $(i,j)$ is denoted by $\varphi(i,j)$ (see Figure~\ref{pic_sand}). Formally, a {\it state} is an integer-valued function $\varphi:\Omega\to\mathbb Z_{\geq 0}$. We call a vertex $(i,j)$ \emph{unstable} whenever there are four or more grains of sand at $(i,j)$, i.e., whenever $\varphi(i,j)\geq 4$. The evolution rule is as follows: any unstable vertex $(i,j)$ topples spontaneously by sending one grain of sand to each of its four neighbors $(i, j + 1),(i,j-1),(i-1,j),(i+1,j)$. The sand that falls outside $\Omega$ disappears from the system. Stable vertices cannot be toppled. Given an initial state $\varphi$, we will denote by $\varphi^\circ$ the stable state reached after all possible topplings have been performed. It is a remarkable and well-known fact that the final state $\varphi^\circ$ does not depend on the order of topplings. The final state $\varphi^\circ$ is called the relaxation of the initial state $\varphi$. \end{definition} Bak and his collaborators proposed the following experiment: take any stable state $\phi_0$ and perturb it at random by adding a grain of sand at a random location. Denote the relaxation of the perturbed state by $\phi_1$, and repeat this procedure.
Thus a sequence of randomly chosen vertices $(i_k,j_k)$ gives rise to a sequence of stable states by the rule $\phi_{k+1}= (\phi_k + \delta_{i_kj_k})^\circ$. The relaxation process $(\phi_k + \delta_{i_kj_k})\mapsto \phi_{k+1}$ is called an \emph{avalanche}\footnote{We can think of an avalanche as an earthquake.}; its size is the number of vertices that topple during the relaxation. Given a long enough sequence of uniformly chosen vertices $(i_k,j_k)$, we can compute the distribution for the sizes of the corresponding avalanches. Let $N(s)$ be the number of avalanches of size $s$; then the main experimental observation of \cite{BTW} is that: $$ \log N(s)= \tau \log s + c. $$ In other words, the sizes of avalanches satisfy a power law. In Figure~\ref{fig:SandpilePowerLaw}, we have reproduced this result with $\tau \sim -1.2$. This has only very recently been given a rigorous mathematical proof using a deep analysis of random trees on the two-dimensional integral lattice $\mathbb{Z}^2$ \cite{bhupatiraju2016inequalities}. \begin{definition} A recurrent state is a stable state appearing infinitely often, with probability one, in the above dynamical evolution of the sandpile. \end{definition} Surprisingly, the recurrent states are exactly those which can be obtained as a relaxation of a state $\phi\geq 3$ (pointwise). The set of recurrent states forms an Abelian group \cite{Dhar} and its identity exhibits a fractal structure in the scaling limit (Fig.~\ref{fig:PhaseTransition}); unfortunately, this fact has resisted a rigorous explanation so far. The main point of this paper is to exhibit fully analogous phenomena in a continuous system (which is not a cellular automaton) within the field of tropical geometry. An advantage of the tropical model is that, while it has self-organized critical behavior, just as the classical model does, its states look much less chaotic; thus we say that the tropical model has no combinatorial explosion. 
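The driven experiment just described is straightforward to simulate. The following Python sketch is our own illustration (the function names are ours, not code accompanying \cite{BTW}): it relaxes each perturbed state by toppling and records avalanche sizes, whose tallies $N(s)$ can then be fitted against $s$ on a log-log scale for large grids.

```python
import random

def relax(phi):
    """Topple unstable vertices (>= 4 grains) until the state is stable.

    phi: dict mapping (i, j) -> number of grains; vertices absent from the
    dict lie outside Omega, so sand sent to them disappears.
    Returns the relaxed state and the set of vertices that toppled.
    """
    phi = dict(phi)
    toppled = set()
    stack = [v for v in phi if phi[v] >= 4]
    while stack:
        v = stack.pop()
        if phi[v] < 4:
            continue
        phi[v] -= 4                      # send one grain to each neighbor
        toppled.add(v)
        i, j = v
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in phi:                # grains falling outside Omega are lost
                phi[nb] += 1
                if phi[nb] >= 4:
                    stack.append(nb)
        if phi[v] >= 4:                  # vertex may still be unstable
            stack.append(v)
    return phi, toppled

def avalanche_sizes(n, steps, seed=0):
    """Drop grains at random on an n x n grid; return the avalanche sizes,
    i.e. the number of distinct vertices that topple in each relaxation."""
    rng = random.Random(seed)
    phi = {(i, j): 0 for i in range(n) for j in range(n)}
    sizes = []
    for _ in range(steps):
        v = (rng.randrange(n), rng.randrange(n))
        phi[v] += 1
        phi, toppled = relax(phi)
        sizes.append(len(toppled))
    return sizes
```

By the abelian property recalled above, the order in which the stack processes unstable vertices does not affect the final state, so the particular stack discipline used here is immaterial.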
\section*{Mathematical Modeling for Proportional Growth and Pattern Formation} The dichotomy between continuous mathematical models and discrete cellular automata has an important example in developmental biology. A basic continuous model of pattern formation was offered by Alan Turing in 1952 \cite{turing1952chemical}. He suggested that two or more homogeneously distributed chemical substances, termed morphogens, with different diffusion rates and chemical activity, can self-organize into spatial patterns of different concentrations. This theory was confirmed decades later and can be applied, for example, to modeling the patterns of fish and lizard skin \cite{kondo1995reaction},\cite{dhillon2017bifurcation}, or of seashells' pigmentation \cite{fowler1992modeling}. On the discrete modeling side, the most famous model is the Game of Life developed by Conway in 1970 \cite{gardner1970mathematical}. A state of this two-dimensional system consists of ``live'' and ``dead'' cells that function according to simple rules. Any live cell dies if there are fewer than two or more than three live neighbors. Any dead cell becomes alive if there are exactly three live neighbors. A very careful control of the initial state produces ``oscillators'' and ``spaceships'' that certainly look fascinating but seem not to be related to realistic biological models. Nevertheless, a strong philosophical conclusion of the Game of Life is that extremely simple rules can produce behavior of arbitrary complexity. A more realistic cellular automaton has recently been derived from the continuous reaction-diffusion model of Turing. In \cite{manukyan2017living}, the transformation of the juvenile white ocelli skin patterns of the lizard {\it Timon lepidus} into a green and black labyrinth was observed. In this study, the authors presented the skin squamae of the lizard as a hexagonal grid, where the color of each individual cell depended on the color states of neighboring positions.
This cellular automaton could successfully generate patterns coinciding with those on the skin of adult lizards. \begin{figure}[H] \centering \includegraphics[width=1.0\linewidth]{FF2.pdf} \caption{ In (A), (B), (C) and (D), a very large number $N$ of grains of sand is placed at the origin of the everywhere empty integral lattice; the final relaxed state shows fractal behavior. Here, as we advance from (A) to (D), we see successive sandpiles for $N=10^3$ (A), $10^4$ (B), $10^5$ (C), and $10^6$ (D), rescaled by factors of $\sqrt{N}$. In (E), we zoom in on a small region of (D) to show its intricate fractal structure, and, finally, in (F), we further zoom in on a small portion of (E). We can see proportional growth occurring in the patterns as the fractal limit appears. The balanced graphs inside the roughly triangular regions of (F) are tropical curves.} \label{fig:scaleinvariance} \end{figure} Pattern formation is related to an important problem in developmental biology: how to explain proportional growth. It is not clear why different parts of animal or human bodies grow at approximately the same rate from birth to adulthood. Sandpile simulations provide a qualitative toy model as follows. \begin{example} \label{ex_many} Early on, it was observed experimentally that sandpiles have fractal structure in their spatial degrees of freedom (see Figure~\ref{fig:scaleinvariance}). \end{example} This example exhibits the phenomenon of proportional growth and scale invariance: if we rescale tiles to have area $\frac{1}{N}$ and let $N$ go to infinity, then the picture converges in the weak-$\star$ sense. (See \cite{PS} and references therein.) Recently, the patches and a linear pattern in this fractal picture were explained in \cite{LPS,us_solitons,levine2013apollonian} using discrete superharmonic functions and Apollonian circle packing.
Dhar and Sadhu \cite{dhar2013sandpile} established certain two-dimensional sandpile models where the size of newly formed patterns depends on the number of sand grains added on the top, but the number and shape of the patterns remain the same. Strikingly, they also proposed a three-dimensional model which also forms proportionally growing patterns; these patterns look like the head, thorax, and abdomen of a real larva. The perspective of our paper suggests that continuous tropical geometry should have consequences in the study of proportional growth modeling. We conjecture that tropical functions should appear as gradients of growth. (Compare Figure 3 in~\cite{rastelli2002surface} and our Figure~\ref{figpush}; see also Section 2.5 in \cite{vollmer2017growth}). \section*{Tropical Geometry} Tropicalization can be thought of as the study of geometric and algebraic objects on the log-log scale\footnote{Drawing algebraic varieties on the log-log scale was successfully used by O. Ya. Viro, in his study of Hilbert's 16th problem, to construct real algebraic curves with prescribed topology; for a more recent account of this story, see \cite{viro2006patchworking}.} in the limit when the base of the logarithm is very large\footnote{This limit has been discovered several times in physics and mathematics and was named Maslov dequantization in mathematical physics before it was called tropicalization in algebraic geometry \cite{kolokoltsov1997idempotent}.}. Let us start by considering the most basic mathematical operations: addition and multiplication. With the logarithmic change of coordinates, and with the base of the logarithm becoming infinitely large, multiplication becomes addition and addition becomes taking the maximum.
Namely, define $$x+_t y := \log_{t}(t^x+t^y)$$ $$x \times _t y := \log_t(t^xt^y)$$ for $x, \ y$ real and then, taking the limit as $t$ tends to $+ \infty$, set $${\mathrm{Trop}}(x+y):= \lim\limits_{t \rightarrow + \infty} x +_t y = \max(x,y),$$ $${\mathrm{Trop}}(x \times y):= \lim\limits_{t \rightarrow + \infty} x \times_t y = x+y.$$ Tropical numbers are, thus, defined as the set of ordinary real numbers (together with $\{- \infty \}$, i.e. $\mathbb{T} := \mathbb{R} \cup \{ - \infty \}$) with tropical addition $\max(x,y)$ and tropical multiplication $x+y$. The tropical numbers form a semi-ring: all the usual rules of algebra hold in $\mathbb{T}$ except for the existence of additive inverses (the neutral element for addition is $-\infty$). This mathematical structure simplifies some computations, and several optimization problems are solved efficiently in this way; this is a growing area of applied research in the current decade (it has been used in auctions for the Bank of England \cite{economy2}, for biochemical networks \cite{radulescu2012reduction}, and for Dutch railroads, among other things \cite{heidergott2014max}). Thermodynamics suggests that the tropical limit should be understood as the low temperature limit $T\to 0$ (or equivalently, as the limit when the Boltzmann constant $k$ vanishes, $k\to 0$) of the classical algebraic and geometric operations. In this interpretation of the limit, the relevant change of variables is $t=e^{\frac{1}{kT}}$, where $t$ is the tropical parameter (the base of the logarithm) and $T$ is the temperature \cite{kenyon2006dimers,kapranov2011thermodynamics,itenberg2012Geometry}. More relevant to realistic physical systems, tropical functions and tropical algebra appear naturally in statistical physics as the formal limit as $k\to 0$, and this can be used to analyze frustrated systems like spin ice and spin glasses.
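The dequantization limit above is easy to check numerically. In this short sketch (an illustration of ours), the deformed sum $x +_t y$ already hugs $\max(x,y)$ for moderately large bases $t$, while the deformed product equals $x+y$ exactly for every $t$:

```python
import math

def trop_add_t(x, y, t):
    """x +_t y = log_t(t^x + t^y); tends to max(x, y) as t -> infinity."""
    return math.log(t ** x + t ** y, t)

def trop_mul_t(x, y, t):
    """x x_t y = log_t(t^x * t^y) = x + y, independently of the base t."""
    return math.log(t ** x * t ** y, t)
```

For instance, `trop_add_t(2, 5, 10)` is about $5.0004$, and raising the base $t$ tightens the gap to $\max(2,5)=5$.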
This is directly related to the appearance of the tropical degeneration in dimer models \cite{kenyon2006dimers,cimasoni2007dimers}. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{FF3.pdf} \caption{Each column represents a tropical degeneration (from top to bottom). The complex picture is at the top; in the middle, we depict the amoeba on the log-log scale and, at the bottom, we picture the tropical curve, which can be seen as the spine of the corresponding amoeba. In the first example, we have a degree-one curve with three points $A,B,C$ going off to infinity (the intersections $A,B$ of a line with the coordinate axes go to $-\infty$ under the logarithm, and $C$, the intersection at infinity, goes to $(+\infty,+\infty)$). In the second example, a quadric degenerates, sending six points off to infinity: a quadric intersects each of the coordinate lines and the line at infinity in two points, and therefore six (three times two) points go off to infinity. From top to bottom, $t\to\infty$.} \label{fig:Tropicaldegeneration} \end{figure} \begin{definition} Let $\mathcal{A}\subset\mathbb Z^2$ be any finite set. A tropical polynomial in two variables is a function \begin{equation} \label{eq_1} \mathrm{Trop}(F(x,y))= \max \limits_{(i,j)\in \mathcal{A}}(a_{ij}+ix+jy). \end{equation} \end{definition} A tropical polynomial should be thought of as the tropicalization of a classical polynomial $F(x,y)=\sum\limits_{(i,j)\in \mathcal{A}} a_{ij}x^iy^j$ obtained by replacing all of the summations and multiplications by their tropical counterparts, i.e. $\max$ and addition, respectively. For an excellent introduction to tropical geometry, we refer the reader to \cite{maclagan2015introduction}. \section*{Tropical Limit in Algebraic Geometry} Complex algebraic geometry is the field of mathematics that studies geometric objects inside complex Euclidean spaces $\mathbb{C} ^n$ that arise as zero sets of finite families of polynomials.
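A tropical polynomial as in eq.~(\ref{eq_1}) is just a maximum of finitely many affine functions, so it can be evaluated in one line. The encoding below is ours (a dictionary sending an exponent pair $(i,j)$ to its coefficient $a_{ij}$), chosen only for illustration:

```python
def trop_poly(coeffs):
    """Return the function (x, y) -> max over (i, j) of a_ij + i*x + j*y."""
    return lambda x, y: max(a + i * x + j * y
                            for (i, j), a in coeffs.items())

# The "tropical line": x + y + 0 read tropically, i.e. max(x, y, 0).
line = trop_poly({(1, 0): 0.0, (0, 1): 0.0, (0, 0): 0.0})
```

Evaluating `line` on a few points picks out whichever monomial dominates, exactly as in the piecewise-linear picture of Figure~\ref{fig:Tropicaldegeneration}.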
A simple example of such an object is an algebraic curve $\mathcal{C}$ given by a single polynomial equation in two variables $$\mathcal{C}: F(x,y)=0, \ \ (x,y) \in \mathbb{C}^2, \quad F(x,y)= \sum \limits_{(i,j)\in \mathcal{A}} a_{ij}x^iy^j.$$ Here $\mathcal{A}$ is a finite set of pairs of non-negative integers. The degree of $F$ is the maximal total power $i+j$ over all the monomials $x^i y^j$ appearing in $F$, namely $d=\max \limits_{(i,j)\in \mathcal{A}}(i+j)$. The curve $\mathcal{C}$ is a two-dimensional object, despite its name (two real dimensions are the same as one complex dimension). A non-singular complex curve $\mathcal{C}= \{F(x,y)=0 \}$ of degree $d=\textrm{deg}(F)$ is a Riemann surface of genus $g=\frac{1}{2}(d-1)(d-2)$. Therefore, the usual lines ($d=1$) and quadrics ($d=2$) are topological $2$-spheres, while cubics ($d=3$, also called elliptic curves) are tori. The geometric counterpart of the tropicalization is as follows. Given a complex algebraic curve $\mathcal{C}_t$ defined by a polynomial $$F_t(x,y)=\sum \limits_{(i,j)\in \mathcal{A}}\gamma_{ij}t^{a_{ij}}x^iy^j=0, \quad |\gamma_{ij}|=1,$$ we call the {\it amoeba} $A_t$ the image of $\mathcal{C}_t$ under the map $\log_t(x,y)=(\log_t | x | , \log_t | y |)$, that is, $A_t:=\log_t(\mathcal{C}_t)$. The limit of the amoebas $A_t$ as $t \rightarrow +\infty$ is called ${\mathrm{Trop}}(\mathcal{C})$, the tropicalization of $\mathcal{C}_t$. The limit ${\mathrm{Trop}}(\mathcal{C}) $ can be described entirely in terms of the tropical polynomial ${\mathrm{Trop}}(F)$ (eq.~\ref{eq_1}). This fact can be proved by noticing that on the linearity regions of ${\mathrm{Trop}}(F(x,y))$, one monomial in $F_t$ dominates all the others and, therefore, $F_t$ cannot be zero; consequently, the limit ${\mathrm{Trop}}(\mathcal{C} )$ is precisely the set of points $(x,y)$ in the plane where the (3-dimensional) graph of the function $${\mathrm{Trop}}(F(x,y)) = \max \limits_{(i,j)\in \mathcal{A}}(a_{ij}+ix+jy)$$ is not smooth.
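The non-smoothness characterization just stated gives an immediate membership test: a point lies on the tropical curve precisely when the maximum in ${\mathrm{Trop}}(F)$ is attained by at least two monomials. The sketch below is our own illustration of this criterion, with the polynomial encoded as a dictionary from exponent pairs $(i,j)$ to coefficients $a_{ij}$:

```python
def on_tropical_curve(coeffs, x, y, tol=1e-9):
    """True iff (x, y) lies on the corner locus of the tropical polynomial
    max over (i, j) of a_ij + i*x + j*y, i.e. the max is attained at least
    twice (up to a numerical tolerance)."""
    values = [a + i * x + j * y for (i, j), a in coeffs.items()]
    m = max(values)
    return sum(1 for v in values if m - v <= tol) >= 2
```

For the tropical line $\max(x, y, 0)$ this recovers exactly the three rays emanating from the origin: $\{x = y \geq 0\}$, $\{x = 0,\, y \leq 0\}$ and $\{y = 0,\, x \leq 0\}$.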
This set of points is known as the corner locus of ${\mathrm{Trop}}(F(x,y))$. For this reason, we define the tropical plane curve as the corner locus of a tropical polynomial. As depicted in Figure~\ref{fig:Tropicaldegeneration}, the tropical degeneration ${\mathrm{Trop}}(\mathcal{C})$ of a complex curve $\mathcal{C}$ is essentially the graph of the curve depicted in a $\log_t$-$\log_t$ scale for $t$ very large. It is not very hard to verify that every tropical curve ${\mathrm{Trop}}(\mathcal{C})$ in the plane is a finite balanced graph, all of whose edges have rational slopes, such that at every vertex the weighted primitive integer vectors along the outgoing edges sum to zero (this is called the balancing condition and is akin to Kirchhoff's law for electrical circuits). Conversely, every such balanced graph on the plane appears as a tropical algebraic curve ${\mathrm{Trop}}(\mathcal{C})$. Here, it may be a good moment to point out that one can observe small tropical curves in Figure \ref{fig:scaleinvariance}, where the final state of a sandpile is depicted; this already points towards a relationship between SOC and tropical geometry. \section*{String Theory, Mirror Symmetry} Tropical geometry is intimately related to the interaction between algebraic geometry and string theory that occurs in the mirror symmetry program. Given a tropical 2-dimensional surface $B$, we can use the tropical structure to produce a pair $(X_B,\tilde{X}_B)$ of mirror manifolds. This motivated M. Kontsevich to predict that counting tropical curves on $B$ could be used for the calculation of Gromov-Witten invariants (which count holomorphic complex curves) in the $A$-side of mirror symmetry. The first example of such a calculation done from a rigorous mathematical perspective was accomplished by Mikhalkin \cite{mikh1}. 
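The balancing condition is straightforward to check in coordinates. Here is a small helper of ours (not from the paper): at a vertex, the weighted primitive integer direction vectors of the outgoing edges must sum to zero, exactly as Kirchhoff's law would demand; the vertex of a tropical line is used as the example.

```python
# Sketch (ours): verifying the balancing condition at a tropical vertex.

def is_balanced(edges):
    """edges: list of (weight, (dx, dy)), where (dx, dy) is the primitive
    integer vector of an outgoing edge. Balanced iff the weighted sum is 0."""
    return (sum(w * d[0] for w, d in edges) == 0
            and sum(w * d[1] for w, d in edges) == 0)

# The vertex of a tropical line: outgoing directions (-1,0), (0,-1), (1,1),
# each of weight 1, which indeed cancel out.
tropical_line_vertex = [(1, (-1, 0)), (1, (0, -1)), (1, (1, 1))]
```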
This perspective has been expanded by Gross and Siebert in their program to understand mirror symmetry from a mathematical viewpoint using tropical geometry \cite{MR2722115}. Let us also mention that the dichotomy between continuous and discrete models in our paper (already appearing in the biological models) has an important analogue in string theory: Iqbal et al. have argued that, when we probe space-time beyond the scale $\alpha'$ and below Planck's scale, the resulting fluctuations of space-time can be computed with a classical cellular automaton (a melting crystal) representing quantum gravitational foam \cite{MR2425292}. Their theory is a three-tier system whose levels are classical geometry (K\"ahler gravity), tropical geometry (toric manifolds) and cellular automata (discrete melting crystals). The theory that we propose in this paper is also a three-tier system whose levels are classical complex algebraic geometry, tropical geometry (analytic tropical curves) and cellular automata (sandpiles). This seems not to be a coincidence and suggests deep connections between our model for SOC and their model for quantum gravitational foam. \section*{Tropical Curves in Sandpiles} To understand the appearance of tropical geometry in sandpiles, consider the \emph{toppling function} $H(i,j)$ defined as follows: Given an initial state $\varphi$ and its relaxation $\varphi^\circ$, the value of $H(i,j)$ equals the number of times that there was a toppling at the vertex $(i,j)$ in the process of taking $\varphi$ to $\varphi^\circ$. The discrete Laplacian of $H$ is defined by the net flow of sand, $\Delta H (i,j) :=$ $$ H(i-1,j) + H(i+1,j) + H(i,j-1) + H(i,j+1) - 4 H(i,j).$$ The toppling function is clearly non-negative on $\Omega$ and vanishes at the boundary. The function $\Delta H$ completely determines the final state $\varphi^\circ$ by the formula: \begin{equation} \label{eq_topp} \varphi^\circ(i,j) = \varphi (i,j) + \Delta H(i,j). 
\end{equation} \begin{figure}[H] \setlength{\fboxsep}{0pt} \centering \includegraphics[width=1.0\linewidth]{FF4.pdf} \caption{The evolution of $\langle 3 \rangle + \delta_{P}$. Sand falling outside the border disappears. Time progresses in the sequence (A), (B), (C), and finally (D). Before (A), we add grains of sand to several points of the constant state $\langle 3\rangle$ (we see their positions as blue disks given by $\delta_P$). Avalanches ensue. At time (A), the avalanches have barely started. At the end, at time (D), we get a tropical analytic curve on the square $\Omega$. White represents the region with 3 grains of sand, while green represents 2, yellow represents 1, and red represents the zero region. We can think of the blue disks $\delta_P$ as the genotype of the system, of the state $\langle 3 \rangle$ as the nutrient environment, and of the thin graph given by the tropical function in (D) as the phenotype of the system.} \label{fig:Tropicalsandpileevolution} \end{figure} It can be shown by induction that the toppling function $H$ satisfies the \emph{Least Action Principle}: if $F$ is such that the state $\varphi(i,j) + \Delta F(i,j)$ is stable, i.e. $\varphi(i,j) + \Delta F(i,j) \leq 3$ everywhere, then $F(i,j)\geq H(i,j)$. Ostojic \cite{ostojic2003patterns} noticed that $H(i,j)$ is a piecewise quadratic function in the context of Example~\ref{ex_many}. Consider a state $\varphi$ which consists of 3 grains of sand at every vertex, except at a finite family of points $$P=\{p_1=(i_1,j_1),\ldots,p_r=(i_r,j_r)\}$$ where we have 4 grains of sand: \begin{equation} \label{eq_phi} \varphi:=\langle 3 \rangle + \delta_{p_1} + \cdots + \delta_{p_r} = \langle 3 \rangle + \delta_{P} . \end{equation} The state $\varphi^\circ$ and the evolution of the relaxation can be described by means of tropical geometry (the final picture (D) of Figure~\ref{fig:Tropicalsandpileevolution} is a tropical curve). This was discovered by Caracciolo et al. 
\cite{caracciolo2010conservation}, while a rigorous mathematical theory proving this fact has been given by Kalinin et al. \cite{us}, which we review presently. It is a remarkable fact that, in this case, the toppling function $H(i,j)$ is piecewise linear (after passing to the scaling limit). To prove this, one considers the family $\mathcal{F}_P$ of functions on $\Omega$ that are: (1) piecewise linear with integral slopes, (2) non-negative over $\Omega$ and zero at its boundary, (3) concave, and (4) not smooth at every point $p_i$ of $P$. Let $F_P$ be the pointwise minimum of functions in $\mathcal{F}_P$. Then $F_P \geq H$ by the Least Action Principle (since $\Delta F_P\leq 0$ and $\Delta F_P(p_i)<0$). \begin{lemma} In the scaling limit $H=F_P$. \end{lemma} {\bf A sketch of a proof.} We introduce the wave operators $W_p$ \cite{ivashkevich1994waves,ktitarev2000scaling} at the cellular automaton level and the corresponding tropical wave operators $G_p$. Given a fixed vertex $p=(i_0,j_0)$, we define the wave operator $W_p$ acting on states $\varphi$ of the sandpile as: $$ W_p(\varphi):=(T_p(\varphi+\delta_p)-\delta_p)^{\circ},$$ where $T_p$ is the operator that topples the state $\varphi+\delta_p$ at $p$ once, if $p$ can be toppled at all. In a computer simulation, the application of this operator looks like a wave of topplings spreading from $p$, in which each vertex topples at most once. The first important property of $W_p$ is that, for the initial state $\varphi:=\langle 3 \rangle + \delta_{P}$, we can achieve the final state $\varphi^\circ$ by successive applications of the operator $W_{p_1}\circ\cdots\circ W_{p_r}$ a large but finite number of times (despite the $\infty$ in the notation): $$\varphi^\circ = (W_{p_1}\cdots W_{p_r})^\infty \varphi+\delta_P.$$ This process decomposes the total relaxation $\varphi \mapsto \varphi^\circ$ into layers of controlled avalanching. 
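The relaxation and its toppling function are easy to simulate directly. The sketch below (an assumed implementation of ours, not the authors' code) relaxes a state on a rectangular grid with open boundary, so that grains leaving the grid disappear, and records $H(i,j)$; the tiny test states are illustrative.

```python
# Sketch (ours): relaxation of a sandpile state with threshold 3 and open
# boundary, recording the toppling function H.

def relax(phi):
    """phi: list of lists of grain counts. Topples every site holding more
    than 3 grains until the state is stable. Returns (stable state, H)."""
    n, m = len(phi), len(phi[0])
    phi = [row[:] for row in phi]
    H = [[0] * m for _ in range(n)]
    unstable = [(i, j) for i in range(n) for j in range(m) if phi[i][j] > 3]
    while unstable:
        i, j = unstable.pop()
        if phi[i][j] <= 3:
            continue                      # already stabilized meanwhile
        phi[i][j] -= 4
        H[i][j] += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:  # grains off the grid are lost
                phi[a][b] += 1
                if phi[a][b] > 3:
                    unstable.append((a, b))
        if phi[i][j] > 3:
            unstable.append((i, j))
    return phi, H
```

On the $1\times 3$ state $(0,4,0)$ the middle site topples once, giving the stable state $(1,0,1)$ with $H=(0,1,0)$; one checks directly that $\varphi^\circ = \varphi + \Delta H$, in agreement with Eq.~\ref{eq_topp}.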
The second important property of the wave operator $W_p$ is that its action on a state $\varphi = \langle 3 \rangle + \Delta f$ has an interpretation in terms of tropical geometry. To wit, whenever $f$ is a piecewise linear function with integral slopes that, in a neighborhood of $p$, is expressed as $a_{i_0j_0} + i_0x + j_0 y$, we have $$W_p(\langle 3 \rangle +\Delta f) = \langle 3 \rangle + \Delta W(f),$$ where $W(f)$ has the same coefficients $a_{ij}$ as $f$ except one, namely $a_{i_0j_0}'=a_{i_0j_0}+1$. This accounts for the fact that the support of the wave is exactly the face where $a_{i_0j_0} + i_0x + j_0 y$ is the leading part of $f$. We will write $G_p := W_p^\infty$ to denote the operator that applies $W_p$ to $\langle 3 \rangle + \Delta f$ until $p$ lies in the corner locus of $f$. This operator also has an elegant interpretation in terms of tropical geometry: $G_p$ increases the coefficient $a_{i_0j_0}$ corresponding to a neighborhood of $p$, lifting the plane lying above $p$ in the graph of $f$ by integral steps until $p$ belongs to the corner locus of $G_pf$. Thus $G_p$ has the effect of pushing the tropical curve towards $p$ (Figure~\ref{figpush}) until it contains $p$. From the properties of the wave operators, it follows immediately that: $$F_P= \left(G_{p_1}\cdots G_{p_r}\right)^\infty {\mathbf 0},$$ where ${\mathbf 0}$ is the function which is identically zero on $\Omega$. All intermediate functions $\left(G_{p_1}\cdots G_{p_r}\right)^k {\mathbf 0}$ do not exceed $H$, since they represent partial relaxations, but their limit belongs to $\mathcal{F}_P$, and this, in turn, implies that $H=F_P$. {\bf Conclusion.} We have shown that the toppling function $H$ for Eq.~\ref{eq_phi} is piecewise linear. Thus, applying Eq.~\ref{eq_topp}, we obtain that $\varphi^\circ$ is equal to $3$ everywhere except on the locus where $\Delta H\ne 0$, i.e. the corner locus of $H$, namely, an $\Omega$-tropical curve (Figure~\ref{fig:Tropicalsandpileevolution}). 
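In coordinates, the action of $G_p$ can be sketched as follows. This is a continuous-step toy version of ours (not the authors' code): the concave function is stored as the coefficients of $\min_{(i,j)}(a_{ij}+ix+jy)$, and the strictly minimal coefficient at $p$ is raised by the exact gap, rather than by integral steps, until $p$ lies on the corner locus.

```python
# Sketch (ours): the tropical operator G_p on a coefficient dictionary.

def vals(coeffs, p):
    """Values of the affine functions a_ij + i*x + j*y at the point p."""
    x, y = p
    return {ij: a + ij[0] * x + ij[1] * y for ij, a in coeffs.items()}

def G(coeffs, p):
    """Lift the strictly minimal affine piece at p until p belongs to the
    corner locus of min_{(i,j)} (a_ij + i x + j y); no-op if p is already there."""
    v = vals(coeffs, p)
    order = sorted(v, key=v.get)
    gap = v[order[1]] - v[order[0]]   # > 0 iff a single piece is minimal at p
    if gap > 0:
        coeffs = dict(coeffs)
        coeffs[order[0]] += gap
    return coeffs
```

Applying $G_p$ with $p=(2,1)$ to the tropical line $\min(0,x,y)$ raises the constant coefficient until the minimum at $p$ is attained twice, i.e. the curve has been pushed onto $p$.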
\begin{definition}\cite{us_series} An $\Omega$-tropical series is a piecewise linear function on $\Omega$ given by: $$F(x,y) = \min_{(i,j)\in\mathcal{A}} (a_{ij} +ix + jy),$$ where the set $\mathcal{A}$ is not necessarily finite and $F|_{\partial\Omega}=0$. An $\Omega$-tropical curve is the set where $F$ is not smooth. Each $\Omega$-tropical curve is a locally finite graph satisfying the balancing condition. \end{definition} \begin{figure}[H] \centering \includegraphics[width=1.0\linewidth]{FF5.pdf} \caption{Top: The action of the wave operator $W_p$ on a tropical curve. The tropical curve steps closer to $p$ by an integral step. Thus $W_p$ shrinks the face containing $p$; the combinatorial morphology of this face can actually change. Bottom: The function $G_p{\mathbf 0}$, where $p$ is the center of the circle, and its associated $\Omega$-tropical curve are shown. } \label{figpush} \end{figure} \begin{remark} Tropical curves consist of edges, such that to each direction of the edges there corresponds a line-shaped pattern (a string) such as the one encountered in Figure~\ref{fig:scaleinvariance}; these patterns can be computed \cite{us_solitons}. In simulations, we have observed that these strings act like the renormalization group and, thus, ensure the proportional growth of the quadratic patches in Figure~\ref{fig:scaleinvariance}. The same occurs in other sandpile models with proportional growth, which suggests that tropical geometry is a less reductionist tool than cellular automata to study this phenomenon. \end{remark} \section*{The Tropical Sandpile Model} Here, we define a new model, the tropical sandpile (TS), reflecting structural changes when a sandpile evolves. The definition of this dynamical system is inspired by the mathematics of the previous section; TS is not a cellular automaton, but it exhibits SOC. 
The dynamical system lives on the convex set $\Omega = [0,N] \times [0,N]$; we will consider $\Omega$ to be a very large square. The input data of the system is a large but finite collection of points $P=\{p_1,\ldots,p_r\}$ with integer coordinates on the square $\Omega$. Each state of the system is an $\Omega$-tropical series (and the associated $\Omega$-tropical curve). The initial state for the dynamical system is $F_0 = \mathbf{0}$, and its final state is the function $F_P$ defined previously. Notice that the definition of $\mathcal{F}_P$, while inspired by sandpile theory, uses no sandpiles or cellular automata whatsoever. Intermediate states $\{F_k\}_{k=1,\dots,r}$ satisfy the property that $F_k$ is not smooth at $p_1,p_2,\dots,p_k$; i.e. the corresponding tropical curve passes through these points. In other words, the tropical curve is first attracted to the point $p_1$. Once it manages to pass through $p_1$ for the first time, it continues to try to pass through $\{p_1,p_2\}$. Once it manages to pass through $\{p_1,p_2\}$, it proceeds in the same manner towards $\{p_1,p_2,p_3\}$. This process is repeated until the curve passes through all of $P=\{p_1,\ldots, p_r\}$. \begin{figure}[H] \centering \includegraphics[width=1.0\linewidth]{FF6.pdf} \caption{The first two pictures show the comparison between the classical (A) and tropical (B) sandpiles for $|P|=100$ generic points on the square. In (C), the square $\Omega$ has side $N=1000$; a large number ($|P|=40000$) of grains has been added, showing the spatial SOC behavior of the tropical model, compared to the identity (D) of the sandpile group on the square of side $N=1000$. In the central square region of (C) (corresponding to the solid block of the otherwise fractal unit), we have a random tropical curve with edges in the directions $(1, 0), (0, 1)$, and $(\pm 1, 1)$, which is given by a small perturbation of the coefficients of the tropical polynomial defining the usual square grid. 
} \label{fig:PhaseTransition} \end{figure} We will call the modification $F_{k-1}\to F_{k}$ the $k$-th avalanche. It occurs as follows: To the state $F_{k-1}$ we apply the tropical operators $G_{p_1}, G_{p_2}, \ldots, G_{p_{k}}; G_{p_1}, \ldots $ in cyclic order until the function stops changing; the discreteness of the coordinates of the points in $P$ ensures that this process is finite\footnote{If the coordinates of the points in $P$ are not integers, the model is well-defined, but we need to take a limit (see \cite{us_series}), which is not suitable for computer simulations.}. Again, as before, while sandpile-inspired, the operators $G_p$ are defined entirely in terms of tropical geometry, without mention of sandpiles. There is a dichotomy: Each application of an operator $G_p$ either changes the shape of the current tropical curve (in this case $G_p$ is called an active operator) or does nothing, leaving the curve intact (if $p$ already belongs to the curve). \begin{definition} The size of the $k$-th avalanche is the number of distinct active operators $G_{p_i}$ used to take the system from the self-critical state $F_{k-1}$ to the next self-critical state $F_{k}$, divided by $k$. In particular, the size $s_k$ of the $k$-th avalanche is a number between zero and one, $0 \leq s_k \leq 1$, and it estimates the proportion of the area of the picture that changed during the avalanche. \end{definition} In the example of Figure~\ref{fig:PhaseTransition}, as the number of points in $P$ grows and becomes comparable to the number of lattice points in $\Omega$, the tropical sandpile exhibits a phase transition going into spatial SOC (fractality). This provides the first evidence in favor of SOC in the tropical sandpile model, but there is a far more subtle spatio-temporal SOC behavior that we will exhibit in the following paragraphs. 
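One avalanche step can be sketched in a few lines. The code below (a self-contained toy of ours, not the authors' simulation; it uses a continuous-step variant of $G_p$ that lifts by the exact gap rather than by integral steps) applies the operators cyclically until a fixed point and reports the avalanche size as the fraction of distinct active operators.

```python
# Sketch (ours): one avalanche of the tropical sandpile on a coefficient
# dictionary representing F(x, y) = min over (a_ij + i*x + j*y).

def G(coeffs, p):
    """Returns (new coeffs, active?); active iff p was off the corner locus."""
    x, y = p
    v = {ij: a + ij[0] * x + ij[1] * y for ij, a in coeffs.items()}
    order = sorted(v, key=v.get)
    gap = v[order[1]] - v[order[0]]
    if gap == 0:
        return coeffs, False          # p already on the curve: G_p is inactive
    c = dict(coeffs)
    c[order[0]] += gap                # lift the minimal piece onto the tie
    return c, True

def avalanche(coeffs, points):
    """Apply G_p cyclically over `points` until nothing changes; the size is
    the number of distinct active operators divided by the number of points."""
    active = set()
    changed = True
    while changed:
        changed = False
        for p in points:
            coeffs, act = G(coeffs, p)
            if act:
                active.add(p)
                changed = True
    return coeffs, len(active) / len(points)
```

Starting from the tropical line $\min(0,x,y)$ with the two illustrative points $(2,2)$ and $(5,1)$, both operators are active once and the process stops after the second cyclic pass.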
While the ordering of the points from the $1$st to the $r$-th is important for the specific details of the evolution of the system, the system's statistical behavior and final state are insensitive to it. This is called an Abelian property, which was studied extensively in \cite{MR3493110} for discrete dynamical systems (Abelian networks). Our model suggests studying continuous dynamical systems with this Abelian property, such as, for example, Abelian networks with nodes on the plane, a continuous set of states, and an evolution rule depending on the coordinates of the nodes. We expect that the Abelian property is equivalent to the least action principle (cf. \cite{MR3493110}). \section*{Self-Organized Criticality in the Tropical World} The tropical sandpile dynamics exhibits slowly driven avalanching (in the sense of \cite{watkins201625}, page 22). Once the tropical dynamical system stops after $r$ steps, we can ask what the statistical behavior of the number $N(s)$ of avalanches of size $s$ is like. We posit that the tropical dynamical system exhibits spatio-temporal SOC behavior; that is, we have a power law: $$\log N(s) = \tau \log s+c.$$ To confirm this, we have performed experiments on the supercomputing clusters ABACUS and Xiuhcoatl at Cinvestav (Mexico City); the code is available at \cite{gitsand}. In the figure below, we see the graph of $\log N(s)$ vs. $\log s$ for the tropical (piecewise linear, continuous) sandpile dynamical system; the resulting experimental $\tau$ in this case was $\tau \sim -0.9$. \section*{Conclusion and Further Directions} We have obtained a piecewise-linear (continuous, tropical) model to study statistical aspects of non-linear phenomena. 
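The exponent $\tau$ can be estimated exactly as the formula above suggests. Below is a minimal sketch of such a measurement (ours, not the authors' analysis code; binning and test data are illustrative): histogram the avalanche sizes, then fit a least-squares line through $\log N(s)$ versus $\log s$.

```python
# Sketch (ours): estimating tau in log N(s) = tau * log s + c.
import math

def fit_power_law(sizes, bins):
    """sizes: avalanche sizes in (0, 1]; bins: increasing bin edges.
    Returns (tau, c) from a least-squares fit on the log-log histogram."""
    xs, ys = [], []
    for lo, hi in zip(bins, bins[1:]):
        n = sum(1 for s in sizes if lo <= s < hi)
        if n > 0:                         # empty bins carry no log-count
            xs.append(math.log(0.5 * (lo + hi)))
            ys.append(math.log(n))
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    tau = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    return tau, my - tau * mx
```

As a sanity check, synthetic sizes whose bin counts halve when the size doubles reproduce $\tau=-1$.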
As tropical geometry is highly developed, it seems reasonable to believe that it will provide new reductionist explanations for physical non-linear systems in the future (as well as providing a tool for studying proportional growth phenomena in biology and elsewhere), as has already happened with non-linear aspects of algebraic geometry and mirror symmetry. Next, we list open questions. From the point of view of real physical phenomena, the tropical sandpile provides a new class of mathematical and modelling tools. Some possible directions for further research are as follows: \begin{direction} If $\Omega$ is a polygon with sides of rational slope, then each $\Omega$-tropical curve $C$ can be obtained as the corner locus of $F_P$ for a certain set $P$ of points. To achieve this, one considers a (finite) set $P$ of points which all belong to $C$ and cover $C$ rather densely, meaning that every point in $C$ is very close to a point in $P$ (at a distance less than a very small $\epsilon$). As is shown in \cite{us_series}, the corner locus of $F_P$ is an $\Omega$-tropical curve passing through $P$ which solves a certain Steiner-type problem: minimizing the tropical symplectic area. However, because the set $P$ is huge in this construction, an open question remains: Can we find a small set $P=\{p_1,\ldots,p_r \}$ (where $r$ is approximately equal to the number of faces of $C$) such that $C$ is the corner locus of $F_P$? (We thank an anonymous referee for this question.) \end{direction} \begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{FF7.pdf} \caption{A) The power law for sandpiles. The logarithm of the frequency is linear with respect to the logarithm of the avalanche size, except near the right, where the avalanches are larger than half of the system. Here $\Omega=[0,100]^2$ is initially filled with $3$ grains everywhere, followed by $10^6$ dropped grains. B) The power law for the tropical (piecewise linear, continuous) dynamical system. 
In this computer experiment, $\Omega$ has a side of $1000$ units and we add at random a set $P$ of $10000$ individual sand grains (a random large genotype).} \end{figure} \begin{direction} The operators $G_p$ can be lifted to the algebraic setting but, for now, we can do this only in fields of characteristic two \cite{us_series}. Is it possible to lift $G_p$ in characteristic zero (the complex realm)? In any case, we expect that this difficulty can be alleviated by a mirror symmetry interpretation. The issue is due to the {\it symplectic} nature of $G_p$: indeed, $\Omega$-tropical curves naturally appear as tropical symplectic degenerations \cite{mikhalkin2018examples,matessi2018lagrangian}. When $\Omega$ is not a rational polygon, one should take into account non-commutative geometry \cite{katzarkov2014definition}. What is the mirror analog of $G_p$ in the complex world? We expect that there should exist an operator $G_p^*$ acting on strings, and through this we expect power-law statistics for a mirror notion of the areas of the faces of $\Omega$-tropical curves. Closely related to this, in analogy with the work of Iqbal et al. \cite{MR2425292}, we conjecture that the partition function obtained by summing over statistical mechanical configurations of sandpiles should have, via mirror symmetry, an interpretation as a path integral in terms of K\"ahler geometry. Tropical geometry should play the role of toric geometry in this case. What is the precise geometric model in this situation? Developments in this direction would allow a new renormalization group interpretation of SOC (cf. \cite{diaz1994dynamic, ansari2008self}). \end{direction} \begin{direction} Rastelli and von K\"anel studied nanometer-sized three-dimensional islands formed during epitaxial growth of semiconductors, appearing as faceted pyramids which, comparing Figure~\ref{figpush} in this paper with Figure 3 in~\cite{rastelli2002surface}, seem to be modeled by tropical series. 
It would be very interesting to prove that this is so and to study the consequences of this observation for the modeling of the morphology of such phenomena. This may not be totally unrelated to allometry in biology. Is it possible that the gradient slope model, as in Section 2.5 in \cite{vollmer2017growth}, could be piecewise linear, and that the corner locus slopes could prescribe the type and speed of growth for tissues? \end{direction} \begin{direction} Study the statistical distributions of the coefficients of the tropical series in Figure~\ref{fig:PhaseTransition}(C) of our paper. Explain why the slopes are mostly of directions $(0,1),(1,0),(1,1),(-1,1)$. Is it possible that a concentration of measure phenomenon takes place and that such a type of picture appears with probability one? (We thank Lionel Levine for this question.) \end{direction} \section*{Experimental Data} The data used to produce Figure 9 can be found in \cite{gitsand}. \section*{Acknowledgments} Nikita Kalinin was funded by the SNSF PostDoc.Mobility Grant 168647, supported in part by the Young Russian Mathematics award, and would like to thank Grant FORDECYT-265667 ``Programa para un Avance Global e Integrado de la Matem\'atica Mexicana''. Also, support from the Basic Research Program of the National Research University Higher School of Economics is gratefully acknowledged. Yulieth Prieto was funded by Grant FORDECYT-265667 and by ABACUS (Cinvestav). Mikhail Shkolnikov was supported by the ISTFELLOW program. Finally, Ernesto Lupercio would like to thank the Moshinsky Foundation, Conacyt, FORDECYT-265667, ABACUS, Xiuhcoatl, IMATE-UNAM, the Samuel Gitler International Collaboration Center and the Laboratory of Mirror Symmetry NRU HSE, RF Government grant, ag. No. 14.641.31.0001, and the kind hospitality of the University of Geneva and of the Mathematisches Forschungsinstitut Oberwolfach, where this work started. \emph{In memoriam JL.}
\section{Introduction} The anomalous magnetic moment of the muon $a_{\mu}$ and the running of the electromagnetic coupling $\alpha$ play a fundamental role in searches for physics beyond the Standard Model (SM). For both quantities the hadronic vacuum polarisation (HVP) contribution is a main source of uncertainty in the SM prediction. In particular, at the desired level of accuracy isospin breaking effects in the HVP contribution have to be taken into account~\cite{Gerardin:2020gpp,Aoyama:2020ynm}. In this work, we continue the investigation of isospin breaking effects~\cite{Risch:2017xxe,Risch:2018ozp,Risch:2019xio} making use of Coordinated Lattice Simulations (CLS) $N_{\mathrm{f}}=2+1$ QCD ensembles~\cite{Bruno:2014jqa,Bruno:2016plf,Mohler:2017wnb,Mohler:2020txx} with open and (anti-)periodic temporal boundary conditions~\cite{Luscher:2011kk}. We organise this work as follows: We briefly summarise the setup used for the perturbative treatment of isospin breaking effects~\cite{deDivitiis:2011eh, deDivitiis:2013xla} based on reweighting QCD$_{\text{iso}}$ gauge ensembles and discuss suitable hadronic renormalisation schemes for QCD+QED and QCD$_{\text{iso}}$ inspired by chiral perturbation theory. We recap the formalism for the computation of mesonic two-point functions and for the renormalisation of the local vector current in this framework using QED$_{\mathrm{L}}$~\cite{Hayakawa:2008an} as a finite-volume prescription of QED. We finally discuss isospin breaking effects in the LO-HVP contribution to the muon anomalous magnetic moment as well as in the closely related LO hadronic contributions to the running of the electromagnetic coupling. \section{Inclusion of perturbative isospin breaking effects by reweighting} We briefly summarise our setup for the perturbative treatment of isospin breaking effects. For a detailed description we refer to~\cite{Risch:2018ozp}. 
We consider the space of QCD+QED-like theories parameterised by $\varepsilon = (am_{\mathrm{u}}, am_{\mathrm{d}}, am_{\mathrm{s}},\beta, e^{2})$. For the choice $\varepsilon^{(0)} = (am_{\mathrm{u}}^{(0)}, am_{\mathrm{d}}^{(0)}, am_{\mathrm{s}}^{(0)}, \beta^{(0)}, 0)$ with $am_{\mathrm{u}}^{(0)}= am_{\mathrm{d}}^{(0)}$ we obtain QCD$_{\mathrm{iso}}$ together with a free photon field. In~\cite{Risch:2018ozp} we have shown that QCD+QED can be related to QCD$_{\mathrm{iso}}$ by reweighting via the identity \begin{align} \langle O[U,A,\Psi,\overline{\Psi}] \rangle &= \frac{\langle R[U] \,\langle O[U,A,\Psi,\overline{\Psi}] \rangle_{\mathrm{q}\gamma} \rangle_{\mathrm{eff}}^{(0)}}{\langle R[U] \rangle_{\mathrm{eff}}^{(0)}} & R[U] &= \frac{\exp(-S_{\mathrm{g}}[U])\,Z_{\mathrm{q}\gamma}[U]}{\exp(-S_{\mathrm{g}}^{(0)}[U])\,Z^{(0)}_{\mathrm{q}}[U]}, \label{eq_expectation_value_by_reweighting} \end{align} where $\langle \ldots \rangle_{\mathrm{eff}}^{(0)}$ is evaluated by making use of existing QCD$_{\mathrm{iso}}$ gauge configurations. $\left\langle \ldots \right\rangle_{\mathrm{q}\gamma}$ denotes the QED expectation value on a QCD background gauge field and $Z_{\mathrm{q}\gamma}[U]$ is the corresponding partition function, whereas $Z_{\mathrm{q}}^{(0)}[U]$ denotes the partition function of isosymmetric quarks on a QCD background gauge field, i.e. the quark determinant of QCD$_{\mathrm{iso}}$. We evaluate $R[U]$ by means of perturbation theory in $\Delta\varepsilon=\varepsilon-\varepsilon^{(0)}$ around $\varepsilon^{(0)}$. The required Feynman rules are discussed in~\cite{Risch:2018ozp}. In order to fix the expansion coefficients $\Delta\varepsilon$ we make use of a suitable hadronic renormalisation scheme discussed in the next section. \section{Hadronic renormalisation scheme for QCD+QED and QCD$_{\text{iso}}$} Masses of pseudo-scalar mesons can be computed in chiral perturbation theory including the electromagnetic interaction. 
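For orientation, writing $R[U]=1+\sum_{l}\Delta\varepsilon_{l}R^{(1)}_{l}[U]+O(\Delta\varepsilon^{2})$ with $R^{(1)}_{l}=\partial R/\partial\varepsilon_{l}\big|_{\varepsilon^{(0)}}$, and letting $O^{(1)}_{l}$ collect the explicit $\varepsilon$-dependence of the observable, the first-order expansion of \cref{eq_expectation_value_by_reweighting} takes the schematic form (our spelling-out of the standard reweighting expansion, not an equation from the text):

```latex
\begin{align}
\langle O \rangle
= \langle O^{(0)} \rangle_{\mathrm{eff}}^{(0)}
+ \sum_{l} \Delta\varepsilon_{l}
  \Big( \langle O^{(1)}_{l} \rangle_{\mathrm{eff}}^{(0)}
      + \langle O^{(0)} R^{(1)}_{l} \rangle_{\mathrm{eff}}^{(0)}
      - \langle O^{(0)} \rangle_{\mathrm{eff}}^{(0)}
        \langle R^{(1)}_{l} \rangle_{\mathrm{eff}}^{(0)} \Big)
+ O(\Delta\varepsilon^{2}).
\end{align}
```

The disconnected subtraction originates from the normalisation $\langle R \rangle_{\mathrm{eff}}^{(0)}$ in the denominator of \cref{eq_expectation_value_by_reweighting}; the same connected-minus-disconnected structure appears below in the correction $C^{(1)}_{\Delta\beta}$ to the two-point function.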
Defining the average light quark mass $\hat{m} = \frac{1}{2}(m_{\mathrm{u}}+m_{\mathrm{d}})$ and the $\pi^{0}$-$\eta$ mixing angle $\varepsilon = \frac{\sqrt{3}}{4}\frac{m_{\mathrm{d}}-m_{\mathrm{u}}}{m_{\mathrm{s}}-\hat{m}}$, the lowest-order contributions to the pseudo-scalar meson masses at $O(e^{2}p^{0})$ and $O(\varepsilon)$ are given by~\cite{Neufeld:1995mu} \begin{align} m_{\pi^{+}}^{2} &= 2B\hat{m}+2e^{2}ZF^{2}, & m_{K^{+}}^{2} &= B\Big((m_{\mathrm{s}}+\hat{m})-\frac{2\varepsilon}{\sqrt{3}}(m_{\mathrm{s}}-\hat{m})\Big)+2e^{2}ZF^{2}, \nonumber \\ m_{\pi^{0}}^{2} &= 2B\hat{m}, & m_{K^{0}}^{2} &= B\Big((m_{\mathrm{s}}+\hat{m})+\frac{2\varepsilon}{\sqrt{3}}(m_{\mathrm{s}}-\hat{m})\Big), \end{align} where $F$ is the pion decay constant in the chiral limit, $B$ the vacuum condensate parameter and $Z$ a dimensionless coupling constant. The linear combinations $m_{\pi^{0}}^{2} = B(m_{\mathrm{u}}+m_{\mathrm{d}})$, $m_{K^{+}}^{2}+m_{K^{0}}^{2}-m_{\pi^{+}}^{2} = 2Bm_{\mathrm{s}}$ and $m_{K^{+}}^{2}-m_{K^{0}}^{2}-m_{\pi^{+}}^{2}+m_{\pi^{0}}^{2} = B(m_{\mathrm{u}}-m_{\mathrm{d}})$ serve as proxies for the average light quark mass, the strange quark mass and the light quark mass splitting. Making use of the fact that at leading order $\alpha_{\mathrm{em}}$ does not renormalise, i.e. 
$\alpha_{\mathrm{em}}=\frac{e^{2}}{4\pi}$, we use the above expressions to define a hadronic renormalisation scheme for QCD+QED: \begin{align} &(m_{\pi^{0}}^{2})^{\mathrm{QCD+QED}} = (m_{\pi^{0}}^{2})^{\mathrm{phys}}, \quad\quad\quad (m_{K^{+}}^{2} + m_{K^{0}}^{2} - m_{\pi^{+}}^{2})^{\mathrm{QCD+QED}} = (m_{K^{+}}^{2} + m_{K^{0}}^{2} - m_{\pi^{+}}^{2})^{\mathrm{phys}}\label{eq:QCDQEDscheme}, \\ &(m_{K^{+}}^{2}-m_{K^{0}}^{2}-m_{\pi^{+}}^{2}+m_{\pi^{0}}^{2})^{\mathrm{QCD+QED}} = (m_{K^{+}}^{2}-m_{K^{0}}^{2}-m_{\pi^{+}}^{2}+m_{\pi^{0}}^{2})^{\mathrm{phys}}, \,\,\, (\alpha_{\mathrm{em}})^{\mathrm{QCD+QED}} = (\alpha_{\mathrm{em}})^{\mathrm{phys}}.\nonumber \end{align} The superscript ``phys'' indicates the experimentally measured value, whereas ``QCD+QED'' refers to the theoretical prediction. Forming appropriate linear combinations, the above scheme is equivalent to matching the quantities $m_{\pi^{0}}^{2}$, $m_{K^{0}}^{2}$ and $m_{K^{+}}^{2} - m_{\pi^{+}}^{2}$ instead. Optionally, $m_{\pi^{+}}^{2}-m_{\pi^{0}}^{2} = 2e^{2}ZF^{2}$ can be used as a proxy for $\alpha_{\mathrm{em}}=\frac{e^{2}}{4\pi}$, such that one obtains a scheme based on $m_{\pi^{0}}^{2}$, $m_{\pi^{+}}^{2}$, $m_{K^{0}}^{2}$ and $m_{K^{+}}^{2}$. In addition, it is possible to introduce a scheme for QCD$_{\text{iso}}$, which is characterised by a vanishing electromagnetic coupling and identical up- and down-quark masses. In this case, in \cref{eq:QCDQEDscheme} only two proxies for the quark masses remain: \begin{align} \big(m_{\pi^{0}}^{2}\big)^{\mathrm{QCD}_{\mathrm{iso}}} &= \big(m_{\pi^{0}}^{2}\big)^{\mathrm{phys}}, & \big(m_{K^{+}}^{2} + m_{K^{0}}^{2} - m_{\pi^{+}}^{2}\big)^{\mathrm{QCD}_{\mathrm{iso}}} &= \big(m_{K^{+}}^{2} + m_{K^{0}}^{2} - m_{\pi^{+}}^{2}\big)^{\mathrm{phys}}. 
\end{align} Combining the latter equations and making use of the fact that the pions and kaons become mass degenerate, respectively, one finds for the squared isosymmetric pion and kaon masses~\cite{Blum:discussion2021workshop}: \begin{align} \big(m_{\pi}^{2}\big)^{\mathrm{QCD}_{\mathrm{iso}}} &= \big(m_{\pi^{0}}^{2}\big)^{\mathrm{phys}}, & \big(m_{K}^{2}\big)^{\mathrm{QCD}_{\mathrm{iso}}} &= \frac{1}{2}\big(m_{K^{+}}^{2} + m_{K^{0}}^{2} - m_{\pi^{+}}^{2} + m_{\pi^{0}}^{2}\big)^{\mathrm{phys}}. \end{align} Isospin breaking effects of an observable $O$ can now be quantified by comparing the predictions $(O)^{\mathrm{QCD+QED}}$ and $(O)^{\mathrm{QCD}_{\mathrm{iso}}}$. Similarly, a scheme for QCD is obtained when demanding a vanishing electromagnetic coupling in \cref{eq:QCDQEDscheme}. These chiral-perturbation-theory-inspired schemes have the advantage of being based purely on pseudo-scalar meson masses and are therefore, in contrast to schemes based on renormalised quark masses~\cite{deDivitiis:2013xla}, easy to handle. Since the limited number of gauge ensembles considered so far does not yet allow for an extrapolation to the physical point, we match QCD+QED and QCD$_{\mathrm{iso}}$ on each ensemble. Neglecting isospin breaking effects in the scale setting, we match the proxies for the average light and strange quark masses in both theories and set the light quark mass difference and the electromagnetic coupling to their physical values. 
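In practice this matching amounts to solving a small linear system: with $\Delta\beta$ and $e^{2}$ fixed, three conditions determine the three quark-mass shifts $(a\Delta m_{\mathrm{u}}, a\Delta m_{\mathrm{d}}, a\Delta m_{\mathrm{s}})$. A generic sketch of such a solve (ours; the matrix entries and right-hand side below are invented placeholders, not lattice data):

```python
# Sketch (ours): Gauss-Jordan elimination with partial pivoting for the
# 3x3 matching system.

def solve3(A, b):
    """Solve A x = b for a 3x3 matrix A given as nested lists."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][3] / M[r][r] for r in range(3)]

# Rows: derivatives of the three mass-combination proxies with respect to
# (a*Dm_u, a*Dm_d, a*Dm_s); right-hand side: physical targets minus the
# contributions of the fixed e^2 term. All numbers are illustrative.
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]
b = [3.0, 5.0, 4.0]
shifts = solve3(A, b)
```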
Applying the leading-order perturbative expansion $am_{H}=(am_{H})^{(0)}+\sum_{l}\Delta\varepsilon_{l}(am_{H})^{(1)}_{l}+O(\Delta\varepsilon^{2})$ for $H=\pi^{0},\pi^{+},K^{0},K^{+}$, this scheme translates into a system of linear equations that determines the expansion coefficients $\Delta\varepsilon=(a\Delta m_{\mathrm{u}}, a\Delta m_{\mathrm{d}}, a\Delta m_{\mathrm{s}},\Delta\beta, e^{2})$: \begin{align} &\sum_{l} \Delta\varepsilon_{l} \Big((am_{\pi^{0}})^{(0)} (am_{\pi^{0}})^{(1)}_{l}\Big) = 0, \quad\quad\quad\quad\quad\quad \Delta\varepsilon_{\Delta\beta} = 0, \quad\quad\quad\quad\quad\quad \Delta\varepsilon_{e^{2}} = 4\pi\alpha_{\mathrm{em}}, \nonumber \end{align} \begin{align} &\sum_{l} \Delta\varepsilon_{l} \Big((am_{K^{+}})^{(0)} (am_{K^{+}})^{(1)}_{l} + (am_{K^{0}})^{(0)} (am_{K^{0}})^{(1)}_{l} - (am_{\pi^{+}})^{(0)} (am_{\pi^{+}})^{(1)}_{l}\Big) = 0, \nonumber \\ &\sum_{l} \Delta\varepsilon_{l} \Big((am_{K^{+}})^{(0)}(am_{K^{+}})^{(1)}_{l}-(am_{K^{0}})^{(0)} (am_{K^{0}})^{(1)}_{l}-(am_{\pi^{+}})^{(0)} (am_{\pi^{+}})^{(1)}_{l}+(am_{\pi^{0}})^{(0)}(am_{\pi^{0}})^{(1)}_{l}\Big) \nonumber \\ &= \frac{1}{2} \big(a^{(0)}\big)^{2}\big(m_{K^{+}}^{2}-m_{K^{0}}^{2}-m_{\pi^{+}}^{2}+m_{\pi^{0}}^{2}\big)^{\mathrm{phys}}. 
\end{align} \section{Mesonic two-point correlation function} To determine pseudo-scalar meson masses required for the hadronic renormalisation scheme as well as to compute the HVP function we consider zero-momentum projected mesonic two-point functions for the operator combinations $(\mathcal{M}_{2},\mathcal{M}_{1})=(\mathcal{P},\mathcal{P}),(\mathcal{V}_{\mathrm{l}},\mathcal{V}_{\mathrm{l}}),(\mathcal{V}_{\mathrm{c}},\mathcal{V}_{\mathrm{l}})$: \begin{align} C(x_{2}^{0},x_{1}^{0}) &=\frac{a^{6}}{|\Lambda_{123}|}\sum_{\vec{x_{2}},\vec{x_{1}}}\langle \mathcal{M}_{2}^{x_{2}}\mathcal{M}_{1}^{x_{1}} \rangle = \langle \mathcal{M}_{2}^{x_{2}^{0}}\mathcal{M}_{1}^{x_{1}^{0}} \rangle &\text{with}\quad&\mathcal{M}^{x^{0}}_{i}=\frac{a^{3}}{\sqrt{|\Lambda_{123}|}}\sum_{\vec{x}}\mathcal{M}^{x}_{i}. \label{label1} \end{align} $|\Lambda_{123}|$ denotes the spatial volume of the lattice. The pseudo-scalar density operator is defined as $\mathcal{P}^{x i} = \overline{\Psi}{}^{x}\Lambda^{i}\gamma^{5}\Psi{}^{x}$, where $\Lambda^{i}$ determines the flavour content. We make use of two lattice discretisations of the vector current: the ultra-local discretisation $\mathcal{V}_{\mathrm{l}}^{x\mu i} = \overline{\Psi}{}^{x}\Lambda^{i}\gamma^{\mu}\Psi{}^{x}$ and the conserved discretisation $\mathcal{V}_{\mathrm{c}}^{x\mu i} = \frac{1}{2}\big(\overline{\Psi}{}^{x+a\hat{\mu}}(W^{x\mu})^{\dagger}\Lambda^{i}(\gamma^{\mu}+\mathds{1})\Psi{}^{x}+\overline{\Psi}{}^{x}\Lambda^{i}(\gamma^{\mu}-\mathds{1})W^{x\mu}\Psi{}^{x+a\hat{\mu}}\big)$~\cite{Risch:2019xio}, which fulfils the lattice vector Ward identity in QCD+QED for diagonal $\Lambda^{i}$ and which depends on the combined QCD+QED gauge links $W^{x\mu} = U^{x\mu} e^{\mathrm{i} a e Q A^{x\mu}}$. Treating isospin breaking perturbatively, correlation functions are expanded according to $C=C^{(0)}+\sum_{l}\Delta\varepsilon_{l}C^{(1)}_{l}+O(\Delta\varepsilon^{2})$. As a consequence, operators also have to be expanded in $e$, i.e. 
$\mathcal{O}=\mathcal{O}^{(0)}+e\mathcal{O}^{(\frac{1}{2})}+\frac{1}{2}e^{2}\mathcal{O}^{(1)}+O(e^{3})$. The expansions of $\mathcal{V}_{\mathrm{c}}$ in $e$ can be found in~\cite{Risch:2019xio}. Combining \cref{eq_expectation_value_by_reweighting} and \cref{label1}, the quark-connected 0th and 1st order contributions to the mesonic two-point functions read \begin{align} C{}^{(0)} &= \Big\langle \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con0.pdf} \end{gathered} \Big\rangle_{\mathrm{eff}}^{(0)}, \quad\quad\quad\quad C{}^{(1)}_{\Delta m_{f}} = \Big\langle \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_det1f.pdf} \end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_det2f.pdf} \end{gathered} \Big\rangle_{\mathrm{eff}}^{(0)}, \nonumber \\ C{}^{(1)}_{\Delta \beta} &= \begin{aligned}[t] &\Big\langle \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con0.pdf} \end{gathered} \begin{gathered} \includegraphics[width=1.5em]{diagrams/vertex_beta.pdf} \end{gathered} \Big\rangle_{\mathrm{eff}}^{(0)} - \Big\langle \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con0.pdf} \end{gathered} \Big\rangle_{\mathrm{eff}}^{(0)} \Big\langle \begin{gathered} \includegraphics[width=1.5em]{diagrams/vertex_beta.pdf} \end{gathered} \Big\rangle_{\mathrm{eff}}^{(0)}, \end{aligned} \nonumber \\ C{}^{(1)}_{e^{2}} &= \Big\langle \begin{aligned}[t] & \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_bow1.pdf} \end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_bow2.pdf} \end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_tad1.pdf} \end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_tad2.pdf} \end{gathered} \\ &+ \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_exch.pdf} \end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_op21.pdf} 
\end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_op22.pdf} \end{gathered} + \begin{gathered} \includegraphics[width=6.5em]{diagrams/mes2pt_con1_op2.pdf} \end{gathered} \Big\rangle_{\mathrm{eff}}^{(0)}. \end{aligned} \label{eq:qconmes2pt} \end{align} We evaluate the diagrams by means of stochastic $U(1)$ quark sources with support on a single time-slice and $Z_{2}$ photon sources to estimate the all-to-all photon propagator in Coulomb gauge~\cite{Risch:2018ozp}. The photon boundary conditions are chosen in accordance with the gauge field boundary conditions of the QCD$_{\mathrm{iso}}$ ensembles. For temporal periodic gauge ensembles we use periodic boundary conditions for the photon field, whereas for open temporal boundary conditions we apply homogeneous Dirichlet and Neumann boundary conditions~\cite{Risch:2018ozp}. In order to reduce the stochastic noise we apply covariant approximation averaging~\cite{Shintani:2014vja} in combination with the truncated solver method~\cite{Bali:2009hu}. The simulation code is based on the QDP++~\cite{Edwards:2004sx} and FFTW3~\cite{FFTW05} libraries and the openQCD~\cite{Luscher:2014} framework. We have performed simulations on three gauge ensembles listed in \cref{table_lattice_parameters}. 
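The pseudo-scalar masses entering the matching conditions are extracted from the large-time behaviour of the two-point functions defined above. A minimal sketch of an effective-mass determination on synthetic data (single-state correlator, no backward-propagating contribution; all numbers are illustrative, not ensemble values):

```python
import numpy as np

T = 32
a_m = 0.25                      # lattice meson mass in lattice units (illustrative)
t = np.arange(T)
C = 1.3e-3 * np.exp(-a_m * t)   # single-state model of C(x2^0, x1^0)

# Effective mass from the ratio of neighbouring time slices; on a lattice
# with temporal periodicity one would instead solve the cosh ratio numerically.
m_eff = np.log(C[:-1] / C[1:])
```

On this noise-free toy correlator the effective mass is exactly flat at the input value; on real data a plateau sets in once excited-state contributions have decayed.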
\begin{table} \begin{centering} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & $(L/a)^3\times T/a$ & $a\,[\text{fm}]$ & $m_{\pi}\,[\text{MeV}]$ & $m_{K}\,[\text{MeV}]$ & $m_{\pi}L$ & $L\,[\text{fm}]$ & boundary \\ \hline N200 & $48^3\times128$ & $0.06426(76)$ & $282(3)$ & $463(5)$ & 4.4 & 3.1 & open \\ D450 & $64^3\times128$ & $0.07634(97)$ & $217(3)$ & $476(6)$ & 5.4 & 4.9 & periodic \\ H102 & $32^3\times96$ & $0.08636(10)$ & $354(5)$ & $438(4)$ & 5.0 & 2.8 & open \\ \hline \end{tabular} \caption{Parameters of CLS ensembles with $N_{\mathrm{f}}=2+1$ quark flavours of non-perturbatively O(a) improved Wilson quarks and tree-level improved L\"uscher-Weisz gauge action~\cite{Bruno:2014jqa, Bruno:2016plf}.} \label{table_lattice_parameters} \end{centering} \end{table} \section{Renormalisation of the local vector current} In QCD+QED the flavour-diagonal bare vector currents $\mathcal{V}_{d} = (\mathcal{V}^{0}_{d},\mathcal{V}^{3}_{d},\mathcal{V}^{8}_{d})$ with $\Lambda^{0} = \frac{1}{\sqrt{6}}\mathds{1}$, $\Lambda^{3} = \frac{1}{2}\lambda^{3}$ and $\Lambda^{8} = \frac{1}{2}\lambda^{8}$ for the local and conserved discretisations $d=\mathrm{l},\mathrm{c}$ may undergo mixing~\cite{Risch:2019xio}. We therefore introduce renormalisation factor matrices $Z_{\mathcal{V}_{d,\mathrm{R}}\mathcal{V}_{d}}=\Big(Z_{\mathcal{V}_{d,\mathrm{R}}^{i_{2}}\mathcal{V}_{d}^{i_{1}}}\Big)_{i_{2},i_{1}=0,3,8}$, such that the renormalised vector currents expressed in terms of the bare currents read $ \mathcal{V}_{d,\mathrm{R}}= Z_{\mathcal{V}_{d,\mathrm{R}}\mathcal{V}_{d}} \mathcal{V}_{d}$ for $d = \mathrm{l},\mathrm{c}$. For the conserved vector current $\mathcal{V}_{\mathrm{c}}$ we assume that mixing is absent and the renormalisation trivial due to the existence of a lattice vector Ward identity~\cite{Risch:2019xio,peskin1997introduction}, i.e. $Z_{\mathcal{V}_{\mathrm{c},\mathrm{R}}\mathcal{V}_{\mathrm{c}}}=\mathds{1}$. For a critical account on this assumption we refer to~\cite{Collins:2005nj}. 
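In practice, the mixing matrix for the local current can be estimated from a ratio of conserved-local and local-local correlator matrices at large time separations, as detailed below. A synthetic sketch of such a matrix ratio in the flavour indices $(0,3,8)$, where the mixing pattern and correlator values are illustrative placeholders, not measured results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative renormalisation/mixing matrix (not a measured value).
Z_true = np.array([[0.75, 0.01, 0.00],
                   [0.01, 0.74, 0.02],
                   [0.00, 0.02, 0.76]])

# Synthetic local-local correlator matrix at fixed large source/sink times,
# with a small off-diagonal perturbation standing in for mixing effects.
C_ll = np.eye(3) * 2.0e-4 + 1.0e-6 * rng.standard_normal((3, 3))

# At large time separations the conserved-local correlator satisfies
# C_cl = Z C_ll exactly in this toy model.
C_cl = Z_true @ C_ll

# Effective renormalisation matrix from the ratio of the two correlators.
Z_eff = C_cl @ np.linalg.inv(C_ll)
```

In the toy model the ratio reproduces the input matrix up to floating-point error; on real data one looks for a plateau of this ratio in the time separations.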
We impose the renormalisation condition~\cite{Maiani:1986yj} $\langle 0 | \mathcal{V}_{\mathrm{c},\mathrm{R}} | V \rangle = \langle 0 | \mathcal{V}_{\mathrm{l},\mathrm{R}} | V \rangle$ for a low-energy vector state $|V \rangle$. Defining the matrix of correlation functions $\langle \mathcal{V} \mathcal{V} \rangle = (\langle \mathcal{V}^{i_{2}} \mathcal{V}^{i_{1}} \rangle)_{i_{2},i_{1}=0,3,8}$ we may express this relation in terms of renormalised zero-momentum projected correlation functions: \begin{align} \langle \mathcal{V}_{\mathrm{c},\mathrm{R}}^{x_{2}^{0}} \mathcal{V}_{\mathrm{l},\mathrm{R}}^{x_{1}^{0}} \rangle &\rightarrow \langle \mathcal{V}_{\mathrm{l},\mathrm{R}}^{x_{2}^{0}} \mathcal{V}_{\mathrm{l},\mathrm{R}}^{x_{1}^{0}} \rangle \quad\text{for}\quad T\gg x_{2}^{0} \gg x_{1}^{0} \gg 0. \end{align} Using the renormalisation relation of the vector currents, $Z_{\mathcal{V}_{\mathrm{c},\mathrm{R}}\mathcal{V}_{\mathrm{c}}}=\mathds{1}$ and multiplying by $(Z_{\mathcal{V}_{\mathrm{l},\mathrm{R}}\mathcal{V}_{\mathrm{l}}})^{-1}$ from the right, this condition becomes~\cite{Risch:2019xio} \begin{align} \langle \mathcal{V}_{\mathrm{c}}^{x_{2}^{0}} \mathcal{V}_{\mathrm{l}}^{x_{1}^{0}} \rangle &\rightarrow Z_{\mathcal{V}_{\mathrm{l},\mathrm{R}}\mathcal{V}_{\mathrm{l}}} \,\langle \mathcal{V}_{\mathrm{l}}^{x_{2}^{0}} \mathcal{V}_{\mathrm{l}}^{x_{1}^{0}} \rangle \quad\text{for}\quad T\gg x_{2}^{0} \gg x_{1}^{0} \gg 0. 
\end{align} Hence, we may extract the renormalisation factor matrix for the spatial vector currents from~\cite{Risch:2019xio} \begin{align} Z_{\mathrm{eff},\mathcal{V}_{\mathrm{l},\mathrm{R}}\mathcal{V}_{\mathrm{l}}}(x_{2}^{0},x_{1}^{0}) &= \Bigg(\frac{1}{3}\sum_{\mu=1}^{3} \langle \mathcal{V}_{\mathrm{c}}^{x_{2}^{0}\mu} \mathcal{V}_{\mathrm{l}}^{x_{1}^{0}\mu} \rangle\Bigg)\,\Bigg(\frac{1}{3}\sum_{\mu=1}^{3}\langle \mathcal{V}_{\mathrm{l}}^{x_{2}^{0}\mu} \mathcal{V}_{\mathrm{l}}^{x_{1}^{0}\mu} \rangle\Bigg)^{-1}, \end{align} which satisfies $Z_{\mathrm{eff},\mathcal{V}_{\mathrm{l},\mathrm{R}}\mathcal{V}_{\mathrm{l}}}(x_{2}^{0},x_{1}^{0}) \rightarrow Z_{\mathcal{V}_{\mathrm{l},\mathrm{R}}\mathcal{V}_{\mathrm{l}}}$ for $T\gg x_{2}^{0} \gg x_{1}^{0} \gg 0$ in the limit of large time separations, where lattice artefacts become small. We further perform a perturbative expansion $Z_{\mathcal{V}_{\mathrm{R}}\mathcal{V}} = (Z_{\mathcal{V}_{\mathrm{R}}\mathcal{V}})^{(0)} + \sum_{l}\Delta\varepsilon_{l} \,(Z_{\mathcal{V}_{\mathrm{R}}\mathcal{V}})^{(1)}_{l} + O(\Delta\varepsilon^{2})$. For the results of the extracted renormalisation factors we refer to~\cite{Risch:2019xio}. From the bare currents $\mathcal{V}_{d}$ and the renormalisation factor matrix $Z_{\mathcal{V}_{d,\mathrm{R}}\mathcal{V}_{d}}$ we construct the renormalised electromagnetic current $\mathcal{V}{}_{d,\mathrm{R}}^{\gamma}$ defined as \begin{align} \mathcal{V}{}_{d,\mathrm{R}}^{\gamma} &= \mathcal{V}{}_{d,\mathrm{R}}^{3} + \frac{1}{\sqrt{3}} \mathcal{V}{}_{d,\mathrm{R}}^{8} = \sum_{i=0,3,8} \Big(Z_{\mathcal{V}_{d,\mathrm{R}}^{3}\mathcal{V}_{d}^{i}} + \frac{1}{\sqrt{3}} Z_{\mathcal{V}_{d,\mathrm{R}}^{8}\mathcal{V}_{d}^{i}}\Big)\,\mathcal{V}{}_{d}^{i} \quad\text{for}\quad d = \mathrm{l},\mathrm{c}. 
\end{align} \section{The LO-HVP contribution to the muon anomalous magnetic moment $a_{\mu}$} \label{sec:amu} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{plots/cor_vcem123renvlem123renqcon_integrandamuhvp0_reconstruct.pdf} \includegraphics[width=0.49\linewidth]{plots/cor_vcem123renvlem123renqcon_amuhvp0_reconstruct_switchscuts.pdf} \caption{Left: Integrand $\langle \mathcal{V}^{\gamma}_{\mathrm{c},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle^{(0)}\cdot \tilde{K}$ in red and its reconstruction in blue in lattice units on N200. $1\,\text{fm} = 15.5(1)\,a$. Right: The corresponding $(a_{\mu}^{\mathrm{HVP}})^{(0)}$ as a function of $x^{0}_{\mathrm{swi}}$ and $x^{0}_{\mathrm{cut}}$.} \label{fig:amuhvp0} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{plots/cor_vcem123renvlem123renqcon_integrandamuhvp1_reconstruct_dmu.pdf} \includegraphics[width=0.49\linewidth]{plots/cor_vcem123renvlem123renqcon_amuhvp1_reconstruct_switchscuts_dmu.pdf} \includegraphics[width=0.49\textwidth]{plots/cor_vcem123renvlem123renqcon_integrandamuhvp1_reconstruct_dmd.pdf} \includegraphics[width=0.49\linewidth]{plots/cor_vcem123renvlem123renqcon_amuhvp1_reconstruct_switchscuts_dmd.pdf} \includegraphics[width=0.49\textwidth]{plots/cor_vcem123renvlem123renqcon_integrandamuhvp1_reconstruct_dms.pdf} \includegraphics[width=0.49\linewidth]{plots/cor_vcem123renvlem123renqcon_amuhvp1_reconstruct_switchscuts_dms.pdf} \includegraphics[width=0.49\textwidth]{plots/cor_vcem123renvlem123renqcon_integrandamuhvp1_reconstruct_e2.pdf} \includegraphics[width=0.49\linewidth]{plots/cor_vcem123renvlem123renqcon_amuhvp1_reconstruct_switchscuts_e2.pdf} \caption{Left: Integrand $\langle \mathcal{V}^{\gamma}_{\mathrm{c},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}}\rangle^{(1)}_{l}\cdot \tilde{K}$ for $l=\Delta m_{\upq},\Delta m_{\downq},\Delta m_{\strangeq},e^{2}$ in red and its reconstruction in blue in lattice units on N200.
$1\,\text{fm} = 15.5(1)\,a$. Right: The corresponding $(a_{\mu}^{\mathrm{HVP}})^{(1)}_{l}$ as a function of $x^{0}_{\mathrm{swi}}$ and $x^{0}_{\mathrm{cut}}$.} \label{fig:amuhvp1} \end{figure} \begin{table} \begin{subtable}[c]{\textwidth} \begin{center} \subcaption*{$a_{\mu}^{\mathrm{HVP}}$ from $\langle \mathcal{V}^{\gamma}_{\mathrm{c},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle$} \vspace{-0.2cm} \begin{tabular}{|l|l|l|l|l|} \hline & $(a_{\mu}^{\mathrm{HVP}})^{(0)}\,[10^{10}]$ & $(a_{\mu}^{\mathrm{HVP}})^{(1)}\,[10^{10}]$ & $a_{\mu}^{\mathrm{HVP}}\,[10^{10}]$ & $(a_{\mu}^{\mathrm{HVP}})^{(1)}/(a_{\mu}^{\mathrm{HVP}})^{(0)}$ \\ \hline N200 & $488(9)_{\mathrm{{st}}}(10)_{\mathrm{{a}}}[14]$ & $-0.6[7]$ & $487(9)_{\mathrm{{st}}}(10)_{\mathrm{{a}}}[13]$ & $-0.0012[15]$ \\ D450 & $541(8)_{\mathrm{{st}}}(12)_{\mathrm{{a}}}[15]$ & $0.97[99]$ & $542(9)_{\mathrm{{st}}}(12)_{\mathrm{{a}}}[15]$ & $0.0018[18]$ \\ H102 & $440(4)_{\mathrm{{st}}}(10)_{\mathrm{{a}}}[10]$ & $1.7[4]$ & $441(4)_{\mathrm{{st}}}(10)_{\mathrm{{a}}}[11]$ & $0.0038[8]$ \\ \hline \end{tabular} \end{center} \end{subtable} \begin{subtable}[c]{\textwidth} \begin{center} \subcaption*{$a_{\mu}^{\mathrm{HVP}}$ from $\langle \mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle$} \vspace{-0.2cm} \begin{tabular}{|l|l|l|l|l|} \hline & $(a_{\mu}^{\mathrm{HVP}})^{(0)}\,[10^{10}]$ & $(a_{\mu}^{\mathrm{HVP}})^{(1)}\,[10^{10}]$ & $a_{\mu}^{\mathrm{HVP}}\,[10^{10}]$ & $(a_{\mu}^{\mathrm{HVP}})^{(1)}/(a_{\mu}^{\mathrm{HVP}})^{(0)}$ \\ \hline N200 & $491(8)_{\mathrm{{st}}}(11)_{\mathrm{{a}}}[13]$ & $-0.8[7]$ & $490(8)_{\mathrm{{st}}}(11)_{\mathrm{{a}}}[13]$ & $-0.0016[14]$ \\ D450 & $546(8)_{\mathrm{{st}}}(12)_{\mathrm{{a}}}[15]$ & $1.49[99]$ & $548(8)_{\mathrm{{st}}}(13)_{\mathrm{{a}}}[15]$ & $0.0027[18]$ \\ H102 & $445(4)_{\mathrm{{st}}}(10)_{\mathrm{{a}}}[10]$ & $1.6[4]$ & $447(4)_{\mathrm{{st}}}(10)_{\mathrm{{a}}}[11]$ & $0.0036[8]$ \\ \hline
\end{tabular} \end{center} \end{subtable} \caption{Isosymmetric contribution $(a_{\mu}^{\mathrm{HVP}})^{(0)}$ and first-order correction $(a_{\mu}^{\mathrm{HVP}})^{(1)}$ of the hadronic vacuum polarisation contribution $a_{\mu}^{\mathrm{HVP}}$ from the two discretisations $\langle \mathcal{V}^{\gamma}_{\mathrm{c},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle$ and $\langle \mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle$. The statistical and scale setting errors are labelled with "st" and "a", respectively.} \label{tbl:amuhvp} \end{table} In continuous Euclidean spacetime the LO-HVP contribution $a_{\mu}^{\mathrm{HVP}}$ can be computed from the QCD-connected part of the renormalised vector-vector correlation function by means of the time-momentum representation~\cite{Bernecker:2011gh,Francis:2013fzp,DellaMorte:2017dyu} \begin{align} a_{\mu}^{\mathrm{HVP}}\delta^{\mu_{2}\mu_{1}} &= \Big(\frac{\alpha}{\pi}\Big)^{2} \int_{0}^{\infty} \mathrm{d}x^{0}\, \tilde{K}(x^{0},m_{\mu}) \int \mathrm{d}x^{3}\langle \mathcal{V}^{\gamma x\mu_{2}}_{\mathrm{R}} \mathcal{V}^{\gamma 0\mu_{1}}_{\mathrm{R}}\rangle_{\mathrm{QCD-con}}, \nonumber\\ \tilde{K}(t,m_{\mu}) &= -8\pi^{2}\int_{0}^{\infty} \frac{\mathrm{d}\omega}{\omega}\,\frac{1}{m_{\mu}^{2}}\hat{s}Z(\hat{s})^{3}\,\frac{1-\hat{s}Z(\hat{s})}{1+\hat{s}Z(\hat{s})^{2}}\,\Big(\omega^{2}t^{2}-4\sin^{2}\Big(\frac{\omega t}{2}\Big)\Big), \end{align} \vspace{-0.1em}where $\tilde{K}(x^{0},m_{\mu})$ is the muon mass dependent integration kernel~\cite{DellaMorte:2017dyu}, $Z(\hat{s}) = -\frac{\hat{s}-\sqrt{\hat{s}^{2}+4\hat{s}}}{2\hat{s}}$ and $\hat{s} = \frac{\omega^{2}}{m_{\mu}^{2}}$. In the following, we drop the subscript "QCD-con" as we only consider quark-connected diagrams, cf. \cref{eq:qconmes2pt}. Otherwise, the QCD-disconnected QED-connected part has to be subtracted by hand as it corresponds to a higher order HVP insertion~\cite{Chakraborty:2018iyb}.
We discretise the continuum expression replacing the time integration by a finite summation up to $x^{0}_{\mathrm{cut}}$ and average over the three spatial components of the vector-vector correlation function: \begin{align} a_{\mu}^{\mathrm{HVP}} &= \Big(\frac{\alpha}{\pi}\Big)^{2} a\sum_{x^{0}=0}^{x^{0}_{\mathrm{cut}}} \tilde{K}(x^{0},m_{\mu}) \,\frac{1}{3}\sum_{\mu=1}^{3}\langle \mathcal{V}^{\gamma x^{0}\mu}_{\mathrm{R}} \mathcal{V}^{\gamma 0\mu}_{\mathrm{R}}\rangle. \label{eq:amuhvp} \end{align} \vspace{-0.1em} We treat the noise problem of $\langle \mathcal{V}_{\mathrm{R}}^{\gamma x^{0}}\mathcal{V}_{\mathrm{R}}^{\gamma 0} \rangle$ for large $x^{0}$ by performing a single-state reconstruction $\langle \mathcal{V}_{\mathrm{R}}^{\gamma x^{0}}\mathcal{V}_{\mathrm{R}}^{\gamma 0} \rangle_{\mathrm{rec}} = c \,e^{-m x^{0}}$, where the parameters $c$ and $m$ are determined by a fit. Nevertheless, this is only an effective description as we cannot resolve various low-energy states. We switch between $\langle \mathcal{V}_{\mathrm{R}}^{\gamma x^{0}}\mathcal{V}_{\mathrm{R}}^{\gamma 0} \rangle$ and the reconstruction $\langle \mathcal{V}_{\mathrm{R}}^{\gamma x^{0}}\mathcal{V}_{\mathrm{R}}^{\gamma 0} \rangle_{\mathrm{rec}}$ at $x^{0}_{\mathrm{swi}}$ above which the signal is lost. We perform a perturbative expansion, but neglect isospin breaking effects in the scale $a$ for $am_{\mu}^{\mathrm{phys}}$. \Cref{fig:amuhvp0,fig:amuhvp1} show the isosymmetric and first-order contributions to the integrand in \cref{eq:amuhvp} and the corresponding contributions to $a_{\mu}^{\mathrm{HVP}}$ for N200. The reconstruction is particularly relevant for suppressing the noise at large distances for the isosymmetric contribution, as well as for the first-order contributions with the expansion parameters $\Delta m_{\upq}$, $\Delta m_{\downq}$ and $e^{2}$. Results for the three investigated ensembles are displayed in \cref{tbl:amuhvp}. 
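The windowed sum with single-exponential tail reconstruction can be sketched as follows; the kernel values and correlator parameters below are placeholders for the exact QED kernel and fitted values, chosen only to illustrate the bookkeeping:

```python
import numpy as np

alpha = 1.0 / 137.035999
T = 64
x0 = np.arange(T)

# Illustrative single-state parameters; on real data c and m come from a fit.
c, m = 2.0e-3, 0.35
G_meas = c * np.exp(-m * x0)   # stands in for the (noisy) measured correlator
G_rec  = c * np.exp(-m * x0)   # single-state reconstruction c * exp(-m x0)

# Placeholder kernel values; the true kernel behaves like t^4 at small t.
Ktilde = x0.astype(float) ** 4 * 1.0e-4

# Switch from data to reconstruction at x0_swi, truncate the sum at x0_cut.
x0_swi, x0_cut = 20, 50
G = np.where(x0 < x0_swi, G_meas, G_rec)
mask = x0 <= x0_cut
a_mu = (alpha / np.pi) ** 2 * np.sum(Ktilde[mask] * G[mask])
```

In this noise-free toy the switch has no effect by construction; with real data the reconstruction replaces the correlator precisely where the signal is lost, and the stability of the result under variations of $x^{0}_{\mathrm{swi}}$ and $x^{0}_{\mathrm{cut}}$ is what the right-hand panels of the figures monitor.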
At the given level of statistical accuracy we find compatible results for both lattice discretisations. The scale setting uncertainty dominates the error of $(a_{\mu}^{\mathrm{HVP}})^{(0)}$, and $(a_{\mu}^{\mathrm{HVP}})^{(1)}$ amounts to a correction of $(a_{\mu}^{\mathrm{HVP}})^{(0)}$ smaller than $O(0.5\%)$. The first-order correction $(a_{\mu}^{\mathrm{HVP}})^{(1)}$ is smaller than the error of $(a_{\mu}^{\mathrm{HVP}})^{(0)}$. \section{The LO hadronic contribution to the running of $\alpha_{\mathrm{em}}$} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{plots/ratio_deltaalphahad1coeffsum_deltaalphahad0_cl.pdf} \includegraphics[width=0.49\textwidth]{plots/ratio_deltaalphahad1coeffsum_deltaalphahad0_ll.pdf} \caption{Relative isospin breaking correction to the hadronic contributions $\Delta\alpha^{\mathrm{had}}_{\mathrm{em}}$ to the running of the electromagnetic coupling from the two discretisations $\langle \mathcal{V}^{\gamma}_{\mathrm{c},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle$ and $\langle \mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{l},\mathrm{R}} \rangle$.} \label{fig:runningalpha} \end{figure} The LO hadronic contribution to the running of $\alpha_{\mathrm{em}}$ is related to the subtracted hadronic vacuum polarisation function $\hat{\Pi}(p^{2}) = \Pi(p^{2}) - \Pi(0)$ by $\Delta\alpha_{\mathrm{em}}^{\mathrm{had}}(-p^{2}) = 4\pi\alpha_{\mathrm{em}}\,\hat{\Pi}_{\mathcal{V}^{\gamma}_{\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{R}}}(p^{2})$.
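The time-momentum evaluation of $\hat{\Pi}$ used in this section can be sketched with the kernel $K(\omega^{2},t)=-\frac{1}{\omega^{2}}\big(\omega^{2}t^{2}-4\sin^{2}(\frac{\omega t}{2})\big)$ and a synthetic single-exponential correlator standing in for $\langle \mathcal{V}^{\gamma}_{\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{R}} \rangle$; the overall sign and normalisation depend on the correlator conventions, so only the qualitative momentum dependence matters here:

```python
import numpy as np

T = 64
x0 = np.arange(1, T)                 # skip x0 = 0, where the kernel vanishes
G = 1.5e-3 * np.exp(-0.35 * x0)      # illustrative stand-in for <V V>(x0)

def K(omega2, t):
    """TMR kernel K(omega^2, t) = -(1/omega^2)(omega^2 t^2 - 4 sin^2(omega t / 2))."""
    omega = np.sqrt(omega2)
    return -(t**2 - 4.0 * np.sin(omega * t / 2.0) ** 2 / omega2)

def Pi_hat(omega2):
    """Discretised subtracted vacuum polarisation at Euclidean momentum omega^2."""
    return np.sum(K(omega2, x0) * G)

# Evaluate at a few illustrative lattice momenta (lattice units).
vals = [Pi_hat(w2) for w2 in (0.01, 0.04, 0.09)]
```

Since $|K|$ grows with $\omega^{2}$ and vanishes as $\omega^{2}\to 0$, the subtracted polarisation grows in magnitude with the momentum, mirroring the momentum dependence of the hadronic running.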
We compute $\hat{\Pi}$ in the time-momentum representation~\cite{Bernecker:2011gh} \begin{align} \hat{\Pi}_{\mathcal{V}^{\gamma}_{\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{R}}}(p^2)\,\delta^{\mu_{2}\mu_{1}} &= \int_{0}^{\infty} \mathrm{d}x^{0}\, K(p^{2},x^{0})\int \mathrm{d}x^{3}\, \langle \mathcal{V}^{\gamma x\mu_{2}}_{\mathrm{R}} \mathcal{V}^{\gamma 0\mu_{1}}_{\mathrm{R}}\rangle_{\mathrm{QCD-con}} \end{align} with the kernel function $K(\omega^{2},t) = -\frac{1}{\omega^{2}}(\omega^{2}t^{2}-4\sin^{2}(\frac{\omega t}{2}))$. We treat $\langle \mathcal{V}^{\gamma}_{\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{R}} \rangle$ as in \cref{sec:amu}. \Cref{fig:runningalpha} shows results for the quark-connected contributions depicted in \cref{eq:qconmes2pt}. For both discretisations we find corrections smaller than $O(0.5\%)$ on all investigated ensembles with the largest corrections in the small-momentum regime. \section{Conclusions and Outlook} We introduced a hadronic renormalisation scheme for QCD+QED and QCD$_{\text{iso}}$ inspired by chiral perturbation theory and computed leading isospin breaking effects in the LO-HVP contribution to the anomalous magnetic moment of the muon $a_{\mu}^{\mathrm{HVP}}$ as well as in the LO hadronic contributions to the running of the electromagnetic coupling $\Delta \alpha_{\mathrm{em}}^{\mathrm{had}}$. For both quantities we found corrections smaller than $O(0.5\%)$ on the investigated ensembles. 
In a similar fashion, leading isospin breaking effects can also be computed for hadronic contributions to the running of the weak mixing angle $\Delta\sin^{2}\Theta_{W}^{\mathrm{had}}$~\cite{SanJose:2021apl} based on the correlation function $\langle \mathcal{V}^{Z}_{\mathrm{R}}\mathcal{V}^{\gamma}_{\mathrm{R}} \rangle$, where $\mathcal{V}^{Z}_{\mathrm{R}} = \mathcal{V}^{T_{3}}_{\mathrm{R}} - \sin^{2}\Theta_{W}\mathcal{V}^{\gamma}_{\mathrm{R}}$ and $\mathcal{V}^{T_{3}}_{\mathrm{R}} = -\frac{1}{2\sqrt{6}}\mathcal{V}^{0}_{\mathrm{R}}+\frac{1}{2}\mathcal{V}^{3}_{\mathrm{R}}+\frac{1}{2\sqrt{3}}\mathcal{V}^{8}_{\mathrm{R}}$. To incorporate isospin breaking effects in the scale setting, which has been neglected in this work, we have started to investigate masses of octet and decuplet baryons~\cite{Segner:2021}. Due to the noise problem of the vector-vector correlation function we plan to base the renormalisation procedure of the local vector current on the vector Ward identity~\cite{Gerardin:2018kpy,Gerardin:2019rua}, which allows for a better controlled determination of the renormalisation factors. Additionally, we are aiming for the inclusion of leading order QED finite volume corrections for hadron masses~\cite{Borsanyi:2014jba} as well as for the HVP related observables~\cite{Bijnens:2019ejw}. \vspace{0.5em} \begin{small} We are grateful to our colleagues within the CLS initiative for sharing ensembles. Our calculations were performed on the HPC Cluster "Clover" at the Helmholtz Institute Mainz and on the HPC Cluster "Mogon II" at the University of Mainz. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer JUWELS at J{\"u}lich Supercomputing Centre (JSC) for project CHMZ21. \end{small} \bibliographystyle{JHEP}
\section{Introduction} Stars on the red giant branch undergo copious mass-loss, leading to the formation of a circumstellar envelope (CSE). These envelopes, built up by the ejected material, contain atomic and molecular gas and are characteristic of the post Asymptotic Giant Branch (AGB) evolutionary phase, which results in the formation of a Planetary Nebula (PN). Because of the huge amount of processed material returned to the ISM, this evolutionary phase is very important for the chemical evolution of the Galaxy. Yet, the short transition phase between the end of the AGB and the formation of a new PN is still poorly understood. In particular, it is quite challenging to understand how the almost symmetric CSEs observed around AGB stars transform into the highly structured morphologies observed in high dynamic range optical images of PNe. The importance of studying objects on the way to PNe resides in the fact that in their CSEs the unknown physical mechanisms that shape the PNe are already at work, as HST images of multi-polar outflows in post-AGB stars appear to indicate (Sahai \cite{sahai01}). Despite numerous efforts to identify the shaping agent(s) in PNe (Sahai \& Trauger \cite{sahai98}; Garcia-Segura et al. \cite{garciaseg}), there is no observational evidence that supports one mechanism over the others (Balick \& Frank \cite{balick}). Dust is ubiquitously detected in post-AGB objects and PNe and, quite often, it is in the form of disks and tori. Some models invoke these structures, whose origin is however under debate, as an important ingredient of the mechanism that produces the collimated outflows observed in the CSE (Frank et al. 1997; Huggins et al. \cite{Huggi}).
In this context, important information can be provided by studying the physical properties of CSEs and, in particular, the presence and the spatial distribution of the circumstellar dust, as this may help in finding clues to the nature of the shaping agent. Detailed studies of the spectral energy distribution (SED) of a small sample of optically bright post-AGB candidate stars have shown that these objects can be divided into two groups, depending on the shape of the IR excess (Trams et al. \cite{trams}; van der Veen et al. \cite{vanderveen}): sources with a broad IR excess extending from the near-IR to the far-IR have both hot and cool dust in their circumstellar shells, whereas sources with only a far-IR excess show only the presence of cool dust. The double peaks in the SEDs appear to be characteristic of objects in transition, while the presence of only cool dust seems to indicate objects more evolved towards PNe. \begin{table*} \caption[]{Properties of stars in our sample.
{\it IRAS} fluxes come directly from Point Source Catalog (PSC).} \label{sample} \begin{center} \begin{tabular}{ l c c rrrr} \multicolumn{7}{c}{Sources with Optical Counterpart} \\ \hline \hline & & & & & &\\ IRAS Name & RA(J2000) & DEC(J2000) & F$_{12}$ & F$_{25}$ & F$_{60}$ & F$_{100}$ \\ & & & [Jy] & [Jy] & [Jy] & [Jy] \\ \hline 00210+6221 & 00 23 51.2 & +62:38:16 & 48.5 & 51.9 & 12.5 & $<$23.2\\ 01174+6110 & 01:20:44.9 & +61:26:18 & 4.1 & 16.9 & 33.9 & 4.1\\ 04296+3429 & 04:32:56.6 & +34:36:11 & 12.7 & 45.9 & 15.4 & $<$9.2\\ 05089+0459 & 05:11:36.1 & +05:03:26 & 7.4 & 21.9 & 11.9 & 3.8\\ 06530--0213 & 06:55:32.1 & $-$02:17:30 & 6.1 & 27.4 & 15.1 & 4.1\\ 07134+1005 & 07:16:10.2 & +09:59:48 & 24.5 & 116.7 & 50.1 & 18.7\\ 07331+0021 & 07:35:41.1 & +00:14:58 & 15.3 & 68.1 & 18.5 & 3.7\\ 17436+5003 & 17:44:55.4 & +50:02:39 & 6.1 & 184.0 &152.0 & 48.7\\ 19114+0002 & 19:13:58.6 & +00:07:31 & 31.3 & 648.3 &515.9 &168.1\\ 20000+3239 & 20:01:59.4 & +32:47:32 & 15.0 & 71.0 & 30.0 & $<$43.1 \\ 20028+3910 & 20:04:35.0 & +39:18:38 & 41.8 & 210.8 &143.1 & 46.5\\ 22223+4327 & 22:24:30.6 & +43:43:03 & 2.1 & 37.1 & 22.4 & 9.5\\ 22272+5435 & 22:29:10.3 & +54:51:06 & 73.9 & 302.4 & 96.6 & 41.0\\ 22574+6609 & 22:59:18.4 & +66:25:48 & 9.0 & 29.5 & 10.7 & 2.5\\ 23304+6147 & 23:32:45.0 & +62:03:49 & 11.4 & 59.1 & 26.6 & 7.2\\ \hline & & & & & &\\ \multicolumn{7}{c}{Sources without Optical Counterpart} \\ \hline \hline 07430+1115 & 07:45:49.8 & +11:08:25 & 7.7 & 22.9 & 10.7 & 2.5\\ 18454+0001 & 18:48:01.5 & +00:04:47 & 10.8 & 14.5 & 13.6 &$<$384\\ 18514+0019 & 18:53:57.9 & +00:23:24 & 4.9 & 23.4 & 17.3 &$<$152\\ 18576+0341 & 19:00:11.2 & +03:45:46 & 58.5 &425.0 &274.7 &$<$1660\\ 19024+0044 & 19:05:01.5 & +00:48:48 & 2.9 & 48.8 & 42.5 & 15.7\\ 19075+0432 & 19:10:00.0 & +04:37:06 & 5.2 & 28.1 & 31.8 & 14.4\\ 19454+2920 & 19:47:24.3 & +29:28:12 & 17.3 & 89.6 & 54.4 & 14.7\\ 20144+4656 & 20:15:58.3 & +47:05:39 & 1.2 & 17.0 & 20.0 & $<$85\\ 21537+6435 & 21:55:04.6 & +64:49:54 & 6.9 & 26.1 & 
13.3 & $<$6.1\\ \hline \end{tabular} \end{center} \end{table*} Thermal emission from dusty envelopes may extend up to millimeter wavelengths, depending on the temperature of the dust. In particular, millimetric observations are essential to better determine the SEDs of the sources and thus to put more stringent constraints on the models of their circumstellar envelopes. Moreover, millimetric observations will allow us to assess the presence of multiple dusty shells, to be related to different mass-loss episodes undergone by the stars. In this paper we present the results of a 1.2 mm survey aimed at detecting the millimetric emission from a sample of post-AGB stars for a robust modeling of their circumstellar envelopes. This appears as a first, necessary step in preparation for the use of the new interferometers (i.e. ALMA and CARMA) that, in the very near future, will be able to resolve the circumstellar geometry with unprecedented spatial resolution, for a full understanding of the shaping mechanisms. \section{Observations} \subsection{Sample selection} In the attempt to detect possible millimetric emission due to thermal dust emission from the circumstellar envelope, we observed a sample of post-AGB stars with the 30m IRAM telescope. Since the dusty envelopes show a clear signature in the far-IR spectrum of the stars, the IRAS colour-colour diagram has been successfully used by several authors in systematic searches for post-AGB objects, by looking for sources in-between the locus of the planetary nebulae and that of late-type AGB stars. Garcia-Lario et al. (\cite{garcia}) compiled a list of stellar objects, characterized by a strong far-IR excess, which occupy the same region of the IRAS colour diagram as AGB stars and PNe. The sample contains 126 post-AGB stars. Among those, a large fraction ($70\%$) represents post-AGB stars in a very early stage, still heavily obscured in the optical.
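The expected millimetre flux from a cool dust shell can be estimated by extrapolating the far-IR excess with a modified blackbody, $F_{\nu}\propto\nu^{\beta}B_{\nu}(T_{\mathrm{d}})$. A rough sketch of such an extrapolation from 60 $\mu$m to 1.2 mm, where the dust temperature and emissivity index are illustrative assumptions rather than fitted values:

```python
import numpy as np

# Physical constants (SI units).
h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def bb(nu, T):
    """Planck function B_nu(T), up to an overall constant that cancels in ratios."""
    return nu**3 / np.expm1(h * nu / (k * T))

T_d, beta = 40.0, 1.0        # cool dust temperature and emissivity index (assumed)
nu60  = c / 60e-6            # IRAS 60 micron band
nu12m = c / 1.2e-3           # 1.2 mm observing frequency

# Extrapolate a 10 Jy 60 micron flux (the survey selection threshold) to 1.2 mm.
F60 = 10.0                                                       # Jy
F12mm = F60 * (nu12m / nu60) ** beta * bb(nu12m, T_d) / bb(nu60, T_d)
```

With these assumed parameters the predicted 1.2 mm flux is of order a few tens of mJy, i.e. well above the sensitivity of the bolometer observations, which motivates the ${\mathrm F}_{60} \geq 10$ Jy selection adopted below.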
Another method of looking for transition objects concentrates on optically bright objects with an IR excess due to circumstellar dust (Pottasch and Parthasarathy \cite{Pottasch}; Trams et al. \cite{trams}; Oudmaijer et al. \cite{oud92}). These searches resulted in the detection of objects scattered across the IRAS colour-colour diagram, as they show different amounts of IR excess and different IR colours. Oudmaijer et al. (\cite{oud92}; \cite{oud96}) looked for transition objects with an optical counterpart by performing a cross-correlation of the SAO optical catalogue with the IRAS point-source catalogue, selecting supergiants with spectral type between B and G and with an IR excess due to circumstellar dust. To select a sample of possible targets we used the most complete compilations of objects with no optical counterpart, from Garcia-Lario et al. (\cite{garcia}), and of objects with an optical counterpart, from Oudmaijer et al. (\cite{oud92}; \cite{oud96}), selecting the stars identified as post-AGB, i.e. highly evolved post-AGB stars with low-mass progenitors. We further include in the sub-sample of stars with an optical counterpart 11 stars, originally not classified by Oudmaijer et al. (\cite{oud92}) as post-AGB, for which there is compelling observational evidence that they are in the post-AGB evolutionary stage (van der Veen et al. \cite{vanderveen}; van Winckel \cite{vanwin97}).
\begin{table}[] \caption[]{Measured millimeter fluxes.} \label{flux} \begin{center} \begin{tabular}{ l c r r c } \multicolumn{5}{c}{Sources with Optical Counterpart} \\ \hline \hline & & & & \\ IRAS Name & Date & Time & Flux~ & Weather$^{\dag}$\\ & & [s] ~~& [mJy] & \\ \hline 00210+6221 & 21 Jan 02 & 1200 & $\dots\pm$2.4 & B \\ 01174+6110 & 21 Jan 02 & 1200 & 22.9$\pm$1.7 & B \\ 04296+3429 & 14 Feb 02 & 1200 & 4.4$\pm$1.3 & A \\ 05089+0459 & 05 Feb 02 & 1440 & $\dots\pm$1.0 & A \\ 06530$-$0213 & 14 Feb 02 & 1200 & 4.8$\pm$1.5 & A \\ 07134+1005 & 07 Feb 02 & 1200 & 14.0$\pm$1.5 & A \\ 07331+0021 & 07 Feb 02 & 1200 & $\dots\pm$1.9 & B \\ 17436+5003 & 05 Feb 02 & 1080 & 15.2$\pm$1.1 & A \\ 19114+0002 & 04 Feb 02 & 480 & 68.2$\pm$3.4 & B \\ 20000+3239 & 21 Jan 02 & 1200 & 11.4$\pm$1.7 & B \\ 20028+3910 & 21 Jan 02 & 1200 & 11.9$\pm$1.5 & A \\ 22223+4327 & 04 Feb 02 & 1200 & $\dots\pm$1.5 & A \\ 22272+5435 & 04 Feb 02 & 1200 & 35.3$\pm$1.7 & A \\ 20144+4656 & 21 Jan 02 & 1200 & $\dots\pm$1.8 & B \\ 23304+6147 & 21 Jan 02 & 1200 & $\dots\pm$1.8 & B \\ \hline & & & &\\ \multicolumn{5}{c}{Sources without Optical Counterpart} \\ \hline \hline 07430+1115 & 07 Feb 02 & 1200 & $\dots\pm$1.3 & A \\ 18454+0001 & 22 Jan 02 & 1200 & $\dots\pm$1.4 & A \\ 18514+0019 & 21 Jan 02 & 1200 & 8.2$\pm$1.3 & A \\ 18576+0341 & 03 Feb 02 & 840 & $\ge$45.4$\pm$2.2 & B \\ 19024+0044 & 22 Jan 02 & 1200 & $\dots\pm$1.3 & A \\ 19075+0432 & 04 Feb 02 & 1200 & 6.2$\pm$1.2 & A \\ 19454+2920 & 03 Feb 02 & 1200 & $\dots\pm$4.3 & C \\ 20144+4656 & 05 Feb 02 & 1200 & $\dots\pm$2.7 & C \\ 21537+6435 & 04 Feb 02 & 1200 & 6.3$\pm$1.6 & A \\ \hline \multicolumn{5}{l}{\footnotesize{$^{\dag}$ Weather condition: A) good, B) poor, C) bad.}} \\ \end{tabular} \end{center} \end{table} A small sample of post-AGB stars has been observed in the millimetric band with the JCMT (van der Veen et al.
\cite{vanderveen}) and the detection rate appeared to be well correlated with the F$_{60}$ IRAS flux, in the sense that all the stars with F$_{60}$ $ \geq$ 10 ~Jy were detected. If the extra infrared excess has the shape of a cold black body, as expected from a more distant dust shell, a F$_{60}$ $\sim$ 10 Jy would imply a 1.3 mm flux higher than 36 mJy and thus easily detectable with the new bolometer. In order to maximize the probability of obtaining a detectable flux at 1.2 mm we thus selected, from our original sample, only those sources with ${\mathrm F}_{60} \geq 10$ Jy. This reduces our sample to 34 targets.\par In this paper we report on the results for a subsample of 24 objects, which are given in Table~\ref{sample}. \subsection{The 30m IRAM observations and results} The 37-channel Max-Planck Millimeter Bolometer (``MAMBO''; Kreysa et al. \cite{kreysa}) array at the 30-meter IRAM telescope on Pico Veleta (Spain) was used to perform the survey. The observations were made between 21 January and 7 February 2002, using the standard ON-OFF technique, chopping the secondary mirror of the telescope by about 50$\arcsec$ in azimuth, at a rate of 1 Hz. The FWHP of our beam was 10$\arcsec$.5 at 1.2 mm. For each source the observations were typically obtained in blocks of 4 or 5 scans lasting 4 minutes each. Frequent skydip observations have been used to determine the atmospheric extinction as a function of elevation and time. The data were analyzed using the MOPSI software (Zylka \cite{zylka}). The flux calibration was performed by observing either Mars or Uranus to determine the flux conversion factor. For each channel the sky noise was subtracted by computing the weighted mean of the signals from the surrounding six channels. \begin{figure*} \resizebox{17cm}{!}{\includegraphics{bbody.ps}} \caption{Spectral energy distributions of the emission towards the detected sources. The asterisks indicate measurements from this work. 
The dashed lines show the modified blackbody fits to the long-wavelength data, as discussed in the text (par. 3.1). } \label{bbody} \end{figure*} In table \ref{flux} the resulting 1.2 mm flux densities measured for the detected objects are reported. Out of the 24 observed sources, we obtained 11 detections above the threshold of $4\sigma$ and two uncertain detections at $3\sigma$. Four of the detected and one of the undetected sources overlap with the sample observed by Walmsley et al. (\cite {walmsley}) using the 30m IRAM telescope at the wavelength of 1.3~mm, but with a different bolometer. The measured flux densities are in good agreement, with the exception of IRAS~22272+5435, which shows a significantly higher flux level. The sample includes IRAS~01174+6110 and IRAS~18576+0341, which were first classified as post-AGB stars on the basis of their IRAS colours, but probably have a different nature. IRAS~01174+6110, in fact, is likely an HII region (Kelly et al. \cite{kelly}), while IRAS~18576+0341 has recently been recognized as a new LBV (Pasquali $\&$ Comeron \cite{pasquali}; Clark et al. \cite{clark}; Umana et al. \cite{Umana}). For these two objects we just report the measured flux density. \par In particular, IRAS~18576+0341 will be the object of a future, more detailed analysis combined with high-resolution radio results. Note that Umana et al. (\cite{Umana}) pointed out that the source's right ascension is shifted by about 4$\arcsec$ with respect to values previously reported in the literature (Garcia-Lario \cite{garcia}). Since the first contour level in the 22 GHz map defines a source size close to 10$\arcsec$, it is reasonable to consider the measured 1.2 mm flux as a lower limit. Some of the detected objects in our sample have been detected in the $^{13}$CO and/or $^{12}$CO J=2-1 transition at $\lambda$= 1.3 mm, that is, within the band of our observations (Hrivnak et al. \cite{hriv05}, Bujarrabal et al. \cite{bujar92}, Bujarrabal et al. 
\cite{bujar01}). We verified that the contribution of such emission, spread over our bandwidth of about 80 GHz, is lower than the errors associated with the measurements for all the detected sources except IRAS~19114+0002 and IRAS~22272+5435. In particular, on the basis of the observations by Bujarrabal et al. (\cite{bujar92}), we derived for IRAS~19114+0002 an emission, in the MAMBO band, of 15 mJy from $^{12}$CO and of 1.6 mJy from $^{13}$CO; for IRAS~22272+5435 the contribution of $^{12}$CO emission is about 7 mJy (Hrivnak et al. \cite{hriv05}). In the following calculations, we thus subtracted these contributions from the observed fluxes. \subsection{Spectral energy distributions} In order to build up the spectral energy distributions (SEDs) of the detected sources, updated with our millimetric data, and to investigate their low-frequency shape, we combined our MAMBO observations with infrared (2MASS + IRAS + MSX) and optical data available in the literature. The data are corrected for interstellar extinction, with the exception of IRAS~19075+0432, due to the lack of information about the extinction toward the source. Moreover, we have excluded from this analysis IRAS~01174+6110 because of the uncertainty about its nature. For those objects for which the observations cover a wide enough spectral range, the resulting SEDs (fig.~\ref{bbody}) show the typical double-peaked shape, with the two peaks corresponding respectively to the optical emission from the photosphere of the central star and to the thermal emission from the circumstellar dust. 
\begin{table*} \caption[]{Derived values for the emissivity index, dust mass, absorption coefficient and envelope dust mass.} \label{masstab} \begin{center} \begin{tabular}{ l c c c r c } \multicolumn{6}{c}{Sources with Optical Counterpart} \\ \hline \hline & & & & & \\ IRAS Name & T$_d$ & p & $\chi_{1.3}$ & M$_{d}$ ~~ & d ~~\\ & [K] & & $[cm^2 g^{-1}]$ & [M$_{\odot}$]~~ & [kpc] \\ \hline 07134+1005 & 135 & 1.11 & 1.75 & 8.5 (--4) & 2.4 \\ 17436+5003 & 100 & 1.55 & 0.88 & 6.2 (--4) & 1.2 \\ 19114+0002 & 100 & 1.41 & 1.09 & 4.3 (--2) & 6.0 \\ 20000+3239 & 140 & 0.94 -- 1.55 & 0.88 -- 2.1 & 0.9 -- 2.3 (--4) & 1.0$^{\dag}$\\ 20028+3910 & 100 & 1.65 & 0.75 & 2.4 (--3) & 2.9 \\ 22272+5435 & 145 & 1.03 & 1.98 & 6.2 (--4) & 1.6 \\ \hline & & & & & \\ \multicolumn{6}{c}{Sources without Optical Counterpart} \\ \hline \hline 18514+0019 & 130 & 0.89 -- 2.18 & 0.32 -- 2.5 & 0.6 -- 4.8 (--4) & 1.0$^{\dag}$ \\ 19075+0432 & 85 & 1.37 & 1.2 & 1.5 (--4) & 1.0$^{\dag}$ \\ 21537+6435 & 140 & 0.87 -- 0.97 & 2.18 -- 2.55 & 4.3 -- 5.1 (--5) & 1.0$^{\dag}$ \\ \hline \multicolumn{6}{l}{\footnotesize{$^{\dag}$ Assumed distance.}} \\ \end{tabular} \end{center} \end{table*} \section{Analysis} \subsection{Dust Masses} From a theoretical point of view, the mass of the circumstellar dust surrounding the central object can be derived from the observed continuum millimetric flux. Assuming that the flux density measured at these frequencies is due to thermal emission from optically thin and isothermal dust, the dust mass ($M_\mathrm{d}$) and the flux density are directly proportional (Hildebrand \cite{hildeb}): \begin{displaymath} M_\mathrm{d}= \frac{F(\nu)d^2}{B_{\nu}(T_\mathrm{d}) \chi_{\nu}} \end{displaymath} \noindent where $B_\nu(T_\mathrm{d})$ is the Planck function for dust temperature $T_\mathrm{d}$, $d$ is the distance to the source and $\chi_{\nu}$ is the dust opacity at the observing frequency. 
We can thus estimate the dust mass by using the Rayleigh-Jeans approximation: \begin{equation} M_\mathrm{d}= \frac{F(\nu) \lambda^2 d^2}{2kT_\mathrm{d} \chi_{\nu}}. \label{eq1} \end{equation} The value of $\chi_{\nu}$ is the major uncertainty affecting the conversion of the millimetric flux density into dust mass. Following Hildebrand's approach, we can extrapolate the dust opacity $\chi_{\nu}$ at 1.2 mm from its value at 250 $\mu$m, i.e. $\chi_{250\mu \mathrm{m}} = 10$ cm$^2$ g$^{-1}$, assuming the power-law dependence $\chi_{\nu}\propto\nu^p$, where $p$ is the emissivity index, which strongly depends on the mineralogical composition of the grains and on their physical shape. Under the hypothesis of optically thin emission, the emissivity index $p$ may be derived from the spectral index in the millimetric and submillimetric spectral range, where the dust emits as a blackbody modified by the frequency-dependent dust opacity, that is, $F_{\nu}\propto \chi_{\nu}B_{\nu}(T_\mathrm{d})$. We thus proceeded iteratively by fitting a modified blackbody to the infrared and millimetric data to estimate the dust temperature, while the emissivity index has been derived from a linear fit in the $\log{\nu}$ - $\log\frac{F_{\nu}}{B(T_\mathrm{d})}$ plane to the integrated flux density from 100 $\mu$m to 1.2 mm. In the case of IRAS~19075+0432, we note that the MSX data at 8.28 $\mu$m suggest the presence of both warm and cold circumstellar dust. \begin{figure*} \resizebox{17cm}{!}{\includegraphics{07134.ps}} \caption{{\it Left:} Image of IRAS~07134+1005 observed at 11.9 $\mu$m by Hony et al. (\cite{hony}). {\it Right:} The map simulated at 11.9 $\mu$m using the DUSTY code, assuming the parameters reported in table \ref{pardusty}. The map has been convolved with a circular beam having HPBW of 0$^{\prime \prime}$.83. 
In both maps the contours indicate the 5-95\% intensity levels in steps of 10\%.} \label{ir07} \end{figure*} For sources with only an upper limit on the IRAS 100 $\mu$m flux, the same analysis as above was performed using the IRAS 60 $\mu$m flux to constrain the minimum value of $p$. In fig.~\ref{bbody} the resulting curves are overplotted on the observed SEDs. In table~\ref{masstab} the best-fit parameters are listed, together with the derived dust opacities and masses calculated from eq.~\ref{eq1} for $\lambda$ = 1.2 mm. \subsection{SED modeling with DUSTY code} \begin{figure*} \resizebox{17cm}{!}{\includegraphics{figfit.ps}} \caption{Spectral energy distributions of the emission towards six stars in the sample. The asterisks indicate measurements from this work. The dashed lines show the SEDs computed using the DUSTY code. } \label{fit} \end{figure*} For six of the detected post-AGB stars, the observations span a wide spectral range from the optical to the radio. All the available information on the sources, including the results from SED fits with envelope models, has been collected from the literature, in order to verify the compatibility of the previously determined parameters with our 1.2 mm fluxes. The observed SEDs are plotted in fig. \ref{fit}, along with the best-fit SEDs from the radiative transfer code DUSTY (Ivezi\`{c} et al. \cite{ivez}). The code calculates the radiative transfer through a spherically symmetric dust shell and determines the spectral energy distribution on the basis of specified properties of the radiation source, dust composition and dust shell. In particular, DUSTY allows the use of six different types of dust grains, which are assumed to be distributed in a spherical shell. For all the sources we assumed a Planckian SED for the central star and the modified MRN (Mathis, J.S., Rumpl, W., Nordsieck, K.H. 
\cite{mathis}) power law for the grain size distribution $n(a) \propto a^{-q}$, where $a$ is the grain size and {\it q} is fixed to 3.5, as commonly done, if not otherwise specified in the text. For more details about the other options we refer to the description in the user manual. When available, we calculated the SED by assuming the stellar and dust parameters taken from previous references in the literature; otherwise, the best-fit model parameters were sought iteratively by fitting the shape of the observed SED. To add further observational constraints, we compared the infrared maps of the sources available in the literature with the simulated maps obtained from the code at the same observing frequencies. {\bf All the simulated maps have been convolved with the beam of the instrument used to obtain the map used for comparison}. An example of such a comparison is shown in fig. \ref{ir07} for IRAS~07134+1005. In particular we refer to Hony et al. (\cite{hony}) for IRAS~07134+1005, Gledhill et al. (\cite{gled03}) for IRAS~17436+5003, Jura \& Werner (\cite{jura99}) for IRAS~19114+0002, Ueta et al. (\cite{ueta01}) for IRAS~22272+5435 and to the 2MASS Atlas Images provided by the Infrared Science Archive (bungo.ipac.caltech.edu/applications/2MASS/IM/ interactive.html) for IRAS~20000+3239 and IRAS~20028+3910. We thus rejected all those solutions that involve shell sizes that do not match the observational constraints according to the rms noise level of the measurements. The remaining four sources of our sample detected in the survey were not fitted because of the lack of data and of knowledge of the stellar parameters, which would involve too many degrees of freedom. A summary of the adopted input parameters used to produce the model fits in fig. \ref{fit} is given in table \ref{pardusty}. 
In addition, we report the derived stellar and dust shell parameters beside the physical quantities used in scaling the DUSTY outputs in accordance with the DUSTY manual (Ivezi\`{c} et al. \cite{ivez}). In all calculations we assumed a density for the grain material of 3 g/cm$^3$ and a gas/dust ratio of 220.\par The fit parameter most directly constrained by the millimetric and submillimetric measurements is the shell outer radius, as these observations probe the cool, outer parts of the CSE. However, the value derived from the SED fit, and reported in table \ref{pardusty}, should be interpreted as an estimate of the distance from the star at which the wind density approaches the ambient density. Thus, the modeled envelope may turn out to be more extended than the observed one if its outer parts have a surface brightness lower than or equal to the rms noise level. In particular, it is not surprising that the radii derived for the CSE outer edge are much greater than those derived from near- or mid-infrared observations, since at those frequencies most of the flux comes from the hotter inner regions of the envelope. {\bf On the basis of the mass loss rate and shell size derived from the SED fit, we calculated the total envelope mass, assuming a constant wind expansion velocity taken from references in the literature. The dust envelope masses derived for a gas/dust ratio of 220 are also reported in table~\ref{pardusty}. Such values are consistent with those reported in table~\ref{masstab} within a factor of 2. We can thus assess that there is good agreement between the two dust mass estimates, especially given the rough approximations made in assuming a constant mass loss rate and in the choice of the gas/dust ratio, which could differ significantly from the assumed value as a function of the C/O ratio.} The fit of the model to the observed flux distribution for each object is discussed in the next section. 
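As a numerical check of eq.~(1) and of the opacity extrapolation described above, the following sketch (Python; the physical constants are standard cgs values and the function names are ours, not part of any code used in this work) reproduces the tabulated dust mass of IRAS~17436+5003 from its 1.2 mm flux (15.2 mJy), dust temperature (100 K), emissivity index ($p$ = 1.55) and distance (1.2 kpc):

```python
# Numerical check of eq. (1) in cgs units. The constants below are standard
# physical values; the example inputs are the tabulated values for
# IRAS 17436+5003 (F = 15.2 mJy, T_d = 100 K, p = 1.55, d = 1.2 kpc).

K_B = 1.380649e-16   # Boltzmann constant [erg/K]
JY = 1.0e-23         # 1 Jansky [erg s^-1 cm^-2 Hz^-1]
PC = 3.0857e18       # 1 parsec [cm]
M_SUN = 1.989e33     # solar mass [g]

def chi(lam_cm, p, chi_250=10.0):
    """Dust opacity extrapolated from chi(250 um) = 10 cm^2/g via chi ~ nu^p."""
    return chi_250 * (250e-4 / lam_cm) ** p   # 250 um = 250e-4 cm

def dust_mass(flux_mjy, lam_cm, t_dust, dist_kpc, p):
    """Eq. (1): M_d = F lambda^2 d^2 / (2 k T_d chi_nu), in solar masses."""
    f = flux_mjy * 1e-3 * JY          # flux density in erg s^-1 cm^-2 Hz^-1
    d = dist_kpc * 1e3 * PC           # distance in cm
    return f * lam_cm ** 2 * d ** 2 / (2.0 * K_B * t_dust * chi(lam_cm, p)) / M_SUN

print(f"chi(1.2 mm) = {chi(0.12, 1.55):.2f} cm^2/g")
print(f"M_d = {dust_mass(15.2, 0.12, 100.0, 1.2, 1.55):.2e} M_sun")
```

With these inputs the opacity extrapolates to $\chi(1.2\,\mathrm{mm}) \approx 0.88$ cm$^2$ g$^{-1}$ and the mass to $\approx 6.2\times10^{-4}$ M$_\odot$, matching the values listed for this source in the dust-mass table above.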
\section{Results and discussion} {\it IRAS~07134+1005}. This object is known to have strong features near 21 and 30 $\mu$m, whose carrier species have not been firmly identified (Kwok et al. \cite{kwok89}). It is an F5 supergiant having C/O $\approx$ 1 (Van Winckel \& Reyniers \cite{vanwin00}). Despite the very simplified dust composition used in our model calculations, the parameters derived from our fit are in good agreement with those calculated by Meixner et al. (\cite{meix04}) and Hony et al. (\cite{hony}), using different dust radiative transfer programs, with the exception of the mass loss rate, which is larger by a factor of 3 than the one calculated by Hony et al. (\cite{hony}). On the other hand, our derived value for $\dot{M}$ agrees with the values obtained from the SED model performed by Hrivnak et al. (\cite{hriv00}) and by Jura et al. (\cite{jura00}) on the basis of their mid-IR images of the source.\\ {\it IRAS~17436+5003}. It is an F3 Ib supergiant with an oxygen-rich chemistry (Justtanont et al. \cite{justt}). Its SED has been modeled by several authors adopting different choices, mainly in the selection of grain parameters. While Hoogzaad et al. (\cite{hooz}) and Meixner et al. (\cite{meix02}) adopt the usual grain size distribution $n \propto a^{-3.5}$ with a minimum size of 0.18 $\mu$m and 0.2 $\mu$m respectively, Gledhill \& Yates (\cite{gled03}) opt for very small grains, using a steep size distribution $n \propto a^{-6}$ and minimum grain size 0.01 $\mu$m, which, as pointed out by the authors, better accounts for both the SED shape over the whole frequency range and the observed high degrees of near-IR linear polarization. In our calculation we followed the latter approach. As a consequence, the dust mass reported in tab.~\ref{masstab} could be underestimated because the assumed dust opacity is derived from an approximate grain model with an average grain size of $0.1$ $\mu$m (Hildebrand \cite{hildeb}). 
This value is a factor of 5 greater than the grain size obtained from the $n \propto a^{-6}$ distribution in the adopted range for $a$. Assuming for such a small grain radius $\chi_{850\mu\mathrm{m}} = 0.54$ cm$^2$ g$^{-1}$ (Gledhill \& Yates \cite{gled03}), the derived dust mass increases by one order of magnitude. \begin{table*} \caption[]{Input and derived stellar and dust parameters resulting from the fit to the SED.} \label{pardusty} \begin{center} \begin{tabular}{c c c c c c c c } \hline \hline Property &IRAS & 07134+1005 & 17436+5003 & 19114+0002 & 20000+3239 & 20028+3910 & 22272+5435 \\ \hline T$_{\ast}$[K] && 7250 & 7500 & 5660 & 5000 & 7000 & 5300 \\ Chem. && C & O & O & C & C? & C \\ T$_d$[K] && 150 & 120 & 110 & 170 & 180 & 160 \\ L [L$_{\odot}$] && 6000 & 3440 & 300000 & 630 & 6600 & 6700 \\ A$_V$ && 0.5 & 1.2 & 2 & 2.5 & 1.3 & 2.5 \\ r$_{i}$ [cm] && $4.76\times 10^{16}$ & $1.91\times 10^{16}$ & $1.73\times 10^{17}$ & $8.18\times 10^{15}$ & $1.88\times 10^{16}$ & $2.46\times 10^{16}$\\ r$_{o}$ [cm] && $3.48\times 10^{17}$ & $1.53\times 10^{17}$ & $6.88\times 10^{17}$ & $2.05\times 10^{17}$ & $1.88\times 10^{17}$ & $3.11\times 10^{17}$\\ $\dot{M}$ [$\frac{M\odot}{yr}$] && $3.48\times 10^{-5}$ & $3.48\times 10^{-5}$ & $1.09\times 10^{-3}$ & $6.92\times 10^{-6}$ & $8.93\times 10^{-5}$ & $3.12\times 10^{-5}$ \\ v$_{exp}$ [km/s] && 10.0 & 15.5 & 35.0 & 12.0 & 16.0 & 10.0 \\ M$_{d}$ [M$_{\odot}$] && $1.66\times 10^{-3}$ & $4.35\times 10^{-4}$ & $2.31\times 10^{-2}$ & $1.63\times 10^{-4}$ & $1.35\times 10^{-3}$ & $1.28\times 10^{-3}$\\ Ref. & & {\scriptsize 1, 2, 3, 4} &{\scriptsize 5, 6, 7, 18, 19, 20} & {\scriptsize 3, 8, 9, 10, 18, 19, 21} & {\scriptsize 11, 12, 13, 19, 22} & {\scriptsize 14, 15, 18, 19, 20} & {\scriptsize 13, 16, 17, 18} \\ \hline \end{tabular} \\ \end{center} \footnotesize{References - (1) Meixner et al. \cite{meix04}; (2) Hony et al. \cite{hony}; (3) Hrivnak et al. \cite{hriv89}; (4) Van Genderen et al. \cite{vangen}; (5) Gledhill et al. 
\cite{gled03}; (6) Ueta et al. \cite{ueta00}; (7) Skinner et al. \cite{skinner}; (8) Van der Veen \cite{vanderveen}; (9) Hawkins et al. \cite{hawkins}; (10) Th\'evenin et al. \cite{thevenin}; (11) Kwok et al. \cite{kwok95}; (12) Hrivnak et al. \cite{hriv95}; (13) Volk et al. \cite{volk}; (14) Su et al. \cite{su}; (15) Bujarrabal et al. \cite{bujar01}; (16) Hrivnak et al. \cite{hriv91}; (17) Ueta et al. \cite{ueta01}; (18) Walmsley et al. \cite{walmsley}; (19) Gledhill et al. \cite{gled02}; {\bf (20) Likkel et al. \cite{lik91}; (21) Bujarrabal et al. \cite{bujar92}; (22) Hrivnak et al. \cite{hriv00}}.} \end{table*} The inner shell radius derived from the SED fit agrees with the value of 1400 AU ($2.1\times 10^{16}$ cm at 1.2 kpc) obtained by Hoogzaad et al. (\cite{hooz}), but is about 30 -- 40\% larger than the same parameter derived by Gledhill \& Yates (\cite{gled03}) and Meixner et al. (\cite{meix02}), who nevertheless used an asymmetric dust model. The outer radius is larger than the values obtained by Hoogzaad et al. (\cite{hooz}) and Meixner et al. (\cite{meix02}), but close to that derived by Gledhill \& Yates (\cite{gled03}) and to the extension of 6$\arcsec$.5 derived from the CO observation (Bujarrabal et al., \cite{bujar92}).\\ {\it IRAS~19114+0002}. It is an oxygen-rich object of spectral type G5 Ia (Hrivnak et al. \cite{hriv89}), but the real evolutionary status of this object is still controversial. It has been classified either as a post-AGB star having a luminosity of about $10^4$ L$_{\odot}$ and lying at 1 kpc (Hrivnak et al. \cite{hriv89}) or as a massive red supergiant with a luminosity near $3 \times 10^5$ L$_{\odot}$ lying at a distance of 6 kpc (Hawkins et al. \cite{hawkins}). The expansion velocity of the circumstellar envelope $v_\mathrm{e}\approx 35$ km s$^{-1}$ determined from the profile of the circumstellar CO emission (Zuckerman \& Dyck \cite{zucker}; Bujarrabal et al. 
\cite{bujar92}) is significantly higher than the typical value of 15 km s$^{-1}$ for an AGB star and favors the supergiant hypothesis, which is, however, still in doubt (Josselin \& L\`ebre \cite{bujar92}). As for IRAS~17436+5003, Gledhill \& Takami (\cite{gled01}) pointed out the need to adopt a steep power law for the grain size distribution, in order to agree with the high degrees of linear polarization observed in the near-IR. We have thus adopted the grain size distribution $n \propto a^{-6}$ {\bf with $a_{\mathrm{min}}$= 0.005 $\mu$m and $a_{\mathrm{max}}$= 0.25 $\mu$m. The SED fit gives $r_{\mathrm{in}} = 1.7\times 10^{17}$ cm, which is in good agreement with the inner shell radius of about $1.5\times 10^{17}$ cm obtained from both mid-IR images (Jura \& Werner \cite{jura99}) and near-IR imaging polarimetry observations (Gledhill et al. \cite{gled01}). This value is very close to the inner shell size of $1.8\times 10^{17}$ cm derived, for the assumed distance of 6 kpc, from high resolution CO observations performed by Jura et al. (\cite{jura01})}. The dust mass obtained is in good agreement with the one derived by Gledhill et al. (\cite{gled02}) on the basis of submillimeter observations. \\ {\it IRAS~20000+3239}. Low resolution K-band spectra performed by Davis et al. (\cite{davis}) showed a rather compact source with an angular size lower than 2$\arcsec$, while Hrivnak et al. (\cite{hriv99}) measured a diameter of 1$\arcsec$.6 in the V band. Comparing with the SED parameters derived by Volk et al. (\cite{volk}), we find good agreement for both the mass loss rate and the inner shell radius, as well as for the shell mass derived by Gledhill et al. (\cite{gled02}).\\ {\it IRAS~20028+3910}. This object is characterized by a bipolar morphology, with the central object highly obscured in the optical and near-IR (Su et al. \cite{su}; Ueta et al. \cite{ueta00}). The SED, constructed with the data reported by Su et al. 
(\cite{su}) and references therein, thus shows a mid- and far-infrared peak much brighter than the one in the near infrared. Neri et al. (\cite{neri}) fitted the CO 1-0 visibility data with an elliptical gaussian component with a size of $3.5\arcsec \times 11.1\arcsec$. The dust mass obtained from our 1.2 mm measurements agrees very well with the value calculated from previous submillimeter data (Gledhill et al. \cite{gled02}).\\ {\it IRAS~22272+5435}. It is an extremely carbon-rich object and, like IRAS~07134+1005, shows peculiar infrared spectral features. From the analysis of CO 1-0 visibility data, Neri et al. (\cite{neri}) measured an extended envelope size (FWHM) of 21$\arcsec$. From a subarcsecond mid-infrared imaging study (Ueta et al. \cite{ueta01}) the dust shell was found to have a toroidal structure with a 0.5$\arcsec$ inner radius, which corresponds to 1.1$\times$10$^{16}$ cm at 1.6 kpc. Our fit indicates an inner radius which is consistent with that measured by Ueta et al. (\cite{ueta01}) to within a factor of 2. A major discrepancy is found in the estimates of the mass loss rate. On the basis of their radiative transfer calculations, in fact, Ueta et al. (\cite{ueta01}) derived a wind with $\dot{M} = 4.1 \times 10^{-6}$ M$_{\odot}$ yr$^{-1}$. Our value is closer to the mean mass loss rate obtained from the CO study performed by Bujarrabal et al. (\cite{bujar01}), which is $1.8 \times 10^{-5}$ M$_{\odot}$ yr$^{-1}$ when scaled to our assumed distance. The authors report a total nebular mass of 0.18 M$_{\odot}$, which gives a dust mass of $8.0 \times 10^{-4}$ M$_{\odot}$ for an assumed gas-to-dust ratio of 220, in good agreement with our estimate. \section{Summary and outlook} We have presented the results of 1.2 mm continuum observations for a sample of 24 sources classified as post-AGB. 
Continuum emission was detected toward 11 objects, while uncertain detections are reported for two other sources.\par The circumstellar dust masses were derived from our 1.2~mm measurements, assuming that the emission is due to optically thin dust. For the sources whose distance is known, the circumstellar dust masses range between about $6 \times 10^{-4}$~M$_\odot$ and $2.4 \times 10^{-3}$~M$_\odot$, with the exception of IRAS~19114+0002, whose post-AGB nature is, however, still in question. For the other objects we derived lower dust masses, indicating younger circumstellar envelopes, but errors in the common assumed distance could have affected the derived values.\par For six of the detected sources, we compared the observed SEDs, constructed with additional data from the literature, with the model spectra obtained using a symmetric radiative transfer code. This allowed us to estimate some physical parameters of the stars and envelopes, which have been compared with previous results reported in the literature. The high detection rate ($\approx 46\%$) seems to support the validity of our selection criteria, and an extension of the millimetric survey to the remaining targets in our sample is necessary. Stars in our full sample belong to different evolutionary phases in the transition from AGB to PN, as the stars with an optical counterpart should be more evolved. Once the full sample has been observed, the comparison between the derived physical properties of the different envelopes will provide fundamental information on this evolutionary phase, which is not yet fully understood. We also note that the CSEs surrounding such objects appear to be good targets for the first-light projects of future millimeter arrays such as the Atacama Large Millimeter Array (ALMA). 
From our fits we derive typical dimensions of CSEs ranging from $1.5 \times 10^{17}$ to $1.2\times 10^{18}$ cm, which, combined with the distances reported in table 3, correspond to angular sizes from a few arcsec up to $\approx 13\arcsec$. This implies that such CSEs can, in principle, be mapped in great detail with the foreseen ALMA angular resolutions by using a combination of both compact (to fully recover all the flux) and extended configurations (Wilson et al. 2005). The foreseen capabilities of ALMA will allow us, at least for the more compact sources of our sample, and in general for post-AGB sources, to directly map the dusty envelopes at several millimetric and submillimetric frequencies. This could provide better constraints for the modeling of CSEs. Furthermore, a detailed map of a CSE could reveal structured morphologies related to different mass loss episodes suffered by the star during the AGB evolutionary phase.\par To evaluate the possibility of actually resolving and mapping the CSEs of our sample, we need to compare, in each frequency channel, the expected flux densities with the foreseen ALMA sensitivity. From the fitted SEDs we have then extrapolated the flux densities in the ALMA first-light channels, namely 0.5, 0.6, 0.9, 1.3, 2 and 3 mm. Expected ALMA sensitivities have been calculated by using the ALMA Sensitivity Calculator (www.eso.org/projects/alma/science/bin/sensitivity.html) in the case of first light, assuming that only 8 antennae will be available, and in the case of the full (64 antennae) array. For both configurations we derived a detection rate close to $100$\% above 3$\sigma$ at almost all frequencies. In fig. \ref{alma} we show the expected percentage of the studied objects that ALMA will be able to observe with a dynamical range greater than 50. 
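The linear-to-angular conversion behind the sizes quoted above can be sketched as follows (Python; the small-angle constant of 206265 arcsec per radian and the parsec in cm are standard values, and the example input is the outer shell radius of IRAS~17436+5003 from the tables above):

```python
# Small-angle conversion from a linear size in the envelope to an angular
# size on the sky: theta [rad] = L / d, with 206265 arcsec per radian.

PC = 3.0857e18            # 1 parsec [cm]
ARCSEC_PER_RAD = 206265.0

def angular_size_arcsec(length_cm, dist_kpc):
    """Angular size of a structure of linear size length_cm at dist_kpc."""
    return ARCSEC_PER_RAD * length_cm / (dist_kpc * 1e3 * PC)

# Outer shell radius of IRAS 17436+5003: r_o = 1.53e17 cm at d = 1.2 kpc.
theta = angular_size_arcsec(1.53e17, 1.2)
print(f"angular radius = {theta:.1f} arcsec")
```

For this source the formula gives an angular radius of about 8.5 arcsec, within the few-arcsec to $\approx 13\arcsec$ range quoted above.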
It is evident that ALMA will allow us not only to better sample the millimetric range of the source SEDs, which is currently very sparsely covered, but also to obtain, in most cases, multifrequency high-resolution maps of the circumstellar matter surrounding the stars of our sample, extended to sources belonging to the southern hemisphere. \begin{figure} \resizebox{10cm}{!}{\includegraphics{alma.ps}} \caption{Percentage of objects, detected at 1.2 mm, which are observable with a dynamical range greater than 50, assuming the calculated ALMA sensitivity with both 8 and 64 antennae.} \label{alma} \end{figure} \begin{acknowledgement} We would like to thank the anonymous referee for the constructive criticism which enabled us to improve this paper. \end{acknowledgement}
\section{Introduction} \label{Intro} The development of deep learning has led in recent years to a wide range of machine learning (ML) applications targeting different aspects of health \cite{ravi2017deep}. Together with the recent development of consumer electronics and physiological sensors, this promises low-cost solutions for health monitoring and disease detection for a very broad part of the population, at any location and any time. The benefits of automatic disease detection, and especially of early prognosis and lifestyle support to stay healthy, are obvious and would result in a healthier society and a substantial reduction of health expenses. However, the demands on the reliability of any kind of health application are high, and the applied ML methods must be able to learn reliably and operate with high performance. To achieve this with supervised learning, appropriate (labelled) datasets gathered with the physiological sensors that shall be used in a health application are needed, such that classifiers can learn to generalize sufficiently to new data. However, there are several challenges related to training datasets for health applications, including data quantity, class imbalance, and personalization. \par In many domains, such as computer vision and natural language processing, the quantity of labelled data has increased substantially, but it remains an inherent problem in the health domain \cite{ravi2017deep}. This is due to privacy concerns as well as the costs of data acquisition and data labelling. Medical experts are needed to label the data, and crowdsourcing is not an option. To enable medical experts to label data, data are typically acquired with two sensor sets: one with the sensors that should be used in a health application, and one that represents the gold standard for the given task. This problem is magnified by the fact that any new physiological sensor requires new data acquisition and labelling. 
Furthermore, there is a high probability that the data acquisition results in an unbalanced dataset. Since many health applications aim to detect events that indicate a health issue, there should “ideally” be equally many time periods with and without these events. In general, this is unrealistic for a recording from an individual as well as across a larger population that is not selected with prior knowledge of their health issues. For example, in the recent A3 study \cite{traaen2019treatment} at the Oslo University Hospital, individuals with atrial fibrillation were screened for sleep apnea. In a snapshot from this study with 328 individuals, 62 are classified as normal, 128 with mild apnea, 100 with moderate apnea, and 38 with severe apnea. The severity of sleep apnea is captured by the Apnea-Hypopnea Index (AHI), which measures the average number of apnea events per hour and is classified as follows: AHI$<$5 (normal), 5$\leq$AHI$<$15 (mild), 15$\leq$AHI$<$30 (moderate), AHI$\geq$30 (severe)\footnote{From a ML viewpoint only individuals with severe sleep apnea would produce balanced recordings}. It is unrealistic to expect that a sufficiently large dataset for training can be collected from each individual, because it is inconvenient, requires medical experts to label the data, and might be infeasible for practical reasons for those that develop the application and classifier. \par The objectives of this work are to address these problems with insufficient datasets in the health domain: (1) generate synthetic data from a distribution that approximates the true data distribution to enhance the original dataset; (2) use this approximate distribution to generate data in order to rebalance the original dataset; (3) examine the possibility to generate personalized data that correspond to specific individuals; and (4) investigate how these methods can lead to performance improvements for the classification task. 
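The AHI-based severity grading can be expressed as a simple lookup. The sketch below (Python) uses the common four-class clinical convention with boundaries at 5, 15 and 30 events per hour, which is an assumption of this illustration rather than a detail taken from the A3 study, and the quoted class counts to quantify the resulting imbalance:

```python
def ahi_severity(ahi):
    """Map an Apnea-Hypopnea Index (events/hour) to a severity class.

    Boundaries at 5, 15 and 30 follow the usual four-class clinical
    convention; they are an assumption of this sketch.
    """
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Class counts from the A3 snapshot quoted in the text.
counts = {"normal": 62, "mild": 128, "moderate": 100, "severe": 38}
total = sum(counts.values())                     # 328 individuals
shares = {k: round(v / total, 3) for k, v in counts.items()}
print(shares)
```

The resulting class shares (roughly 19\%, 39\%, 30\% and 12\%) make the imbalance that motivates objective (2) concrete.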
\par The mentioned problems are relevant for many applications in the health domain. As a proof of concept, we focus in our experimental work on the detection of obstructive sleep apnea (OSA). OSA is a condition characterized by frequent episodes of upper airway collapse during sleep, and is recognized as a risk factor for several clinical consequences, including hypertension and cardiovascular disease. Detection and diagnosis are performed via polysomnography (PSG). PSG is a cumbersome, intrusive, and expensive procedure with very long waiting times. Traditionally, PSG is performed in a sleep laboratory. It requires the patient to stay overnight and record various physiological signals during sleep, such as the electrocardiogram, electroencephalogram, oxygen saturation, heart rate, and respiration from the abdomen, chest, and nose. These signals are manually evaluated by a sleep technician to give a diagnosis. In our earlier work \cite{kristiansen2018data}, we showed that machine learning can be used to classify PSG data with good performance, even if only a subset of the signals is used, and that the quality of data collected with commercial off-the-shelf respiratory sensors (like the Flow sensor from Sweetzpot, costing approximately 200 Euro) approaches the quality of equipment used for clinical diagnosis \cite{loberg2018quantifying}. \par In this work, we use different conditional recurrent GAN designs and four well-known classification techniques, i.e., K-Nearest Neighbor (KNN), Random Forest (RF), Multi-Layer Perceptron (MLP), and Support Vector Machine (SVM), to achieve the aforementioned objectives. Since we want to use datasets that are publicly available and open access, we use the Apnea-ECG and MIT-BIH databases from Physionet \cite{ApneaEcg,MITBIH} for our experiments. The remainder of this paper is organized as follows: In Section 2 we examine related work. We present our methods in Section 3.
In Section 4 we evaluate these methods by performing three experiments. Section 5 concludes this paper. \section{Related Work} \label{Related_Work} Although the GAN framework \cite{goodfellow2014generative} has recently acquired significant attention for its capability to generate realistic-looking images \cite{radford2015unsupervised,isola2017image}, we are interested in time series generation. The GAN is not as widely used for time series generation as for images or videos; however, several works that investigate this approach exist \cite{mogren2016c}. There are also relevant applications for sequential discrete data \cite{yu2017seqgan}. \par In relation to our objectives, most existing works address Objective 1 \cite{esteban2017real,choi2017generating}. Hyland et al. \cite{esteban2017real} use a conditional recurrent GAN (based on \cite{mirza2014conditional}) to generate realistic-looking intensive care unit data, which have continuous time series form, conditioning the generation on class labels. Among other experiments, they train a classifier to identify a held-out set of real data and show the possibility of training exclusively on synthetic data for this task. They also introduce the opposite procedure (train with the real data and test on the synthetic) for distribution evaluation. We use similar methods to synthesize data in the context of OSA, but we expand these techniques by introducing a metric for evaluating the synthetic data quality which is based on their combination. We also investigate methods to give different importance to different recordings. Other works related to medical applications of GANs include \cite{hwang2017disease} and \cite{che2017boosting}. Our work is also associated with the use of multiple GANs in combination, but uses a different design and metrics from the above works (both works use designs based on combinations of an auto-encoder and a GAN).
Many approaches that include multiple GANs exist, such as \cite{durugkar2016generative,hoang2017multi}. \par We note that most of the related work, with the exception of \cite{che2017boosting}, focuses individually on the synthetic data generation and evaluation, and not on how to use these data to augment the original dataset to potentially improve the generalization capability of other classifiers. To the best of our knowledge, only a few works \cite{douzas2018effective,rezaei2019recurrent,mariani2018bagan} exist that examine the potential application of GANs to produce realistic synthetic data for class rebalancing of a training dataset. Only one of them uses specifically a recurrent GAN architecture. Finally, we did not find any relevant work that depicts the data distribution as a mixture of different recording distributions, with the end goal of producing more personalized synthetic data. \section{Method} \label{method} The goal of data augmentation in this work is to train classifiers to successfully detect health events of interest in physiological time series data. In our use case this means to classify every 30- or 60-second window of a sleep recording as apneic (i.e., an apnea event happened) or non-apneic. \begin{figure*} \vskip -0.5cm \raggedright \includegraphics[width=\textwidth,height=6cm]{GAN_paper2_4L.png} \vskip -0.6cm \caption{GAN Augmentation} \label{fig1} \vskip -0.6cm \end{figure*} \par Our approach is based on a conditional recurrent GAN that generates a synthetic dataset (SD, see Figure \ref{fig1}) to augment the original training dataset (RD$_{TRAIN}$) (Objective 1) and to rebalance an unbalanced RD$_{TRAIN}$ (Objective 2). Furthermore, we extend the single GAN architecture to a multiple GAN architecture to generate synthetic data that is potentially closer to the test data, enabling personalized training (Objective 3).
In this section, we introduce the datasets we use, the two GAN architectures, and the metrics used to evaluate the quality of the generated data. \subsection{Data} \label{eval_Data} In this work we focus on the nasal airflow signal (NAF), because it can adequately be used to train a classifier to recognize apneas and yields the best single-signal performance, as shown in our previous work \cite{kristiansen2018data}. Furthermore, NAF is contained in most recordings (12 recordings\footnote{slp01, slp02a, slp02b, slp03, slp04, slp14, slp16, slp32, slp37, slp48, slp59, slp66, slp67x}) in the MIT-BIH database. From the Apnea-ECG database we use the eight sleep recordings (i.e., a01, a02, a03, a04, c01, c02, c03, b01) that contain the NAF signal, with durations of 7--10 hours. From MIT-BIH we use the 12 recordings that include the NAF signal. Note that MIT-BIH has low data quality (noisy waveforms, values out of bounds, etc.), especially when compared to Apnea-ECG. \par The sampling frequency is 100Hz for Apnea-ECG and 250Hz for MIT-BIH, and all recordings contain labels for every one-minute window of breathing for Apnea-ECG and for every 30-second window for MIT-BIH. These labels classify a window as apneic or non-apneic. For Apnea-ECG, half of the eight recordings are classified as severe OSA (a01--a04, called ``apneic'' recordings) and half are classified as normal (c01--c03, b01, called ``non-apneic''). AHIs vary from 0 to 77.4. For MIT-BIH, AHIs vary from 0.7 to 100.8. The only preprocessing we perform is rescaling and downsampling the data to 1Hz. \subsection{Single GAN Architecture} In order to address the problems of too small and unbalanced datasets, we generate synthetic data and augment the original dataset. Due to its recent successes in generating realistic-looking synthetic data, e.g., images and music, we use the GAN framework to produce realistic-looking synthetic time series data. In particular, we use a conditional recurrent GAN.
The conditional aspect allows us to control the class of the generated data (apneic, non-apneic). Thus, data from both classes can be generated and the front-end classifiers are able to learn both apneic and non-apneic event types. The generative network G() takes as input a random sequence from a distribution $p_z(z)$ and returns a sequence that after training should resemble our real data. The discriminator D() takes as input the real data with distribution $p_{Data}(x)$ and the synthetic data from G, and outputs the probability of the input being real data. Using cross-entropy error, we obtain the value function \cite{goodfellow2014generative}: \begin{equation} \min_G \max_D V(D,G) = \mathbb{E}_{x\sim p_{Data}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_{Z}(z)}[\log (1- D(G(z)))] \label{eq1} \end{equation} \par G has the objective to minimize the probability that D correctly identifies the generated data as synthetic (see the second term of Eq. \ref{eq1}). D has the objective to maximize the probability to correctly classify data as either real or synthetic. \par The objective of the generator is to fool the discriminator such that it classifies generated data as real. Through training, the generator learns to produce realistic-looking synthetic data. Consequently, the generated data distribution converges to the real data distribution \cite{goodfellow2014generative}. Inspired by \cite{esteban2017real}, we use a conditional LSTM \cite{hochreiter1997long} as both G and D, because we are interested in time series generation of sequentially correlated data. LSTMs are able to store information over extended time intervals and avoid the vanishing and exploding gradient issues \cite{goodfellow2016deep}. G produces a synthetic sequence of values for the nasal airflow and D classifies each individual sample as real or fake based on the history of the sequence.
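The original GAN value function of \cite{goodfellow2014generative} can be estimated empirically from mini-batches of discriminator outputs. A minimal sketch (the function name is ours; D outputs are assumed to lie strictly in (0, 1)):

```python
import math

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of the GAN value function: the mean of
    log D(x) over real samples plus the mean of log(1 - D(G(z)))
    over generated samples."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake
```

At the equilibrium of the minimax game, where the generated distribution matches the data distribution and D outputs 0.5 everywhere, the value is $-2\log 2 \approx -1.386$ \cite{goodfellow2014generative}.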
\subsection{Multiple GAN Architecture} \label{meth_exp3} \par The aim of this approach is to ensure that the SD represents all recordings in RD$_{TRAIN}$ in a realistic manner. Each person, depending on various environmental and personal factors, has different breathing patterns. \begin{wrapfigure}{r}{0.5\textwidth} \vskip -0.5cm \begin{center} \includegraphics[width=0.48\textwidth,height=5cm]{Exp3_true2.png} \end{center} \vskip -0.5cm \caption{Three GANs trained separately with a chance to interchange subsets.} \vskip -0.5cm \label{fig:Exp3_1} \end{wrapfigure} General patterns are common among different people and recordings, but individual characterization is possible. Even for the same person, the recordings of different sessions can differ. These changes are often described as bias towards a particular patient \cite{goodfellow2016deep}. We follow a different approach and make the hypothesis that different recording sessions have different data distributions, which together constitute the total apnea/non-apnea distribution of the dataset. In our case, different recordings correspond to different individuals. A distinction is made between the recordings and the modes in their distribution, since a recording can have more than one mode in its distribution, and different modes in the feature space can be common for different recordings. Since we have insufficient data per recording to successfully perform the experiments of this section, we define disjoint subsets of recordings (hereafter called \textit{subsets}), the union of which constitutes the original recording set.
Under this hypothesis, the data distribution can be depicted as a mixture of the different recording distributions: \begin{equation} p_{Data}(x)= \sum^{k_{rec}}_{i=1} w_{r_i}p_{rec_i}(x)= \sum^{k_{sub}}_{j=1} w_{s_j}p_{sub_j}(x) \end{equation} \vskip -0.3cm with: \vskip -0.3cm \begin{equation} p_{sub_j}(x)= \sum_{l\in sub_j} w_{sb_lj}p_{rec_l}(x) \end{equation} where $k_{rec}$ is the total number of recordings, $k_{sub}$ is the total number of subsets, $p_{rec_i}$ is the data distribution of recording $i$, $w_{r_i}=1/k_{rec}$ assuming equal contribution per recording, $p_{sub_j}$ and $w_{s_j}$ are the distribution and weight of subset $j$, and $w_{{sb}_lj}$ are the weights of the recordings within each subset. We restate Eq. \ref{eq1} to explicitly include the distributions of the subsets by dedicating a pair of G and D to each subset. This allows each GAN to prioritize the data from its respective subset, thus making it less probable to exhibit mode collapse for modes contained in the examined recordings. Each subset contains one apneic and one non-apneic recording (see Sections 3.1 and 4.4). \par The goal of this method is to properly represent all recordings in the SD. The potential decrease of collapsing modes due to the use of multiple GANs for different data is an added benefit. There are relevant publications that use similar ensemble techniques to specifically address this issue, backed by theoretical or methodological guarantees \cite{tolstikhin2017adagan,hoang2017multi}. \par Since the amount of data per recording is too low to train a GAN with only two recordings, we allow each GAN to train with data from the training subset of another GAN with a controllable probability (see Figure \ref{fig:Exp3_1}).
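The controllable probability of drawing a mini-batch from another GAN's subset can be sketched as follows. This is a sketch under our own naming, assuming each GAN keeps its own subset with probability $p$ and picks each of the other $k_{sub}-1$ subsets uniformly with the remaining mass:

```python
import random

def subset_weights(j, k_sub, p):
    """Probability vector over subsets for one gradient update of GAN j:
    its own subset with probability p, each other subset with
    probability (1 - p) / (k_sub - 1)."""
    w = [(1.0 - p) / (k_sub - 1)] * k_sub
    w[j] = p
    return w

def pick_subset(j, k_sub, p):
    """Draw the index of the subset supplying the next mini-batch for GAN j."""
    return random.choices(range(k_sub), weights=subset_weights(j, k_sub, p))[0]
```

For example, with $k_{sub}=3$ and $p=0.4$, GAN 1 draws from its own subset with probability 0.4 and from each other subset with probability 0.3.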
Per iteration, for GANj we perform a weighted dice toss, where $J\in\{1,2,\dots,k_{sub}\}$ is a random variable following the categorical distribution with parameter probability vector $\mathbf{p}=(p_1,p_2,\dots,p_{k_{sub}})$ over the outcomes. For GANj we set $p_j=p$, and $p_i=\frac{1-p}{k_{sub}-1}$ for all $i\neq j$, for a chosen value $p$. Note that the larger the chosen $p$, the more pronounced the modes of the recording combination that corresponds to GANj will be. It is relatively straightforward to show that: \begin{proposition} A GAN satisfying the conditions of Proposition 2 of \cite{goodfellow2014generative} and trained with a dataset produced from the above method will converge to the mixture distribution: $p_s(\mathbf{x})=\sum_i^{k_{sub}} w_ip_{sub_i}(\mathbf{x})$ where $w_i= P(J=i)$. \end{proposition} Based on this proposition, this method creates a variation of the original dataset that gives different predefined importance to the different subsets (see Appendix for details). The same proposition holds for individual recordings. The value function for a GAN now takes the following form: \begin{equation} \min_G \max_D V(D,G) = \mathbb{E}_{x\sim p_s(x)}[\log D(x)]+\mathbb{E}_{z\sim p_{Z}(z)}[\log (1- D(G(z)))] \label{gan_value_mixture} \end{equation} \subsection{Metrics} \par Measuring the quality of data produced by a GAN is a difficult task, since the definition of ``realistic'' data is inherently vague. However, it is necessary, because the performance of the front-end classifiers is not necessarily a direct measurement of how realistic the synthetic data are. In this subsection we introduce the metrics we use to measure the quality of the synthetic data. \subsubsection{T metric:} \par Hyland et al. \cite{esteban2017real} introduce two empirical evaluation metrics for data quality: TSTR (Train on Synthetic, Test on Real) and TRTS (Train on Real, Test on Synthetic).
Empirical evaluation indicates that these metrics are useful in our case; however, each one has disadvantages. To solve some of these issues, we combine them by taking their harmonic mean (in the Appendix we explain problems with these metrics and reasons to use the harmonic mean): \begin{equation} T=\frac{2\cdot TSTR\cdot TRTS}{TSTR+TRTS} \end{equation} \subsubsection{MMD:} \par We choose the Maximum Mean Discrepancy (MMD) \cite{gretton2007kernel} measurement since other well-established measurements (e.g., log likelihood) are either not well suited for GAN assessment, because plausible samples do not necessarily imply high log likelihood and vice versa \cite{theis2015note}, or are focused on images, like the Inception Score \cite{salimans2016improved} and the Fr\'echet Inception Distance. There is also a wide variety of alternative approaches \cite{borji2019pros}; however, we use the MMD since it is simple to calculate and is generally in line with our visual assessment of the quality of the generated data. \par We follow the method from \cite{sutherland2016generative} to optimize the applied MMD by maximizing the ratio between the MMD estimator and the square root of the estimator of the asymptotic variance of the MMD estimator (the t-statistic). Inspired by \cite{esteban2017real}, we further separate parts of the real and synthetic datasets into MMD training and MMD test sets (each contains half real and half synthetic data points). To maximize the estimator of the t-statistic on the training data, we run gradient descent on the parameters of our kernel (i.e., a Radial Basis Function (RBF) kernel with variance $\sigma$ as parameter). Then we compute the MMD measurement on the MMD test set with the parameters that have been optimized on the training set. In the next section we evaluate the data based on these metrics. \section{Evaluation} In this section, we present the implementation and evaluation of our experiments.
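Before detailing the experiments, the two data-quality metrics can be illustrated concretely. The sketch below shows the harmonic-mean T metric and a biased estimator of the squared MMD under an RBF kernel for 1-D samples (function names are ours; the actual MMD computation uses the optimized $\sigma$ described in Section 3.4):

```python
import math

def t_metric(tstr, trts):
    """Harmonic mean of the TSTR and TRTS scores."""
    return 2.0 * tstr * trts / (tstr + trts)

def rbf(x, y, sigma):
    """RBF kernel for scalar inputs."""
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased estimator of the squared Maximum Mean Discrepancy
    between two samples xs (real) and ys (synthetic)."""
    m, n = len(xs), len(ys)
    kxx = sum(rbf(a, b, sigma) for a in xs for b in xs) / (m * m)
    kyy = sum(rbf(a, b, sigma) for a in ys for b in ys) / (n * n)
    kxy = sum(rbf(a, b, sigma) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2.0 * kxy
```

Identical samples give an MMD of zero, and the T metric is dragged towards whichever of TSTR and TRTS is worse, which is the point of using the harmonic mean.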
To analyze how well we can achieve our objectives with the two GAN architectures, we design three experiments. Before we describe these experiments and their results, we analyze in Section \ref{Quality_eval} the quality of the synthetic data with the T-metric, the MMD, and visual inspection. In Sections 4.2--4.4 we present and analyze the experiments we conduct. Together with accuracy, specificity, and sensitivity, we use the kappa coefficient \cite{cohen1960coefficient} as a performance metric, since it captures the performance of two-class classification in a single metric better than accuracy. For all experiments, the preprocessing of the data is minimal (Section \ref{eval_Data}) and we use a wide variety of relatively basic methods as front-end classifiers. This is because we want to focus on investigating the viability of GAN augmentation as a means of performance improvement for a general baseline case. However, the GAN augmentation is applicable to any type of data (e.g., preprocessed apnea data) and is independent of the front-end classifiers. For details about the parameters and design of the GAN and the front-end classifiers, please refer to the Appendix. \subsection{Data Quality Evaluation} \label{Quality_eval} \par To measure the similarity between the synthetic and the real distribution we use the MMD and T metrics (see example in Figure \ref{fig:Quality_Mes}). We execute the tests every 10 epochs during training. Both scores improve as the training procedure progresses, until they stabilize (with minor variation). The T metric is more unstable, with some high-scoring epochs in the initial training phase.
However, \begin{figure} \vskip -0.5cm \centering \includegraphics[scale=0.24]{AccuracySynth.png} \label{fig:Q_sub1} \includegraphics[scale=0.3]{MMD.png} \label{fig:Q_sub2} \caption{Mean of T-metric (left) and MMD (right) scores throughout the GAN training} \label{fig:Quality_Mes} \vskip -0.25cm \end{figure} \begin{figure} \centering \includegraphics[scale=0.225]{Real1.png} \label{fig:Real1} \includegraphics[scale=0.225]{Synth2.png} \label{fig:Synth1} \caption{Real apneic data (left) and good synthetic apneic data (right) for 600 sec} \label{fig:Apnea_im} \vskip -0.5cm \end{figure} after epoch 600, the performance of the metric stabilizes around 0.9. Similarly, the majority of MMD variations stop (with a few exceptions) around epoch 400. \par Another important criterion for recognizing whether the generated data are realistic is visual inspection of the data. Although not as straightforward as for images, apnea and non-apnea data can be visually distinguished. In Figures \ref{fig:Apnea_im} and \ref{fig:Non_apnea_im} we show examples of real and realistic-looking synthetic data. The generated data are realistic-looking and difficult to distinguish from the real data. \subsection{Experiment 1: Data Augmentation} \label{Exp1_eval} In this experiment we investigate whether augmenting RD$_{TRAIN}$ with realistic SD generated from a GAN trained with the same RD$_{TRAIN}$ can have a positive impact on the front-end classifier performance. \par \textbf{Experiment Description:} We iterate the following experiment 15 times for Apnea-ECG and 10 times for MIT-BIH: We partition RD into RD$_{TRAIN}$ (with 50\% of the RD data points), RD$_{TEST}$ (25\%), and a validation set (25\%) via random subsampling. With RD$_{TRAIN}$ we train the GAN. The GAN training is very unstable for the data of the two datasets (especially for MIT-BIH), and good quality based on our metrics and visual inspection does not necessarily correspond to high performance of the front-end classifiers.
For this reason, we use the validation dataset to evaluate the front-end classifier performance. We save the trained GAN model periodically throughout training, generate SD, augment RD$_{TRAIN}$, and measure the front-end classifier performance on the validation set. The GAN with the maximum validation set performance, and empirically acceptable MMD and T-metric values, is chosen to generate SD. \begin{figure} \vskip -0.5cm \centering \includegraphics[scale=0.225]{non_apnea_real.png} \label{fig:Non_apnea_Real1} \includegraphics[scale=0.225]{non_apnea_synth.png} \label{fig:Non_apnea_synth1} \caption{Real (left) and good synthetic (right) non-apneic data, 175 sec} \label{fig:Non_apnea_im} \vskip -0.5cm \end{figure} \par \textbf{Results:} Due to limited space, we present in the main text only the kappa statistic for all front-end classifiers (Table \ref{table1}), in addition to the accuracy, sensitivity, and specificity for the MLP classifier (Table \ref{table2}), to indicate the general behaviour we observe for all the classifiers. For the accuracy, specificity, and sensitivity of KNN, RF, and MLP, please refer to Appendix A. We use this presentation convention for all experiments.
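The kappa coefficient reported in the result tables can be computed from a binary confusion matrix; a minimal sketch (apneic taken as the positive class; the function name is ours):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary confusion matrix: observed agreement
    corrected by the agreement expected from the class marginals."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_pos = ((tp + fn) / n) * ((tp + fp) / n)  # chance agreement, positive class
    p_neg = ((fp + tn) / n) * ((fn + tn) / n)  # chance agreement, negative class
    p_expected = p_pos + p_neg
    return (p_observed - p_expected) / (1.0 - p_expected)
```

A perfect classifier yields kappa 1, while a classifier that ignores the input scores around 0, which is why kappa is more informative than accuracy on unbalanced apnea data.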
\begin{table} \vskip -0.3cm \caption{Kappa statistic and standard error for all the front-end classifiers for Apnea-ECG and MIT-BIH. All kappa values are multiplied by 100 for legibility} \centering \resizebox{10cm}{!}{% \begin{tabular}{ | p{2.2cm} |p{1.9cm}|p{1.9cm}|p{1.9cm}|p{1.9cm}|} \hline \multicolumn{5}{|c|}{Kappa statistic (X$\cdot 10^{-2}$) for Apnea-ECG (A), and MIT-BIH (M)} \\ \hline &MLP & RF &KNN & SVM \\ \hline A: Baseline &85.89$\pm$0.36&90.08$\pm$0.26&88.12$\pm$0.40&74.75$\pm$0.40\\ A: Exp1:Synth&78.29$\pm$0.97&83.88$\pm$0.56&85.76$\pm$0.49& 75.04$\pm$0.55\\ A: Exp1:Augm &86.93$\pm$0.45&90.88$\pm$0.28&90.12$\pm$0.37& 76.90$\pm$0.57\\ \hline M: Baseline&25.04$\pm$0.88&30.95$\pm$1.10&27.15$\pm$1.01&0.0$\pm$0.0\\ M: Exp1:Synth&18.35$\pm$0.86&21.80$\pm$0.95&16.84$\pm$1.26&11.02$\pm$0.96 \\ M: Exp1:Augm &27.01$\pm$0.61&33.01$\pm$0.87&29.22$\pm$1.01& 14.93$\pm$1.22\\ \hline \end{tabular} } \label{table1} \vskip -0.3cm \end{table} \textit{Baseline} shows the performance of the front-end classifiers trained only with RD$_{TRAIN}$. For the synthetic case (\textit{Exp1:Synth}) they are trained only with SD, and for the augmented case (\textit{Exp1:Augm}) with RD$_{TRAIN}$ and SD. For Apnea-ECG, Exp1:Augm exhibits for all front-end classifiers a statistically significant improvement of the mean of the kappa statistic at the $p=0.05$ level. The p-values for the one-tailed two-sample t-test relative to the Baseline are: (MLP): p=0.042, (RF): p=0.035, (KNN): p=0.005, (SVM): p=0.002. Notice that SD yields a good performance on its own, and even surpasses the performance of the Baseline for the SVM. We assume that this is due to the better balancing of the synthetic data relative to the real data. In SD, 50\% of the generated minutes are apneic and 50\% non-apneic, whereas in RD$_{TRAIN}$ approximately 62.2\% are non-apneic and 37.8\% are apneic, depending on the random subsampling.
For MIT-BIH, Exp1:Augm shows a significant or nearly significant improvement of the kappa statistic values relative to the Baseline for all front-end classifiers when we perform the one-tailed two-sample t-test, i.e., (MLP): p=0.012, (RF): p=0.062, (KNN): p=0.029, and (SVM): p$\simeq$0. The overall performance is very low, due to the very low data quality of this dataset. Since our preprocessing is minimal, this is to be expected. Notice that the SVM does not learn at all in the Baseline case: in all the iterations we performed, it classifies all minutes as non-apneic. Interestingly, both for Exp1:Synth and Exp1:Augm, there is a big improvement for the SVM, since the algorithm successfully learns to a certain extent in these cases. We assume that this is due to the better class balance (more apneas present in the datasets of Exp1:Synth and Exp1:Augm). Generally, for MIT-BIH the augmentation seems to have a beneficial effect on performance. \begin{table} \vskip -0.3cm \caption{Accuracy, specificity, and sensitivity for the MLP classifier} \centering \resizebox{8cm}{!}{% \begin{tabular}{ | p{2.2cm} |p{1.9cm}|p{1.9cm}|p{1.9cm}|} \hline \multicolumn{4}{|c|}{MLP Classifier Apnea-ECG (A), and MIT-BIH (M)} \\ \hline &Acc & Spec &Sens \\ \hline A: Baseline&93.19$\pm$0.17&94.78$\pm$0.19&90.83$\pm$0.39\\ A: Exp1:Synth&89.26$\pm$0.49&85.48$\pm$1.14& 95.02$\pm$0.94\\ A: Exp1:Augm &93.66$\pm$0.20&94.62$\pm$0.24& 92.28$\pm$0.46\\ \hline M: Baseline&64.6$\pm$0.37&75.95$\pm$1.16&48.41$\pm$1.26\\ M: Exp1:Synth&59.76$\pm$0.5&61.6$\pm$2.58&57.17$\pm$3.16 \\ M: Exp1:Augm &64.7$\pm$0.25&69.92$\pm$0.78& 57.08$\pm$1.22\\ \hline \end{tabular} } \label{table2} \vskip -0.3cm \end{table} From Table \ref{table2} we notice that for Exp1:Augm, the MLP (both for MIT-BIH and Apnea-ECG) exhibits a clear improvement in sensitivity and a small drop in specificity. This pattern is present for all front-end classifiers.
For Exp1:Augm there is always a clear improvement in sensitivity, and either a small increase or decrease in specificity. This is an important advantage in a healthcare context, since sensitivity reflects the ability of a classifier to recognize pathological events. This observation serves as a motivation for Experiment 2. \par \textbf{Implications for OSA Detection:} The goal of this experiment is to reflect a real application scenario in which we have relatively equal amounts of data from different patients to train with, and we perform classification for these patients. An example could be mobile OSA detection for patients after monitoring. It serves as an indication that augmentation with synthetic data can yield performance improvements for classifiers that are trained with the goal of OSA detection. \subsection{Experiment 2: Rebalancing Skewed Datasets} To analyze how well the single GAN architecture can be used to rebalance a skewed dataset, Apnea-ECG needs to be modified, because it contains an equal number of apneic and non-apneic recordings (Section \ref{eval_Data}), and the apneic recordings contain many apneic minutes. Thus, the data are only lightly skewed towards non-apneic events in Apnea-ECG, with a ratio of 62.2\% non-apneic to 37.8\% apneic. \par \textbf{Experiment Description:} We separate RD into RD$_{TRAIN}$ and RD$_{TEST}$ on a per-recording basis instead of a per-event basis as in the previous experiment. We randomly choose one apneic and one non-apneic recording as RD$_{TEST}$ (i.e., a01 and b01, respectively), and as RD$_{TRAIN}$ we use the remaining six recordings. We choose to evaluate this scenario using Apnea-ECG since it is the dataset for which our front-end classifiers exhibit the best performance.
\begin{figure} \vskip -0.5cm \centering \includegraphics[width=6cm,height=1cm]{Fig_Reb2.png} \caption{Training and Test sets for Experiment 2} \label{fig:Reb1} \vskip -0.5cm \end{figure} \par To create an unbalanced dataset, one apneic recording (i.e., a04, chosen randomly) is removed from the training dataset RD$_{TRAIN}$ (Figure \ref{fig:Reb1}). Thus, the ratio becomes 72.2\% non-apneic to 27.8\% apneic when removing a04. The augmentation in this experiment rebalances the classes to 50\% apneic and 50\% non-apneic. This means that we only generate apneic data with the GAN (i.e., SD contains only apneic minutes) and combine them with the original dataset to form AD. \begin{table} \vskip -0.4cm \caption{Kappa statistic and standard error for all front-end classifiers.} \centering \resizebox{10cm}{!}{% \begin{tabular}{| p{2.1cm} |p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline \multicolumn{5}{|c|}{Exp2: Kappa statistic (X$\cdot 10^{-2}$) a01b01-unbalanced} \\ \hline &MLP & RF &KNN & SVM \\ \hline Baseline&88.44$\pm$0.54 &91.92$\pm$0.26 &93.16$\pm$0.16 &74.6$\pm$0.2\\ Exp2:Augm&93.40$\pm$0.63 &94.56$\pm$0.16 &94.76$\pm$0.45 &92.88$\pm$0.64 \\ \hline \end{tabular} } \label{table:ResUnbalanced} \vskip -0.4cm \end{table} \par Note that a04 is removed from the training set both for the baseline/augmented training of the front-end classifiers and also for the training of the GAN, i.e., the apneic minute generation relies only on the other two apneic recordings. A validation set is extracted from a01b01. Throughout the training of the GAN, the validation set is periodically evaluated by the front-end classifiers, which are trained each time with AD. We choose the model that generates the SD with which the front-end classifiers perform best on the validation set. For this experiment we perform 5 iterations.
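The amount of synthetic apneic data needed for the rebalancing step follows directly from the class counts; a small sketch (names are ours):

```python
def apneic_windows_needed(n_apneic, n_non_apneic):
    """Number of synthetic apneic windows to generate so that the
    augmented dataset AD reaches a 50/50 class ratio."""
    return max(0, n_non_apneic - n_apneic)
```

With the 27.8\%/72.2\% split of this experiment (say 278 apneic and 722 non-apneic windows per 1000), 444 synthetic apneic windows restore the balance.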
\begin{table} \vskip -0.4cm \caption{Accuracy, specificity, and sensitivity for MLP} \centering \resizebox{8cm}{!}{% \begin{tabular}{| p{2cm} |p{1.9cm}|p{1.9cm}|p{1.9cm}|} \hline \multicolumn{4}{|c|}{Exp2: MLP a01b01-unbalanced Acc,Spec,Sens} \\ \hline &Acc & Spec &Sens \\ \hline Baseline&94.22$\pm$0.27&99.44$\pm$0.09&89.12$\pm$0.44 \\ Exp2:Augm&96.70$\pm$0.31 &98.82$\pm$0.24&94.62$\pm$0.51 \\ \hline \end{tabular} } \label{table:ResUnbalanced2} \vskip -0.4cm \end{table} \par \textbf{Results:} The results are shown in Tables \ref{table:ResUnbalanced} and \ref{table:ResUnbalanced2}. For Exp2:Augm we train the front-end classifiers with AD (i.e., apneic SD and RD$_{TRAIN}$ without a04), and for the Baseline we train with RD$_{TRAIN}$ without a04. In both cases we evaluate on RD$_{TEST}$. \par Compared to the Baseline, a clear performance improvement occurs for Exp2:Augm. This can be noticed both in terms of accuracy for the MLP (Table \ref{table:ResUnbalanced2}, first column) and in terms of kappa for all front-end classifiers (all columns of Table \ref{table:ResUnbalanced}). The SVM seems to benefit the most from the rebalancing process. Again, in terms of specificity and sensitivity, we notice a similar behaviour to the previous experiment, with an increase in sensitivity and relatively stable specificity. \par \textbf{Implications for OSA Detection:} As mentioned, OSA data are generally very unbalanced towards non-apneic events. This experiment implies that GAN augmentation with synthetic data can be used to efficiently rebalance OSA data. This has a positive effect on the detection of apneic events and on the overall classification performance for OSA detection, based on the classifiers we experimented with. \subsection{Experiment 3: Personalization with Multiple GANs} \par In this experiment, the goal is to investigate whether we can improve performance by indirect personalization during GAN training.
By \textit{personalization} we mean that we aim to make the learned distribution of the GAN we use to generate SD approach the specific distribution of RD$_{TEST}$ under a given proximity metric (MMD). Since we do not use a01b01 for the training of the GAN, the method we apply is indirect. We use two recordings from Apnea-ECG as RD$_{TEST}$ (i.e., a01b01). \par \textbf{Experiment Description:} Based on the discussion in Section \ref{meth_exp3}, we separate our training recordings into three subsets (Figure \ref{fig:Fig_Pers1}). Then we create three GANs (GAN1, GAN2, and GAN3) and use each subset to train the respective GAN, with a non-zero probability of choosing another subset for the gradient update based on a weighted dice toss (see Section \ref{meth_exp3}). We set $p=0.4$ (see Figure \ref{fig:Exp3_1}), i.e., for one gradient update of GAN1, the mini-batch is selected with probability 0.4 from Subset 1, and with probability 0.3 from each of Subsets 2 and 3. We do the same for GANs 2 and 3. The choice of $p$ is made via experimental evaluation. \begin{figure} \vskip -0.3cm \centering \includegraphics[width=6cm,height=1.5cm]{Fig_Pers.png} \caption{Training and Test sets for Experiment 3} \label{fig:Fig_Pers1} \vskip -0.3cm \end{figure} \par Proposition 1 implies that through this training, a GAN converges to a mixture of distributions with weights for each subset distribution $j$ equal to $P(J=j)$ (see Eq. \ref{gan_value_mixture}). By controlling $P(J=j)$ we control the weights of the mixture, and thus the degree to which each subset of recordings is represented in SD. \par We use the validation set from a01b01 (obtained as in Experiment 2) for two purposes: (1) to evaluate the SD from the three GANs (SD1, SD2, and SD3), and (2) to calculate the MMD between SD1--3 and this validation set. Then we examine two cases: In Exp3:Augm, SD1, SD2, and SD3 are combined with RD$_{TRAIN}$ to form AD. SD1, SD2, and SD3 combined have the same size as RD$_{TRAIN}$.
In Exp3:AugmP, we identify the SD that has the lowest MMD in relation to the validation set, and use the corresponding GANi to generate more data until SDi has the size of RD$_{TRAIN}$. AD is formed by combining RD$_{TRAIN}$ and SDi. In Exp3:AugmP we perform indirect personalization, since the selected SDi originates from the GAN that best matches the distribution of the subset a01b01, i.e., RD$_{TEST}$, based on the MMD metric. This occurs since the validation set is also extracted from a01b01. This experiment is also repeated 5 times. \begin{table} \vskip -0.5cm \caption{ Kappa statistic for front-end classifiers } \centering \resizebox{10cm}{!}{% \begin{tabular}{| p{2.1cm} |p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline \multicolumn{5}{|c|}{Exp3: Kappa statistic (X$\cdot 10^{-2}$), a01b01 as RD$_{TEST}$ } \\ \hline &MLP & RF &KNN & SVM \\ \hline Baseline&92.36$\pm$0.37 &92.88$\pm$0.38&93.12$\pm$0.21&88.20$\pm$0.37\\ Exp3:Augm&93.08$\pm$0.59 &93.6$\pm$0.62 &94.50$\pm$0.39 & 91.72$\pm$0.94\\ Exp3:AugmP&93.36$\pm$0.40&94.36$\pm$0.31 &94.58$\pm$0.17&93.92$\pm$0.23 \\ \hline \end{tabular} } \label{table:ResPersonalized} \vskip -0.5cm \end{table} \textbf{Results: } The results are found in Tables \ref{table:ResPersonalized} and \ref{table:ResPersonalized2}. We see that the general behavior is similar to the previous experiments. Again there are improvements for the augmented cases in relation to the Baseline. There are improvements in sensitivity and a small drop in specificity for the MLP cases, which is also the case for the other classifiers (with the exception of RF). \par Generally, Exp3:AugmP exhibits slightly better performance both in terms of kappa and accuracy. SVM and RF seem to gain the most benefits from this approach. Interestingly, in Exp3:AugmP SVM surpasses MLP in terms of kappa.
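The per-update subset choice and the MMD-based selection of SDi described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data: the subset sizes, the kernel bandwidth and the biased RBF-MMD estimator are assumptions of this sketch, not the exact configuration used in the experiments.

```python
import numpy as np

def pick_subset(rng, own_idx, p=0.4, n_subsets=3):
    # Weighted dice toss: probability p for the GAN's own subset and
    # (1 - p) / (n_subsets - 1) for each of the other subsets.
    probs = np.full(n_subsets, (1.0 - p) / (n_subsets - 1))
    probs[own_idx] = p
    return int(rng.choice(n_subsets, p=probs))

def rbf_mmd2(x, y, sigma=1.0):
    # Biased squared-MMD estimate with a Gaussian kernel.
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Exp3:AugmP-style selection: keep the synthetic set with the lowest
# MMD to the validation data (here the first set is drawn from the
# same distribution as the validation set, so it should win).
rng = np.random.default_rng(0)
val = rng.normal(0.0, 1.0, size=(200, 4))
sds = [rng.normal(mu, 1.0, size=(200, 4)) for mu in (0.0, 2.0, 4.0)]
best = int(np.argmin([rbf_mmd2(sd, val) for sd in sds]))
```

With $p=0.4$ and three subsets, roughly 40\% of the gradient updates of GAN1 use Subset 1 and 30\% use each of the other two, matching the mixture weights $P(J=j)$ of Proposition 1.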
\begin{table} \vskip -0.4cm \caption{ Accuracy, specificity and sensitivity for MLP } \centering \resizebox{8cm}{!}{% \begin{tabular}{| p{2cm} |p{1.9cm}|p{1.9cm}|p{1.9cm}|} \hline \multicolumn{4}{|c|}{Exp3: MLP a01b01 Acc,Spec,Sens} \\ \hline &Acc & Spec &Sens \\ \hline Baseline&96.18$\pm$0.18&98.92$\pm$0.07& 93.54$\pm$0.25 \\ Exp3:Augm&96.54$\pm$0.29&98.4$\pm$0.19& 94.74$\pm$0.51 \\ Exp3:AugmP&96.68$\pm$0.20&98.64$\pm$0.18& 95.2$\pm$0.25 \\ \hline \end{tabular} } \label{table:ResPersonalized2} \vskip -0.5cm \end{table} \par Also, to further investigate the viability of the Exp3:AugmP method, we examine in the Appendix different recording combinations as RD$_{TEST}$ (i.e., a02c01, a04b01 and a03b01) and perform Baseline and Exp3:AugmP evaluations for the front-end classifiers. Intriguingly, in all cases and for all front-end classifiers we notice improvements in the kappa statistic, which vary from (RF, a02c01): 0.28$\cdot 10^{-2}$ to (MLP, a03b01): 27.12$\cdot 10^{-2}$, especially for low-performing cases; e.g., for the (MLP, a03b01) case the Baseline kappa is 57.4$\cdot 10^{-2}$ and the Exp3:AugmP kappa is 84.5$\cdot 10^{-2}$. \par \textbf{Implications for OSA Detection:} This experiment implies that personalization can indeed have a positive impact on classification performance for the detection of OSA. Even the simple indirect approach of Exp3:AugmP exhibits performance advantages for all front-end classifiers in relation to Exp3:Augm, where it is not applied. \section{Conclusion} In this work we examined how dataset augmentation via the GAN framework can improve classification performance in three scenarios for OSA detection. We notice that in all cases the augmentation clearly helps the classifiers to generalize better. Even for simpler classifiers like KNN, we see that augmentation has a beneficial effect on performance.
The largest performance improvement is achieved for the SVM in Experiment 2, and in all cases the metric that increases the most is sensitivity. This leads us to believe that the class balancing that a GAN can provide with synthetic data can be useful in situations in which one class is much less represented than the others. This is even more pronounced in cases like OSA detection, where the vast majority of the data belongs to one of two classes. \par As a next step we plan to investigate the viability of creating synthetic datasets that are differentially private. As health data are in many cases withheld from public access, we want to examine how using synthetic datasets with privacy guarantees impacts the performance of the front-end classifiers. \medskip \bibliographystyle{splncs04}
\section{Introduction} Consider the {\em divisible Grassmannians} $Gr(n,kn)$ of $n$-dimensional subspaces of $\mathbb{R}^{kn}$ as a homogeneous space of $GL(kn)$. The aim of this paper is to study the geometry of curves $\ell(t) \in Gr(n,kn)$; such a curve represents the projectivized geometry of solutions of a system of $n$ linear ordinary differential equations of order $k$. We will construct and study complete invariants that solve the congruence problem; but the main thrust of this paper is a thorough investigation of the equivariant geometry of the spaces of jets of curves in the divisible Grassmannian, by modelling them as adjoint orbits in the Lie algebra $\mathfrak{gl}(kn)$. Both the invariants and their geometric interpretation are a consequence of the adjoint model. This work extends several Klein geometries: \begin{itemize} \item The classical projective invariants of ordinary differential equations studied by Wilczynski (\cite{wil}). An important distinction between our invariants and the Wilczynski invariants is that he considers a single differential equation whereas we consider systems; this is reflected in the {\em non-commutativity} of the invariants. \item Our moving frames generalize the ``commutative'' case $n=1$, that is, the linear geometry of curves in the real projective space, studied by Cartan (\cite{cartan}). \item The main inspiration is the paper \cite{duran}, which studies the case $k=2$ of systems of second order linear differential equations. In the general case treated here, in addition to extra combinatorial complexity, some new phenomena appear; for example, the natural matrix of invariants of Proposition \ref{Jacobi-in-base} does {\em not} coincide with the pullback of the Maurer-Cartan form. \item The common denominator of all these cases is the work of Flanders (\cite{flanders}) on curves in $\mathbb{R}P^1$.
\end{itemize} The case $k=2$ (the ``half-Grassmannian'') was studied in \cite{duran}; we briefly describe that paper here. The main insight of \cite{duran} is that the linear invariants of curves in the Grassmannian and their geometry are completely described by the {\em fundamental endomorphism} $F$ and its derivatives; $F$ is an equivariant map from $1$-jets of curves in the Grassmannian into the Lie algebra $\mathfrak{gl}(2n)$ endowed with the Adjoint action. The first derivative of the fundamental endomorphism is a reflection whose $-1$-eigenspace is the curve $\ell(t)$ itself, and the $1$-eigenspace furnishes an equivariant complement $h(t)$ of $\ell(t)$ which is called the {\em horizontal curve}. The main geometric invariant of a fanning curve $\ell(t)$, described in \cite{duran}, is its Jacobi endomorphism, which describes how the horizontal curve moves with respect to $\ell(t)$, and it gives the natural notion of curvature for fanning curves in the Grassmannian $Gr(n,\mathbb{R}^{2n})$. There is a close relationship between the matrix generalization of the Schwarzian derivative (based on the work of Zelikin \cite{Zelikin}) and the Jacobi endomorphism, also studied in \cite{duran}. Following the same spirit as the half-Grassmannian case, our study will proceed along the following main lines: \medskip \noindent{\bf Fanning frames and fanning curves:} We study curves $\ell(t) \in Gr(n,\mathbb{R}^{kn})$ via frames $A(t)$ spanning $\ell(t)$: \begin{Def} A frame $A(t)$, organized as a curve of $kn \times n$ matrices, is fanning if the $kn \times kn$ matrix $\mathbf{A}(t) := ( A(t)|\dot{A}(t)|...|A^{(k-1)}(t))$ formed by juxtaposing $A(t)$ and its derivatives is invertible for all $t$. This condition depends only on the space $\ell(t)$ spanned by the columns of $A$; thus we say that a curve $\ell(t)$ is fanning if some (equivalently, any) curve of frames $A(t)$ spanning $\ell(t)$ is fanning.
\end{Def} Fanning curves form an open and dense subset of all differentiable curves, so the fanning condition is a natural non-degeneracy assumption. For the rest of this work we will always work in the set of fanning curves. Observe that the matrix $\mathbf{A}(t)$ gives a (highly non-canonical) $GL(kn)$-equivariant lift of the curve $\ell(t)$ into $GL(kn)$, the latter endowed with the canonical left action on itself. Another construction defined for (fanning) frames that only depends on the curve is the canonical flag \[ Span\{A(t)\} \subset Span\{A(t),\dot{A}(t)\}\subset \dots \subset Span\{A(t),\dot{A}(t),...,A(t)^{(k-2)}\} \subset \mathbb{R}^{kn} \, . \] We call the last non-trivial space $Span\{A(t),\dot{A}(t),...,A(t)^{(k-2)}\}$ of this sequence the {\em vertical space} $v(t)$. The next items correspond to the sections of this paper: \medskip \noindent{\bf Normal forms:} We will construct a {\em normal form} for frames spanning the given curve $\ell(t)$. This normal form gives a canonical way of extending an initial frame $A(0)$ of $\ell(0)$ to a frame $A(t)$ spanning $\ell(t)$; linear relations between the derivatives of a normal frame furnish a complete system of invariants, which generalizes to systems the Wilczynski invariants for single differential equations (Theorem \ref{mainCongruence}). However, for systems the invariants are matrices, instead of numbers, and the non-commutativity implies that there is an ``up to conjugation by a constant matrix'' clause in the congruence theorem. Therefore, the actual invariants are the linear transformations expressed as matrices in a given basis. If actual matrix invariants are wanted, it is necessary to further normalize the curve. \medskip \noindent{\bf The fundamental endomorphism and its derivatives:} We generalize the fundamental endomorphism of \cite{duran}, obtaining an Adjoint-equivariant map $F(t)$ into $\mathfrak{gl}(kn)$.
The ``derivative'' \[ D(t)= \frac{1}{k} (2 \dot{F}(t) - (k-2)I) \] is the {\em fundamental reflection}, whose $-1$-eigenspace is the vertical space $v(t)$; its $+1$-eigenspace, the {\em horizontal curve} $h(t)$, will be a fundamental piece of the study of the invariants, together with the related {\em horizontal derivative} $H(t)$, which spans $h(t)$. The horizontal derivative has the form \[ H(t)= A^{(k-1)}(t) + \text{extra terms depending on lower order derivatives of } A . \] Thus the horizontal derivative has the same spirit as a $(k-1)$-th order ``covariant'' derivative. An important remark is that, for $k=2$, the horizontal derivative in a normal frame is just the ordinary derivative of the frame, whereas this fails for $k>2$. This influences, for example, the Cartan lift of the $(k-1)$-jet of a curve to $GL(kn)$: there are two choices, one of them using $( A(t),\dot{A}(t),...,A^{(k-1)}(t))$ for a normal frame, and the other, still with a normal frame but using the horizontal derivative, $( A(t),\dot{A}(t),...,A^{(k-2)}(t), H(t))$ (these two lifts coincide for $k=2$). Taking one more derivative, we arrive at the matrix of invariants, the {\em Jacobi endomorphism}, whose entries are in direct relationship with the normal form invariants and which has the geometric interpretation of measuring the velocity of canonically defined curves of flags associated to the curve in the Grassmannian. \medskip \noindent{\bf Geometry of jets of fanning curves in the Grassmannian:} Here we see how the invariants arise naturally by representing the prolonged action of $GL(kn)$ on jets of curves in the Grassmannian as the Adjoint action on $\mathfrak{gl}(kn)$. We shall see that once one is committed to an Adjoint representation, there is essentially no choice of invariants. Also, we use this Adjoint representation to give a better understanding of the spaces of jets of curves in the divisible Grassmannian as a $GL(kn)$-space.
In particular, by restricting to ``standard'' curves, we give numerical invariants that serve as coordinates for the space of orbits on the $(k+1)$-jets (the first level on which the $GL(kn)$-action fails to be transitive). \section{Normal Frames} \label{normal} The fanning condition for frames $A(t)$ spanning curves $\ell(t) \in Gr(n,\mathbb{R}^{kn})$ means that at each instant $t$ the columns of $A(t), \dot{A}(t), \ddot{A}(t), \cdots, A(t)^{(k-1)}$ span $\mathbb{R}^{kn}$, and therefore we can write $A^{(k)}$ as a linear combination of $A(t), \dot{A}(t), \ddot{A}(t), \cdots, A(t)^{(k-1)}$. This gives a system of differential equations of order $k$ satisfied by the columns of $A$; in this part we will adopt the notation of Wilczynski \cite{wil} for writing the coefficients: for example, in the case $k=3$ we write \begin{equation}\label{eq3} A^{(3)} + 3\ddot{A}P_1(t) + 3\dot{A}P_2(t) + AP_3(t) = 0, \end{equation} where $P_1(t)$, $P_2(t)$ and $P_3(t)$ are smooth curves of $n \times n$ matrices, and in the general case $Gr(n,\mathbb{R}^{kn})$, we have \begin{equation}\label{eqk} A^{(k)} + {k \choose 1}A^{(k-1)}P_1 + {k \choose 2}A^{(k-2)}P_2 +...+{k \choose k-1}\dot{A}P_{k-1}+ AP_{k} = 0, \end{equation} where the $P_i$ are smooth curves of $n \times n$ matrices depending on $t$. \medskip \noindent \textit{Remark.} This is the first instance of giving the case $k=3$ first and then the general case. It is much easier to visualize the combinatorics in this case; moreover, $k=3$ is the first case where the differences from $Gr(n,\mathbb{R}^{2n})$ (\cite{duran}) appear. \medskip \begin{Def} A frame $A(t)$ of a fanning curve in $Gr(n,\mathbb{R}^{kn})$ is said to be normal if the columns of its $k$-th derivative $A^{(k)}$ are linear combinations of the columns of the derivatives of order at most $k-2$, for all values of $t$.
\end{Def} This definition is motivated by the normal frames defined by Cartan \cite{cartan}, and coincides for $k=2$ with the normal frames defined in \cite{duran}. In general, when two frames $A(t), B(t) $ span a curve $\ell(t) \in Gr(n,\mathbb{R}^{kn})$, there is a curve $X(t) \in Gl(n)$ such that $A(t)=B(t)X(t)$. If $\ell(t)$ is a fanning curve of $n$-dimensional subspaces in $\mathbb{R}^{kn}$, there is a normal frame that spans it. In order to obtain a normal frame for $\ell(t)$ we use a method of reduction for differential equations of order $k$, described by Wilczynski \cite{wil}, which consists in a change of variables that results in an equation without the term of order $k-1$. If in equation \ref{eqk} we put $A(t)=B(t)X(t)$, where $X(t)$ satisfies $\dot{X}(t)=-X(t)P_1(t)$ with $X(0)=I$, we obtain the equation $$B^{(k)} + {k \choose 2}B^{(k-2)}Q_2(t) +...+{k \choose k-1}\dot{B}Q_{k-1}(t)+ BQ_{k}(t) = 0,$$ where: $$Q_j(t)= \sum_{i=0}^j {j \choose i} \left( \frac{d^{j-i}}{dt^{j-i}}X \right)P_iX^{-1}$$ with $j=2,...,k$ and $P_0=1.$ We find in particular: \begin{eqnarray*} Q_2 &= &X (P_2 - P^2_1 -\dot{P}_1)X^{-1},\\ Q_3& = & X(P_3 - 3 P_1P_2 -2\dot{P}_1P_1+2P_1\dot{P}_1 +2P^3_1 -\ddot{P}_1)X^{-1},\\ Q_4 & = & X ( P_4 -4P_1P_3 +6P^2_1P_2 -6\dot{P}_1P_2 +3\dot{P}_1P^2_1-3P^2_1\dot{P}_1+{} \nonumber\\ & & {} +6P_1\dot{P}_1P_1+3P_1\ddot{P}_1 -3\ddot{P}_1P_1 -3P^4_1 +3\dot{P}_1^2 -P_1^{(3)})X^{-1}. \end{eqnarray*} These matrices $Q_j$ are pointwise conjugate under the action of $Gl(n)$ on the space of frames, that is, $Q_j = X T_j X^{-1}$, where $T_j$ denotes the corresponding expression in the $P_i$ and their derivatives. (In the case $n=1$ these are the semi-invariants of \cite{wil}.) If $A(t)$ is a fanning frame that satisfies equation (\ref{eqk}), let us define the {\em Schwarzian} of $A(t)$ as the function $$\{A(t),t\} = 2(P_2(t) - P_1(t)^2 -\dot{P}_1(t)).$$ \medskip \noindent\textit{Remark.} The notation adopted here does not change the results in \cite{duran}, the case $k=2$.
When the fanning frame is of the form $A(t)= \left( \begin{array}{c} I \\ M(t) \end{array}\right),$ we still have $\{A(t),t\}= \frac{d}{dt}(\dot{M}^{-1}\ddot{M})-(1/2)(\dot{M}^{-1}\ddot{M})^2,$ and, for $k=2$, the normal form of equation \ref{eqk} is still $$\ddot{A}+(1/2)A\{A(t),t\}=0.$$ The only change is in the horizontal derivative, which becomes $ H(t) = \dot{A}(t) + A(t)P_1(t)$, without affecting the other results. In this way, this form generalizes the case $Gr(n,\mathbb{R}^{2n})$, and in this work the Schwarzian is the first invariant. We will use the notation $h_{j-2}[A(t),t]$ or simply $h_{j-2}$ to denote $X^{-1}(t)Q_j(t)X(t)$, with $j \in \{3,...,k\}$. For example, \[ h_1[A(t),t] = X^{-1}(t)Q_3(t)X(t) = P_3 - 3 P_1P_2 -2\dot{P}_1P_1+2P_1\dot{P}_1 +2P^3_1 -\ddot{P}_1 \, . \] We emphasize the Schwarzian by calling it $\kappa$ instead of $h_0$. The following properties of the Schwarzian and of the coefficients $h_j$ follow from the reduction of the equation: \begin{Prop}\label{actschw} Let $A(t)$ be a fanning frame. (1) If $X(t)$ is a smooth curve on $Gl(n)$, then \[\{A(t)X(t),t\} = X(t)^{-1}\{A(t),t\}X(t)\] and $h_{j}[A(t)X(t),t] = X(t)^{-1}h_{j}[A(t),t]X(t)$, for $ j \in \{1,...,k-2\}$. (2) If $T$ is a transformation in $Gl(kn)$, then $\{TA(t),t\} = \{A(t),t\}$ and $h_{j}[TA(t),t] = h_{j}[A(t),t]$, for $ j \in \{1,...,k-2\}$. \end{Prop} \begin{Prop}\label{cteX} Let $\ell(t)$ be a fanning curve of $n$-dimensional subspaces in $\mathbb{R}^{kn}$. If $A(t)$ and $B(t)$ are two normal frames spanning $\ell(t)$, there is a fixed invertible $n\times n$ matrix $X$ such that $B(t)= A(t)X$.
\end{Prop} \textit{Proof:} If $A(t)$ and $B(t)$ are two normal frames spanning $\ell(t)$ in $Gr(n,\mathbb{R}^{kn})$, then there is a curve of $n\times n$ invertible matrices $X$ such that $B(t)= A(t)X(t).$ Differentiating this equation $k$ times gives $$B^{(k)} = A^{(k)}X +{k \choose 1}A^{(k-1)}\dot{X} + ...+{k \choose k-1}\dot{A}X^{(k-1)}+ AX^{(k)}.$$ Since $A(t)$ is normal, $A^{(k)}$ depends only on the derivatives of $A(t)$ of order at most $k-2$; but $B(t)$ is normal too, so the only possible way in which the columns of $B^{(k)}$ can be linear combinations of the columns of the derivatives of $B(t)$ of order at most $k-2$ is that $\dot{X}$ is identically zero. \qed \medskip Proposition \ref{cteX} has two important consequences. First, the juxtaposed matrix $\mathbf{A}(t)$ of the introduction is almost canonically defined for normal frames: it depends only on the choice of a basis of $\ell(0)$. Also, as mentioned in the introduction, a fanning curve $\ell(t) \in Gr(n,kn)$ naturally produces a linear flag \[ Span\{A(t)\} \subset Span\{A(t),\dot{A}(t)\}\subset \dots \subset Span\{A(t),\dot{A}(t),...,A(t)^{(k-2)}\} \subset \mathbb{R}^{kn} \, , \] but now Proposition \ref{cteX} makes this flag a {\em decomposition} flag: \[ Span\{A(t)\}\oplus Span\{\dot{A}(t)\}\oplus \dots \oplus Span\{A(t)^{(k-1)}\} = \mathbb{R}^{kn} \, . \] In general, transforming a linear flag of nested subspaces into a decomposition is only possible in the presence of a Euclidean structure by taking complements, but here the normal frame construction on fanning curves gives this additional structure. We shall see, however, that neither the lift $\mathbf{A}(t)$ nor the decomposition is the most convenient one; the ``right'' constructions will be given in Sections \ref{horcur} and \ref{horder} by means of the horizontal curve and the horizontal derivative.
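As a quick symbolic sanity check on the reduction used to produce normal frames, the following sketch treats the scalar case $n=1$, $k=3$, where the conjugation by $X$ is invisible; SymPy is assumed to be available, and this is an illustration rather than part of the construction.

```python
import sympy as sp

t = sp.symbols('t')
P1, P2, P3 = (sp.Function(f'P{i}')(t) for i in (1, 2, 3))
B = sp.Function('B')(t)

# Change of variables A = B*x with x' = -x*P1; writing x = exp(-Int P1)
# makes SymPy apply this relation automatically under differentiation.
x = sp.exp(-sp.Integral(P1, t))
A = B * x

# Left-hand side of A''' + 3 P1 A'' + 3 P2 A' + P3 A = 0 in terms of B.
expr = sp.expand(A.diff(t, 3) + 3*P1*A.diff(t, 2) + 3*P2*A.diff(t) + P3*A)

# The B'' term must disappear, and the B' coefficient must equal
# 3*x*Q2 with the semi-invariant Q2 = P2 - P1**2 - P1'.
c2 = sp.simplify(expr.coeff(B.diff(t, 2)))
c1 = sp.simplify(expr.coeff(B.diff(t)) - 3*x*(P2 - P1**2 - P1.diff(t)))
```

Both `c2` and `c1` reduce to zero, confirming that the substitution removes the order-$(k-1)$ term and produces the stated $Q_2$.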
Let us now prove the main result of this section, which essentially solves the congruence problem: \begin{The} \label{mainCongruence} Two fanning curves of $n$-dimensional subspaces of $\mathbb{R}^{kn}$ are congruent if and only if there exists a constant $n\times n$ invertible matrix $X$ such that the Schwarzians and the matrices $h_j$, for $j=1,..,k-2$, of any two of their normal frames are conjugate by $X$. \end{The} \textit{Proof:} Let $A(t)$ and $B(t)$ be two normal frames spanning congruent fanning curves. Then there is a linear transformation $T$ of $\mathbb{R}^{kn}$ such that $TA(t)$ and $B(t)$ span the same curve. Since $TA(t)$ is a normal frame too, by Proposition \ref{cteX} there exists a constant $X$ such that $TA(t)=B(t)X$, and Proposition \ref{actschw} tells us that the Schwarzian and the $h_j$, for all $j$, of $A(t)$ and $B(t)$ are conjugate by a constant matrix in $Gl(n)$. On the other hand, let $A(t)$ and $B(t)$ be normal frames such that $\{B(t),t\} = X^{-1}\{A(t),t\}X$ and $h_{j}[B(t),t] = X^{-1}h_{j}[A(t),t]X$, for a constant invertible $X$ and all $ j \in \{1,...,k-2\}$; replacing $B(t)$ by $B(t)X^{-1}$, we may assume, without loss of generality, that $A(t)$ and $B(t)$ are two normal frames with the same Schwarzian and the same $h_j$, for all $j$. Set \[ T =( B(0)|\dot{B}(0)|...|B^{(k-1)}(0))( A(0)|\dot{A}(0)|...|A^{(k-1)}(0))^{-1}; \] since $A(t)$ is normal, $D(t)= TA(t)$ satisfies a differential equation of order $k$: $$ D^{(k)} + \frac{1}{2}{k \choose 2} D^{(k-2)}\{B(t),t\}+ {k \choose 3} D^{(k-3)}h_{1}[B(t),t]+ ...+ D(t) h_{k-2}[B(t),t]= 0.$$ Therefore $D(t)$ and $B(t)$ satisfy the same differential equation of order $k$ with the same initial conditions. It follows that $D(t)=B(t)$, and then $A(t)$ is congruent to $B(t) $.
\qed \section{The fundamental endomorphism and its derivatives} \label{fundamentaletc} \subsection{The fundamental endomorphism} \begin{Def} The fundamental endomorphism of a fanning frame $A(t)$ at a given time $t$ is the linear transformation $\mathbb{R}^{kn} \rightarrow \mathbb{R}^{kn}$ defined by the equations $F(t)A(t)= 0$, $F(t)\dot{A}(t)=A(t)$, $F(t)A^{(2)}(t)= 2 \dot{A}(t)$, ..., $F(t)A^{(k-1)}(t)= (k-1) A^{(k-2)}(t)$. \end{Def} \medskip \noindent{\bf Remark.} Equivalently, we could have defined the fundamental endomorphism as the transformation associated, in the canonical basis, to the matrix $F$ of Theorem \ref{endfund} below. \medskip The fundamental endomorphism does not depend on the fanning frame (this is item 1 of Proposition \ref{actF} below); therefore it is defined for fanning curves in the Grassmannian. Furthermore, if $\ell(t)$ is a fanning curve spanned by $A(t)$ and $F(t)$ is its fundamental endomorphism, then $$\ell(t)=Span\{A(t)\}= Im \{F(t)^{k-1}\} \subset Span\{A(t),\dot{A}(t)\}=Im\{F(t)^{k-2}\},$$ and we have that, for all $i \in \{1,...,k-2\}$, $$ Span\{A(t),...,A(t)^{(i)}\}= Im \{F(t)^{k-(i+1)}\} \subset Span\{A(t),...,A(t)^{(i+1)}\},$$ $$\text{and} \quad Span\{A(t),...,A(t)^{(i+1)}\}=Im\{F(t)^{k-(i+2)}\};$$ moreover $Span\{A(t),...,A(t)^{(i)}\}$ does not depend on the frame, for any $i$. In the case of normal frames, $Span\{A(t)^{(i)}\}$ does not depend on the frame either, for any $i$. \begin{Prop}\label{actF} Let $A(t)$ be a fanning frame. Its fundamental endomorphism $F(t)$ satisfies the following properties: \begin{enumerate} \item If $X(t)$ is a smooth curve on $Gl(n)$, the fundamental endomorphism of $A(t)X(t)$ is $F(t)$. \item If $T$ is a matrix in $Gl(kn)$, the fundamental endomorphism of $TA(t)$ is $TF(t)T^{-1}$. \end{enumerate} \end{Prop} \textit{Proof:} The proof is the same as in the half-Grassmannian case $Gr(n,\mathbb{R}^{2n})$ of \cite{duran}.
\qed \subsection{The fundamental reflection and the horizontal curve}\label{horcur} We now take derivatives of the fundamental endomorphism and study the resulting geometry. \begin{Prop}\label{d} Let $F(t)$ be the fundamental endomorphism of a fanning frame $A(t)$. At each value of $t$, $D(t)= \frac{1}{k} (2 \dot{F}(t) - (k-2)I)$ is a reflection whose $-1$-eigenspace is the vertical space $v(t)$. \end{Prop} \textit{Proof:} We first observe that differentiating the identities $F(t)A(t)=0, F(t)\dot{A}(t)=A(t),..., F(t)A(t)^{(k-2)}=(k-2)A(t)^{(k-3)}$, we obtain, respectively, that $$\dot{F}(t)A(t) = -A(t), \dot{F}(t)\dot{A}(t) = -\dot{A}(t),..., \dot{F}(t)A(t)^{(k-2)} = -A(t)^{(k-2)}.$$ Consequently $$D(t)A(t)=-A(t), D(t)\dot{A}(t)=-\dot{A}(t), ..., D(t)A(t)^{(k-2)}=-A(t)^{(k-2)}.$$ Since the range of $F(t)$ is $Span\{A(t), \dot{A}(t),..., A(t)^{(k-2)}\}$, it follows that $\dot{F}(t)F(t)=-F(t)$. Now we show that $D(t)^2=I$. Differentiating $F(t)A(t)^{(k-1)}=(k-1)A(t)^{(k-2)}$ and using that $\dot{F}(t)F(t)=-F(t)$, we have: $$\dot{F}(t)^2A(t)^{(k-1)} - F(t)A(t)^{(k)} = (k-1)\dot{F}(t)A(t)^{(k-1)},$$ so that $$(\dot{F}(t)^2 -(k-2)\dot{F}(t))A(t)^{(k-1)} = (F(t)A(t)^{(k-1)})', $$ and consequently $$(\dot{F}(t)^2 -(k-2)\dot{F}(t))A(t)^{(k-1)} = (k-1)A(t)^{(k-1)}.$$ Multiplying by 4, using that $ 4k-4 = k^2 - (k-2)^2$ and completing the square, we obtain $$\frac{1}{k^2} (2 \dot{F}(t) - (k-2)I)^2 A(t)^{(k-1)} = A(t)^{(k-1)}.$$ \qed \medskip It is useful to think in terms of the fundamental projection $P(t) := \frac{(I-D(t))}{2}$ associated to the fundamental reflection; $P(t)$ has the vertical space $v(t)$ as its image. Its kernel is distinguished: \medskip \begin{Def} Let $\ell(t)$ be a fanning curve and let $F(t)$ be its fundamental endomorphism. We define the {\em horizontal curve} $h(t)$ as the map that takes $t$ to the kernel of the fundamental projection $P(t)$.
\end{Def} The horizontal curve is clearly equivariant: if $T \in Gl(kn)$, then the horizontal curve of $T\ell(t)$ is $Th(t)$. Observe that since the fundamental endomorphism depends only on the curve on the Grassmannian, the same holds for all its time derivatives. In particular, the curve of reflections $D(t)$ and the curve of projections $P(t) := \frac{(I-D(t))}{2}$ depend only on the curve on the Grassmannian. We now study the second derivative $\ddot{F}$; for this, observe that $\dot{P}=-\frac{1}{k}\ddot{F}$. \begin{Prop}\label{pponto} Let $\ell(t)$ be a fanning curve and let $h(t)$ be its horizontal curve. If $P(t)$ is the projection onto $v(t)$ with kernel $h(t)$, then $\dot{P}(t)$ has the following properties: 1) $\dot{P}(t)$ maps $h(t)$ into $v(t)$. 2) $\dot{P}(t)$ maps $v(t)$ into $h(t)$. \end{Prop} \textit{Proof:} Differentiating the identity $P(t)^2= P(t)$, we have $$\dot{P}(t)P(t)= (I-P(t))\dot{P}(t),$$ where $I-P(t)$ is the projection onto $h(t)$ with kernel $$Span\{A(t),\dot{A}(t),...,A(t)^{(k-2)}\}.$$ Thus the equation $$0 = \dot{P}(t)P(t)(h(t))= (I- P(t))\dot{P}(t)(h(t))$$ implies that $\dot{P}(t)$ maps $h(t)$ into $Span\{A(t),\dot{A}(t),...,A(t)^{(k-2)}\}$, and this proves the first item. For the second item, observe that $$\dot{P}(t)\ell(t)= \dot{P}(t)P(t)\ell(t)= (I- P(t))\dot{P}(t)\ell(t),$$ which implies that the subspace $\dot{P}(t)\ell(t)$ is contained in $h(t)$. Similarly \\ $\dot{P}(t)(A^{(i)}(t))= (I- P(t))\dot{P}(t)(A^{(i)}(t))$ for all $i \in \{1,...,k-2\}$, so $\dot{P}(t)(A^{(i)}(t))$ is contained in $h(t)$ for all $i \in \{1,...,k-2\}$. It follows from the proof of Proposition \ref{d} that $ \begin{array}{l} 1)\ \dot{F}(t)A(t)=-A(t),\\ 2)\ \dot{F}(t)A(t)^{(i)}=-A(t)^{(i)} \text{ for all } i \in \{1,...,k-2\},\\ 3)\ \dot{F}(t)A(t)^{(k-1)}=(k-1)A(t)^{(k-1)}-F(t)A(t)^{(k)}.
\end{array} $ Differentiating the first equation, we obtain $ \ddot{F}(t)A(t)=0$, hence $\dot{P}(t)A(t)=0$; similarly we have $ \ddot{F}(t)A(t)^{(i)}=0$, hence $\dot{P}(t)A(t)^{(i)}=0$, for $i \in \{1,2,...,k-3\}$. Differentiating $\dot{F}(t)A(t)^{(k-2)}=-A(t)^{(k-2)}$, we have $\ddot{F}(t)A(t)^{(k-2)} + \dot{F}(t)A(t)^{(k-1)}= -A(t)^{(k-1)}$; using 3) we obtain $\ddot{F}(t)A(t)^{(k-2)}=-k A(t)^{(k-1)} + F(t)A(t)^{(k)}$, and consequently $\dot{P}(t)A(t)^{(k-2)}= A(t)^{(k-1)} - \frac{1}{k}F(t)A(t)^{(k)}$. Since the columns of $F(t)A(t)^{(k)}$ are linear combinations of those of $A(t)$, $\dot{A}(t)$, ..., $A(t)^{(k-2)}$, and the columns of $A(t)$, $\dot{A}(t)$, ..., $A(t)^{(k-2)}$, $A(t)^{(k-1)}$ are linearly independent, it follows that $\dot{P}(t)A(t)^{(k-2)}$ has rank $n$. Therefore $\dot{P}(t)A(t)^{(k-2)}$ spans $h(t)$. \qed \medskip \noindent{\em Remark.} Observe that the proof of Proposition \ref{pponto} gives somewhat more precise information on how $\dot{P}(t)$ acts on the associated flags: for any frame, we have the nested flag, and $\dot{P}(t)$ restricted to each subspace $Span\{A(t), \dot{A}(t), \dots, A(t)^{(r)}\}$ is zero for each $r<k-2$, while $\dot{P}(t)$ maps the quotient $v(t)/Span\{A(t), \dot{A}(t), \dots, A(t)^{(k-3)}\}$ isomorphically onto $h(t)$; if the frame $A(t)$ is normal, then $\dot{P}(t)$ restricted to each subspace $Span\{A(t)\}, Span\{\dot{A}(t)\}, \dots, Span\{A(t)^{(r)}\}$ is zero for each $r<k-2$, and $\dot{P}(t)$ maps $Span\{A(t)^{(k-2)}\}$ isomorphically onto $h(t)$. \subsection{The horizontal derivative } \label{horder} \begin{Def} The horizontal derivative of a fanning frame $A(t)$ is the curve of frames defined by $$ H(t):= (I- P(t))A(t)^{(k-1)} = \dot{P}(t)A(t)^{(k-2)} = A(t)^{(k-1)} -\frac{1}{k}F(t)A(t)^{(k)}=$$ $$= -\frac{1}{k}\ddot{F}(t)A(t)^{(k-2)};$$ observe that $H(t)$ is the projection of $A(t)^{(k-1)}$ onto $h(t)$.
\end{Def} If $A^{(k)} + {k \choose 1}A^{(k-1)}P_1 + {k \choose 2}A^{(k-2)}P_2 +...+{k \choose k-1}\dot{A}P_{k-1}+ AP_{k} = 0$, then $$ H = A(t)^{(k-1)} + \frac{1}{k}F(t)\left( {k \choose 1}A^{(k-1)}P_1 +...+{k \choose k-1}\dot{A}P_{k-1}+ AP_{k}\right) ,$$ and using that $F(t)A(t)^{(k-i)}= (k-i)A(t)^{(k-i-1)}$ and ${k\choose i}\frac{k-i}{k}={k-1 \choose i},$ we have \begin{equation}\label{derhor} H(t) = A^{(k-1)} + {k-1 \choose 1}A^{(k-2)}P_1 +...+{k-1 \choose k-2}\dot{A}P_{k-2}+ AP_{k-1}. \end{equation} \begin{Prop} The horizontal derivative $H(t)$ of a fanning frame $A(t)$ satisfies the following properties: \begin{enumerate} \item If $X(t)$ is a smooth curve of invertible $n \times n$ matrices, the horizontal derivative of $A(t)X(t)$ is $H(t)X(t)$. \item If $T$ is an invertible linear transformation from $\mathbb{R}^{kn}$ to itself, the horizontal derivative of $TA(t)$ is $TH(t)$. \end{enumerate} \end{Prop} \textit{Proof:} The first property follows from $H(t)=\dot{P}(t)A(t)^{(k-2)}$ and Proposition \ref{pponto}: by the Leibniz rule, and since $\dot{P}(t)$ annihilates $A(t)^{(i)}$ for $i \leq k-3$, we have $$\dot{P}(t)\frac{d^{k-2}}{dt^{k-2}}(A(t)X(t))= \dot{P}(t)A(t)^{(k-2)}X(t)= H(t)X(t).$$ The second one is obtained directly from equation \ref{derhor}. \qed We saw that the curve $\dot{F}(t)$ is a curve of linear transformations with two eigenvalues, $-1$ and $k-1$. The $-1$-eigenspace is the vertical space $v(t)$ and the $(k-1)$-eigenspace is $h(t)$, spanned by the horizontal derivative $H(t)$. Therefore $(A(t)|...|A(t)^{(k-2)}|H(t))$ is a natural lift of the curve $\ell(t)$ to $GL(kn)$, depending on the $k$-jet of the curve. It is worth remarking that once one has a normal form, another possible ``natural'' lift is given by just using plain derivatives $(A(t)|...|A(t)^{(k-2)}|A(t)^{(k-1)})$; however one loses track of the geometry of the canonical reflection this way.
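The defining relations pin the fundamental endomorphism down explicitly: $F(t)$ is the matrix sending the columns of $\mathbf{A}(t)=(A|\dot{A}|\dots|A^{(k-1)})$ to $(0|A|2\dot{A}|\dots|(k-1)A^{(k-2)})$. The following is a small symbolic sanity check for $k=3$, $n=1$ on one concrete fanning curve; the toy curve and the use of SymPy are assumptions of this illustration, not part of the paper.

```python
import sympy as sp

t = sp.symbols('t')
# A toy fanning frame for k = 3, n = 1 (an assumed example):
# det(A | A' | A'') = exp(t), nonzero for all t.
A = sp.Matrix([1, t, sp.exp(t)])
A1, A2, A3 = A.diff(t), A.diff(t, 2), A.diff(t, 3)

bold_A = A.row_join(A1).row_join(A2)
# F is determined by F A = 0, F A' = A, F A'' = 2 A'.
F = sp.Matrix.hstack(sp.zeros(3, 1), A, 2 * A1) * bold_A.inv()

D = (2 * F.diff(t) - sp.eye(3)) / 3      # fundamental reflection for k = 3
H = A2 - F * A3 / 3                      # horizontal derivative

checks = [
    sp.simplify(F ** 3),                 # F is nilpotent of order k
    sp.simplify(D * D - sp.eye(3)),      # D is a reflection
    sp.simplify(F.diff(t) * H - 2 * H),  # H lies in the (k-1)-eigenspace of F'
]
```

All three matrices in `checks` vanish, illustrating on this example the reflection property of $D$ and the eigenvalue description of $\dot{F}(t)$.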
In fact, in the real projective plane case, Cartan (\cite{cartan}) uses the first lift, that is, the one with the horizontal derivative as the last column. \subsection{The Jacobi Endomorphism} \label{Jacobi} Taking the derivative of the fundamental reflection, we reach the desired invariant: \begin{Def} Let $\ell(t)$ be a fanning curve, $F(t)$ be its fundamental endomorphism and $h(t)$ be the horizontal curve associated to $\ell(t)$. The Jacobi endomorphism of $\ell(t)$ is defined as $K(t):= \ddot{F}(t)^2/k^2$. If $P(t) = \frac{(I-D(t))}{2}$, then $K(t)=\dot{P}(t)^2$. \end{Def} If $A(t)$ is a fanning frame spanning $\ell(t)$ and $H(t)$ is its horizontal derivative, we can observe that \begin{equation}\label{jac} P(t)\dot{H}(t)=-\dot{P}(t)H(t)=-\dot{P}(t)^2A^{(k-2)}(t)=-K(t)A^{(k-2)}(t). \end{equation} \begin{The} Let $\ell(t)$ be a fanning curve in $Gr(n,\mathbb{R}^{kn})$ and let $h(t)$ be its horizontal curve. The Jacobi endomorphism satisfies the following properties: \begin{enumerate} \item At each value of $t$, the endomorphism $K(t)$ preserves the decomposition $ \mathbb{R}^{kn} = v(t) \oplus h(t)$. \item If $T$ is a transformation in $GL(kn)$, then the Jacobi endomorphism of $T\ell(t)$ is $TK(t)T^{-1}$. \end{enumerate} \end{The} \textit{Proof:} The first item follows from Proposition \ref{pponto} and from the expression $K(t)=\dot{P}(t)^2$. The second item follows from the action of $GL(kn)$ on the fundamental endomorphism. \qed \vspace{0.2cm} Observe that, when we consider $A(t)$ a normal frame as in Section \ref{normal}, this frame satisfies: \begin{equation}\label{normalk} A^{(k)} + {k \choose 2}A^{(k-2)}\kappa (t)+ {k \choose 3}A^{(k-3)}h_{1}(t) +...+ Ah_{k-2}(t) = 0.
\end{equation} In this case the horizontal derivative, which projects $A(t)^{(k-1)}$ onto the horizontal curve, takes the form \begin{equation}\label{normalH} H(t)= A^{(k-1)} + {k-1 \choose 2}A^{(k-3)}\kappa + {k-1 \choose 3}A^{(k-4)}h_{1} +...+ {k-1 \choose k-1}Ah_{k-3}, \end{equation} where $\kappa (t) = \frac{1}{2}\{A(t),t\}.$ \medskip \begin{The}\label{jacobik} Let $A(t)$ be a normal frame and $H(t)$ its horizontal derivative. The matrix of the Jacobi endomorphism $K(t)$ associated to $A(t)$ in the basis of $\mathbb{R}^{kn}$ formed by the columns of $(A(t)|...|A(t)^{(k-2)}|H(t))$ is $$\left(\begin{array}{cccccc} 0& 0& ...& 0& h_{k-2}-h_{k-3}' & 0 \\ 0&0& ...& 0& {k-1 \choose k-2}(h_{k-3}-h_{k-4}')&0 \\ \vdots& \vdots & &\vdots & \vdots &\vdots\\ 0 & 0 &...&0& {k-1 \choose 2} (h_1 -\kappa')& 0 \\ 0&0&...&0& (k-1)\kappa &0 \\ 0&0&...&0& 0& (k-1)\kappa \end{array} \right),$$ where $\kappa(t)=\frac{1}{2}\{A(t),t\}$. \end{The} \textit{Proof:} First we have that $K(t)A(t)=0, K(t)\dot{A}(t)=0, ..., K(t)A(t)^{(k-3)}=0$, from Proposition \ref{pponto}. The proof then follows from equations \ref{jac} and \ref{normalH}. Differentiating $H(t)$ in \ref{normalH}, replacing $$A^{(k)}=-{k \choose 2}A^{(k-2)}\kappa (t)- {k \choose 3}A^{(k-3)}h_{1}(t) -...- Ah_{k-2}(t)$$ and using the property that $${i-1 \choose j}-{i\choose j} = - {i-1 \choose j-1},$$ we obtain that \begin{eqnarray} \dot{H}(t)&=&-{k-1 \choose 1}A^{(k-2)}\kappa - {k-1 \choose 2}A^{(k-3)}(h_1-\kappa')- ... - {}\nonumber \\ & & {}-{k-1 \choose k-2}\dot{A}(h_{k-3}-h'_{k-4})-{k-1 \choose k-1}A (h_{k-2}-h'_{k-3}). \nonumber \end{eqnarray} From equation \ref{jac}, we have that $K(t)A(t)^{(k-2)}=-P(t)\dot{H}(t)$, so \begin{eqnarray} K(t)A(t)^{(k-2)}&=&{k-1 \choose 1}A^{(k-2)}\kappa + {k-1 \choose 2}A^{(k-3)}(h_1-\kappa')+ ... + {} \nonumber \\ & & {}+{k-1 \choose k-2}\dot{A}(h_{k-3}-h'_{k-4}) +{k-1 \choose k-1}A (h_{k-2}-h'_{k-3}).
\nonumber \end{eqnarray} Then we just observe that $$K(t)H(t)= \dot{P}(t)(\dot{P}(t)H(t))= \dot{P}(t)(-P(t)\dot{H}(t)), $$ therefore $K(t)H(t)=H(t)(k-1)\kappa(t)$, as claimed. \qed \medskip The advantage of taking the square of $\dot{P}$ is that it preserves the vertical-horizontal decomposition; however it might be useful to consider $\dot{P}$ itself: \begin{Prop} \label{Jacobi-in-base} Let $A(t)$ be a normal frame and $H(t)$ its horizontal derivative. The matrix of the transformation $\dot{P}(t)$ associated to $A(t)$ in the basis of $\mathbb{R}^{kn}$ formed by the columns of $(A(t)|...|A(t)^{(k-2)}|H(t))$ is $$\left(\begin{array}{cccccc} 0& 0&...& 0& 0 &h_{k-2}-h_{k-3}' \\ 0&0& ...& 0& 0&{k-1 \choose k-2}(h_{k-3}-h_{k-4}') \\ \vdots& \vdots & &\vdots & \vdots &\vdots\\ 0 & 0 &...&0& 0& {k-1 \choose 2} (h_1 -\kappa') \\ 0&0&...&0&0 & (k-1)\kappa \\ 0&0&...&0& I& 0 \end{array} \right),$$ where $I$ represents the identity matrix. \end{Prop} \textit{Proof:} From Proposition \ref{pponto} we have that $\dot{P}(t)A(t)=0, ..., \dot{P}(t)A(t)^{(k-3)}=0$ and $\dot{P}(t)A(t)^{(k-2)}=H(t)$, and from the preceding proof, we observe that \begin{eqnarray} \dot{P}(t)H(t)&=&{k-1 \choose 1}A^{(k-2)}\kappa + {k-1 \choose 2}A^{(k-3)}(h_1-\kappa')+ ... + {} \nonumber \\ & & {}+{k-1 \choose k-2}\dot{A}(h_{k-3}-h'_{k-4}) +{k-1 \choose k-1}A (h_{k-2}-h'_{k-3}). \nonumber \end{eqnarray} \qed \medskip \textit{Remark.} In the case $k=3$ and $n=1$, that is, for curves in the projective plane $\mathbb{R}P^{2}$, the matrix of theorem \ref{jacobik} is the same as the one Cartan found in \cite{cartan}. It is interesting to note that, in contrast to the case $n=2$ (where there is just a sign difference, see \S 8.3 in \cite{duran}), the matrix of proposition \ref{Jacobi-in-base} is {\em not} given by pulling back the Maurer-Cartan form by the lift $\mathbf{A}(t)=(A(t)|...|A(t)^{(k-2)}|H(t))$ nor by the other plausible lift $\tilde{\mathbf{A}}(t)=(A(t)|...|A(t)^{(k-2)}|A(t)^{(k-1)})$.
Indeed, for example for $k=4$, we have \begin{eqnarray*} \mathbf{A}^{-1} \mathbf{A}' &=&\left(\begin{array}{cccc} 0& 0& -h_{1} & h'_{1}-h_{2}\\ 1& 0& -3\kappa & 3(\kappa'-h_{1})\\ 0 & 1 & 0& -3\kappa \\ 0&0& 1& 0 \end{array} \right)\, , \\ \tilde{\mathbf{A}}^{-1}\tilde{\mathbf{A}}' &=& \left(\begin{array}{cccc} 0& 0& h_{2} & 3h_{1}\kappa\\ 0& 0& 3h_{1} & 9\kappa^{2}\\ 0 & 0 & 3\kappa& 0 \\ 0&0& 0& 3\kappa \end{array} \right) \, . \end{eqnarray*} \medskip The last two results explicitly relate the invariants obtained from the fundamental endomorphism and its derivatives and those obtained from the normal forms inspired by the classical invariant theory of projective ODEs. In the next section we shall see that the fundamental endomorphism follows naturally and rigidly from an Adjoint representation of the space of jets of fanning curves. \section{Geometry of jets of fanning curves in the Grassmannian}\label{k-jets} The last section shows that the fundamental endomorphism and related constructions furnish conjugation-equivariant maps. Here we study these maps, especially their uniqueness, as equivariant maps from jets of fanning curves onto the Lie algebra $\mathfrak{gl}(kn)$. First we describe the space $J^{r}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ of $r$-jets of fanning curves on the Grassmannian $Gr(n,\mathbb{R}^{kn})$ as the quotient of the space $ J^{r}_f(\mathbb{R}; M_{kn\times n})$ of $r$-jets of fanning frames by the action of the group $J^{r}(\mathbb{R};Gl(n))$ of $r$-jets of smooth curves of invertible $n\times n$ matrices; on $0$-jets this is the standard action $A \cdot X = AX$, and we extend it to $r$-jets by repeatedly applying Leibniz's rule. For details of jet groups and their actions, see \cite{Kolar}.
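Since the extension to $r$-jets is obtained by repeatedly applying Leibniz's rule, its entries can be checked mechanically. The following sketch is our own illustration (the function name is ours, and we work with scalar entries, $n=1$); it verifies with sympy that the binomial coefficients of the jet action are exactly the derivatives of $A(t)X(t)$:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Function('A')(t)   # scalar stand-in for a frame entry (n = 1)
X = sp.Function('X')(t)   # scalar stand-in for a curve in Gl(1)

def jet_action_entry(r):
    """r-th entry of the jet action: sum_i C(r, i) * A^(r-i) * X^(i)."""
    return sum(sp.binomial(r, i) * sp.diff(A, t, r - i) * sp.diff(X, t, i)
               for i in range(r + 1))

# Check that the stated action really is the r-jet of t -> A(t)X(t).
for r in range(5):
    assert sp.simplify(jet_action_entry(r) - sp.diff(A * X, t, r)) == 0
print("Leibniz jet action verified for r = 0..4")
```

The case $r=2$ reproduces the explicit $k=3$ action displayed in the text.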
For example, in the case $k=3$, the actions of $J^{2}(\mathbb{R};Gl(n))$ and $ Gl(3n)$ on $ J^{2}_f(\mathbb{R}; M_{3n\times n})$ are given by: $$( A,\dot{A}, \ddot{A}) \cdot (X,\dot{X},\ddot{X}) = (AX, \dot{A}X +A\dot{X}, \ddot{A}X +2\dot{A}\dot{X}+A\ddot{X}) $$ and $$T \cdot (A,\dot{A}, \ddot{A}) = (TA,T\dot{A}, T\ddot{A}).$$ In general we have $J^{r}(\mathbb{R};Gl(n))$ and $ Gl(kn)$ acting on $ J^{r}_f(\mathbb{R}; M_{kn\times n})$ in the following way: $$( A,\dot{A},...,A^{(r)}) \cdot (X,\dot{X},..., X^{(r)})=$$ $$ = (AX, \dot{A}X +A\dot{X},..., {r\choose0}A^{(r)}X+ {r\choose1}A^{(r-1)}\dot{X}+...+ {r\choose r}AX^{(r)} ) $$ and $$T \cdot (A,\dot{A},...,A^{(r)}) = (TA,T\dot{A},..., TA^{(r)}).$$ The first use of this description is to show the transitivity of the $GL(kn)$-action on the spaces of $(k-1)$- and $k$-jets of curves in the divisible Grassmannian $Gr(n,kn)$: \begin{Prop}\label{actstrans} The group of invertible linear transformations of $\mathbb{R}^{kn}$ acts transitively on the space $J^{k-1}_f(\mathbb{R}; M_{kn\times n})$ and, a fortiori, on $J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn})).$ \end{Prop} \textit{Proof:} If $ (A| \dot{A}| \ddot{A} | \dots |A^{(k-1)}) \in J^{k-1}_f(\mathbb{R}; M_{kn\times n})$ then we choose \[ T:= (A| \dot{A}| \ddot{A} | \dots |A^{(k-1)}) \in Gl(kn) \] so we have $$T^{-1} \cdot (A| \dot{A}| \ddot{A} | \dots |A^{(k-1)}) = \left( \left(\begin{array}{c} I \\ 0 \\\vdots \\ 0 \end{array} \right), \left(\begin{array}{c} 0 \\ I \\ \vdots\\ 0 \end{array} \right), \cdots, \left(\begin{array}{c} 0 \\ 0 \\ \vdots\\ I \end{array} \right) \right). $$ \qed \medskip Let us look now at the space of $k$-jets of $Gr(n,\mathbb{R}^{kn})$.
Here the action of $ Gl(kn)$ on the space of $k$-jets of fanning frames is not transitive; the $k$-jets $(A,\dot{A}, ...,A^{(k)})$ and $(B, \dot{B},..., B^{(k)})$ are in the same $Gl(kn)$-orbit if and only if the matrices $(B|\dot{B}|...|B^{(k-1)})^{-1}B^{(k)}$ and $(A|\dot{A}|...|A^{(k-1)})^{-1}A^{(k)}$ are equal. However, we still have that $Gl(kn)$ acts transitively on $J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$: \begin{Prop}\label{actstrans-k} The group of invertible linear transformations of $\mathbb{R}^{kn}$ acts transitively on the space of $k$-jets of fanning curves in $Gr(n,\mathbb{R}^{kn})$. \end{Prop} \textit{Proof:} All that needs to be shown is that the joint action of $Gl(kn)$ and $J^{k}(\mathbb{R};Gl(n))$ on the space of $k$-jets of fanning frames is transitive. But if $$(A,\dot{A}, ...,A^{(k)}) \in J^{k}_f(\mathbb{R}; M_{kn\times n})$$ and $$A^{(k)} + {k \choose 1}A^{(k-1)}P_1 + {k \choose 2}A^{(k-2)}P_2 +...+{k \choose k-1}\dot{A}P_{k-1}+ AP_{k} = 0,$$ then if we act on $(A,\dot{A}, ...,A^{(k)})$, on the left by the matrix $$(A|\dot{A}+AP_1| A^{(2)}+2\dot{A}P_1 +AP_2|...| \ast )^{-1},$$ where $\ast =A^{(k-1)} + {k-1 \choose 1}A^{(k-2)}P_1 +...+{k-1 \choose k-2}\dot{A}P_{k-2}+ AP_{k-1}$, and on the right by the $k$-jet $(I,P_1,...,P_{k})$, we get $$\left( \left(\begin{array}{c} I \\ 0 \\ 0\\ \vdots\\0 \end{array} \right), \left(\begin{array}{c} 0 \\ I\\0\\\vdots\\ 0 \end{array} \right),..., \left(\begin{array}{c} 0 \\ 0\\0\\ \vdots \\ I \end{array} \right), \left(\begin{array}{c} 0 \\ 0\\0\\ \vdots \\ 0 \end{array} \right) \right). $$ \qed \subsection{Uniqueness} In this section we show that the fundamental endomorphism, horizontal derivative, etc., are essentially unavoidable if we want to represent the $GL(kn)$-action on jets as the Adjoint. We begin by characterizing the fundamental endomorphism for curves in $Gr(n,\mathbb{R}^{kn})$.
In order to organize the proof, we need the following lemma, whose proof is a matrix computation: \begin{lem} \label{tecnico} The matrix $$ \left( \begin{array}{cccccc} {0\choose0} a_0 I & {1\choose0} a_1 I & {2\choose0} a_2 I & ... & {k-2\choose0} a_{k-2} I & {k-1\choose0} a_{k-1} I \\ & & & & & \\ 0 & {1\choose1} a_0 I & {2\choose1} a_1 I & ... & {k-2\choose1} a_{k-3} I & {k-1\choose1} a_{k-2} I \\ & & & & & \\ 0 & 0 & {2\choose2} a_0 I & ... & {k-2\choose2} a_{k-4} I & {k-1\choose2} a_{k-3} I \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & ... & {k-2\choose k-2}a_0 I & {k-1\choose k-2} a_{1} I \\ & & & & & \\ 0 & 0 & 0 & ... & 0 & {k-1\choose k-1} a_0 I \end{array} \right)$$ equals $a_0I + a_1F + \frac{a_2F^2}{2}+...+\frac{a_{k-1}F^{k-1}}{(k-1)!},$ where $$ F= \left( \begin{array}{cccccc} 0 & I & 0 & ... &0 &0 \\ 0 & 0 & 2I& ...& 0&0 \\ \vdots & \vdots & \vdots & &\vdots & \vdots \\ 0 & 0 & 0 & ...& (k-2)I &0 \\ 0 & 0 & 0 & ...& 0& (k-1)I\\ 0 & 0 & 0 & ...& 0& 0 \end{array} \right). $$ \end{lem} \medskip \begin{The}\label{endfund} A map $$J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))\rightarrow \mathfrak{gl}(kn)$$ is equivariant with respect to the $Gl(kn)$ action if and only if it is of the form $$ a_0I + a_1F + \frac{a_2F^2}{2}+...+\frac{a_{k-1}F^{k-1}}{(k-1)!},$$ where $I$ is the identity matrix, the $a_i$ are real numbers, and $$ F= \mathbf{A}(t) \left( \begin{array}{cccccc} 0 & I & 0 & ... &0 &0 \\ 0 & 0 & 2I& ...& 0&0 \\ \vdots & \vdots & \vdots & &\vdots & \vdots \\ 0 & 0 & 0 & ...& (k-2)I &0 \\ 0 & 0 & 0 & ...& 0& (k-1)I\\ 0 & 0 & 0 & ...& 0& 0 \end{array} \right) \mathbf{A}(t)^{-1},$$ with $\mathbf{A}(t) = ( A(t)|\dot{A}(t)|...|A^{(k-1)}(t))$. \end{The} \textit{Proof:} The proof is divided into two parts: first, we show that in the right basis the matrix representing the map has to be constant; then, we show that the entries of this matrix are the correct ones to give the desired result.
\noindent{\em First part:} Let $G: J^{k-1}_f(\mathbb{R}; M_{kn\times n})\rightarrow \mathfrak{gl}(kn)$ be a map invariant under the action of $J^{k-1}(\mathbb{R};Gl(n))$ and equivariant with respect to the action of $Gl(kn)$. Writing $G(A,\dot{A},...,A^{(k-1)})$ in the canonical basis, we obtain $$ (A|\dot{A}|...|A^{(k-1)}) \left( \begin{array}{c} G_{ij}(A,\dot{A},...,A^{(k-1)}) \end{array} \right)_{k\times k} (A|\dot{A}|...|A^{(k-1)})^{-1},$$ where the $G_{ij}$ are $n \times n$ blocks. The equivariance implies that \[ G(TA,T\dot{A},...,TA^{(k-1)})= TG(A,\dot{A},...,A^{(k-1)})T^{-1}, \] so, for all $T \in Gl(kn)$, $$\left( \begin{array}{c} G_{ij}(A,\dot{A},...,A^{(k-1)}) \end{array} \right)_{k\times k} = \left( \begin{array}{c} G_{ij}(TA,T\dot{A},...,TA^{(k-1)}) \end{array} \right)_{k\times k}.$$ Since $Gl(kn)$ acts transitively on $J^{k-1}_f(\mathbb{R}; M_{kn\times n})$, the $n \times n$ blocks $G_{ij}$ are constant. \noindent{\em Second part:} By induction on $k$, assume that the matrices $G_{ij}$ of the map $$J^{k-2}_f(\mathbb{R};Gr(n,\mathbb{R}^{(k-1)n}))\rightarrow \mathfrak{gl}((k-1)n)$$ satisfy the Lemma. Now, using the invariance under the action of $J^{k-1}(\mathbb{R};Gl(n))$, we need to conclude that the matrices $G_{ij}$ of the map $J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))\rightarrow \mathfrak{gl}(kn)$ have the form of the Lemma. The invariance under the action of $J^{k-1}(\mathbb{R};GL(n))$ implies, for all $X(t) \in Gl(n) $, that: \begin{eqnarray}\label{prim} \end{eqnarray} $$\left( \begin{array}{cccccc} {0\choose0} X & {1\choose0} \dot{X} & {2\choose0} \ddot{X}& ... & {k-2\choose0} X^{(k-2)}& {k-1\choose0} X^{(k-1)} \\ & & & & & \\ 0 & {1\choose1} X & {2\choose1} \dot{X} &... & {k-2\choose1} X^{(k-3)}&{k-1\choose1} X^{(k-2)} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & ... & {k-2\choose k-2}X & {k-1\choose k-2}\dot{X} \\ & & & & & \\ 0 & 0 & 0 & ...
& 0 & {k-1\choose k-1} X \end{array} \right) \left( \begin{array}{c} G_{ij} \end{array} \right)_{k\times k}= $$ $$= \left( \begin{array}{c} G_{ij} \end{array} \right)_{k\times k} \left( \begin{array}{cccccc} {0\choose0} X & {1\choose0} \dot{X} & {2\choose0} \ddot{X}& ... & {k-2\choose0} X^{(k-2)}& {k-1\choose0} X^{(k-1)} \\ & & & & & \\ 0 & {1\choose1} X & {2\choose1} \dot{X} &... & {k-2\choose1} X^{(k-3)}&{k-1\choose1} X^{(k-2)} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & ... & {k-2\choose k-2}X & {k-1\choose k-2}\dot{X} \\ & & & & & \\ 0 & 0 & 0 & ... & 0 & {k-1\choose k-1} X \end{array} \right).$$ \bigskip Looking at the last row of these products, we obtain the relations: $XG_{k1} = G_{k1}X $ $XG_{k2} = G_{k1} \dot{X} + G_{k2}X $ $\hspace{1cm} \vdots $ $XG_{k,k-1} = {k-2\choose0}G_{k1} X^{(k-2)}+ \dots + {k-2\choose k-2}G_{k,k-1} X.$ Then $G_{k1}, G_{k2}, ..., G_{k,k-1}$ must be zero. Therefore $G$ has the form $\left( \begin{array}{ccccc} & & & G_{1k} \\ & \ast & & G_{2k} \\ & & & \vdots \\ 0 & ... & 0 & G_{kk} \end{array} \right),$ where $\ast$ depends only on the first $k-1$ rows and columns of the matrix $(G_{ij})_{k\times k}$, and on $J^{k-2}(\mathbb{R};Gl(n)).$ Using the induction hypothesis, $G$ has the form: \begin{eqnarray} \label{seg} \left( \begin{array}{cccccc} {0\choose0} a_0 I & {1\choose0} a_1 I & {2\choose0} a_2 I & ... & {k-2\choose0} a_{k-2} I & G_{1k} \\ & & & & & \\ 0 & {1\choose1} a_0 I & {2\choose1} a_1 I & ... & {k-2\choose1} a_{k-3} I & G_{2k} \\ & & & & & \\ 0 & 0 & {2\choose2} a_0 I & ... & {k-2\choose2} a_{k-4} I & G_{3k} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & ... & {k-2\choose k-2}a_0 I & G_{k-1,k} \\ & & & & & \\ 0 & 0 & 0 & ...
& 0 & G_{kk} \end{array} \right) \end{eqnarray} where, using \ref{prim} and \ref{seg}, the $G_{ik}$ satisfy: $$ {0\choose0}XG_{1k}+ {1\choose0} \dot{X} G_{2k} +\cdots + {k-1\choose0}X^{(k-1)} G_{kk} = $$ $$= {0\choose0}a_0{k-1\choose0} X^{(k-1)}+ \cdots + {k-2\choose0}a_{k-2}{k-1\choose k-2} \dot{X}+ G_{1k} {k-1\choose k-1} X $$ \centerline{\vdots} $${k-3\choose k-3} XG_{k-2,k}+ {k-2\choose k-3} \dot{X} G_{k-1,k} + {k-1\choose k-3} X^{(2)} G_{kk} = $$ $$= {k-3\choose k-3}a_0{k-1\choose k-3} X^{(2)}+ {k-2\choose k-3}a_1{k-1\choose k-2}\dot{X} + G_{k-2,k} {k-1\choose k-1} X $$ $${k-2\choose k-2} XG_{k-1,k}+ {k-1\choose k-2} \dot{X} G_{kk} = {k-2\choose k-2}a_0{k-1\choose k-2}\dot{X} + G_{k-1,k} {k-1\choose k-1} X $$ $${k-1\choose k-1} X G_{kk} = G_{kk} {k-1\choose k-1}X $$ Therefore the matrix $\left( \begin{array}{c} G_{ij} \end{array} \right)_{k\times k} $ has the form of Lemma \ref{tecnico}. \qed \vspace{1.0cm} Now we characterize the horizontal derivative: \begin{The}\label{kjethor} The assignment that sends a fanning curve $\ell(t)$ to its horizontal curve $h(t)$ is characterized by the following four properties: \begin{enumerate} \item At each $t$, the subspace $h(t)$ is transversal to $Span\{A(t),...,A^{(k-2)}\}$. \item The subspace $h(\tau)$ depends only on the $k$-jet of the curve $\ell(t)$ at $t=\tau$. \item If $T\in Gl(kn)$, the horizontal curve of $T\ell(t)$ is $Th(t)$. \item If $\ell(t)$ is spanned by a curve $A_0+tA_1+...+t^{k-1}A_{k-1}$ in the space of frames, then $h(t)$ is constant. \end{enumerate} \end{The} The next lemma, necessary for the proof of Theorem \ref{kjethor}, is the analogue of Lemma 7.5 of \cite{duran} for the generalized horizontal derivative.
\begin{lem} A $Gl(kn)$-equivariant map $j:J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn})) \rightarrow Gr(n,\mathbb{R}^{kn})$ is such that the subspaces $j([(A,\dot{A},...,A^{(k)})])$ and $Span\{A(t),...,A^{(k-2)}\}$ are always transversal if and only if it is of the form $$[(A,\dot{A},...,A^{(k)})]\longmapsto [H+ c_{k-1}A +c_{k-2}\dot{A}+...+ c_{1}A^{(k-2)}], $$ where $c_1,...,c_{k-1}$ are real numbers and $H $ is the horizontal derivative defined in \ref{derhor}. \end{lem} \textit{Proof:} Since $(A,\dot{A},...,A^{(k)})\mapsto H$ is the horizontal derivative, for any real numbers $c_1,...,c_{k-1}$, the subspace $[H+ c_{k-1}A +c_{k-2}\dot{A}+...+ c_{1}A^{(k-2)}]$ is transversal to $Span\{A(t),...,A^{(k-2)}\}$, and the $Gl(kn)$-equivariance follows from the properties of the horizontal derivative. Conversely, let us define $P: J^{k}_f(\mathbb{R}; M_{kn\times n})\rightarrow \mathfrak{gl}(kn)$ as the map whose value at a $k$-jet $(A,\dot{A},...,A^{(k)})$ is the projection with range $Span\{A(t),...,A^{(k-2)}\}$ and kernel $j([(A,\dot{A},...,A^{(k)})])$. The map $P$ has the following properties: \begin{enumerate} \item $P(A,...,A^{(k)})^2 = P(A,...,A^{(k)})$; \item $P(A,...,A^{(k)})A^{(i)}=A^{(i)}$, for $i=0,...,k-2$; \item $P(TA,...,TA^{(k)})=TP(A,...,A^{(k)})T^{-1}$; \item $P((A,\dot{A},...,A^{(k)}) \cdot (X,\dot{X},...,X^{(k)}))=P(A,\dot{A}, ...,A^{(k)})$. \end{enumerate} Using (1) and (2), we have that there exist $k-1$ functions \centerline{$R_1(A,\dot{A},...,A^{(k)})$, $R_2(A,\dot{A},...,A^{(k)})$, ..., $R_{k-1}(A,\dot{A},...,A^{(k)})$} with values in the space of $n\times n$ matrices such that $P(A,...,A^{(k)})$ is equal to $$( A(t)|\dot{A}(t)|...|A^{(k-1)}(t)) \left( \begin{array}{ccccc} I & 0 & ... & 0 & R_1 \\ 0 & I & ... & 0 & R_2 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & ... & I & R_{k-1} \\ 0 & 0 & ...
& 0 & 0 \end{array} \right) ( A(t)|\dot{A}(t)|...|A^{(k-1)}(t))^{-1}.$$ Since $(A|\dot{A}|...|A^{(k-1)})^{-1}A^{(k)}$ is the complete invariant for the action of $Gl(kn)$ on the $k$-jets of fanning frames (Proposition \ref{actstrans}), and property (3) implies that $R_i(A,\dot{A},...,A^{(k)})$, for all $i\in \{1,...,k-1\}$, depends only on the $Gl(kn)$-orbit, the $R_i(A,\dot{A},...,A^{(k)})$ depend only on $(A|\dot{A}|...|A^{(k-1)})^{-1}A^{(k)}$. Moreover, property (4) is equivalent to the $R_i$, for all $i$, having the following expressions: $$R_1 = -P_{k-1}-c_{k-1}I,$$ $$R_2= - {k-1 \choose k-2} P_{k-2} - c_{k-2}I,$$ $$\vdots$$ $$R_{k-1} = - {k-1 \choose 1}P_1-c_1I$$ where the $P_i$ come from $H(t)$. Therefore, since $j([(A,\dot{A},...,A^{(k)})])$ is the kernel of $P(A,...,A^{(k)})$, it must be $[H+ c_{k-1}A +c_{k-2}\dot{A}+...+ c_{1}A^{(k-2)}]$. \qed \vspace{1.0cm} \textit{Proof of Theorem \ref{kjethor}:} If a map $J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn})) \rightarrow Gr(n,\mathbb{R}^{kn})$ is equivariant and has the property that its image is always transversal to $Span\{A(t),...,A^{(k-2)}\}$, then from the previous lemma $h(t)= [H+ c_{k-1}A+c_{k-2}\dot{A}+...+c_{1}A^{(k-2)}]$, for any choice of frame $A(t)$ spanning $\ell(t)$. Since $Gl(kn)$ acts transitively on $J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ and the map is equivariant, we need only analyze the map at one point to determine it. When $A(t)$ has the form $A_0+tA_1+...+t^{k-1}A_{k-1}$ in the space of frames, then $h(t)$ is $$[A_{k-1}+ c_{k-1}(A_0+tA_1+...+t^{k-1}A_{k-1})+...+ c_{1}((k-2)!A_{k-2}+(k-1)!\;tA_{k-1})]$$ and this is constant if and only if $c_1,c_2, ...,c_{k-1}$ are zero. So, as claimed, $h(t)=[H(t)]. $ \qed \subsection{$k$-jets and Adjoint orbits} Let us examine more closely the invariants of section \ref{fundamentaletc} as equivariant maps from the space of $r$-jets of curves onto the Lie algebra $\mathfrak{gl}(kn)$ endowed with the Adjoint action.
We will do the interesting case $r=k-1$, where we shall see that the fundamental endomorphism is actually an equivariant {\em embedding}, thus modelling $J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ as an adjoint orbit; $r=k$, where the $GL(kn)$-action is still transitive; and $r=k+1$, the first stage where the action ceases to be transitive, and we shall see how the invariants parametrize the space of orbits. \begin{Prop} The fundamental endomorphism $$F:J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))\rightarrow \mathfrak{gl}(kn)$$ of theorem \ref{endfund} is a diffeomorphism onto its image; in fact, $F$ is an equivariant embedding of the space of $(k-1)$-jets as an Adjoint orbit in the Lie algebra $\mathfrak{gl}(kn)$. \end{Prop} \textit{Proof:} Since $GL(kn)$ acts transitively on $J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$, and the map is equivariant, its image is contained in an Adjoint orbit. All we need to check is that given $s\in J^{k-1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$, the isotropy of $s$ and the isotropy of $F(s)$ coincide, which holds since both isotropies are composed of matrices of the form $$\left( \begin{array}{cccccc} X_1 & X_2 & X_3 & ... & X_{k-1} & X_k \\ 0 & X_1 & 2X_2& ...& {k-1 \choose 2} X_{k-2} & {k \choose 2} X_{k-1} \\ \vdots & \vdots & \vdots & &\vdots & \vdots \\ 0 & 0 & 0 & ...& {k-1 \choose k-2}X_2 & {k \choose k-2} X_3 \\ 0 & 0 & 0 & ...& X_1 & {k \choose k-1 }X_2\\ 0 & 0 & 0 & ...& 0 & X_1 \end{array} \right).$$ \qed \medskip \noindent{\bf Remark.} The previous result holds by taking as equivariant map any map of the form $ a_0I + a_1F + \frac{a_2F^2}{2}+...+\frac{a_{k-1}F^{k-1}}{(k-1)!}$, as long as $a_1 \neq 0$. We now study the first and second prolongation of the fundamental endomorphism. As soon as $r\geq k$, we have the advantage of having enough information to define normal frames: let $N^{r}_f(\mathbb{R}; M_{kn\times n}) \subset J^{r}_f(\mathbb{R}; M_{kn\times n})$ be the space of $r$-jets of {\em normal} fanning frames.
By the results in section \ref{normal}, we have that $J^{r}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ is the quotient of $N_f^{r}(\mathbb{R}; M_{kn\times n})$ by the group $GL(n)$ of (constant) $n\times n$ invertible matrices. The fundamental projection $P(t)$ gives a map $P$ from $J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ into the space of projections; more precisely, into the connected component $\Pi(n,kn)$ of the space of projections of $\mathbb{R}^{kn}$ indexed by $n = \dim \ker P(t) = \dim h(t)$. Recall that everything is linear (as opposed to Euclidean), and the map \begin{eqnarray*} \Pi(n,kn) &\to& Gr(n,kn)\\ \pi &\mapsto& \ker(\pi) \end{eqnarray*} is a submersion. The space of linear projections is used, for example, as the classifying space in the category of vector bundles endowed with linear connections (\cite{porta-recht}). Since the action of $GL(kn)$ is transitive both on $k$-jets of curves and on the space $\Pi(n,kn)$, the fundamental projection gives a {\em surjective} equivariant map $P:J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn})) \to \Pi(n,kn)$. This map can be factored through the flags appearing previously in the paper; denoting by $\mathcal{F}(n,kn)$ (resp.\ $\mathcal{D}(n,kn)$) the flag spaces of linear chains of subspaces (resp.\ decompositions) of $\mathbb{R}^{kn}$ of the appropriate dimensions, we have the submersions \[ J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))\stackrel{d}{\to} \mathcal{D}(n,kn)\to \mathcal{F}(n,kn)\to \Pi(n,kn)\to Gr(n,kn)\, . \] All the arrows with the possible exception of the first one are well understood. As for the first map $d$: by equivariance and transitivity, it is a homogeneous submersion whose typical fiber is the quotient of the isotropies $I_{d(x)}/I_x$. The isotropy of a given decomposition in $\mathcal{D}(n,kn)$ is the set of linear transformations that preserve each space, i.e., the $k$-fold product $GL(n)^k$.
Now let $T\in GL(kn)$ fix a $k$-jet $j_k \in J^{k}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ of the form given in the transitivity Proposition \ref{actstrans-k}. By lifting $j_k$ to a normal $k$-jet of frames $\mathbb{A}\in N^{k}_f(\mathbb{R}; M_{kn\times n}) $ we have that $\mathbb{B}=T\mathbb{A}$ is also a normal $k$-jet, such that $Span(A^{(r)}) = Span(B^{(r)}), 0\leq r \leq k-2$ and $Span(H_{\mathbb{A}}) = Span(H_{\mathbb{B}})$. Therefore, there exists a constant, invertible $X$ such that $A^{(r)}X = B^{(r)} $ for all $0\leq r \leq k-2$ and also $H_{\mathbb{A}}X= H_{\mathbb{B}}$. That means that $T$ must be a block-diagonal matrix $(X,\dots,X)\in \Delta \subset GL(n)^k \subset GL(kn)$. Thus the fiber of the map $d$ is the homogeneous space $GL(n)^k/\Delta$, which is diffeomorphic as a differentiable manifold to $GL(n)^{k-1}$, a diffeomorphism being realized by the ``homogeneous coordinates'' \[ (g_1,\dots, g_k) \mapsto (g_k^{-1}g_1,g_k^{-1}g_2, \dots, g_k^{-1}g_{k-1}) \, . \] Note that $GL(n)$ sits inside of the set $M_n$ of all $n\times n$ matrices, and $GL(n)$ acts diagonally on $M_n^k$. The quotient is a ``non-commutative projective space'' (it is actually $\mathbb{R}P^{k-1}$ when $n=1$) and the fiber is the open set that is the intersection of the domains of the homogeneous coordinate charts. \medskip Let us finally deal with the space of $(k+1)$-jets. Here the $GL(kn)$-action is no longer transitive and we want to coordinatize the space of orbits. We still restrict ourselves to normal frames, but additionally, we work on a {\em section}, that is, an appropriate submanifold of $J^{k+1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))$ that intersects all orbits. This is done in order to avoid the ambiguity of a choice of basis that translates to conjugation in Theorem \ref{mainCongruence}. Denote by $\{\vec{e}_1,\dots, \vec{e}_{kn}\}$ the canonical basis of $\mathbb{R}^{kn}$.
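The invariance that makes the homogeneous coordinates above well defined on the quotient can be spot-checked numerically. The sketch below is our own illustration (function and variable names are ours), and it assumes that $\Delta$ acts by the left diagonal action $(g_1,\dots,g_k)\mapsto(Xg_1,\dots,Xg_k)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 4

def homogeneous_coords(gs):
    """(g_1,...,g_k) -> (g_k^{-1} g_1, ..., g_k^{-1} g_{k-1})."""
    gk_inv = np.linalg.inv(gs[-1])
    return [gk_inv @ g for g in gs[:-1]]

# Random matrices; shifting by 3*I makes them comfortably invertible.
gs = [rng.standard_normal((n, n)) + 3 * np.eye(n) for _ in range(k)]
X = rng.standard_normal((n, n)) + 3 * np.eye(n)

# The left-diagonal action (X g_1, ..., X g_k) leaves the coordinates
# unchanged, since (X g_k)^{-1} (X g_i) = g_k^{-1} g_i.
before = homogeneous_coords(gs)
after = homogeneous_coords([X @ g for g in gs])
assert all(np.allclose(b, a) for b, a in zip(before, after))
print("homogeneous coordinates descend to GL(n)^k / Delta")
```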
\medskip \begin{Def} An $r$-jet of curves in the divisible Grassmannian is said to be {\em standard} if its projection to $0$-jets is the plane $Span\{\vec{e}_1, \dots \vec{e}_n\}$. An $r$-jet of frames is {\em standard} if it is normal and its projection to $0$-jets is the frame $(\vec{e}_1, \dots \vec{e}_n)$. \end{Def} Let us observe that the concept of standard curves makes sense for all $r$-jets, whereas for a standard frame we need $r\geq k$ in order to define normalcy. Let us denote by $\mathcal{S}(n,kn)$ (resp.\ $\widetilde{\mathcal{S}}(n,kn)$) the space of $(k+1)$-jets of standard curves in $Gr(n,kn)$ (resp.\ $(k+1)$-jets of curves of standard frames). Since normal frames are unique given an initial frame and the initial frame is fixed for standard frames, we have \begin{Prop}\label{standard-mesma-coisa} The projection ({\it frame $A$}) $\mapsto$ {\it span(A)} induces a diffeomorphism $\widetilde{\mathcal{S}}(n,kn)\to \mathcal{S}(n,kn)$. \end{Prop} The group $G_0\subset GL(kn)$ that preserves $\mathcal{S}(n,kn)$ is the group of block-upper triangular matrices of the form \[ \begin{pmatrix} X & Y \\ 0 & Z \\ \end{pmatrix} \] where $X \in GL(n)$ and $Z\in GL(n(k-1))$. The action of $G_0$ on $\mathcal{S}(n,kn)$ lifts to standard frames as follows: \[ \begin{pmatrix} X & Y \\ 0 & Z \end{pmatrix} \bullet A = \begin{pmatrix} X & Y \\ 0 & Z \end{pmatrix} A X^{-1} \] It is clear that $\mathcal{S}(n,kn)$ is indeed a section.
Therefore the inclusion $\mathcal{S}(n,kn) \hookrightarrow J^{k+1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn})) $ induces a homeomorphism $$\mathcal{S}(n,kn)/G_0 \rightarrow J^{k+1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))/GL(kn),$$ and by Proposition \ref{standard-mesma-coisa}, also a homeomorphism $$\widetilde{\mathcal{S}}(n,kn)/G_0 \rightarrow J^{k+1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))/GL(kn).$$ We have \begin{The} The map $Q: \mathcal{S}(n,kn) \to \mathfrak{gl}(kn)$ given by the entries of the matrix of theorem \ref{jacobik} induces a homeomorphism between its image and the space of orbits $J^{k+1}_f(\mathbb{R};Gr(n,\mathbb{R}^{kn}))/GL(kn)$. \end{The} \textit{Proof:} Theorem \ref{mainCongruence} says that two curves are congruent if and only if the respective Schwarzians and matrices $h_j$ of normal frames lifting them are conjugate by a constant $n\times n$ matrix $X$. If both curves are standard, then $X$ must be the identity. The only missing piece is to substitute ``$(k+1)$-jet'' in place of ``curves'' in the beginning of the proof; it is not clear at first glance that the Schwarzian and the $h_j$ depend on the $(k+1)$-jet of a curve. But this follows from the presentation of the Jacobi endomorphism of Theorem \ref{jacobik} and Proposition \ref{Jacobi-in-base}: the Jacobi endomorphism and its associated matrix in the basis given by $(A,\dot{A}, \dots , H)$ can be computed using {\em at most} $k+1$ derivatives, and one needs {\em at least} $k+1$ derivatives, since otherwise the Jacobi matrices of \ref{jacobik} and \ref{Jacobi-in-base} would be constant by the transitivity of the action on $r$-jets, $r\leq k$. \qed
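The matrix identities used along the way are finite and mechanical, so they lend themselves to numerical spot checks. As one illustration (ours, not from the text; scalar blocks, $n=1$, with hypothetical variable names), the identity of Lemma \ref{tecnico} for $k=5$:

```python
import numpy as np
from math import comb, factorial

k = 5
rng = np.random.default_rng(1)
a = rng.standard_normal(k)            # coefficients a_0, ..., a_{k-1}

# Scalar (n = 1) version of the matrix F of the lemma: F[i, i+1] = i + 1.
F = np.zeros((k, k))
for i in range(k - 1):
    F[i, i + 1] = i + 1

# Polynomial a_0 I + a_1 F + a_2 F^2 / 2! + ... + a_{k-1} F^{k-1} / (k-1)!
poly = sum(a[m] * np.linalg.matrix_power(F, m) / factorial(m)
           for m in range(k))

# Scalar version of the binomial-coefficient matrix: entry (i, j) = C(j, i) a_{j-i}.
M = np.zeros((k, k))
for i in range(k):
    for j in range(i, k):
        M[i, j] = comb(j, i) * a[j - i]

assert np.allclose(poly, M)
print("Lemma verified for k =", k)
```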
\section{Introduction.} Recently, there has been a revived interest in the research of multiferroics due to several new discoveries, and the possibility to use them technologically.$^{1}$ The electric and magnetic transitions are not necessarily correlated, but when they are - and the so-called \textit{magnetoelectric effect} appears - the materials suggest possible uses as memories, etc. Besides the technical applications, several families of these compounds present very rich physics. Just as an example, one of these compounds, LiNiPO$_{4}$, shows a phase transition that is not only first order, but, within a very short temperature range, exhibits several incommensurate transitions.$^{2}$ Despite these interesting phenomena, we found very few theoretical papers on this subject, most of them using very elaborate theoretical methods.$^{3}$ With the motivation presented above, we developed two simple numerical models, based on the Monte Carlo method and the Metropolis minimization of the total energy. We present here two simple and understandable models for magnetoelectricity. The electromagnetic Hamiltonian is solved for very simple cases, and the solutions resemble the reported experiments. The first model studies the phase transitions occurring at the same temperature, independently of the temperature value. The second model, a little more elaborate, allows the ferroelectric and ferromagnetic transitions to occur at different temperatures, and we studied the behavior of the model as a function of this temperature difference. We compare the results between the models and with experimental cases. \section{The models.} The physics behind the magnetoelectric effect consists, in a bird's-eye view, in the creation or orientation of electric dipoles by the magnetic moments, or vice versa.
In the first case, the magnetic dipoles, which are permanent, modify the lattice when they change their orientation, in such a way that the negative electric charges are displaced relative to the positive ones. This is accomplished via the spin-orbit coupling of the spins, which changes the total energy as described by the orbit-lattice Hamiltonian. Simplifying the model, the spin-orbit-lattice energy is calculated as a spin-lattice Hamiltonian. We simplified the calculation even more, representing the magnetic lattice as a 2D Ising lattice in all cases. This corresponds to many real compounds, such as the olivines mentioned above, whose real lattice structure presents separate planes of magnetic ions.$^{4-6}$ We assume that the magnetoelectric system is a set of magnetic dipoles, coupled via the exchange interaction, in a lattice with a distribution of electric charges susceptible to change when the magnetic dipoles change their orientations. The change in orientation of the magnetic dipoles modifies their environment, via the spin-orbit interaction, creating local strains and creating or orienting a set of electric dipoles in the lattice. We assume that our crystal is strained in such a way that the electric dipoles are oriented along a particular direction when the magnetic dipoles relax. The model Hamiltonian used in the models follows: \begin{equation} H=H_{M}+H_{E}+H_{ME} \end{equation} where $H_{M}$ is the magnetic energy, $H_{E}$ the electric energy and $H_{ME}$ the magnetoelectric coupling. \subsection{\protect\bigskip The first model} The first approach to a solution of eq. (1) is obtained by replacing the first term in the sum of the right side by a square sublattice of Ising magnetic moments, and the electric moments in the second and third terms by randomly oriented classical electric dipoles, located in a separate square sublattice. The Ising spins are coupled to their nearest neighbors only, with periodic boundary conditions.
The interaction Hamiltonian allows only nearest-neighbor magnetoelectric interaction. Symmetry requires that the magnetic point group of the magnetic moment be one of the 58 Shubnikov groups that allow magnetoelectricity.$^{7}$ This forces each of our magnetic moments to have only one electric dipole as a nearest neighbor. The electromagnetic coupling is divided into two parts: the \textit{local} interaction between the spin and the electric dipole, and the \textit{lattice} total electromagnetic energy, which takes into account the interaction between the electric dipoles and their electric neighbors. As most of the electric parameters are measured perpendicularly to the magnetization,$^{2,8}$ we chose the $\widehat{z}$ direction for the magnetic moments and the $\widehat{x}$ axis for the electric dipoles. The numerical solution of the problem was obtained using the importance-sampling Monte Carlo method, looking for the energy minimum of our system. Thus, \begin{equation} H=-J\sum_{<i,j>}\sigma _{i}\sigma _{j}-h\sum_{i}\sigma _{i}-\beta \sum_{\{i,j\}}P_{ix}P_{jx}+\gamma \sum_{i}P_{ix} \end{equation} is the approximate Hamiltonian, where $J$ is the exchange coupling of the Ising magnetic spins $\sigma$, and $P$ are the electric dipoles. The symbol $<i,j>$ indicates sums over nearest neighbors only. The first and second terms constitute the magnetic energy, where we included the possibility of an applied or external magnetic field $h$. The third term represents the electric energy, proportional to the orientation of the neighboring electric momenta. As the system is to be ferroelectric, with the polarized dipoles along the $\hat{x}$ axis of the crystal, we considered only the energy coupling in that direction, which is represented by $\{i,j\}$, indicating a sum over the two first neighbors located along the $\hat{x}$ axis. The interaction term, which represents a spin-lattice Hamiltonian, was separated into two parts.
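As an illustration, the lattice part of eq. (2) can be evaluated numerically. The following is a minimal NumPy sketch under our own conventions (array and function names are ours, not part of the original calculation), with Ising spins $\sigma=\pm 1$ and dipole components $P_{ix}=\pm 1$ on periodic square lattices:

```python
import numpy as np

def total_energy(sigma, P, J=1.0, h=0.0, beta=0.0, gamma=0.0):
    # Eq. (2): H = -J sum_<i,j> s_i s_j - h sum_i s_i
    #              - beta sum_{i,j} P_ix P_jx + gamma sum_i P_ix
    # Nearest-neighbour exchange on a periodic square lattice,
    # counting each bond once via a roll along each axis.
    exch = np.sum(sigma * np.roll(sigma, 1, axis=0)) \
         + np.sum(sigma * np.roll(sigma, 1, axis=1))
    # The {i,j} sum couples each dipole only to its two neighbours
    # along the x axis (here: axis 1), each bond counted once.
    elec = np.sum(P * np.roll(P, 1, axis=1))
    return -J * exch - h * np.sum(sigma) - beta * elec + gamma * np.sum(P)
```

For a fully ordered $4\times 4$ lattice (all $\sigma=+1$, all $P_{ix}=+1$) with $J=1$, $h=0.01$, $\beta=0.5$, $\gamma=0.02$, this gives $-32-0.16-8+0.32=-39.84$ (in units of $J$).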
One of them is the local interaction between the spin and the $\widehat{x}$ projection of the electric dipole, which makes every transition of the spin simultaneously change the dipole; the second, represented by the last term in eq. (2), is the contribution of the lattice as a whole to the total energy. As the local interaction is the same for every spin-dipole pair, we did not include it in the Hamiltonian. However, the meaning of this local part of the energy is important, as we will see below. \subsubsection{Ferromagnetic case.} Our first calculation was performed on a 100$\times$100 2D lattice of Ising ferromagnetic spins coupled to 100$\times$100 electric dipoles, located on another square lattice, parallel to the magnetic one and slightly shifted from it. The electric dipoles were oriented at random, together with the magnetic lattice, so as to begin at infinite temperature. The temperature was then fixed to a value, and a Monte Carlo program, in which transitions are accepted following the Metropolis technique,$^{9}$ was iterated for the time necessary to reach thermal equilibrium of the system. The results were then used as the initial condition for the following temperature. The calculation was performed by reducing the temperature at each step. The complete calculation was carried out after a convergence study of our model. As a first step, we disregarded the electric and interaction energies when looking for the minimum. This means that the model is just a 2D Ising system that drags the electric dipoles along, and as expected, the magnetization follows the Ising model. The exact calculation published by Onsager allows a very good comparison, and the electric dipoles also undergo a transition at the same temperature. This calculation decided the size of the set of moments, which we selected as 100$\times$100 on this basis. The following step was to study the convergence when the parameters $\beta$ and $\gamma$ in the model are different from zero.
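A single Metropolis sweep of this first model can be sketched as follows (an illustrative implementation under our own naming conventions; because of the rigid local spin-dipole coupling, every accepted spin flip drags its dipole along):

```python
import numpy as np

def metropolis_sweep(sigma, P, T, rng, J=1.0, h=0.0, beta=0.0, gamma=0.0):
    # One sweep (L*L attempted flips) of the first model.  The rigid local
    # spin-dipole coupling means sigma[i,j] and P[i,j] always flip together.
    L = sigma.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change of eq. (2) under (sigma, P) -> (-sigma, -P) at (i, j)
        nn = (sigma[(i + 1) % L, j] + sigma[(i - 1) % L, j]
              + sigma[i, (j + 1) % L] + sigma[i, (j - 1) % L])
        xnn = P[i, (j + 1) % L] + P[i, (j - 1) % L]   # x-axis dipole neighbours
        dE = (2 * sigma[i, j] * (J * nn + h)
              + 2 * beta * P[i, j] * xnn
              - 2 * gamma * P[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis rule
            sigma[i, j] *= -1
            P[i, j] *= -1
```

Annealing then consists of repeating such sweeps at each temperature, reusing the equilibrated configuration as the initial condition for the next, lower, temperature.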
It is well known that the Ising model converges slowly near the transition temperature, due to fluctuations and to the equal value of the energy when the system is oriented along either of the two possible directions. This is easily solved by adding a small external field $h$; however, we observed that for particular values of $\gamma$ the convergence is slowest. This can be explained by the fact that $\gamma$ appears in the Hamiltonian, in a sense, as an extra external field. We can use the local coupling of the spin-dipole pair to write \begin{equation} P_{ix}=P_{ix}|\sigma _{i}|=\sigma _{i}|P_{ix}| \end{equation} since the modulus of $\sigma _{i}$ is always unity; this can be used to write the second and last terms in the Hamiltonian as \begin{equation} \gamma \sum_{i}P_{ix}-h\sum_{i}\sigma _{i}=-\sum_{i}(h-\gamma |P_{ix}|)\sigma _{i} \end{equation} which shows that the value of $\gamma |P_{ix}|$ acts as an extra magnetic field. The mean value of $|P_{ix}|$ cancels the external field when $\gamma /J\approx 0.02$ for an applied field $h/J=0.01$, and the convergence is slowest for this value of $\gamma$. Taking this into account, we found that 5000 iterations per spin were necessary to reach thermal equilibrium and another 5000 to obtain the mean values of the energy and magnetization; these numbers were used in every case. Eq. (4) may also be used to analyze the meaning of the $\beta$ parameter.
The first and third terms in eq. (2) can be written \begin{eqnarray} H_{1} &=&-J\sum_{<i,j>}\sigma _{i}\sigma _{j}-\beta \sum_{\{i,j\}}P_{ix}P_{jx} \\ &=&-J\sum_{<i,j>}\sigma _{i}\sigma _{j}-\beta \sum_{\{i,j\}}|P_{ix}|\sigma _{i}|P_{jx}|\sigma _{j} \notag \\ &=&-\sum_{\{i,j\}}(J+\beta |P_{ix}||P_{jx}|)\sigma _{i}\sigma _{j}-J\sum_{<i,j>\neq \{i,j\}}\sigma _{i}\sigma _{j} \notag \end{eqnarray} So it can be seen that the $\beta$ parameter makes the exchange anisotropic, modifying its value in the $\widehat{x}$ direction while leaving the coupling in the $\widehat{y}$ direction unaltered. We performed the calculation as a function of temperature for different values of the $\beta$ parameter with $\gamma =0$; we then repeated the calculation with $\beta =0$ to study the dependence of the results on $\gamma$, and finally we made a complete study of the shape and transition temperature of the system as a function of both parameters. As $J$ determines the transition temperature of the Ising model, we used it as the unit of energy for the whole system. To make the results clear, we made calculations as a function of temperature for different values of $\beta$ with $\gamma$ set to zero - meaning that the electric interaction is bigger than the magnetoelectric one. After that, we calculated the minimum as a function of $\gamma$ with $\beta$ set to zero. The complete calculation, with both parameters different from zero, gives a clear picture of the total behavior of our model. The results for the ferromagnetic case are shown in Figs. 1, 2, and 3. Figures 1 and 2 show the change in the shape of the transition caused by the $\gamma$ and $\beta$ parameters, respectively. The effect of the $\beta$ parameter is principally to shift the transition, as seen in Fig. 2. The $\gamma$ value can be positive or negative; its effect is to broaden the transition, and to invert the magnetization-polarization relation when its sign is changed. Fig.
3 shows the complete dependence of the transition temperature on both parameters. \begin{figure}[tbp] \includegraphics[scale=0.30]{Fig5-4.eps} \caption{(color online) Normalized magnetization (and electric polarization) as a function of temperature when $\protect\beta =0$ in the first model.} \label{Figure 1} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.30]{Fig5-6.eps} \caption{(color online) The same temperature dependencies as in Fig. 1, when $\protect\gamma =0$, in the first model.} \label{Figure 2} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.30]{Fig5-9.eps} \caption{(color online) The general results for the first model. The transition temperature is the same for both the electric and the magnetic transitions (see text).} \label{Figure 3} \end{figure} Summarizing the ferromagnetic case: since the transition temperature is determined by $J$, the effect of $\gamma$ (Fig. 1) is to broaden the transition, while $\beta$ (Fig. 2) shifts it without greatly modifying its shape; Fig. 3 shows the effect of both parameters together. The induced electric polarization appears at the same temperature as the magnetic transition in all cases, as defined by the model; consequently, the $P/P_{0}$ and $M/M_{0}$ curves coincide. \subsubsection{Antiferromagnetic case} The antiferromagnetic case was treated similarly. It requires a negative value of $J$, and several changes in the other terms of the Hamiltonian are also necessary. The magnetizations of the two sublattices are coupled to the electric lattice with opposite signs, in order to obtain the required ferroelectricity. The magnetic field is set to zero, because it could only be directed parallel to one of the magnetic sublattices.
The convergence is slowest for zero field, and this is the case we used to establish the convergence parameters. Our results in this case are very similar to those above, and we lack space here to show them completely. Fig. 4 presents the transition-temperature dependence for this case, which can be compared with the ferromagnetic one. The complete results will be published elsewhere, together with a more sophisticated model for the spin-lattice coupling. \begin{figure}[tbp] \includegraphics[scale=0.30]{Fig5-15.eps} \caption{(color online) The general results for the first model in the antiferromagnetic case. The transition temperature is the same for both the electric and the magnetic transitions (see text).} \label{Figure 4} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig6-1.eps} \caption{(color online) Normalized magnetization and electric polarization as functions of T for $\protect\beta /J=1$ in the second model.} \label{Figure 5} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig6-3.eps} \caption{(color online) Calculated magnetization and polarization per spin/dipole for the second model when $\protect\beta /J=2$, as functions of T. It can be seen that M/M$_{0}$ is strongly distorted, while P/P$_{0}$ is not.} \label{Figure 6} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig6-7.eps} \caption{(color online) Results for $\protect\beta /J=0.5$ in the second model. In this case, the magnetic function is not distorted, but the polarization is.} \label{Figure 7} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig6-11.eps} \caption{(color online) Results of the second model when $\Delta /J=0.5$, changing the $\protect\beta $ parameter for $\protect\beta /J\leq 1$.
The shapes of the polarization curves are distorted and shifted as the electric transition temperature is increased.} \label{Figure 8} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig6-12.eps} \caption{(color online) Calculated polarization and magnetization for $\Delta /J=0.5$, for values of the $\protect\beta $ parameter with $\protect\beta /J\geqslant 1$.} \label{Figure 9} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig6-13.eps} \caption{(color online) Electric and magnetic transition temperatures for $\Delta /J=0.5$ as functions of the $\protect\beta $ parameter.} \label{Figure 10} \end{figure} \begin{figure}[tbp] \includegraphics[scale=0.3]{Fig7-1.eps} \caption{(color online) The magnetoelectric coefficient as calculated for our models. As expected, they tend to zero at T=0, as the magnetizations and polarizations saturate.} \label{Figure 11} \end{figure} \subsection{The second model} As seen above, the first model does not allow different temperatures for the electric and the magnetic transitions, giving us only the changes generated in the shapes of the transitions by the magnetoelectric coupling. We therefore developed a second model, in which the transition temperatures are independent. To maintain the simplicity of the model, and to keep the number of independent parameters at three, we included an energy $\Delta =\varepsilon _{2}-\varepsilon _{1}$ for each pair of magnetic-electric moments. When the spin is up and the electric dipole points to the left, or when the spin points down and the electric dipole to the right, the pair has an energy $\Delta $ higher than in the other two cases. If we make $\Delta \rightarrow \infty $ we recover the first model, since the system will then be in the lower state at all times. We also changed some other things in the new model.
Instead of classical electric dipoles oriented at random at $T\rightarrow \infty $, we substituted for the electric lattice an Ising lattice oriented along the $\widehat{x}$ direction; that is, as before, the electric dipoles form the ferroelectric part of the lattice when they are oriented in the positive $\widehat{x}$ direction. We excluded the local interaction between the pairs, considering that the electric interaction acts only in the $\widehat{x}$ direction - that is, the tails of the interaction beyond first neighbors are cut off. The result is that we replace the magnetoelectric interaction by this two-level system. Hence, the complete Hamiltonian is, in this case, \begin{equation} H=-J\sum_{<i,j>}\sigma _{i}\sigma _{j}-h\sum_{i}\sigma _{i}-\beta \sum_{<i,j>}P_{ix}P_{jx}+\sum_{i}\varepsilon _{i} \end{equation} where the symbols are the same as before, and the $\varepsilon _{i}$ are the energies of the pairs of magnetic-electric momenta as defined above. We again used the exchange coupling parameter $J$ as the energy unit; as can be seen, this model has two independent transition temperatures for the magnetic and electric lattices, since $J$ and $\beta $ are independent in this model. To use the Monte Carlo method again, we need to preserve its mathematical requirements, so our model must behave as a Markovian one. To fulfill this requirement, the energy minimum is calculated through the following steps: a - As in the first model, the initial state of the system is at $T\rightarrow \infty $. Then the value of the temperature is inserted into the calculation. b - A number $A$ is then chosen at random as zero or one. If this number is zero, we invert the corresponding spin; if instead $A=1$, the inversion is performed on the electric dipole. c - One of the pairs of moments is chosen at random. If $A=0$, we calculate the energy difference that would result if the spin were inverted. From eq.
(6), this energy difference will be \begin{equation} \Delta E_{1}=2\sigma _{i}(JS_{S}+h)+\sigma _{i}P_{ix}\Delta \end{equation} where $\sigma _{i}$ is the chosen spin, $S_{S}$ the sum over the spins that are its nearest neighbors, and $P_{ix}$ the component of the electric dipole of the pair. If $\Delta E_{1}$ is negative, the spin is inverted. If $\Delta E_{1}$ is positive, we use the Metropolis comparison: if a new random number $0\leq r\leq 1$ satisfies $r\leq \exp \left( -\Delta E_{1}/k_{B}T\right) $, the spin is inverted too. If not, the spin is left unaltered. d - If $A=1$ the electric dipole is inverted; the energy difference in this case is \begin{equation} \Delta E_{2}=2P_{ix}\beta S_{D}+\sigma _{i}P_{ix}\Delta \end{equation} Here $S_{D}$ is the sum over the electric dipoles that are nearest neighbors of the chosen one. The other symbols have the meanings given above. Again, if $\Delta E_{2}$ is negative, the dipole is inverted; if positive, the random number $r$ is calculated and compared with the Boltzmann factor in order to decide on the inversion of the dipole. e - The procedure from b) to d) is repeated as many times as necessary to converge to thermal equilibrium, as defined for the first model. Then the temperature is reduced further and the calculation repeated to equilibrium. The model respects the symmetry requirements for magnetoelectricity: the system is invariant under neither time reversal nor spatial inversion, so this requirement is fulfilled too. \subsubsection{Convergence.} Unlike the first case, this second one does not contain a strong local coupling within the pair, and the local change is not always decided by the spin system. It was therefore necessary to carry out an independent convergence study for every pair of the parameters $\Delta $ and $\beta $.
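Steps b-d above, with the energy differences of eqs. (7) and (8), can be sketched as follows (again an illustrative implementation; names and conventions are ours):

```python
import numpy as np

def second_model_step(sigma, P, T, rng, J=1.0, h=0.0, beta=1.0, delta=0.5):
    # One attempted move of the second model: choose A = 0 (spin) or
    # A = 1 (dipole), pick a pair at random, and apply the Metropolis rule.
    L = sigma.shape[0]
    A = rng.integers(2)
    i, j = rng.integers(L, size=2)
    if A == 0:
        # eq. (7): dE1 = 2 sigma_i (J S_S + h) + sigma_i P_ix Delta
        S_S = (sigma[(i + 1) % L, j] + sigma[(i - 1) % L, j]
               + sigma[i, (j + 1) % L] + sigma[i, (j - 1) % L])
        dE = 2 * sigma[i, j] * (J * S_S + h) + sigma[i, j] * P[i, j] * delta
        target = sigma
    else:
        # eq. (8): dE2 = 2 P_ix beta S_D + sigma_i P_ix Delta
        S_D = (P[(i + 1) % L, j] + P[(i - 1) % L, j]
               + P[i, (j + 1) % L] + P[i, (j - 1) % L])
        dE = 2 * P[i, j] * beta * S_D + sigma[i, j] * P[i, j] * delta
        target = P
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        target[i, j] *= -1
```

Since $J$ and $\beta$ enter through independent nearest-neighbor sums, the magnetic and electric sublattices can order at different temperatures, coupled only through the $\Delta$ term.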
We do not believe that including the whole convergence study would be of interest to the reader, so we merely mention that the number of Monte Carlo steps per spin required for convergence varies from 5 to 50 thousand, the smallest value for $\Delta /J=0,\beta /J=1$ and the largest for $\Delta /J=1,\beta /J=2$. \subsubsection{Results} We performed complete calculations for several cases, described below: 1 - Thermal dependence of the polarization and magnetization when $\beta /J=1$ (that is, the transitions occur at the same temperature) and $\Delta $ varies. 2 - The same study, for $\beta /J>1$. 3 - The same again, for $\beta /J<1$. 4 - The thermal dependence, as a function of $\beta $, for $\Delta /J=0.5$. We now describe the results for each case from 1 to 4. 1 - As the electric dipoles are not strongly coupled to their magnetic neighbors, and they can assume both orientations, the shapes of the transitions are different, as can be seen in Fig. (5). As there is no applied electric field, the relative orientation of the polarization and the magnetization can be random, as seen in the figure. The transition temperature increases with the value of $\Delta $, meaning that the coupling helps to keep the system ordered. 2 - Fig. (6) shows the results for $\beta /J=2$. As can be seen, the transitions differ strongly in shape, and the magnetization curve is shifted to accompany the polarization transition as $\Delta $ increases. This agrees with the idea that the second model reduces to the first when $\Delta \rightarrow \infty $. However, we were not able to calculate this case for high values of $\Delta $, as the convergence time increases too much. 3 - The case where the electric transition occurs at a lower temperature than the magnetic one is presented in Fig. (7).
Here, the magnetic transition is not deformed, and the electric one is distorted, with a tendency to accompany the magnetization as $\Delta \rightarrow \infty $. Again, we were limited to the values of $\Delta $ allowed by the convergence time of the calculation. 4 - The calculation was performed for $\Delta /J=0.5$ for different values of the transition temperatures ($J\lessgtr \beta $). The results are shown graphically in Figs. (8) and (9). We separated the results for the case where the electric transition lies at a lower temperature than the magnetic one (Fig. (8)) from the opposite case, where $\beta \geq J$ (Fig. (9)). In both cases it can be observed that the transition occurring at the higher temperature remains almost undistorted, while the one with the smaller transition temperature is distorted and shifts. Fig. (10) shows the electric and magnetic transition temperatures for $\Delta /J=0.5$ as functions of $\beta $. Of course, both curves coincide when $\beta /J=1$. The magnetic transition temperature tends to saturate for small or large values of $\beta $, while the electric one behaves almost linearly. \section{Conclusions and perspectives.} Magnetoelectricity and magnetoferroics are studied experimentally using the phenomenological free energy obtained from symmetry alone and the (possible) interaction between magnetic and electric fields, as follows$^{1}$: \begin{eqnarray*} F(\overrightarrow{E},\overrightarrow{H}) &=&F_{0}-P_{i}^{S}E_{i}-M_{i}^{S}H_{i}- \\ &&-\frac{1}{2}\epsilon _{0}\epsilon _{ij}E_{i}E_{j}-\frac{1}{2}\mu _{0}\mu _{ij}H_{i}H_{j}-\alpha _{ij}E_{i}H_{j}-... \end{eqnarray*} and we have: \begin{eqnarray*} P_{i} &=&-\frac{\partial F}{\partial E_{i}}=P_{i}^{S}+\epsilon _{0}\epsilon _{ij}E_{j}+\alpha _{ij}H_{j}+... \\ M_{i} &=&-\frac{\partial F}{\partial H_{i}}=M_{i}^{S}+\mu _{0}\mu _{ij}H_{j}+\alpha _{ji}E_{j}-... 
\end{eqnarray*} where it can be seen that the experimental measurement of the magnetoelectric tensor $\alpha _{ij}$ is performed by looking for the difference between the observed magnetization (polarization) with and without an external electric (magnetic) field. To calculate that parameter, we followed the same procedure, calculating for every temperature the electric polarization with and without an applied magnetic field, that is, \begin{equation*} \alpha _{ij}(T)\backsimeq \frac{P_{i}-P_{i}^{S}}{H_{j}} \end{equation*} just as in the experiments.$^{3}$ Fig. (11) presents the results for both models, with equal transition temperatures ($J=\beta $) in the second model. The magnetoelectric coefficients go to zero at T = 0, which is expected in a system that saturates both magnetically and electrically. Our models do not include the possibility of changing the saturated energy at 0 K, but the experiments (see ref. 3) show a remanent value of the parameter. Our models do not include any effect along axes other than the $\widehat{x}$ axis; thus, we only obtained the $\alpha _{xz}$ magnetoelectric coefficient within both models. The transition in ref. 3 is first order, that is, the coefficient does not exist at temperatures above the magnetic transition, contrary to our models, where both transitions are second order and, consequently, the curves in Fig. (11) extend to high temperatures. Summarizing, our calculations show many similarities to, and some differences from, experiment. We believe that this may be the simplest way to simulate real systems, and that developing more elaborate spin-lattice terms in the Hamiltonians will help to interpret the experimental results in a straightforward way.
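The procedure just described - comparing runs with and without an applied field - amounts to a finite difference over the field. A trivial sketch (variable names are ours, purely illustrative):

```python
def magnetoelectric_coefficient(P_with_field, P_zero_field, H):
    # alpha(T) ~ (P_i - P_i^S) / H_j, evaluated at each temperature point
    # from two simulations: one with the applied field H, one without.
    return [(pH - p0) / H for pH, p0 in zip(P_with_field, P_zero_field)]
```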
\def\setboxz@h{\setbox\z@\hbox} \def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr \hfil$#1\m@th\operator@font lim$\hfil\crcr \noalign{\nointerlineskip}#2#1\crcr \noalign{\nointerlineskip\kern-\ex@}\crcr}}}} \def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@ $#1\copy\z@\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$} \def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@ $#1\mathord\leftarrow\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill \mkern-6mu\box\z@$} \def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}} \def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}} \def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@} \def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@} \def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}} \def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@ \hbox{$#1\m@th\operator@font lim$}}}} \def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}} \def\mathpalette\varlimsup@{}@#1{\mathop{\overline {\hbox{$#1\m@th\operator@font lim$}}}} \def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}% \begingroup \catcode `|=0 \catcode `[= 1 \catcode`]=2 \catcode `\{=12 \catcode `\}=12 \catcode`\\=12 |gdef|@alignverbatim#1\end{align}[#1|end[align]] |gdef|@salignverbatim#1\end{align*}[#1|end[align*]] |gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]] |gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]] |gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]] |gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]] |gdef|@gatherverbatim#1\end{gather}[#1|end[gather]] |gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]] |gdef|@gatherverbatim#1\end{gather}[#1|end[gather]] |gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]] |gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]] |gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]] 
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]] |gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]] |gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]] |gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]] |endgroup \def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim You are using the "align" environment in a style in which it is not defined.} \let\endalign=\endtrivlist \@namedef{align*}{\@verbatim\@salignverbatim You are using the "align*" environment in a style in which it is not defined.} \expandafter\let\csname endalign*\endcsname =\endtrivlist \def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim You are using the "alignat" environment in a style in which it is not defined.} \let\endalignat=\endtrivlist \@namedef{alignat*}{\@verbatim\@salignatverbatim You are using the "alignat*" environment in a style in which it is not defined.} \expandafter\let\csname endalignat*\endcsname =\endtrivlist \def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim You are using the "xalignat" environment in a style in which it is not defined.} \let\endxalignat=\endtrivlist \@namedef{xalignat*}{\@verbatim\@sxalignatverbatim You are using the "xalignat*" environment in a style in which it is not defined.} \expandafter\let\csname endxalignat*\endcsname =\endtrivlist \def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim You are using the "gather" environment in a style in which it is not defined.} \let\endgather=\endtrivlist \@namedef{gather*}{\@verbatim\@sgatherverbatim You are using the "gather*" environment in a style in which it is not defined.} \expandafter\let\csname endgather*\endcsname =\endtrivlist \def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim You are using the "multiline" environment in a style in which it is not defined.} \let\endmultiline=\endtrivlist \@namedef{multiline*}{\@verbatim\@smultilineverbatim You are using the "multiline*" environment in a style in which it is not 
defined.} \expandafter\let\csname endmultiline*\endcsname =\endtrivlist \def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim You are using a type of "array" construct that is only allowed in AmS-LaTeX.} \let\endarrax=\endtrivlist \def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.} \let\endtabulax=\endtrivlist \@namedef{arrax*}{\@verbatim\@sarraxverbatim You are using a type of "array*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endarrax*\endcsname =\endtrivlist \@namedef{tabulax*}{\@verbatim\@stabulaxverbatim You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endtabulax*\endcsname =\endtrivlist \def\endequation{% \ifmmode\ifinner \iftag@ \addtocounter{equation}{-1} $\hfil \displaywidth\linewidth\@taggnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \else $\hfil \displaywidth\linewidth\@eqnnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \fi \else \iftag@ \addtocounter{equation}{-1} \eqno \hbox{\@taggnum} \global\@ifnextchar*{\@tagstar}{\@tag}@false% $$\global\@ignoretrue \else \eqno \hbox{\@eqnnum $$\global\@ignoretrue \fi \fi\fi } \newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false \def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}} \def\@TCItag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}} \def\@TCItagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1}} \@ifundefined{tag}{ \def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}} \def\@tag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}} \def\@tagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1}} }{} \makeatother \endinput
\section{Background} The possibility of building a time machine has been proposed by many authors~\cite{friedman,gott,godel,bonnor,morris,politzer,boulware,hartie,politzer2,deutsch,novikov,lloyd,pegg,svetlichny}. Two common approaches are through closed time-like curves (CTCs)~\cite{friedman,gott,godel,bonnor,morris,politzer,boulware,hartie,politzer2} and quantum phenomena~\cite{deutsch,lloyd,pegg,svetlichny}. Although the general theory of relativity allows for CTCs, it is not clear whether the laws of physics permit their existence~\cite{hawking,carroll,deser,carroll2}. Hence the possibility of traveling back to the distant past remains an open question. Paradoxical thought experiments have been devised to suggest that traveling back in time may lead to violations of causality and hence is not possible. The most famous is the grandfather paradox, in which an agent travels back in time to kill his grandfather before his father was conceived. In this case, the agent would not exist at the current time and hence could not travel back in time to kill his grandfather. An alternative version of the grandfather paradox is autoinfanticide, where an agent travels back in time to kill himself as an infant. This paradox plays a central role in the argument against traveling back in time. Another paradox is Deutsch's unproven theorem paradox~\cite{lloyd}, in which an agent travels back in time to reveal the proof of a mathematical theorem. The proof is then recorded in a document that the agent reads at a future time. Another version of Deutsch's unproven theorem paradox is what we call the chicken-and-egg paradox: a hen travels back in time to lay an egg, and the egg hatches into the hen herself. Without the egg, the hen would not exist, but without the hen traveling back in time, the egg would not be laid. In this paper, a simple model is used in an attempt to resolve time travel paradoxes and help lay the logical foundations of traveling back in time.
Our approach is quite different from approaches that focus on how a time machine could be built (in principle)~\cite{lloyd}. We suppose that a time machine can be built, and then analyze what could be possible (or impossible) in time travel. We use a simple directed cyclic graph to explain causal relationships in different scenarios of time travel. Our conclusion is that, assuming traveling back in time is feasible, an agent who travels back in time is unable to kill himself, although he may be able to alter the past in other, self-consistent ways. The self-consistency principle was proposed by Wheeler and Feynman~\cite{feynman}, Novikov {\it et al}~\cite{novikov} and Lloyd {\it et al}~\cite{lloyd}. It states that traveling back in time may be possible, but that it cannot happen in a way that violates causality; causality here includes events in the future affecting the past. This principle precludes time travel paradoxes but does not forbid traveling back in time. Due to space limitations, the reader is referred to~\cite{feynman,novikov,lloyd} for a detailed discussion of the self-consistency principle. \section{Model} Our model can be considered a simple case of graphical models. Graphical models have been studied extensively and are applied in many fields, such as econometrics, the social sciences, artificial intelligence and even medical studies. Publications on graphical models are so numerous that we can only provide a non-exhaustive list~\cite{richardson1997,richardson1996,schmidt,spirtes,lacerda,pearlbk,rebane1987,lauritzen,lauritzen1,salmon,morgan,spirtesbk,cooper1991}. Although directed acyclic graphs have taken center stage in graphical models, directed cyclic graphical models have also received significant attention~\cite{schmidt,richardson1997,richardson1996,spirtes,lacerda}. Two important components of graphical models are intervention and the {\it do calculus}.
The theory of graphical models has few built-in constraints on what is physically possible, which leaves the theory very general. \begin{figure}[h!] \begin{picture}(200,0)(0,0) \put(0,0){$ \sigma_1 \rightarrow \sigma_2 \rightarrow \sigma_3 \rightarrow \sigma_4 \cdots \rightarrow \sigma_i \cdots \rightarrow \sigma_k \cdots \rightarrow \sigma_n $} \end{picture} \caption{A simple graphical model for a Markov chain} \label{fig:mc} \end{figure} We use a simple directed cyclic graph to study traveling back in time. First, we build constraints into our model as follows. Consider physical states evolving on a timeline as shown in Fig. \ref{fig:mc}. The graph is a one-dimensional chain, and branching is excluded. Traveling back in time introduces a loop, as in Fig. \ref{fig:loop}. We do not include intervention and the {\it do calculus}; this simplifies our analysis while capturing the important physics of a closed system. \begin{figure}[h!] \begin{picture}(200,20)(0,0) \put(0,5){$ \sigma_1 \rightarrow \sigma_2 \rightarrow \sigma_3 \rightarrow \sigma_4 \cdots \rightarrow \sigma_i \cdots \rightarrow \sigma_k \cdots \rightarrow \sigma_n $} \put(112,-10){\vector(0,1){12}} \put(112,-10){\line(1,0){33}} \put(145,-10){\line(0,1){12}} \end{picture} \caption{A simple cyclic graph to model traveling back in time from $t=k$ to $t=i$.} \label{fig:loop} \end{figure} Time is discretized, and at each time $t$ the state of the system $\sigma_t$ is a random variable. The arrows connect events at neighboring times, $\sigma_t \rightarrow \sigma_{t+1}$. The probability of a transition from $\sigma_t$ to $\sigma_{t+1}$ is given by $T_{t+1}(\sigma_{t+1}|\sigma_t)$. In this case, the conditional probabilities can be interpreted as a transition matrix, and the graph as a Markov chain. The following assumptions are made based on physical considerations: \begin{enumerate} \item The statistical time flows in the same direction as the physical time.
\item Local normalization constraint is enforced, i.e. $\sum_{\sigma_{t+1}} T_{t+1}(\sigma_{t+1}|\sigma_t)=1$. Given that the system is in a state $\sigma_t$ at time $t$, the system has to take on a state at $t+1$. In general, we can condition on more than one variable, e.g. $T_{t+1}(\sigma_{t+1}|\sigma_i,\sigma_j,\cdots)$; the local normalization condition is then $\sum_{\sigma_{t+1}} T_{t+1}(\sigma_{t+1}|\sigma_i,\sigma_j,\cdots)=1$. \item The basic probability axioms are satisfied. Let $A_i$ be a set of states and $P(A_i)$ be its probability measure; then, \begin{equation} 0 \leq P(A_i) \leq 1 \label{eq:prange} \end{equation} \begin{equation} P(\Omega) = 1 \label{eq:pnorm} \end{equation} \begin{equation} P(A_i \cup A_j) = P(A_i) + P(A_j), \end{equation} where $\Omega$ is the set of all possible states and $A_i$ and $A_j$ are mutually exclusive. Clearly, for discrete events, if $\sigma_i\in \Omega$, $\sigma_j \in \Omega$ and $\sigma_i \neq \sigma_j$, then $P(\sigma_i \cup \sigma_j) = P(\sigma_i) + P(\sigma_j)$. Here we use the shorthand notation $\sigma_i \equiv \{ \sigma_i\}$. \end{enumerate} A sequence of states $\pi_n$ is shown in Fig. \ref{fig:mc}. If the set of all possible states is $\Omega$, then the set of all possible sequences is $\mathbf \Pi = \Omega^n$. The probability of obtaining $\pi_n$ is, \begin{equation} P_{mc}(\pi_n) = p(\sigma_1) T_2(\sigma_2|\sigma_1) T_3(\sigma_3|\sigma_2) \cdots T_n(\sigma_n|\sigma_{n-1}) \label{eq:mcseq} \end{equation} where $p(\sigma_1)$ is the probability of sampling the initial state $\sigma_1$. The conditional probabilities encode the physics of how the system evolves from state to state. It can be shown that the basic axioms of probability hold for $P_{mc}(\pi_n)$. In the case of traveling back in time, the causal relationship has an arrow that loops back into the past (Fig. \ref{fig:loop}).
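As a quick sanity check of Eq. (\ref{eq:mcseq}): local normalization of each $T_t$ implies that the probabilities of all $2^n$ sequences of a two-state chain sum to one. A minimal Python sketch (the transition values and helper names are illustrative, not from the paper):

```python
import itertools

def make_T(p01, p10):
    # T[next][prev]: each column (fixed prev) is a distribution over the next
    # state, so the local normalization constraint holds by construction.
    return [[1.0 - p01, p10],
            [p01, 1.0 - p10]]

def sequence_probability(p_init, Ts, seq):
    """Eq. (mcseq): P(pi_n) = p(sigma_1) * prod_t T_t(sigma_t | sigma_{t-1})."""
    prob = p_init[seq[0]]
    for t in range(1, len(seq)):
        prob *= Ts[t - 1][seq[t]][seq[t - 1]]
    return prob

n = 5
p_init = [0.3, 0.7]                            # illustrative initial distribution
Ts = [make_T(0.2, 0.4) for _ in range(n - 1)]  # illustrative transition matrices

# Summing P_mc over all 2^n sequences recovers 1, i.e. Eq. (pinorm) for the
# forward chain without the backward loop.
total = sum(sequence_probability(p_init, Ts, seq)
            for seq in itertools.product((0, 1), repeat=n))
print(total)
```

The same enumeration is reused below once the backward loop is added, where normalization is no longer automatic.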
To model traveling back in time, we condition on two states instead of one, $\hat{T}_i(\sigma_i | \sigma_{i-1}, \sigma_k)$, where $\sigma_k$ is an event in the future with respect to time $i$. In this case, \begin{equation} P(\pi_n) = p(\sigma_1)T_2(\sigma_2|\sigma_1)\cdots \hat{T}_i(\sigma_i|\sigma_{i-1},\sigma_k) \cdots T_n(\sigma_n|\sigma_{n-1}) \label{eq:mcloop} \end{equation} All the conditional probabilities $T_j(\sigma_j|\sigma_{j-1})$ are the same as in Eq. (\ref{eq:mcseq}) except for $\hat{T}_i(\sigma_i|\sigma_{i-1},\sigma_k)$. Making such a generalization is non-trivial because we need to check that the basic axioms of probability continue to hold. At this point, we emphasize some key points that are important in this paper: \begin{enumerate} \item Time travel consists of sending a signal back to the past. The signal causes an effect only at one time point, $t=i$, as in Fig. \ref{fig:loop}. The signal could contain a set of instructions to carry out some tasks, or be an agent that travels back in time. \item The conditional probabilities $T_j$, $j=1,2,\cdots$, $j\neq i$, in Eq. (\ref{eq:mcseq}) are determined by the physics of how the system evolves forward in time. \item The term $\hat{T}_i(\sigma_i|\sigma_{i-1},\sigma_k)$ is special, as it is the only term in Eq. (\ref{eq:mcloop}) that encodes the effects of traveling back in time. \item Our framework is probabilistic: many sequences of states can happen with non-zero probability, in contrast to a deterministic view in which only one sequence is possible. Given any sequence $\pi_n$, its probability of occurrence can be calculated using Eq. (\ref{eq:mcloop}). \item A paradox can be represented by many different sequences of states. Our objective is to show that either all these sequences happen with zero probability, or they result in a violation of the basic axioms of probability.
\end{enumerate} Consider $\hat{T}_i$ to be a function of three discrete variables, $\sigma_{i-1},\sigma_i$ and $\sigma_k$. This function has to satisfy, \begin{equation} 0 \leq \hat{T}_i(\sigma_i | \sigma_{i-1},\sigma_k)\leq 1 \label{eq:thatrange} \end{equation} \begin{equation} \sum_{\{ \pi_n\} } P(\pi_n) = 1 \label{eq:pinorm} \end{equation} \begin{equation} \sum_{\sigma_i} \hat{T}_i(\sigma_i | \sigma_{i-1},\sigma_k) = 1 \label{eq:thatnorm} \end{equation} The first two conditions are analogous to Eq. (\ref{eq:prange}) and (\ref{eq:pnorm}). The last condition is the local normalization condition. Eq. (\ref{eq:pinorm}) can be reduced to, \begin{equation} \sum_{\sigma_i,\sigma_k} \hat{T}_i(\sigma_i | \tilde{\sigma}_{i-1},\sigma_k) V(\sigma_k|\sigma_i ) = 1 \label{eq:TV} \end{equation} where $V(\sigma_k|\sigma_i)$ is the conditional probability of $\sigma_k$ given $\sigma_i$, summed over all possible intermediate states $\sigma_{i+1}\cdots \sigma_{k-1}$. A detailed derivation of Eq. (\ref{eq:TV}) is given in Appendix A. This is an important equation: we will use it together with Eq. (\ref{eq:thatrange}) and (\ref{eq:thatnorm}) to show that the grandfather paradox, Deutsch's unproven theorem paradox and the chicken-and-egg paradox have to be precluded in time travel. \subsection{Two-state system} For a two-state system, $\sigma$ takes the values $\{0,1\}$. Using Eq. (\ref{eq:TV}) and (\ref{eq:thatnorm}) and summing over the four combinations $\sigma_i, \sigma_k \in \{0,1\}$, we obtain, \begin{equation} [\hat{T}_i(0|\tilde{\sigma}_{i-1},1) - \hat{T}_i(0|\tilde{\sigma}_{i-1},0)] [ V(1|0) - V(1|1) ] = 0 \label{eq:2state} \end{equation} We must have $V(1|0) = V(1|1)$ or $\hat{T}_i(0|\tilde{\sigma}_{i-1},1) = \hat{T}_i(0|\tilde{\sigma}_{i-1},0)$. For the case $V(1|0) \neq V(1|1)$, the transition matrix $\hat{T}_i$ does not depend on $\sigma_k$. In this case, the backward loop in Fig. \ref{fig:loop} has no effect: we cannot change the probability distribution of the past.
For the case $V(1|0) = V(1|1)$, we could have $\hat{T}_i(0|\tilde{\sigma}_{i-1},1) \neq \hat{T}_i(0|\tilde{\sigma}_{i-1},0)$, and the transition probabilities at $t=i$ could be affected by a signal from the future time $t=k$. \subsection{Grandfather paradox in a two-state system} The grandfather paradox can be used to illustrate the physical implications of Eq. (\ref{eq:2state}). The basic assumptions we will use are (i) resurrection is impossible, and (ii) the basic axioms of probability must be satisfied. Consider an agent sending a signal back in time to kill himself. Let us denote the dead state by $\sigma=0$ and the alive state by $\sigma=1$. No resurrection implies that $V$ is of the form, $ V = \left( \begin{array}{cc} 1 & \beta^* \\ 0 & \beta \end{array} \right), $ $\beta^*=1-\beta$. Let $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},1) = S(\sigma_i|\tilde{\sigma}_{i-1})$ be the transition probabilities for the scenario in which the agent sends a signal from the future to kill himself. Let $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},0) = N(\sigma_i|\tilde{\sigma}_{i-1})$ be the transition probabilities for the sequences of events in which the agent is dead at $t=k$ and hence no signal is sent from the future. Hence $S$ (the ``killing'' matrix) and $N$ are of the form, \begin{equation} S = \left( \begin{array}{cc} 1 & 1 \\ 0 & 0 \end{array} \right) \mbox{\hspace{.6cm}} N = \left( \begin{array}{cc} 1 & b^* \\ 0 & b \end{array} \right) \end{equation} where $b^* = 1-b$ is the probability of dying at $t=i$. Substituting the values of $N$, $S$ and $V$ into Eq. (\ref{eq:2state}), we obtain $[1-b^*]\beta= 0$. Either $b^*=1$ or $\beta=0$. When $b^*=1$, then $N=S$ and the agent dies at $t=i$ with probability 1. If $\beta=0$, the agent dies sometime between $t=i$ and $t=k$ with probability 1. In either case, the scenario in which the agent is alive at $t=k$ and thus able to send the signal occurs with zero probability. Note that we are analyzing probabilities rather than specific events.
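The case analysis above can be verified by brute force. The Python sketch below (our own illustrative encoding of $N$, $S$ and $V$; the parameter grid is arbitrary) scans pairs $(b,\beta)$ and confirms that the normalization constraint Eq. (\ref{eq:TV}) holds only when $b\beta=0$, and that the probability mass on ``alive at $t=k$'' is then zero:

```python
# Grandfather paradox, two-state system: sigma = 0 (dead), sigma = 1 (alive),
# with the agent alive just before t = i. N, S and V follow the text.
def model(b, beta):
    # T_hat(sigma_i | alive, sigma_k), keyed by (sigma_i, sigma_k).
    T_hat = {(0, 1): 1.0, (1, 1): 0.0,      # S: the "killing" matrix, death certain
             (0, 0): 1.0 - b, (1, 0): b}    # N: survives t = i with probability b
    # V(sigma_k | sigma_i), keyed by (sigma_k, sigma_i); no resurrection: V(1|0) = 0.
    V = {(0, 0): 1.0, (1, 0): 0.0,
         (0, 1): 1.0 - beta, (1, 1): beta}
    return T_hat, V

def total_probability(b, beta):
    """Left-hand side of Eq. (TV) for this scenario; consistency requires 1."""
    T_hat, V = model(b, beta)
    return sum(T_hat[(si, sk)] * V[(sk, si)] for si in (0, 1) for sk in (0, 1))

def prob_alive_at_k(b, beta):
    """Probability mass on sigma_k = 1, i.e. on the signal-sending event."""
    T_hat, V = model(b, beta)
    return sum(T_hat[(si, 1)] * V[(1, si)] for si in (0, 1))

grid = [x / 10 for x in range(11)]
consistent = [(b, beta) for b in grid for beta in grid
              if abs(total_probability(b, beta) - 1.0) < 1e-12]
# Eq. (TV) holds only when (1 - b*) * beta = b * beta = 0 ...
assert all(b * beta == 0.0 for b, beta in consistent)
# ... and in every consistent scenario the agent is dead at t = k with certainty.
assert all(prob_alive_at_k(b, beta) == 0.0 for b, beta in consistent)
print(len(consistent), "consistent (b, beta) pairs; none has the agent alive at t = k")
```

The total probability evaluates to $1-b\beta$, so the constraint singles out exactly the two cases discussed in the text ($b^*=1$ or $\beta=0$).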
The conclusion comes about because resurrection is impossible ($V(1|0)=0$). Suppose resurrection is possible, $V(1|0)=\alpha^*>0$; the paradox goes away when $\alpha^*=\beta$. Intuitively, if we allow resurrection, the agent could send a signal back in time from $t=k$ to kill himself at $t=i<k$. Between the times $t=i$ and $t=k$, the agent is resurrected and hence could again send the signal at $t=k$. There is no contradiction in this case. Another way to resolve the paradox is to relax the assumption that the agent always succeeds in killing himself. In this case, the matrix $S$ is $\left( \begin{array}{cc} 1 & \lambda^* \\ 0 & \lambda \end{array} \right)$, $\lambda>0$. Eq. (\ref{eq:2state}) gives $\beta (\lambda-b)=0$. If $\beta=0$, then the agent dies sometime between $t=i$ and $t=k$. If $\lambda=b$, then $S=N$ and the signal from the future cannot change the transition probabilities at $t=i$. The agent cannot change his own fate by sending a signal to the past. \subsection{Deutsch's unproven theorem paradox} An agent sends a signal containing the proof of a mathematical theorem back in time. The signal is encoded in a document that the agent reads at a future time. Denote the existence of the proof by $\sigma=0$ and the absence of the proof by $\sigma=1$. A general form of $V$ is, $ V = \left( \begin{array}{cc} \alpha & \beta^* \\ \alpha^* & \beta \end{array} \right), $ $\alpha^*=1-\alpha$, $\beta^*=1-\beta$. The basic assumptions we use are (i) the transition from $\sigma=1$ to $\sigma=0$ (from absence to existence of the proof) happens solely through the signal traveling back in time, and (ii) the transition from $\sigma=0$ to $\sigma=1$ happens with zero probability (once the proof is obtained, it never gets lost). Hence $\beta=1$ and $\alpha^*=0$. The transition probability $\hat{T}_i(\sigma_i=0|\tilde{\sigma}_{i-1}=1,\sigma_k=1) = 0$ represents no signal being sent if the proof does not exist at $t=k$ ($\sigma_k=1$).
$\hat{T}_i(\sigma_i=0|\tilde{\sigma}_{i-1}=1,\sigma_k=0)=1$ represents a signal being sent when the proof exists at $t=k$. These basic assumptions contradict Eq. (\ref{eq:2state}): $[\hat{T}_i(0|1,1)-\hat{T}_i(0|1,0)](\alpha^*-\beta) = (0-1)(0-1) \neq 0$. Hence the assumptions are false and Deutsch's unproven theorem paradox is precluded. The paradox can be resolved if we relax the assumptions. Suppose we allow the possibility that the proof can get lost ($\alpha^*\geq 0$) and that the proof can be derived by some brilliant mathematician ($\beta\leq 1$). Then Eq. (\ref{eq:2state}) can be satisfied if $\alpha^*=\beta$. There is no paradox here because the proof can be sent back in time and subsequently be lost; it can then be re-derived and sent back to the past again. \begin{figure} \begin{picture}(180,130)(0,0) \put(0,-15){\includegraphics[width=6cm]{./range_crop.eps}} \put(125,65){$N(1|\tilde{\sigma}_{i-1})-S(1|\tilde{\sigma}_{i-1})$} \put(-12,130){$N(2|\tilde{\sigma}_{i-1})-S(2|\tilde{\sigma}_{i-1})$} \put(52.2,19){\line(1,2){38}} \put(95,95){$-a/b$} \end{picture} \caption{The shaded region shows the possible values of $N(1|\tilde{\sigma}_{i-1})-S(1|\tilde{\sigma}_{i-1})$ (x-axis) and $N(2|\tilde{\sigma}_{i-1})-S(2|\tilde{\sigma}_{i-1})$ (y-axis).} \label{fig:range} \end{figure} \subsection{Three-state system} For a three-state system, $\sigma$ takes the values $\{0,1,2\}$. For simplicity, let $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},0)=N(\sigma_i|\tilde{\sigma}_{i-1})$ and $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},1)=\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},2)=S(\sigma_i|\tilde{\sigma}_{i-1})$. Using Eq.
(\ref{eq:thatnorm}) and (\ref{eq:TV}), \begin{eqnarray} \label{eq:3state} [N(1|\tilde{\sigma}_{i-1})-S(1|\tilde{\sigma}_{i-1})] [V(0|1)-V(0|0)] + && \\ \nonumber [N(2|\tilde{\sigma}_{i-1})-S(2|\tilde{\sigma}_{i-1})] [V(0|2)-V(0|0)] = && 0 \end{eqnarray} This is an equation of the form $xa+yb=0$, with $a=[V(0|1)-V(0|0)]$ and $b=[V(0|2)-V(0|0)]$, which can be solved for $x=[N(1|\tilde{\sigma}_{i-1})-S(1|\tilde{\sigma}_{i-1})]$ and $y=[N(2|\tilde{\sigma}_{i-1})-S(2|\tilde{\sigma}_{i-1})]$. There are in general infinitely many solutions. From Eq. (\ref{eq:thatrange}), the range of $[N(1|\tilde{\sigma}_{i-1})-S(1|\tilde{\sigma}_{i-1})]$ and $[N(2|\tilde{\sigma}_{i-1})-S(2|\tilde{\sigma}_{i-1})]$ is bounded by the shaded region in Fig. \ref{fig:range}. Given $a$ and $b$, the set of solutions for $x$ and $y$ contains all the points on the line shown in Fig. \ref{fig:range}; the slope of the line is $-a/b$. $N\neq S$ implies that the transition to the state $\sigma_i$ depends on the future state $\sigma_k$; that is, signals from the future can affect the probability distribution of the past. \subsection{The grandfather paradox in a three-state system} Consider three states representing healthy ($\sigma=2$), sick ($\sigma=1$) and dead ($\sigma=0$). First, we lay down our assumptions: \begin{enumerate} \item Assume resurrection is impossible, so that the transition from $\sigma=0$ to $\sigma\neq 0$ happens with zero probability. Then the matrix $V$ is of the form, \begin{equation} V = \left( \begin{array}{ccc} 1 & \alpha_0 & \beta_0 \\ 0 & \alpha_1 & \beta_1 \\ 0 & \alpha_2 & \beta_2 \end{array} \right) \end{equation} with $\alpha_0+\alpha_1+\alpha_2=1$ and $\beta_0+\beta_1+\beta_2=1$. \item The agent is able to send a signal back in time to kill himself only if he is not dead at $t=k$.
\end{enumerate} $\hat{T}_i(\sigma_i|\sigma_{i-1}, 1)$ and $\hat{T}_i(\sigma_i|\sigma_{i-1}, 2)$ are the conditional probabilities for the scenarios in which the agent is alive at $t=k$ and sends a signal back in time to kill himself. Let $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1}, 1) = \hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1}, 2) = S(\sigma_i|\tilde{\sigma}_{i-1})$. $\hat{T}_i(\sigma_i|\sigma_{i-1}, 0)$ is the conditional probability for the scenario in which the agent is dead at $t=k$ and cannot send a signal back in time to kill himself. Let $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1}, 0)=N(\sigma_i| \tilde{\sigma}_{i-1})$. Hence $S$ (the ``killing'' matrix) and $N$ are, \begin{equation} S = \left( \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \mbox{\hspace{.3cm}} N = \left( \begin{array}{ccc} 1 & a_0 & b_0 \\ 0 & a_1 & b_1 \\ 0 & a_2 & b_2 \end{array} \right) \label{eq:kill3} \end{equation} From Eq. (\ref{eq:3state}) we have, \begin{eqnarray} \label{eq:gfp3state} a_1 (1-\alpha_0) + a_2 (1-\beta_0) & = & 0 \\ \nonumber b_1 (1-\alpha_0) + b_2 (1-\beta_0) & = & 0 \end{eqnarray} There are four cases in which Eq. (\ref{eq:gfp3state}) is satisfied. \begin{enumerate} \item $\alpha_0=1$ and $\beta_0=1$. Then $V=S$, which means the agent is dead at $t=k$ with probability 1 (recall that $S$ is the killing matrix). \item $\alpha_0=1$ and $\beta_0< 1$. To satisfy Eq. (\ref{eq:gfp3state}), $a_2=b_2=0$. In this case the agent is dead at $t=k$ with probability 1 (see Appendix B for the proof). \item $\alpha_0<1$ and $\beta_0=1$. To satisfy Eq. (\ref{eq:gfp3state}), $a_1=b_1=0$. In this case the agent is dead at $t=k$ with probability 1 (see Appendix B for the proof). \item $\alpha_0< 1$ and $\beta_0< 1$. Then $a_1=a_2=b_1=b_2=0$ and $N=S$, which means the agent is dead at $t=i$ with probability 1. \end{enumerate} In all cases, the agent is dead at $t=k$ with probability 1 and hence never has a chance to send a signal back in time to kill himself. Suppose $S$ is not the killing matrix (Eq.
(\ref{eq:kill3})) or that resurrection is possible; then this argument does not hold, and the agent is able to alter his fate by changing the probabilities of being healthy, sick or dead. \subsection{The chicken-and-egg paradox} Consider the chicken-and-egg paradox, in which at time $t=k$ a hen travels back in time to $t=i$ to lay an egg. The egg hatches into the hen herself. At this time point, there are two copies of the hen: the older self and the younger self (the chick). As both copies evolve forward to time $t=k$, the chick grows older and travels back in time to lay the egg. This paradox seems ``self-consistent'' in the sense that there is no contradiction in the existence of the hen and chick from one time point to another; the problem, however, is that the hen seems to appear from nowhere. There are three possible states: hen and chick ($\sigma=0$), hen only ($\sigma=1$), and no hen and no chick ($\sigma=2$). We exclude the state of chick only, otherwise we would need four states. There is no hen and no chick initially, hence $\tilde{\sigma}_{i-1}=2$. Let $\hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},1) = \hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},2)=N(\sigma_i|\tilde{\sigma}_{i-1})$. This is the case in which no chick travels back in time, and hence there remains no hen and no chick at $t=i$. Let $\hat{T}_i(\sigma_i| \tilde{\sigma}_{i-1},0) = S(\sigma_i|\tilde{\sigma}_{i-1})$: the chick travels back in time from $t=k$ to $t=i$. The matrices $S$ and $N$ are, \begin{equation} S = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{array} \right) \mbox{\hspace{.4cm}} N = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) \end{equation} The matrix $V$ is of the form, \begin{equation} V = \left( \begin{array}{ccc} \alpha_0 & \beta_0 & 0 \\ \alpha_1 & \beta_1 & 0 \\ \alpha_2 & \beta_2 & 1 \end{array} \right) \end{equation} The first two columns are general, with $\sum_{j=0}^2 \alpha_j=1$ and $\sum_{j=0}^2 \beta_j=1$.
The last column is $(0,0,1)^T$ because when there is no hen and no chick at time $t=i$, then there will be no hen and no chick at $t=k$. Now consider the probability, \begin{equation} P(\tilde{\sigma}_{i-1},\sigma_i,\sigma_k) = p(\tilde{\sigma}_{i-1}) \hat{T}_i(\sigma_i|\tilde{\sigma}_{i-1},\sigma_k) V(\sigma_k|\sigma_i) \end{equation} $p(\tilde{\sigma}_{i-1})$ is the probability of sampling the state $\tilde{\sigma}_{i-1}$. Since $\tilde{\sigma}_{i-1}=2$, $p(\tilde{\sigma}_{i-1})=\delta_{\tilde{\sigma}_{i-1},2}$. We remind the reader that the probability distribution $V$ is the sum of probabilities over all possible intermediate sequences. The chicken-and-egg paradox requires both hen and chick to be present at $t=k$ ($\sigma_k=0$) and the chick to appear at $t=i$ ($\sigma_i=1$); all intermediate states can take arbitrary values. Reading off entries from the matrices $S$ and $V$, \begin{equation} P(\tilde{\sigma}_{i-1}=2,\sigma_i=1,\sigma_k=0) = \beta_0 \end{equation} Using Eq. (\ref{eq:3state}) we can calculate what $\beta_0$ should be, \begin{eqnarray} [1-0] (\beta_0 - \alpha_0) - [0-1] \alpha_0 & = & 0 \\ \nonumber \Rightarrow \beta_0 & = & 0 \end{eqnarray} The sum of probabilities of all possible sequences of states that represent the chicken-and-egg paradox equals zero. Therefore the chicken-and-egg event happens with zero probability. \section{Discussion} We have shown, using a graphical model with a loop back into the past, that the grandfather paradox, Deutsch's unproven-theorem paradox and the chicken-and-egg paradox are precluded in time travel. We have also demonstrated that changing the probability distributions of the past is possible when no contradictory events are present. For the paradoxes discussed in this paper, we gave scenarios in which they are resolved. Our analysis is based on isolated two- and three-state systems. For future work, it would be useful to generalize our formalism to arbitrary systems.
Lastly, in cases where the causal relationships between events at different times are very complex, the existence of time-travel paradoxes may be rather subtle. We hope that our mathematical framework can be used to uncover new time-travel paradoxes, especially those that are embedded in complex interactions of events and are not obvious. The author would like to thank Mui Leng Seow and Ivana Mihalek for proofreading this article. \section{Appendix A: Derivation of Eq. (\ref{eq:TV})} The probability of a sequence $\pi_n$ is given by, \begin{equation} P(\pi_n) = p(\sigma_1)T_2(\sigma_2|\sigma_1)\cdots \hat{T}_i(\sigma_i|\sigma_{i-1},\sigma_k) \cdots T_n(\sigma_n|\sigma_{n-1}) \label{eq:mcloop} \end{equation} Summing over all sequences, \begin{eqnarray} \label{eq:sumseq} \sum_{\{\pi_n\}} P(\pi_n) & = & \sum_{\sigma_1,\sigma_2,\cdots,\sigma_n} p(\sigma_1) T_2(\sigma_2|\sigma_1)\cdots \hat{T}_i(\sigma_i|\sigma_{i-1},\sigma_k)\cdots T_n(\sigma_n|\sigma_{n-1}) \end{eqnarray} Since $\sum_{\sigma_j} T_j(\sigma_j|\sigma_{j-1})=1$ for all $j$, the summation can be evaluated recursively between $\sigma_{k+1}$ and $\sigma_n$. That is, \begin{equation} \sum_{\sigma_{k+1},\cdots, \sigma_n} T_{k+1}(\sigma_{k+1}|\sigma_k) \cdots T_n(\sigma_n|\sigma_{n-1}) = 1 \end{equation} Next define, \begin{equation} U(\sigma_{i-1}) = \sum_{\sigma_1,\cdots, \sigma_{i-2}} p(\sigma_1) T_2(\sigma_2|\sigma_1)\cdots T_{i-1}(\sigma_{i-1}|\sigma_{i-2}) \end{equation} \begin{equation} V(\sigma_k|\sigma_i) = \sum_{\sigma_{i+1},\cdots, \sigma_{k-1}} T_{i+1}(\sigma_{i+1}|\sigma_i)\cdots T_k(\sigma_k|\sigma_{k-1}) \end{equation} Then Eq. (\ref{eq:sumseq}) becomes, \begin{equation} \sum_{\{ \pi_n\} } P(\pi_n) = \sum_{\sigma_{i-1},\sigma_i,\sigma_k} U(\sigma_{i-1}) \hat{T}_i(\sigma_i | \sigma_{i-1},\sigma_k) V(\sigma_k|\sigma_i ) \end{equation} The objective is to find the conditions under which $\sum_{\{\pi_n\} } P(\pi_n)=1$.
$U(\sigma_{i-1})$ is the probability of sampling the state $\sigma_{i-1}$; it depends on the conditional probabilities $T_j$, $j\leq i-1$, and the initial condition $p(\sigma_1)$. We therefore have the freedom to choose $U$, for example by choosing different initial conditions. Holding $T$ and $V$ fixed, we require $\sum P(\pi_n)=1$ for all choices of $U$, from which we arrive at, \begin{equation} \sum_{\sigma_i,\sigma_k} \hat{T}_i(\sigma_i | \tilde{\sigma}_{i-1},\sigma_k) V(\sigma_k|\sigma_i ) = 1 \end{equation} \section{Appendix B: The grandfather paradox in a three-state system} We present the proof that, for the grandfather paradox in a three-state system, the probability that the agent is dead at $t=k$ is one. We consider cases II and III in which Eq. (\ref{eq:gfp3state}) is satisfied. \subsection{Case II: $\alpha_0=1$ and $\beta_0<1$} In this case, $a_2=b_2=0$ and, \begin{equation} V=\left( \begin{array}{ccc} 1 & 1 & \beta_0 \\ 0 & 0 & \beta_1 \\ 0 & 0 & \beta_2 \end{array} \right) \label{eq:B:V1} \end{equation} \begin{equation} N=\left( \begin{array}{ccc} 1 & a_0 & b_0 \\ 0 & a_1 & b_1 \\ 0 & 0 & 0 \end{array} \right) \label{eq:B:N1} \end{equation} We calculate the probability that the agent is dead, \begin{eqnarray} \nonumber P(\sigma_k=0) & = & \sum_{\sigma_1,\cdots,\sigma_{k-1}} p(\sigma_1) T_2(\sigma_2|\sigma_1) \cdots \\ \nonumber & = & \sum_{\sigma_{i-1},\sigma_i} U(\sigma_{i-1}) \hat{T}_i(\sigma_i|\sigma_{i-1},0) V(0|\sigma_i) \\ & = & \sum_{\sigma_{i-1},\sigma_i} U(\sigma_{i-1}) N(\sigma_i|\sigma_{i-1}) V(0|\sigma_i) \label{eq:B:p} \end{eqnarray} Reading off the entries of the matrices $V$ and $N$ in Eqs. (\ref{eq:B:V1}) and (\ref{eq:B:N1}), we get $\sum_{\sigma_i} N(\sigma_i|\sigma_{i-1}) V(0|\sigma_i) = 1$ for all $\sigma_{i-1}$. Hence $P(\sigma_k=0)=1$.
\subsection{Case III: $\alpha_0<1$ and $\beta_0=1$} In this case, $a_1=b_1=0$ and, \begin{equation} V=\left( \begin{array}{ccc} 1 & \alpha_0 & 1\\ 0 & \alpha_1 & 0\\ 0 & \alpha_2 & 0 \end{array} \right) \label{eq:B:V2} \end{equation} \begin{equation} N=\left( \begin{array}{ccc} 1 & a_0 & b_0 \\ 0 & 0 & 0 \\ 0 & a_2 & b_2 \end{array} \right) \label{eq:B:N2} \end{equation} We calculate the probability that the agent is dead using Eq. (\ref{eq:B:p}). Reading off the entries of the matrices $V$ and $N$ in Eqs. (\ref{eq:B:V2}) and (\ref{eq:B:N2}), we get $\sum_{\sigma_i} N(\sigma_i|\sigma_{i-1}) V(0|\sigma_i) = 1$ for all $\sigma_{i-1}$. Hence $P(\sigma_k=0)=1$.
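The two cases above can also be cross-checked numerically. The following sketch uses arbitrarily chosen (hypothetical) matrix entries satisfying the constraints of Cases II and III, and verifies that $\sum_{\sigma_i} N(\sigma_i|\sigma_{i-1}) V(0|\sigma_i)=1$ for every $\sigma_{i-1}$, and hence $P(\sigma_k=0)=1$:

```python
# Numerical cross-check of Cases II and III with arbitrarily chosen
# (hypothetical) matrix entries satisfying the constraints of each case.
# Columns of N are probability distributions over the state at t = i.

def dead_probabilities(N, V0_row):
    # For each previous state, dot the corresponding column of N with the
    # first row of V, i.e. sum_{s_i} N(s_i|s_prev) V(0|s_i).
    return [sum(N[si][sp] * V0_row[si] for si in range(3)) for sp in range(3)]

a0, b0 = 0.3, 0.6            # hypothetical values; a1 = 1 - a0, b1 = 1 - b0
beta0, alpha0 = 0.2, 0.4     # beta0 < 1 (Case II), alpha0 < 1 (Case III)

# Case II: alpha0 = 1, beta0 < 1, hence a2 = b2 = 0.
N2 = [[1, a0, b0], [0, 1 - a0, 1 - b0], [0, 0, 0]]
assert all(abs(x - 1) < 1e-12 for x in dead_probabilities(N2, [1, 1, beta0]))

# Case III: beta0 = 1, alpha0 < 1, hence a1 = b1 = 0.
N3 = [[1, a0, b0], [0, 0, 0], [0, 1 - a0, 1 - b0]]
assert all(abs(x - 1) < 1e-12 for x in dead_probabilities(N3, [1, alpha0, 1]))
```

The check holds for any admissible $a_0$, $b_0$, $\beta_0<1$, $\alpha_0<1$, since the relevant column sums equal one by normalization.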
\section{Introduction} It has long been known that quark confinement in QCD can be modeled by means of a dual-superconductor scenario~\cite{1, 15}. This scenario suggests that the Yang--Mills vacuum resembles that of a dual superconductor, which consists of the condensate of a magnetically charged Higgs field. The resulting dual Abelian Higgs model is a four-dimensional relativistic generalization of the Landau--Ginzburg theory of dual superconductivity. Dedicated lattice simulations support this scenario of confinement with very high accuracy~\cite{2}. It turns out that not only the dual Abelian Higgs model but also the dual Landau--Ginzburg theory (DLGT) can be relevant to the description of the Yang--Mills vacuum. The reason is that, upon the deconfinement phase transition, large spatially oriented Wilson loops still exhibit an area-law behavior (see Ref.~\cite{f1} for the lattice results on the corresponding spatial string tension $\sigma_s$). Analytically, spatial confinement can be described to good accuracy in terms of soft stochastic chromo-magnetic Yang--Mills fields~\cite{ag}, which (unlike soft chromo-electric fields) survive the deconfinement phase transition~\cite{de}. Moreover, for every temperature-dependent quantity, there exists a so-called temperature of dimensional reduction such that, above that temperature, the contribution to the quantity at issue produced by all Matsubara frequencies $\omega_k=2\pi Tk$ with $k\ne 0$ is negligible compared to the contribution of $\omega_0$. It should, of course, be borne in mind that, although the contributions of the nonzero modes amount to at most a few per cent of the static-mode contribution, these contributions are always present. For this reason, dimensional reduction is not a phase transition with a definite critical temperature that could be determined from the thermodynamic equations.
At the formal level, one can only say that the dimensional reduction of the Euclidean Yang--Mills action corresponds to the substitution \begin{equation} \label{dr} S_{\rm YM}=\frac{1}{4g_{\rm YM}^2}\int d^3x\int_{0}^{1/T} dx_4{\,}(F_{\mu\nu}^a)^2 \rightarrow \frac{1}{4g_{\rm YM}^2T}\int d^3x{\,}(F_{\mu\nu}^a)^2, \end{equation} where $F_{\mu\nu}^a=\partial_\mu A_\nu^a-\partial_\nu A_\mu^a-f^{abc}A_\mu^b A_\nu^c$ is the Yang--Mills field-strength tensor. Thus, the zero-temperature Yang--Mills coupling $g_{\rm YM}$ goes over to the temperature-dependent dimensionful coupling $g_T=g_{\rm YM}\sqrt{T}$. The latter defines the parametric temperature dependence of all the dimensionful nonperturbative quantities upon their dimensional reduction. In particular, the spatial string tension scales with temperature as~\cite{ag} $\sigma_s\propto g_T^4$, ensuring spatial confinement in the dimensionally reduced Yang--Mills theory. As such, this theory can be modeled by means of DLGT. The aim of the present paper is to address topological effects that might occur in DLGT. These effects are related to the long-range interactions between the excitations of the dual-Higgs vacuum, which are described by Wilson loops, and the dual Abrikosov vortices, i.e. those carrying electric fluxes~\cite{az}. The latter are present in the vacuum as topologically stable solutions to the classical equations of motion~\cite{15}. {\it A~priori}, one can expect the Wilson loops and Abrikosov vortices to interact only by means of massive dual vector bosons. We show that, in addition, a long-range Aharonov--Bohm-type interaction is present, which appears in the form of the Gauss linking number between the contour of a Wilson loop and an Abrikosov vortex.
However, in the so-called London limit, which corresponds to an extreme type-II dual superconductor, the coupling of the Aharonov--Bohm-type interaction is shown to be such that the interaction trivializes, producing only an inessential factor of ${\rm e}^{2\pi i\times({\rm integer})}$. The paper is organized as follows. In the next Section, we perform a path-integral duality transformation of the Wilson loop, and explicitly find the said Aharonov--Bohm-type interaction. In Section~III, we generalize these results to the case of an SU($N_c$)-inspired [U(1)]$^{N_c-1}$-invariant DLGT. In Section~IV, we additionally consider the effects produced in DLGT by the Chern--Simons (CS) term. First, we briefly show that, in the absence of the dual Higgs field, the CS term leads to a self-linkage of the contour of the Wilson loop. Then we perform the duality transformation of the Wilson loop in the full theory, which includes the dual Higgs field. In particular, at sufficiently large values of the $\Theta$-parameter entering the CS term, we obtain an analytic expression for the Wilson loop. Furthermore, in the same large-$\Theta$ limit, we explicitly find knotted dual Abrikosov vortices, whose self-linkage is provided by the CS term. In Section~V, we summarize the results obtained. In Appendices A and B, we provide some technical details of the calculations performed. \section{Wilson loop in the dual Landau--Ginzburg theory} The dual Abelian Higgs model is described by the following Euclidean action: $$S_{\rm DAHM}=\int d^4x\left\{\frac14F_{\mu\nu}^2[B]+|D_\mu \varphi|^2+\lambda(|\varphi|^2-\eta_{\rm 4d}^2)^2\right\}.$$ Here $F_{\mu\nu}[B]=\partial_\mu B_\nu-\partial_\nu B_\mu$ is the strength tensor of the dual gauge field $B_\mu$, and $D_\mu=\partial_\mu+ig_mB_\mu$ is the covariant derivative, with $g_m$ being the dimensionless magnetic coupling related to the electric coupling $e$ via the Dirac quantization condition $g_m e=2\pi\times{\,}({\rm integer})$.
We consider this model in the so-called London limit $\sqrt{\lambda}\gg g_m$, that is, the limit of an extreme type-II dual superconductor. Due to the factor ${\rm e}^{-\lambda\int d^4x(|\varphi|^2-\eta_{\rm 4d}^2)^2}$ in the partition function, the dominant contribution to the functional integral is produced by configurations of the dual Higgs field with $|\varphi|=\eta_{\rm 4d}$. That is, variations of the radial part of the dual Higgs field do not matter in the London limit, which is equivalent to the fact that the condensate of this field is fully developed everywhere except for the infinitely thin cores of the dual strings. Rather, it is the phase of the dual Higgs field which matters, so that $\varphi(x)=\eta_{\rm 4d}{\,}{\rm e}^{i\theta(x)}$, and the kinetic term of the dual Higgs field takes the form $|D_\mu \varphi|^2=\eta_{\rm 4d}^2\cdot(\partial_\mu\theta+ g_mB_\mu)^2$. Accordingly, in the London limit of interest, the action of the dual Abelian Higgs model reads \begin{equation} \label{s4d} S_{4{\rm d}}=\int d^4x\left\{\frac14F_{\mu\nu}^2[B]+\eta_{4{\rm d}}^2(\partial_\mu\theta+g_m B_\mu)^2\right\}. \end{equation} Notice that this action can be used to calculate the tension of a Nambu--Goto string interconnecting two static electric charges, as well as the correlation length of the two-point function of $F_{\mu\nu}$'s (cf. Ref.~\cite{ae1}). Matching these two quantities with their phenomenological QCD counterparts, one readily finds $\eta_{4{\rm d}}\sim\sqrt{\sigma}$ and $g_m\sim\frac{1}{a\sqrt{\sigma}}$, where $\sigma$ is the string tension entering the static quark-antiquark potential, and $a$ is the correlation length of the two-point correlation function of gluonic field strengths. As was mentioned in the Introduction, upon the deconfinement phase transition in QCD, the chromo-electric part of the gluon condensate vanishes (in accordance with deconfinement), while the chromo-magnetic part survives, providing an area law for large spatial Wilson loops (cf.
Refs.~\cite{f1,ag,de}). The corresponding spatially confining vacuum can be modeled by means of the dual Landau--Ginzburg theory. The action of this theory, \begin{equation} \label{s3dd} S_{3{\rm d}}=\int d^3x\left\{\frac{1}{4}F_{\mu\nu}^2[b]+ \eta_{3{\rm d}}^2(\partial_\mu\theta+\kappa b_\mu)^2\right\}, \end{equation} follows from the action~(\ref{s4d}) upon the substitution $\int d^4x\to\beta\int d^3x$, where $\beta\equiv1/T$ [cf. the same substitution in the Yang--Mills action~(\ref{dr})]. Matching the fields and parameters of the action $S_{3{\rm d}}$ with those of the action $S_{4{\rm d}}$, we obtain the following relations: \begin{equation} \label{rel} b_\mu=\sqrt{\beta}B_\mu,~~ \eta_{3{\rm d}}=\sqrt{\beta}\eta_{4{\rm d}},~~ \kappa=g_m\sqrt{T}. \end{equation} Notice that, in terms of the phenomenological QCD parameters $\sigma$ and $a$ (cf. the previous paragraph), one gets the estimates $\eta_{3{\rm d}}\sim\sqrt{\sigma\beta}$, $\kappa\sim\frac{\sqrt{T/\sigma}}{a}$. We now consider the central object of our study, namely the Wilson loop associated with an excitation of the dual-Higgs vacuum. In the initial dual Abelian Higgs model, it has the form $\left<W(C)\right>_{\rm DAHM}=\left<\exp\left(ig_mN\oint_C dx_\mu B_\mu\right)\right>$, where the integer $N$ characterizes the magnetic charge $g_mN$ of an excitation that propagates along the contour $C$. The counterpart of this expression in the dual Landau--Ginzburg theory reads \begin{equation} \label{we} \left<W(C)\right>=\left<\exp\left(i\kappa N\oint_C dx_\mu b_\mu\right)\right>, \end{equation} where we have used the above relations~(\ref{rel}). We notice that, in the purely Maxwell theory corresponding to $\eta_{3{\rm d}}=0$ in Eq.~(\ref{s3dd}), the Wilson loop has the form \begin{equation} \label{wt} \left<W(C)\right>=\exp\left(-\frac{(\kappa N)^2}{2} \oint_C dx_\mu \oint_C dy_\mu D_0({\bf x}-{\bf y})\right), \end{equation} where $D_0({\bf x})=1/(4\pi|{\bf x}|)$ is the Coulomb propagator.
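For orientation, the estimates just quoted are easy to evaluate numerically. The sketch below uses round, illustrative values of $\sigma$, $a$ and $T$ (assumptions chosen for the example, not values quoted in the text) and evaluates $\eta_{3{\rm d}}\sim\sqrt{\sigma\beta}$ and $\kappa\sim\sqrt{T/\sigma}/a$:

```python
import math

# Illustrative, assumed values in GeV units (round numbers for the sketch,
# not taken from the text):
sigma = 0.2    # string tension, GeV^2
a = 1.0        # field-strength correlation length, GeV^-1
T = 0.3        # temperature, GeV

beta = 1.0 / T
eta_3d = math.sqrt(sigma * beta)     # eta_3d ~ sqrt(sigma * beta), GeV^(1/2)
kappa = math.sqrt(T / sigma) / a     # kappa ~ sqrt(T/sigma)/a,     GeV^(1/2)

# Within these order-of-magnitude estimates the temperature dependence
# cancels in the product: kappa * eta_3d = 1/a.
assert abs(kappa * eta_3d - 1.0 / a) < 1e-12
```

Both three-dimensional quantities carry dimension (mass)$^{1/2}$, as they must in three dimensions, and within these estimates their product sets a temperature-independent mass scale $\sim 1/a$.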
We now calculate the Wilson loop $\left<W(C)\right>$ with the average $\left<\cdots\right>$ corresponding to the full action~(\ref{s3dd}), where $\eta_{3{\rm d}}\ne 0$. To this end, we find it convenient to introduce, instead of the field $b_\mu$, a rescaled field $v_\mu=b_\mu/(\kappa N)$, and to denote \begin{equation} \label{numu} \nu=1/(\kappa N)^2,~~~~~ \mu=\kappa^2N. \end{equation} In terms of these notations, the Wilson loop~(\ref{we}) can be written as \begin{equation} \label{1} \left<W(C)\right>=\int {\cal D}v_\mu{\,}{\cal D}\tilde\theta{\,}{\cal D}\bar\theta{\,} {\rm e}^{-\int_x\left[\frac{1}{4\nu}F_{\mu\nu}^2[v]+\eta^2(\partial_\mu\theta+\mu v_\mu)^2- \frac{i}{\nu}v_\mu j_\mu\right]}, \end{equation} where $j_\mu({\bf x};C)=\oint_C dx_\mu(\tau)\delta({\bf x}- {\bf x}(\tau))$ is a conserved current, $\eta\equiv\eta_{3{\rm d}}$, and from now on we use the short-hand notations $\int_x\equiv\int d^3x$ and $\int_p\equiv \int \frac{d^3p}{(2\pi)^3}$. The full phase $\theta$ of the dual Higgs field can be represented as a sum $\theta=\tilde\theta+\bar\theta$, with $\tilde\theta$ experiencing jumps by $2\pi$ when going around dual Abrikosov vortices, and $\bar\theta$ being a Gaussian fluctuation around $\tilde\theta$. The said jumps of $\tilde\theta$ lead to the noncommutativity of two derivatives acting on this field (cf. Ref.~\cite{15}): \begin{equation} \label{2} (\partial_\mu\partial_\nu-\partial_\nu\partial_\mu)\tilde\theta=2\pi\varepsilon_{\mu\nu\lambda}J_\lambda, \end{equation} where $J_\lambda$ is the current of the dual Abrikosov vortex. To calculate the Wilson loop~(\ref{1}), we perform its duality transformation.
To this end, it is first convenient to introduce two auxiliary fields as follows: $${\rm e}^{-\frac{1}{4\nu}\int_x F_{\mu\nu}^2}= \int {\cal D}G_\mu{\,}{\rm e}^{\int_x\left[-\frac{\nu}{2}G_\mu^2+i\varepsilon_{\mu\nu\lambda}v_\mu\partial_\nu G_\lambda\right]},~ {\rm e}^{-\eta^2\int_x(\partial_\mu\theta+\mu v_\mu)^2}=\int {\cal D}C_\mu{\,} {\rm e}^{\int_x\left[ -\frac{1}{4\eta^2}C_\mu^2+iC_\mu(\partial_\mu\theta+\mu v_\mu)\right]}.$$ The subsequent integration over $\bar\theta$ leads to the constraint $\partial_\mu C_\mu=0$, which can be resolved by representing $C_\mu$ as $C_\mu=\varepsilon_{\mu\nu\lambda}\partial_\nu\varphi_\lambda$. Accordingly, $C_\mu^2=\frac12\Phi_{\mu\nu}^2$, where $\Phi_{\mu\nu}=\partial_\mu\varphi_\nu-\partial_\nu\varphi_\mu$, and $i\int_x C_\mu\partial_\mu\tilde\theta=2\pi i\int_x\varphi_\mu J_\mu$, where at the last step we have used Eq.~(\ref{2}). Thus, the Wilson loop~(\ref{1}) takes the form \begin{equation} \label{qq} \left<W(C)\right>=\int {\cal D}J_\mu{\,}{\cal D}\varphi_\mu{\,}{\cal D}G_\mu{\,} {\cal D}v_\mu{\,} {\rm e}^{\int_x\left[-\frac{\nu}{2}G_\mu^2-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2+i\varepsilon_{\mu\nu\lambda} v_\mu\partial_\nu(G_\lambda+\mu\varphi_\lambda)+2\pi i\varphi_\mu J_\mu+\frac{i}{\nu}v_\mu j_\mu\right]}. \end{equation} Note that, throughout this paper, we work at the entirely classical level. For this reason, the Jacobian corresponding to the change of integration variables $\tilde\theta\to J_\mu$ is omitted, and the measure ${\cal D}J_\mu$ in the functional integral has only a statistical (rather than a field-theoretical) meaning of counting vortices in their given configuration. Next, noticing that the $v_\mu$-field enters Eq.~(\ref{qq}) as just a Lagrange multiplier, and integrating over this field, we obtain a functional $\delta$-function $\delta\left(\varepsilon_{\mu\nu\lambda}\partial_\nu(G_\lambda+\mu\varphi_\lambda) +\frac{1}{\nu}j_\mu\right)$. 
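Both auxiliary-field representations introduced above rest on the same elementary Gaussian identity; its one-dimensional version, $\int_{-\infty}^{\infty} dG{\,}{\rm e}^{-aG^2+iGx}=\sqrt{\pi/a}{\,}{\rm e}^{-x^2/(4a)}$, can be checked numerically (the values of $a$ and $x$ below are arbitrary, and the code is only an illustrative sketch):

```python
import math

# One-dimensional version of the auxiliary-field (Gaussian) identity:
#   int_{-inf}^{inf} dG exp(-a G^2 + i G x) = sqrt(pi/a) exp(-x^2/(4a)).
# By symmetry the imaginary part vanishes, so we integrate the real part
# with a simple trapezoidal rule; a and x are arbitrary test values.

def gaussian_integral(a, x, cutoff=20.0, n=40_000):
    h = 2 * cutoff / n
    total = 0.0
    for k in range(n + 1):
        G = -cutoff + k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.exp(-a * G * G) * math.cos(G * x)
    return total * h

a, x = 0.7, 1.3
lhs = gaussian_integral(a, x)
rhs = math.sqrt(math.pi / a) * math.exp(-x * x / (4 * a))
assert abs(lhs - rhs) < 1e-8
```

The functional identities in the text are the multi-dimensional (field-space) generalizations of this formula, with $x$ replaced by the relevant source term.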
The subsequent $G_\mu$-integration amounts to substituting $G_\mu$, which stems from this $\delta$-function, into ${\rm e}^{-\frac{\nu}{2}\int_x G_\mu^2}$. Such a $G_\mu$ reads $G_\mu=-\mu\varphi_\mu-\frac{1}{\nu}\varepsilon_{\mu\nu\lambda}\int_y\partial_\nu^xD_0^{xy}j_\lambda^y$, where we have introduced short-hand notations $D_0^{xy}\equiv 1/(4\pi|{\bf x}-{\bf y}|)$, $j_\lambda^y\equiv j_\lambda({\bf y};C)$, and used the conservation of $j_\mu$. Accordingly, the Wilson loop takes the form \begin{equation} \label{w5} \left<W(C)\right>=\int {\cal D}J_\mu{\,} {\cal D}\varphi_\mu{\,}{\rm e}^{\int_x\left[-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2+2\pi i\varphi_\mu J_\mu -\frac{\nu}{2}\left(\mu\varphi_\mu+ \frac{1}{\nu}\varepsilon_{\mu\nu\lambda}\int_y\partial_\nu^xD_0^{xy}j_\lambda^y\right)^2\right]}, \end{equation} or, equivalently, $$\left<W(C)\right>={\rm e}^{-\frac{1}{2\nu}\int_{x,y}j_\mu^xj_\mu^yD_0^{xy}} \int {\cal D}J_\mu{\,} {\cal D}\varphi_\mu{\,}{\rm e}^{\int_x\bigl(-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2-\frac{\mu^2\nu}{2} \varphi_\mu^2+i\varphi_\mu K_\mu\bigr)},$$ where \begin{equation} \label{k33} K_\mu^x\equiv 2\pi J_\mu^x+i\mu\varepsilon_{\mu\nu\lambda}\int_y\partial_\nu^xD_0^{xy}j_\lambda^y. \end{equation} To perform the remaining $\varphi_\mu$-integration, we introduce a rescaled field $\chi_\mu\equiv\varphi_\mu/(\eta\sqrt{2})$ and denote \begin{equation} \label{mml} {\sf m}\equiv\mu\eta\sqrt{2\nu}. \end{equation} That yields $$\int {\cal D}\chi_\mu{\,}{\rm e}^{\int_x\bigl[-\frac14(\partial_\mu\chi_\nu-\partial_\nu\chi_\mu)^2- \frac{{\sf m}^2}{2}\chi_\mu^2+i\sqrt{2}\eta\chi_\mu K_\mu\bigr]}={\rm e}^{-\eta^2\int_{x,y}K_\mu^xK_\mu^y D_{\sf m}^{xy}},$$ where $D_{\sf m}^{xy}\equiv{\rm e}^{-{\sf m}|{\bf x}-{\bf y}|}/(4\pi|{\bf x}-{\bf y}|)$ is the Yukawa propagator. 
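The Yukawa propagator $D_{\sf m}$ appearing here is the Green function of $(-\partial^2+{\sf m}^2)$ in three dimensions; away from the source this can be verified numerically through the radial form of the operator (the values of the mass and of $r$ below are arbitrary):

```python
import math

# Radial check that the Yukawa propagator D_m(r) = exp(-m r)/(4 pi r)
# is annihilated by (-partial^2 + m^2) away from the source: with
# g(r) = r D_m(r), this is equivalent to g''(r) = m^2 g(r) for r > 0.
# The values of m and r are arbitrary.

def g(r, m):
    return math.exp(-m * r) / (4 * math.pi)   # g(r) = r * D_m(r)

m, r, h = 1.3, 0.7, 1e-3
second_derivative = (g(r + h, m) - 2 * g(r, m) + g(r - h, m)) / h**2
assert abs(second_derivative - m * m * g(r, m)) < 1e-5
```

In the text, the role of this mass is played by ${\sf m}=\mu\eta\sqrt{2\nu}$ from Eq.~(\ref{mml}).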
Thus, the Wilson loop~(\ref{we}) becomes $$\left<W(C)\right>={\rm e}^{-\frac{1}{2\nu}\int_{x,y}j_\mu^xj_\mu^yD_0^{xy}} \int {\cal D}J_\mu{\,}{\rm e}^{-\eta^2\int_{x,y}K_\mu^xK_\mu^y D_{\sf m}^{xy}}.$$ The expression standing in the last exponential in this formula can be simplified (see Appendix~A for the details), which yields the following result: \begin{equation} \label{a2} \left<W(C)\right>={\rm e}^{-\frac{1}{2\nu}\int_{x,y}j_\mu^xj_\mu^yD_{\sf m}^{xy}} \int {\cal D}J_\mu{\,}{\rm e}^{-(2\pi\eta)^2\int_{x,y}J_\mu^xJ_\mu^yD_{\sf m}^{xy}+ \frac{2\pi i}{\mu\nu}\left[\hat L(j,J)-\varepsilon_{\mu\nu\lambda} \int_{x,y}J_\mu^x j_\nu^y\partial_\lambda^xD_{\sf m}^{xy}\right]}, \end{equation} where $\hat L(j,J)=\varepsilon_{\mu\nu\lambda}\int_{x,y}J_\mu^xj_\nu^y\partial_\lambda^xD_0^{xy}$ is the Gauss linking number of the contour $C$ and a dual Abrikosov vortex. The exponential ${\rm e}^{\frac{2\pi i}{\mu\nu}\hat L(j,J)}$ in Eq.~(\ref{a2}) formally describes a long-range Aharonov--Bohm-type interaction of the dual-Higgs excitation with the dual Abrikosov vortex. However, recalling the notations introduced in Eq.~(\ref{numu}), we have $\frac{1}{\mu\nu}=N$. For this reason, the obtained interaction turns out to be trivial, i.e. ${\rm e}^{\frac{2\pi i}{\mu\nu}\hat L(j,J)}= 1$. Thus, we conclude that integer-charged excitations of the dual-Higgs vacuum do not interact with the dual Abrikosov vortices by means of the long-range Aharonov--Bohm-type interaction. Rather, the interaction between the excitations of the dual-Higgs vacuum and the dual Abrikosov vortices is provided by the dual vector boson, through the factor ${\rm e}^{-2\pi iN\varepsilon_{\mu\nu\lambda} \int_{x,y}J_\mu^x j_\nu^y\partial_\lambda^xD_{\sf m}^{xy}}$. \section{Generalization to the SU($N_c$)-inspired case} In this Section, we generalize the result~(\ref{a2}) to the SU($N_c$)-inspired case.
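Before doing so, a brief numerical aside: the triviality found above relies on the Gauss linking number $\hat L$ being an integer. This integer-valuedness can be illustrated by evaluating the discretized Gauss linking integral for two linked circles (a Hopf link); the geometry below is hypothetical and chosen only for the test:

```python
import math

# Discretized Gauss linking integral for two linked unit circles (a Hopf
# link): L = (1/4pi) sum (t1 x t2) . (r1 - r2) / |r1 - r2|^3.  The result
# converges to an integer (+-1 here), so exp(2 pi i N L) = 1 for integer N.

def circle(center, n, plane):
    pts = []
    for k in range(n + 1):
        t = 2 * math.pi * k / n
        if plane == "xy":
            pts.append((center[0] + math.cos(t), center[1] + math.sin(t), center[2]))
        else:  # "xz"
            pts.append((center[0] + math.cos(t), center[1], center[2] + math.sin(t)))
    return pts

def segments(pts):
    for p, q in zip(pts, pts[1:]):
        mid = tuple((a + b) / 2 for a, b in zip(p, q))
        tan = tuple(b - a for a, b in zip(p, q))
        yield mid, tan

def linking_number(c1, c2):
    total = 0.0
    for r1, t1 in segments(c1):
        for r2, t2 in segments(c2):
            d = tuple(a - b for a, b in zip(r1, r2))
            cx = (t1[1] * t2[2] - t1[2] * t2[1],
                  t1[2] * t2[0] - t1[0] * t2[2],
                  t1[0] * t2[1] - t1[1] * t2[0])
            dist = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
            total += (cx[0] * d[0] + cx[1] * d[1] + cx[2] * d[2]) / dist ** 3
    return total / (4 * math.pi)

# Unit circle in the xy-plane at the origin, linked with a unit circle in
# the xz-plane centered at (1, 0, 0).
L = linking_number(circle((0, 0, 0), 200, "xy"), circle((1, 0, 0), 200, "xz"))
assert abs(abs(L) - 1.0) < 1e-2
```

For unlinked curves the same sum converges to zero, so the factor ${\rm e}^{2\pi iN\hat L}$ equals unity in either case.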
The corresponding theory~\cite{ae,s3} is invariant under the $[U(1)]^{N_c-1}$-group, which is the maximal Abelian subgroup of SU($N_c$). A counterpart of Eq.~(\ref{1}) in this theory reads \begin{equation} \label{w22} \left<W_b(C)\right>=\int {\cal D}{\bf v}_\mu\left(\prod\limits_a {\cal D}\tilde\theta_a{\,}{\cal D}\bar\theta_a\right) {\cal D}k{\,}\delta\left(\sum\limits_{a}\tilde\theta_a\right) {\rm e}^{-\int_x\left[\frac{1}{4\nu}{\bf F}_{\mu\nu}^2+\eta^2\sum\limits_a(\partial_\mu\theta_a+\mu {\bf q}_a{\bf v}_\mu)^2-ik\sum\limits_a\bar\theta_a- \frac{i}{\nu}{\bf v}_\mu {\bf j}_\mu^b\right]}. \end{equation} Here ${\bf v}_\mu=(v_\mu^1,\ldots,v_\mu^{N_c-1})$, the index $a=1,\ldots,\frac{N_c(N_c-1)}{2}$ labels positive roots ${\bf q}_a$'s of the SU($N_c$)-group, and the fact that this group is special imposes a constraint $\sum\limits_{a}^{}\theta_a=0$ on the phases $\theta_a$'s of the dual Higgs fields. Similarly to Eq.~(\ref{2}), we have $\theta_a=\tilde\theta_a+\bar\theta_a$, where $(\partial_\mu\partial_\nu-\partial_\nu\partial_\mu)\tilde\theta_a=2\pi J_\mu^a$, with $J_\mu^a$ being a current of the dual Abrikosov vortex of the $a$-th type. The constraint $\sum\limits_{a}\bar\theta_a=0$ is further imposed in Eq.~(\ref{w22}) by means of a Lagrange multiplier $k(x)$. Next, since the current ${\bf j}_\mu^b$ describes a magnetically charged excitation of the vacuum, it is directed along some of the root vectors, ${\bf q}_b$, where ``$b$'' is a certain fixed index from the set $1,\ldots,\frac{N_c(N_c-1)}{2}$. Therefore, one can write ${\bf j}_\mu^b={\bf q}_b j_\mu$. 
Introducing auxiliary fields $C_\mu^a$'s as $${\rm e}^{-\eta^2\int_x\sum\limits_a(\partial_\mu\theta_a+\mu {\bf q}_a{\bf v}_\mu)^2}=\int \prod\limits_a {\cal D}C_\mu^a{\,} {\rm e}^{\int_x\left[-\frac{1}{4\eta^2}(C_\mu^a)^2+ iC_\mu^a (\partial_\mu\theta_a+\mu {\bf q}_a{\bf v}_\mu)\right]},$$ one obtains, similarly to the 4-d case considered in Refs.~\cite{ae,s3}, the following result: $$\int\left(\prod\limits_a {\cal D}\tilde\theta_a{\,}{\cal D}\bar\theta_a\right) {\cal D}k{\,}\delta\left(\sum\limits_{a}\tilde\theta_a\right){\rm e}^{-\int_x\left[\eta^2\sum\limits_a (\partial_\mu\theta_a+\mu {\bf q}_a{\bf v}_\mu)^2-ik\sum\limits_a\bar\theta_a\right]}=$$ $$=\int\left(\prod\limits_a {\cal D} J_\mu^a{\,} {\cal D}\varphi_\mu^a\right)\delta\left(\sum\limits_a J_\mu^a\right){\rm e}^{\int_x\left[-\frac{1}{8\eta^2}(\Phi_{\mu\nu}^a)^2+i\mu\varepsilon_{\mu\nu\lambda} {\bf q}_a{\bf v}_\mu\partial_\nu\varphi_\lambda^a+2\pi i\varphi_\mu^aJ_\mu^a\right]}.$$ Here, it has been taken into account that $\sum\limits_{a}{\bf q}_a=0$, owing to which the $k$-integration yields just an inessential global normalization constant. Furthermore, the constraint $\sum\limits_{a}^{}\tilde\theta_a=0$ went over into $\sum\limits_a J_\mu^a=0$, which means that the theory actually contains $\frac{N_c(N_c-1)}{2}-1$ types of mutually independent vortices (cf. Refs.~\cite{ae,s3} for a similar constraint for the dual strings). To further perform the integration over ${\bf v}_\mu$, it is convenient to introduce the fields $u_\mu^a={\bf q}_a{\bf v}_\mu$, and use the formula~\cite{s3, s4} $\sum\limits_{a}q_a^\alpha q_a^\beta=\frac{N_c}{2}\delta^{\alpha\beta}$. 
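The two root-vector identities invoked above, $\sum_a {\bf q}_a=0$ and $\sum_a q_a^\alpha q_a^\beta=\frac{N_c}{2}\delta^{\alpha\beta}$, can be checked explicitly. For $N_c=3$ one common choice of the three vectors ${\bf q}_a$ is used in the sketch below; the normalization is an assumption consistent with the quoted identities, and conventions differ between references:

```python
import math

# One common explicit choice of the vectors q_a for N_c = 3 (three positive
# roots); the normalization is an assumption consistent with the two
# identities used in the text.
roots = [(1.0, 0.0),
         (-0.5,  math.sqrt(3) / 2),
         (-0.5, -math.sqrt(3) / 2)]
Nc = 3

# sum_a q_a = 0 (used to eliminate the Lagrange multiplier k):
assert all(abs(sum(q[i] for q in roots)) < 1e-12 for i in range(2))

# sum_a q_a^alpha q_a^beta = (Nc/2) delta^{alpha beta}:
for alpha in range(2):
    for beta in range(2):
        val = sum(q[alpha] * q[beta] for q in roots)
        expected = Nc / 2 if alpha == beta else 0.0
        assert abs(val - expected) < 1e-12
```

Analogous checks go through for higher $N_c$ once the corresponding $\frac{N_c(N_c-1)}{2}$ vectors are listed.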
Recalling that ${\bf j}_\mu^b={\bf q}_b j_\mu$, we can then represent the ${\bf v}_\mu$-dependent part of the action as $$\int_x\left[\frac{1}{4\nu}{\bf F}_{\mu\nu}^2-i{\bf v}_\mu\left( \mu\varepsilon_{\mu\nu\lambda}{\bf q}_a\partial_\nu\varphi_\lambda^a+\frac{1}{\nu}{\bf j}_\mu^b\right)\right]= \int_x\left[\frac{1}{2N_c\nu}\left(\partial_\mu u_\nu^a-\partial_\nu u_\mu^a\right)^2-iu_\mu^aK_\mu^a\right],$$ where $K_\mu^a= \mu\varepsilon_{\mu\nu\lambda}\partial_\nu\varphi_\lambda^a+\frac{1}{\nu}\delta^{ab}j_\mu$. Then the Gaussian integration over $u_\mu^a$'s readily yields the action $\frac{N_c\nu}{4}\int_{x,y} K_\mu^{a,x}D_0^{xy}K_\mu^{a,y}$, which can be further simplified by representing $K_\mu^a$ as $K_\mu^a=\varepsilon_{\mu\nu\lambda}\partial_\nu\left(\mu\varphi_\lambda^a+\frac{1}{\nu}\delta^{ab} \varepsilon_{\lambda\alpha\beta}\int_y\partial_\alpha^x D_0^{xy}j_\beta^y\right)$. In this way, we obtain the following ($N_c>2$)-counterpart of Eq.~(\ref{w5}): $$\left<W_b(C)\right>=$$ $$=\int\left(\prod\limits_a {\cal D} J_\mu^a{\,} {\cal D}\varphi_\mu^a\right)\delta\left(\sum\limits_a J_\mu^a\right){\rm e}^{\int_x\left[-\frac{1}{8\eta^2}(\Phi_{\mu\nu}^a)^2+2\pi i\varphi_\mu^a J_\mu^a- \frac{N_c\nu}{4}\left(\mu\varphi_\mu^a+\frac1\nu\delta^{ab}\varepsilon_{\mu\nu\lambda}\int_y\partial_\nu^x D_0^{xy}j_\lambda^y\right)^2\right]}.$$ This expression can finally be brought to the form similar to that of Eq.~(\ref{a2}). 
Indeed, proceeding in the same way as from Eq.~(\ref{w5}) to Eq.~(\ref{a2}), we obtain the following final result: $$ \left<W_b(C)\right>=$$ \begin{equation} \label{v7} ={\rm e}^{-\frac{N_c}{4\nu}\int_{x,y}j_\mu^xj_\mu^yD_{\sf m}^{xy}} \int\prod\limits_a {\cal D} J_\mu^a{\,}\delta\left(\sum\limits_a J_\mu^a\right) {\rm e}^{-(2\pi\eta)^2\int_{x,y}J_\mu^{a,x}J_\mu^{a,y}D_{\sf m}^{xy}+ \frac{2\pi i}{\mu\nu}\left[\hat L(j,J^b)-\varepsilon_{\mu\nu\lambda} \int_{x,y}J_\mu^{b,x} j_\nu^y\partial_\lambda^xD_{\sf m}^{xy}\right]}, \end{equation} where ${\sf m}=\mu\eta\sqrt{N_c\nu}$ generalizes Eq.~(\ref{mml}) for the mass of the dual vector boson. Thus, Eq.~(\ref{v7}) represents the sought generalization of Eq.~(\ref{a2}) to the case of $N_c>2$. We notice that, while the strength of the $(j\times j)$-interaction becomes $(N_c/2)$ times larger compared to that of Eq.~(\ref{a2}), the coefficient at the linking number remains the same. Therefore, much as in the SU(2)-inspired case, in the general SU($N_c$)-inspired model considered in this Section, the Aharonov--Bohm-type interaction between the integer-charged excitations of the dual Higgs vacuum and the dual Abrikosov vortices yields only a trivial factor of ${\rm e}^{2\pi i\times({\rm integer})}$. \section{Dual Wilson loop and its interaction with Abrikosov vortices in the presence of a Chern--Simons term} We extend now the analysis performed in Section~II to the case where the CS term is included. This term is known to produce self-linkage of the contour of a Wilson loop~\cite{5}, and we expect that it would lead to a similar effect for the dual Abrikosov vortices. To start with, we again consider the theory where the dual Higgs field is absent, that is equivalent to setting $\eta=0$. 
The Wilson loop in such a theory is given by the following extension of Eq.~(\ref{1}): $$\left<W(C)\right>=\int {\cal D}v_\mu{\,} {\rm e}^{-\int_x\left[\frac{1}{4\nu}F_{\mu\nu}^2[v]+ i\Theta\varepsilon_{\mu\nu\lambda}v_\mu\partial_\nu v_\lambda -\frac{i}{\nu}v_\mu j_\mu\right]},$$ where the dimensionality of the new parameter $\Theta$ is (mass)$^2$. Imposing the gauge-fixing condition $\partial_\mu v_\mu=0$, we obtain the saddle-point equation $$-\partial^2v_\mu+im\varepsilon_{\mu\nu\lambda}\partial_\nu v_\lambda=ij_\mu,~~~~ {\rm where}~~~~ m=2\Theta\nu.$$ Seeking a solution in the form $v_\mu=U_\mu+iV_\mu$, we get a system of equations \begin{equation} \label{nm55} \partial^2U_\mu+m\varepsilon_{\mu\nu\lambda}\partial_\nu V_\lambda=0,~~~~~~ -\partial^2V_\mu+m\varepsilon_{\mu\nu\lambda}\partial_\nu U_\lambda=j_\mu. \end{equation} The first of these equations can be solved with respect to $U_\mu$ as \begin{equation} \label{amu} U_\mu^x=m\varepsilon_{\mu\nu\lambda}\int_y D_0^{xy}\partial_\nu^y V_\lambda^y. \end{equation} Differentiating the second equation~(\ref{nm55}), and applying the maximum principle, one gets $\partial_\mu V_\mu=0$. Using this relation, one further obtains from Eq.~(\ref{amu}): $\varepsilon_{\mu\nu\lambda}\partial_\nu U_\lambda=mV_\mu$. The substitution of this formula into the second equation~(\ref{nm55}) yields for that equation a remarkably simple form $(-\partial^2+m^2)V_\mu=j_\mu$. Therefore, one has $V_\mu^x=\int_y D_m^{xy}j_\mu^y$, while $U_\mu^x$, given by Eq.~(\ref{amu}), can be calculated by virtue of Eq.~(\ref{a23}), and reads $U_\mu^x=\frac1m\varepsilon_{\mu\nu\lambda}\int_y(D_0^{xy}-D_m^{xy})\partial_\nu^y j_\lambda^y$. Altogether, the resulting Wilson loop has the form \begin{equation} \label{dl8} \left<W(C)\right>\bigr|_{\eta=0}=\exp\left\{\frac{1}{2\nu}\int_{x,y}\left[-j_\mu^xD_m^{xy}j_\mu^y+ \frac{i}{m}\varepsilon_{\mu\nu\lambda}j_\mu^x j_\lambda^y\partial_\nu^x(D_0^{xy}-D_m^{xy})\right]\right\}. 
\end{equation} Recalling the definition of the parameter $\nu$ from Eq.~(\ref{numu}), we observe that the obtained Eq.~(\ref{dl8}) extends Eq.~(\ref{wt}) to the case of $\Theta\ne 0$. Clearly, the $\Theta$-term leads to a self-linkage of the contour $C$, as well as to a short-range self-interaction of this contour by means of the Yukawa propagator $D_m^{xy}$. We also notice that, when $\Theta\to 0$ in Eq.~(\ref{dl8}), one recovers Eq.~(\ref{wt}). Indeed, in this limit, one has $\frac{1}{m}(D_0^{xy}-D_m^{xy})\to\frac{1}{4\pi}$, so that $$\frac1m\int_{x,y}j_\mu^x j_\lambda^y\partial_\nu^x(D_0^{xy}-D_m^{xy})= \frac1m\int_{x,y}j_\mu^x (D_0^{xy}-D_m^{xy})\partial_\nu^y j_\lambda^y\to\frac{1}{4\pi}\int_{x,y} j_\mu^x\partial_\nu^y j_\lambda^y=0,$$ since $\int_x j_\mu^x=0$. We proceed now to the duality transformation of the Wilson loop in the full theory, where the dual Higgs field is present and its condensation does take place, i.e. $\eta\ne 0$. The corresponding extension of Eq.~(\ref{1}) reads \begin{equation} \label{in} \left<W(C)\right>=\int {\cal D}v_\mu{\,}{\cal D}\tilde\theta{\,}{\cal D}\bar\theta{\,} {\rm e}^{-\int_x\left[\frac{1}{4\nu}F_{\mu\nu}^2[v]+\eta^2(\partial_\mu\theta+\mu v_\mu)^2+ i\Theta\varepsilon_{\mu\nu\lambda}v_\mu\partial_\nu v_\lambda -\frac{i}{\nu}v_\mu j_\mu\right]}. \end{equation} The transformation leading from Eq.~(\ref{1}) to Eq.~(\ref{qq}) remains the same, so that the counterpart of Eq.~(\ref{qq}) in the presence of the CS term has the form $$\left<W(C)\right>=\int {\cal D}J_\mu{\,}{\cal D}\varphi_\mu{\,}{\cal D}G_\mu{\,} {\cal D}v_\mu{\,} {\rm e}^{\int_x\left\{-\frac{\nu}{2}G_\mu^2-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2+ i v_\mu\left[\varepsilon_{\mu\nu\lambda}\partial_\nu(G_\lambda+\mu\varphi_\lambda-\Theta v_\lambda)+ \frac1\nu j_\mu\right] +2\pi i\varphi_\mu J_\mu\right\}}.$$ Unlike the case where the CS term was absent, the field $v_\mu$ now ceases to be a Lagrange multiplier. 
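Before performing this Gaussian integration, the $\eta=0$ saddle point found above can be sanity-checked in momentum space: for a transverse current (${\bf p}\cdot{\bf j}=0$), $\tilde V_\mu=\tilde j_\mu/(p^2+m^2)$ together with $\tilde U_\mu=im({\bf p}\times\tilde{\bf V})_\mu/p^2$ solves the system~(\ref{nm55}). The sketch below verifies this componentwise, with arbitrary numerical values of $m$ and ${\bf p}$:

```python
# Momentum-space check of the eta = 0 saddle point: for a transverse
# current (p . j = 0), V = j/(p^2 + m^2) together with U = i m (p x V)/p^2
# solves the system (nm55).  The numerical values of m and p are
# arbitrary; complex arithmetic stands in for the factor i.

def cross(u, w):
    return (u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0])

m = 1.7
p = (0.4, -1.1, 2.3)
p2 = sum(c * c for c in p)
j = cross(p, (1.0, 0.0, 0.0))        # p x e is transverse for any e
assert abs(sum(pc * jc for pc, jc in zip(p, j))) < 1e-12

V = tuple(jc / (p2 + m * m) for jc in j)
U = tuple(1j * m * c / p2 for c in cross(p, V))

# First equation:   -p^2 U + i m (p x V) = 0
eq1 = tuple(-p2 * u + 1j * m * c for u, c in zip(U, cross(p, V)))
# Second equation:   p^2 V + i m (p x U) = j
eq2 = tuple(p2 * v + 1j * m * c - jc for v, c, jc in zip(V, cross(p, U), j))
assert all(abs(z) < 1e-10 for z in eq1 + eq2)
```

The same algebra, with ${\bf p}\times$ replaced by $\varepsilon_{\mu\nu\lambda}\partial_\nu$, reproduces the coordinate-space manipulations of the text.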
Nevertheless, since the $v_\mu$-integration is Gaussian, it can be performed exactly, and we proceed to this integration. The corresponding saddle-point equation for $v_\mu$ reads $\varepsilon_{\mu\nu\lambda}\partial_\nu v_\lambda=\frac{1}{2\Theta}k_\mu$, where we have denoted $k_\mu=\varepsilon_{\mu\nu\lambda}\partial_\nu(G_\lambda+\mu\varphi_\lambda)+\frac1\nu j_\mu$. Owing to the conservation of $k_\mu$, a solution to this saddle-point equation reads $v_\mu^x=\frac{1}{2\Theta} \varepsilon_{\mu\nu\lambda}\partial_\nu^x\int_y D_0^{xy}k_\lambda^y$. Plugging this solution back into the exponent ${\rm e}^{i\int_x v_\mu(k_\mu-\Theta\varepsilon_{\mu\nu\lambda}\partial_\nu v_\lambda)}$, and using the above explicit expression for $k_\mu$, we obtain, upon some algebra, the following formula: $$\left<W(C)\right>={\rm e}^{\frac{i}{2\nu m}\varepsilon_{\mu\nu\lambda}\int_{x,y}j_\mu^x j_\lambda^y \partial_\nu^xD_0^{xy}}\times$$ \begin{equation} \label{yy} \times\int {\cal D}J_\mu{\,}{\cal D}\varphi_\mu{\,}{\cal D}G_\mu{\,} {\rm e}^{\int_x\left\{-\frac{\nu}{2}G_\mu^2-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2+ \frac{i}{4\Theta}\varepsilon_{\mu\nu\lambda}\left[G_\mu\partial_\nu(G_\lambda+2\mu\varphi_\lambda)+ \mu^2\varphi_\mu\partial_\nu\varphi_\lambda\right]+\frac{i}{2\Theta\nu}(G_\mu+\mu\varphi_\mu)j_\mu +2\pi i\varphi_\mu J_\mu\right\}}. \end{equation} Here, the argument of the first exponent coincides with the term containing the Gauss' self-linking number of the contour $C$, which was present already in Eq.~(\ref{dl8}). In addition, the functional integral in Eq.~(\ref{yy}) describes interactions of the dual-Higgs excitation with the dual Abrikosov vortices, as well as their self-interactions in the presence of the CS term. In order to visualize all these interactions, let us perform the $G_\mu$-integration first. 
Representing the saddle-point expression for $G_\mu$ in the form $G_\mu=L_\mu+iN_\mu$, we obtain a system of two saddle-point equations: $$\varepsilon_{\mu\nu\lambda}\partial_\nu L_\lambda-mN_\mu+n_\mu=0,~~~~~~ \varepsilon_{\mu\nu\lambda}\partial_\nu N_\lambda+mL_\mu=0,$$ where we have denoted $n_\mu=\mu\varepsilon_{\mu\nu\lambda}\partial_\nu \varphi_\lambda+\frac1\nu j_\mu$. Owing to the conservation of $n_\mu$, we find a solution to these equations in the form $$L_\mu^x=-\varepsilon_{\mu\nu\lambda}\int_y D_m^{xy}\partial_\nu^y n_\lambda^y,~~~~~~ N_\mu^x=m\int_y D_m^{xy}n_\mu^y.$$ Plugging the corresponding saddle-point expression for $G_\mu$ back into Eq.~(\ref{yy}), we obtain, after some algebra, the following general result: $$\int {\cal D}G_\mu{\,} {\rm e}^{\int_x\left(-\frac{\nu}{2}G_\mu^2+\frac{i}{4\Theta}\varepsilon_{\mu\nu\lambda}G_\mu\partial_\nu G_\lambda+\frac{i}{2\Theta}G_\mu k_\mu\right)}=$$ $$={\rm e}^{-\frac{1}{2\nu}\int_{x,y}j_\mu^x j_\mu^y D_m^{xy} -\mu\varepsilon_{\mu\nu\lambda}\int_{x,y} D_m^{xy}j_\mu^x\partial_\nu^y\varphi_\lambda^y +\frac{\nu\mu^2}{2}\left[\int_{x,y}D_m^{xy}\cdot\left( m^2\varphi_\mu^x\varphi_\mu^y+\partial_\mu^x\varphi_\mu^x\cdot\partial_\nu^y\varphi_\nu^y\right)- \int_x \varphi_\mu^2\right]}\times$$ \begin{equation} \label{5s} \times {\rm e}^{-\frac{i}{4\Theta}\left\{\mu^2\varepsilon_{\mu\nu\lambda}\int_x\varphi_\mu \partial_\nu\varphi_\lambda+\varepsilon_{\mu\nu\lambda}\int_{x,y}D_m^{xy}\cdot\left[\frac{1}{\nu^2} j_\mu^x\partial_\nu^y j_\lambda^y-(\mu m)^2\varphi_\mu^x\partial_\nu^y\varphi_\lambda^y\right]+ \frac{2\mu}{\nu}\left(\int_x \varphi_\mu j_\mu-m^2\int_{x,y}D_m^{xy}\varphi_\mu^x j_\mu^y\right)\right\}}. 
\end{equation} We notice that, in the limit of $\nu\to 0$, the initial Eq.~(\ref{in}) yields Eq.~(\ref{wt}): \begin{equation} \label{hh} \left<W(C)\right>\rightarrow\int {\cal D}v_\mu{\,} {\rm e}^{-\int_x\left(\frac{1}{4\nu}F_{\mu\nu}^2 -\frac{i}{\nu}v_\mu j_\mu\right)}={\rm e}^{-\frac{1}{2\nu}\int_{x,y}j_\mu^x j_\mu^y D_0^{xy}}. \end{equation} Therefore, the remaining $\varphi_\mu$-integration in Eq.~(\ref{yy}) should also yield Eq.~(\ref{wt}) in this limit. The limit of $\nu\to 0$ can thus serve as a check for Eq.~(\ref{5s}). The right-hand side of Eq.~(\ref{5s}) simplifies in this limit to the form $${\rm e}^{-\frac{1}{2\nu}\int_{x,y}j_\mu^x j_\mu^y D_0^{xy} -\mu\varepsilon_{\mu\nu\lambda}\int_{x,y} D_0^{xy}j_\mu^x\partial_\nu^y\varphi_\lambda^y-\frac{i}{4\Theta}\left(\mu^2\varepsilon_{\mu\nu\lambda} \int_x\varphi_\mu\partial_\nu\varphi_\lambda+\frac{1}{\nu^2}\varepsilon_{\mu\nu\lambda}\int_{x,y}D_0^{xy} j_\mu^x\partial_\nu^y j_\lambda^y+\frac{2\mu}{\nu}\int_x \varphi_\mu j_\mu\right)},$$ and the Wilson loop~(\ref{yy}) becomes $$\left<W(C)\right>\rightarrow {\rm e}^{-\frac{1}{2\nu}\int_{x,y} j_\mu^x j_\mu^y D_0^{xy} +\frac{i}{4\Theta\nu^2}\varepsilon_{\mu\nu\lambda}\int_{x,y}\left(j_\mu^x j_\lambda^y\partial_\nu^x D_0^{xy}-D_0^{xy}j_\mu^x\partial_\nu^y j_\lambda^y\right)}\times$$ $$\times \int {\cal D}J_\mu{\,}{\cal D}\varphi_\mu{\,}{\rm e}^{\int_x\left(-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2+ 2\pi i\varphi_\mu J_\mu\right)-\mu\varepsilon_{\mu\nu\lambda}\int_{x,y}\varphi_\mu^x j_\lambda^y \partial_\nu^x D_0^{xy}}.$$ The Gaussian $\varphi_\mu$-integration in this formula yields, upon some algebra, $$\left<W(C)\right>\rightarrow {\rm e}^{-\frac{1}{2\nu}\int_{x,y} j_\mu^x j_\mu^y D_0^{xy}} \int {\cal D}J_\mu{\,}{\rm e}^{-(2\pi\eta)^2\int_{x,y}J_\mu^x J_\mu^y D_0^{xy}}. $$ Recalling the normalization of the integration measure ${\cal D}J_\mu$, discussed in Appendix~A, we indeed recover the expected result~(\ref{hh}). Thus, our check of Eq.~(\ref{5s}) was successful. 
We consider now large values of the $\Theta$-parameter, namely such that \begin{equation} \label{ineq2} \Theta\gg\kappa\mu\eta. \end{equation} According to Eq.~(\ref{numu}), such large $\Theta$'s imply $m\gg\kappa\eta$, which makes the action in the exponentials on the right-hand side of Eq.~(\ref{5s}) local and brings the Wilson loop to the form $$\left<W(C)\right>\rightarrow{\rm e}^{-\frac{1}{2\nu m^2}\int_x j_\mu^2+\frac{i}{4\Theta\nu^2} \varepsilon_{\mu\nu\lambda}\left(\int_{x,y}j_\mu^x j_\lambda^y\partial_\nu^x D_0^{xy}- \frac{1}{m^2}\int_x j_\mu \partial_\nu j_\lambda\right)}\times$$ \begin{equation} \label{n3} \times\int{\cal D}J_\mu{\,}{\cal D}\varphi_\mu{\,}{\rm e}^{\int_x\left[-\frac{1}{8\eta^2}\Phi_{\mu\nu}^2+ \frac{\mu^2}{8\Theta^2\nu}(\partial_\mu\varphi_\mu)^2+\frac{i\mu^2}{4\Theta}\varepsilon_{\mu\nu\lambda} \varphi_\mu\partial_\nu\varphi_\lambda+i\varphi_\mu\left(2\pi J_\mu+\frac{\mu}{m}j_\mu+ \frac{i\mu}{m^2}\varepsilon_{\mu\nu\lambda}\partial_\nu j_\lambda\right)\right]}. \end{equation} Furthermore, in the same limiting case~(\ref{ineq2}), the $\varphi_\mu$-integration in this formula can also be performed analytically. Referring the reader for the details to Appendix~B, we present here the final result of this integration: $$\left<W(C)\right>\rightarrow{\rm e}^{-\frac{1}{2\nu m^2}\int_x j_\mu^2-\frac{i\Theta}{m^4} \varepsilon_{\mu\nu\lambda}\int_x j_\mu\partial_\nu j_\lambda}\times$$ \begin{equation} \label{rr} \times\int {\cal D}J_\mu{\,}{\rm e}^{-\eta^2\int_{x,y}R_\mu^x R_\mu^y D_{\cal M}^{xy}+\frac{i\Theta}{\mu^2} \varepsilon_{\mu\nu\lambda}\int_{x,y}\left[R_\mu^x R_\lambda^y\partial_\nu^x D_{\cal M}^{xy}-4\pi J_\mu^x \left(\pi J_\lambda^y+\frac{\mu}{m} j_\lambda^y\right)\partial_\nu^x D_0^{xy}\right]}. \end{equation} In this formula, ${\cal M}\equiv\frac{\mu^2\eta^2}{\Theta}$, and $R_\mu\equiv 2\pi J_\mu+\frac{\mu}{m}j_\mu$.
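The locality invoked in passing to Eq.~(\ref{n3}) can be made explicit. Assuming the standard three-dimensional Yukawa form of the propagator, $D_m^{xy}={\rm e}^{-m|x-y|}/(4\pi|x-y|)$, which is consistent with the limit $\frac1m(D_0^{xy}-D_m^{xy})\to\frac{1}{4\pi}$ used above, the zeroth moment of this propagator reads

```latex
\int d^3y\, D_m^{xy}
 = \int_0^\infty dr\, 4\pi r^2\,\frac{{\rm e}^{-mr}}{4\pi r}
 = \int_0^\infty dr\, r\,{\rm e}^{-mr}
 = \frac{1}{m^2},
```

so that, for sources varying slowly on the scale $1/m$, one may replace $\int_{x,y} f_\mu^x D_m^{xy} f_\mu^y\to\frac{1}{m^2}\int_x f_\mu^2$, which produces the local terms in the exponents above.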
Remarkably, in the limit~(\ref{ineq2}), the initial CS term for the velocity, $i\Theta\varepsilon_{\mu\nu\lambda}v_\mu\partial_\nu v_\lambda$ from Eq.~(\ref{in}), leads to the appearance of its counterpart $\frac{i\Theta}{m^4} \varepsilon_{\mu\nu\lambda} j_\mu\partial_\nu j_\lambda$ for the current $j_\mu$, while the self-linkage of the contour $C$, described by the first exponential in Eq.~(\ref{yy}), disappears. Instead, we observe the appearance of a self-linkage of the dual Abrikosov vortices, as well as of their linkage with the contour $C$, as described by the term $\frac{4\pi i\Theta}{\mu^2}\varepsilon_{\mu\nu\lambda} J_\mu^x \left(\pi J_\lambda^y+\frac{\mu}{m} j_\lambda^y\right)\partial_\nu^x D_0^{xy}$ in the Lagrangian. In particular, the part $\frac{4\pi i\Theta}{\mu^2}\varepsilon_{\mu\nu\lambda} J_\mu^x\cdot\frac{\mu}{m} j_\lambda^y \partial_\nu^x D_0^{xy}$ of this expression contributes to the action the same term $-\frac{2\pi i}{\mu\nu}\hat L(j,J)$ as in the absence of the CS term (cf. the end of Section~II). Thus, in the presence of the CS term, the Aharonov--Bohm-type interaction of the dual-Higgs excitation with the dual Abrikosov vortex becomes trivial in the limit~(\ref{ineq2}). At the same time, the term $\frac{4\pi^2i\Theta}{\mu^2}\varepsilon_{\mu\nu\lambda}J_\mu^xJ_\lambda^y\partial_\nu^xD_0^{xy}$ means that the CS term makes dual Abrikosov vortices knotted as long as the condition $$\frac{\mu^2}{\Theta}\ne\frac{2\pi}{\rm integer}$$ is met, where the parameter $\mu$ is defined in Eq.~(\ref{numu}). \section{Summary} The spatial confinement in the dimensionally-reduced high-temperature gluodynamics can be modelled by means of the dual Landau--Ginzburg-type theory. In this paper, we have explored interactions between an excitation of the dual-Higgs vacuum and the dual Abrikosov vortices, which are present in such a theory.
For this purpose, starting with the simplest SU(2)-inspired case, we have performed a duality transformation of the corresponding Wilson loop~(\ref{we}). The resulting Eq.~(\ref{a2}) contains a long-range Aharonov--Bohm-type interaction of the dual-Higgs excitation with the dual Abrikosov vortices, which is represented by the Gauss' linking number. However, we have found the coefficient of this linking number to be $2\pi i\times({\rm integer})$, which renders this Aharonov--Bohm-type interaction trivial. In Section~III, we have obtained the same trivialization for the case of the SU($N_c$)-inspired dual Landau--Ginzburg-type theory, and in Section~IV --- at sufficiently large values of the $\Theta$-parameter in the theory extended by the CS term. Thus, in all these cases, massless interactions drop out altogether from the dual formulation of the Wilson loop, so that the interactions between the dual-Higgs excitation and the dual Abrikosov vortices are mediated entirely by the dual vector bosons. Finally, we have explicitly demonstrated a qualitatively novel phenomenon of the appearance of knotted dual Abrikosov vortices due to the CS term. \begin{acknowledgments} \noindent This work was supported by the Portuguese Foundation for Science and Technology (FCT, program Ci\^encia-2008) and by the Center for Physics of Fundamental Interactions (CFIF) at Instituto Superior T\'ecnico (IST), Lisbon. The author is grateful to the whole staff of the Department of Physics of IST for their cordial hospitality. \end{acknowledgments}
\section{Introduction} Ever since the establishment of live streaming platforms such as Twitch~\cite{twitch}, watching others play has become a popular spare-time activity across all ages. The digital audience tunes in for various reasons, be it to follow an esports tournament or to check out a newly released, trending game. This curiosity also applies to recent virtual reality (VR) titles (cf. \textit{Half-Life: Alyx}~\cite{Alyx}), forcing streamers to adapt their content creation and delivery pipeline to the specifics of VR. The main difference is that VR games heavily rely on the high degree of immersion provided by such stereoscopic setups. The player experiences a feeling of being in the virtual world, which is usually achieved by the head-orientation-dependent view in combination with realistic full-body interactions. Clearly, that impression cannot be easily transferred to the audience, because most spectators utilize 2D displays, e.g., mobile devices or PC/TV screens. Hence, streamers seek viable non-VR workarounds to deliver the immersive VR gaming content. One prominent way to transport the experienced presence to the audience is to provide a mixed-reality view of the player/streamer. By switching to a third-person perspective, the player blends into the surrounding virtual environment. The spectators can see the player’s full-body movements and interactions in context, enabling a better understanding of the actual gameplay experience (see Figure~\ref{fig:teaser}). On the other hand, the traditional first-person perspective offers a unique advantage: seeing the game from the player’s perspective brings the spectators as close to the in-game action as possible. Due to the same point of view, the spectators obtain identical visual information. This similar visual perception of the virtual world can potentially evoke the viewers’ feeling of playing on their own. So, which perspective is best for the streaming of VR games?
Given the aforementioned conceptual differences and the fact that both perspectives are widely used and accepted, we do not expect a definite answer to this question. The right choice of perspective seems to depend on different contextual factors, such as the type of the game or the purpose of watching. Hence, there is a need to study spectators' preferences and motives in different contexts to support the creation of compelling audience experiences. With our work, we lay the foundations for research on spectator experiences and perspectives in VR settings. The choice of perspective significantly frames the viewing experience. While researchers agree that immersion is an important part of interactive VR experiences, it remains unclear how important immersion is for spectators of VR players compared to other factors such as contextual understanding and player centricity. Aiming towards a comprehensive VR streaming guideline, our work is the first to contribute relevant insights into spectators’ opinions. We present an online survey (\textit{N}~=~217), which covered three different VR games: \textit{Beat Saber}~\cite{BeatSaber}, \textit{Superhot VR}~\cite{Superhot}, and \textit{Stand Out: VR Battle Royale}~\cite{StandOut}. For each game, the spectators watched a first-person and a third-person video and finally shared their impressions. The obtained results allow us to discuss each perspective’s particular strengths and weaknesses and formulate preliminary design considerations, which are meant to provide a starting point for VR content creators. To make informed design choices, we have to understand how different perspectives (such as first-person and third-person) address the different demands of spectators. This research is not only relevant in the context of game streaming. It also applies to related VR setups including some kind of spectator.
In particular, the choice of perspective is important in multi-user scenarios that combine VR and non-VR users. Example scenarios include VR training applications (surgical training, rehabilitation games) where the perspective choice is crucial for supervisors to be able to adequately monitor and evaluate the trainee's performance. Hence, apart from giving practical advice to VR streamers and content providers, our work creates the basis for more sophisticated choices of perspective and paves the way for future research on audience experiences in VR. \section{Related Work} Today, online video and streaming platforms such as YouTube~\cite{youtube} and Twitch~\cite{twitch} enable a globally distributed audience to watch their favorite players and games at any time. Former consumers can now easily produce user-generated content (UGC) and are challenging the traditional media~\cite{cha2007tube}. So-called \textit{Let's Play} videos have become increasingly popular~\cite{glas2015vicarious} and game live streaming has become a cultural phenomenon comparable to sports events~\cite{hamilton2014streaming, pires2015youtube, smith2013live}. Apart from casual gaming videos, competitive gaming events, commonly referred to as esports~\cite{hamari2017esports, rambusch2017pre}, are taking a growing share of the overall streaming landscape~\cite{cryan2014esports}. Considering the overall popularity of game streaming, the motives and experiences of spectators have been of ongoing interest to the games user research community~\cite{kappen2014engaged, taylor2016play, wehbe2015towards}. 
Instead of just focusing on the experience of the active player, a variety of work has broadened the research scope by explicitly investigating the spectator experience~\cite{carlsson2015designing, drucker2003spectator, frome2004helpless, maurer2015gaze, tekin2017ways} and the motivations for viewers to spend their free time watching others play~\cite{downs2014audience, hamilton2014streaming, kaytoue2012watch}. The spectator's experience is influenced by the game content, but also by the spectator interface~\cite{reeves2005designing}, that is, the available information and the perspective from which they view both the game and the player. Moreover, spectators---no matter if co-located or mediated---are part of the social play setting and often engage in some form of interaction with the player, which further shapes their experience~\cite{Emmerich.2019, tekin2017ways, voida2009wii, hamilton2014streaming}. This social interaction is also a strong motivator for watching game streams~\cite{hamilton2014streaming}. Other vital factors are enjoyment, information seeking, and distraction~\cite{cheung2011starcraft, hamilton2014streaming, kaytoue2012watch}. For esports, research has revealed two additional motivators: the general competitive atmosphere and the opportunity to share emotional connections~\cite{hamari2017esports, lee2011comparison, shaw2014sport, weiss2013virtual, weiss2011fulfilling}. Hence, the reasons why spectators watch Let's Play videos and game live streams seem manifold. A commonly used framework to assess the motivation behind media usage is the \textit{uses and gratifications} (UG) model~\cite{katz1973uses, katz1973use, rubin2009uses, ruggiero2000uses}. UG is based on the assumption that users actively choose certain media with the motivation to achieve a particular gratification. The available media has to compete constantly with other sources of gratification, and personal reasoning is considered individually for every user.
UG typically classifies the user needs into the categories \textit{cognitive, affective, personal integrative, social integrative, and tension release}. In an empirical study, Sjöblom et al.~\cite{sjoblom2017people} revealed that all five classes of gratification are associated with the motivations of Twitch users watching game live streams. The previous research on game videos and streams is mainly based on the footage of common, non-VR games. VR games have just recently gained a foothold in the consumer market. New hardware and the release of sophisticated AAA-games, such as \textit{Half-Life: Alyx}~\cite{Alyx} and \textit{Asgard's Wrath}~\cite{AsgardsWrath}, have led to regular media coverage and increased interest among the broad gamer community. In contrast to non-VR games, the special setup of VR games including head-mounted displays (HMDs) and movement tracking makes it more challenging to convey the entire immersive experience to spectators. In particular, the player's body movements used to control the game become an integral part of gameplay. Research dealing with the audience of VR games remains sparse and is mainly focused on local spectatorship~\cite{gugenheimer2017sharevr, hartmannRealityCheck, jones2014roomalive, welsfordAsymmetric, krekhov2020silhouette}. Hence, the question remains how the game experience can best be delivered in videos and streams to a broad online audience. One possibility is to use the same approach as with non-VR games: players directly broadcast the game view that is displayed on the HMD, so that the audience sees the same as the player. This first-person perspective ensures that the spectator's focus always matches the player's current focus and that the spectators see the game world as if they were playing themselves. Research on different player perspectives indicates that a first-person view makes it easier for players to focus on the action and provides advantages to immersion~\cite{denisova2015first, voorhees2012guns}.
These effects might also apply to the spectator's perspective. On the other hand, a first-person streaming perspective does not show the player's bodily interaction with the VR game. This might impede a full understanding of what is happening, as the manipulations conducted by the player are partly hidden from the spectator and only the effects in the game are revealed~\cite{reeves2005designing}. Tekin and Reeves~\cite{tekin2017ways} point out that seeing the game on screen and at the same time seeing the player's bodily actions---resulting in a ``dual vision''---are important parts of spectating. Therefore, many VR game streamers follow a different approach by providing a third-person perspective: they use a green screen and external cameras to blend themselves into the virtual world (similar to the original trailer of the HTC Vive~\cite{viveTrailer}). This mixed-reality perspective enables spectators to see the game world and the player at the same time, and might thus enhance their experience. At the same time, this approach creates a mismatch between the spectator's view and the player's view. Consequently, the third-person perspective underlines the difference between player and spectator and shifts the focus of spectating from events in the game world to the player. As both perspectives seem to have advantages and shortcomings, the question arises as to how spectators experience and evaluate the different views and which perspective is superior in certain contexts. To date, no other work has investigated this open research question. While there are some related studies on the use of different user perspectives in VR environments~\cite{cmentowski2019outstanding, gorisse2017first, salamin2006benefits, slater2010first}, these are not directly applicable to the experience of spectators.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/beat_saber.jpg} \caption{\textit{Beat Saber}~\cite{BeatSaber} is one of the three games used in the online survey. Participants compared two perspectives: first-person (top) and third-person view (bottom).} \label{fig:beatsaber} \Description[Comparison of the first-person and the third-person perspective for the game Beat Saber]{Two in-game screenshots of the game Beat Saber show the two video perspectives used in the survey. The upper screenshot shows the game world from the first-person perspective, depicting colored blocks approaching the camera. The screenshot below shows the same game scene from a different angle and additionally includes a recording of the real player who is blended into the game environment, swinging two lightsabers.} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/superhot.jpg} \caption{\textit{Superhot VR}~\cite{Superhot} is one of the three games used in the online survey. Participants compared two perspectives: first-person (top) and third-person view (bottom).} \label{fig:superhot} \Description[Comparison of the first-person and the third-person perspective for the game Superhot VR]{Two in-game screenshots of the game Superhot VR show the two video perspectives used in the survey. The upper screenshot shows the game world from the first-person perspective, depicting a kitchen room with two enemies and several objects such as weapons and pans. The screenshot below shows the same game scene from a different angle and additionally includes a recording of the real player who is blended into the game environment.} \end{figure} \section{Online Survey} We conducted an online survey to assess spectators' preferences and opinions on the different perspectives in VR game videos. 
More precisely, we compared the first-person perspective, which directly displays the in-game view of the player, with a mixed-reality third-person perspective, in which the player is captured and directly cut into the game world (see Figure~\ref{fig:teaser}). The goal of the study was to gain insights about the advantages and disadvantages of both approaches regarding different aspects of the viewing experience such as comprehensibility, entertainment, and involvement. Hence, our main research question is how spectators experience the two perspectives, which differences can be found, and which perspective is preferable in certain settings. \subsection{Selection of Three Exemplary VR Games} We decided to compare the two perspectives using different commercial VR games, as viewers' preferences and experiences may also depend on certain characteristics of the game. Our game selection process was based on several criteria. First, the games should be popular and positively rated, to ensure that they provide an interesting experience and successfully make use of VR headsets. Second, the games had to support the software tool LIV~\cite{LIV}, which enabled us to create the mixed-reality third-person perspective. Finally, the games should represent different game genres, which feature different core mechanics and controls. Following these criteria, we reviewed the rankings of current VR games on the online gaming platform Steam~\cite{steam} and analyzed viewer numbers on Twitch to identify popular games. We chose three games that match all criteria: \textit{Beat Saber}~\cite{BeatSaber}, \textit{Superhot VR}~\cite{Superhot}, and \textit{Stand Out: VR Battle Royale}~\cite{StandOut} (hereafter abbreviated as \textit{Stand Out}). All games had more than 25,000 peak viewers on Twitch and more than 1,000 mainly positive reviews on Steam, indicating their popularity. \textit{Beat Saber} is a music-based VR game.
The player chooses a song and then swings two colored lightsabers to cut blocks of the same color, which represent the beats of the music and quickly approach the player (see Figure~\ref{fig:beatsaber}). Hence, the main focus of the game is on the quick gestural reaction of the player to the fast-paced blocks. There is a direct mapping between the player's hand movement and the movement of the lightsabers in the game. Apart from single steps to the side to avoid an obstacle wall, there is no locomotion needed. As the blocks always approach on fixed paths in front of the player, the view orientation in \textit{Beat Saber} is rather fixed. We chose this game due to its remarkably high popularity, and because in current \textit{Beat Saber} streams, both perspectives we want to compare (first person and third person mixed reality) are commonly used. In \textit{Superhot VR}, the player has to complete short levels by destroying all enemies and dodging their attacks (see Figure~\ref{fig:superhot}). For this purpose, the player can use various objects lying around, such as pistols and bottles. The unique twist of this game is that time progresses only at the speed at which the player moves. That means, if the player moves slowly, the enemies also move slowly, and vice versa. This way, the player has to consider every movement, resulting in rather slow-paced gameplay. Though the player can move in room-scale, the enemies then approach quickly. So in most levels, there is not much locomotion happening, and the focus is on the opponents. However, enemies approach from different sides, so that, in contrast to \textit{Beat Saber}, the perspective is not fixed. \textit{Stand Out} is a first-person shooter in which the player plays online against a large group of other players (see Figure~\ref{fig:standout}). Following the battle royale principle, the goal is to be the last survivor on the island where the game takes place.
To win, the player has to collect weapons and ammunition, shoot other players, and move across the island. The player can travel larger distances using the control stick. Hence, in contrast to the other two games, locomotion is very prevalent in \textit{Stand Out}. Like in other first-person shooters, the gameplay is rather fast-paced in general. The focus of attention is very dynamic since the player has to react quickly in case of an attack. All in all, the three games differ mainly concerning pace, focus, and locomotion. We assume that these characteristics can potentially influence the spectators' experience in the two perspectives under investigation, as these provide different main focal points. For instance, quick game events might be more comprehensible for spectators if they see both the player and the game world in the third-person mixed-reality perspective. On the other hand, the first-person perspective might be more suitable for games with a dynamic focus and much locomotion. Therefore, we included all three games in both perspectives in our study to investigate potential differences. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/standout.jpg} \caption{\textit{Stand Out: VR Battle Royale}~\cite{StandOut} is one of the three games used in the survey. Participants compared two perspectives: first-person (top) and third-person view (bottom).} \label{fig:standout} \Description[Comparison of the first-person and the third-person perspective for the game Stand Out: VR Battle Royale]{Two in-game screenshots of the game Stand out: VR Battle Royale show the two video perspectives used in the survey. The upper screenshot shows the game world from the first-person perspective, depicting a room with a table and an open door that offers a view of a wide outdoor area. 
The screenshot below shows the same game scene, but additionally includes a recording of the real player who is blended into the game environment.} \end{figure} \subsection{Implementation of the Different Perspectives} \label{sec:implementation} Overall, there are several possibilities to compile a stream of a VR gaming session. We decided to compare two basic approaches which are commonly used by streamers and at the same time differ significantly regarding their main focal point: the first-person view and a mixed-reality third-person perspective. While some streamers also use a combination of different views by compiling picture-in-picture modes, we focus on the two main approaches, as we are particularly interested in how spectators evaluate the possibility to see the player integrated in the game world in the third-person view and the lack thereof in the first-person view. As stated above, we recorded two gameplay videos for each of the three games: one with the first-person perspective and one with the third-person perspective. In all cases, the same player (male, 26 years old) played the game and all videos are about three minutes long. The first-person view was simply a screen-recording of the game from the player's point of view. To create the third-person mixed-reality views, we used the software \textit{LIV}~\cite{LIV}, which allows integrating a green screen recording of the player into the game world. We used an additional, static game camera to capture the game scene from behind the player (cf. Figures~\ref{fig:beatsaber}, \ref{fig:superhot}, and \ref{fig:standout}). We also considered rotating the mixed-reality camera dynamically based on the player’s actions. While this approach is technically possible using \textit{LIV}, it requires a far more sophisticated setup. 
Since a dynamic camera was not required to address our main research question and since we wanted to stick to the most commonly used techniques in the gaming community, we discarded this option. Instead, we tested different positions while implementing the third-person views to find an appropriate static camera position for each game. \textit{LIV} also makes it possible to replace the real player with an avatar model, a feature used by some streamers, as well. However, such a third-party avatar is not visually matched to the game and, thus, introduces an additional source of interference. While the real player's appearance also mismatches with the game world, a mixed-reality view best reveals the manipulations conducted by the player in the direct context of the game. For these reasons, we decided not to use a virtual avatar. \subsection{Study Plan and Survey Structure} We conducted a mixed-design online study with the game shown in the videos as a between-subjects variable and the video perspective as a within-subjects variable. That means, each participant was randomly assigned to one game and watched both videos of that game. The order of the two perspectives was counterbalanced, as well, to avoid bias due to potential sequence effects. The survey started with a short introduction, informing participants about the goal, procedure, and anonymity of the study. Then we asked for basic demographic data, including age, gender, and nationality. Additionally, we requested some information about participants' familiarity with VR headsets and VR games, as well as their digital gaming and streaming habits. As we were also interested in viewers' general motivations to watch videos or streams of VR games, we compiled a list of possible motives based on the uses and gratifications theory. More precisely, we derived our items from the work of Sjöblom et al.~\cite{sjoblom2017people}, who investigated the motivation of Twitch users.
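The assignment scheme described above (game as a randomly assigned between-subjects factor, perspective order counterbalanced across participants) can be sketched as follows. This is an illustrative outline only: the function and variable names are our own, and the actual randomization was handled by the survey software.

```python
import random

GAMES = ["Beat Saber", "Superhot VR", "Stand Out"]
ORDERS = [("first-person", "third-person"),
          ("third-person", "first-person")]

def assign_condition(participant_index: int, rng: random.Random) -> dict:
    """Assign one participant to a condition of the mixed design:
    the game is a between-subjects factor (random assignment),
    while the order of the two video perspectives alternates
    across participants to counterbalance sequence effects."""
    game = rng.choice(GAMES)                         # between-subjects
    order = ORDERS[participant_index % len(ORDERS)]  # counterbalanced
    return {"game": game, "video_order": order}

# Example: a fixed seed makes the sketch reproducible.
rng = random.Random(42)
conditions = [assign_condition(i, rng) for i in range(6)]
```

With this scheme, both perspective orders occur equally often over any even number of participants, while the game assignment remains random.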
Although this motivational model does not explicitly refer to VR streaming content, we believe that the general types of motives of viewers are largely independent of the platform used by the streamer (including VR setups). Hence, we think the model includes all high-level motivations that are relevant in the context of our study. The question and the final list of answers can be found in Table~\ref{tab:UGOverview}. Participants were asked to select all reasons that apply (multiple answers were possible). We also included the option \textit{"None (I would not watch a video of a VR game)"}, to be able to identify participants having no interest in the study's topic. \begin{table*}[ht] \caption{Overview of the different motivational aspects related to the viewing of VR game videos (participants were asked to select all answers that apply).} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{3.2cm}>{\raggedright\arraybackslash}X >{\centering\arraybackslash}p{1cm}} \toprule \addlinespace \textbf{Class of Gratification (based on Sjöblom et al.~\cite{sjoblom2017people})} & \textbf{Which of the following reasons could motivate you to watch a video where a player is playing a VR game?} & \textbf{Votes} (\textit{N}=217) \\ \midrule \addlinespace cognitive & to inform me about the game or to get an impression of it (\textit{information seeking}) & 113 \\ cognitive & to learn new game strategies or how to master the game (\textit{learning game strategies}) & \ 84 \\ affective & because it is entertaining and/or exciting (\textit{enjoyment}) & 119 \\ personal integrative & to be able to comment and have a say (\textit{recognition}) & \ 24 \\ social integrative & in order not to feel alone (\textit{companionship}) & \ 26 \\ tension release & to distract me and pass the time (\textit{distraction}) & \ 62 \\ tension release & in order to relax (\textit{relaxation}) & \ 46 \\ \bottomrule \end{tabularx} \label{tab:UGOverview} \Description[Cognitive and affective
gratifications are voted most often]{Table 1 gives an overview of the different motivational aspects related to the viewing of VR game videos. The most prominent gratifications relate to the categories cognitive (information seeking and learning game strategies) and affective (enjoyment).} \end{table*} Following this first, general part of the questionnaire, we asked participants to make sure that their speakers or headphones were active so that they could hear the sound of the videos, and then showed them the first video. To verify that the video was not fast-forwarded or skipped, we measured the time participants spent on the video. This way, we were able to identify participants who skipped (parts of) the videos and label their data as invalid. After the video, we administered the enjoyment subscale of the Intrinsic Motivation Inventory (IMI)~\cite{ryan2000self} to assess how much participants enjoyed watching the video. To further investigate the viewing experience, we asked additional custom questions about the view and the comprehensibility of the video, as well as the perceived involvement. The full list of questions can be found in Table~\ref{tab:ViewExperience}. Then the second video was shown, followed again by the IMI and the custom questions. Next, we asked whether participants knew or had played the game shown in the videos before, how much they liked the game, and how much they liked the genre it belongs to in general. Finally, participants were asked which of the two perspectives they preferred. There was also the option to indicate that they did not have a preference. In a free-text field, we asked participants to give reasons for their decision. Moreover, participants could provide any additional notes. Considering that we cannot completely control the setting and conditions under which participants take part in an online study, we increased the validity of the data by including sanity check questions.
For this purpose, we asked the same question twice with reversed scales, to ensure that participants had read the question text and did not select random answers. \subsection{Recruitment and Sample} We were interested in the opinion of potential spectators of VR game videos and aimed at improving their viewing experience. Thus, we defined all persons who have at least some interest in VR technology and digital games as our target group, with no further restrictions regarding demographic data or prior experience with VR. We promoted the survey on different online channels, both in English and in German. These included several Reddit communities and Facebook groups related to game streaming or VR games. Moreover, we also used more general groups aimed at the recruitment of online survey participants. In total, 316 participants completed the survey. Sixty-nine of these cases had to be excluded from the analysis because participants failed the sanity check questions or did not watch the videos completely. Moreover, we excluded 30 additional participants who stated that they had no interest in VR games and would never view videos of such games voluntarily. Those participants did not match our target group. Hence, our final sample contains 217 participants. The sample includes a wide variety of nationalities (27 different countries), with 63 German and 77 American participants making up the majority. The mean age of participants was 28 (\textit{SD}~=~8.69), with a range from 16 to 64. Regarding gender, the sample included 125 male and 92 female participants. About three-quarters of all participants (\textit{N}~=~172) reported that they regularly played digital games. Many participants also had prior experience with VR games, with only 58 persons stating that they had not yet used a VR headset.
Regarding the question of how often they watched gaming videos or streams on average, most participants (\textit{N}~=~189) reported that they watched game streams at least once a month. Concerning our three game subgroups, the distribution is somewhat uneven: 89 participants viewed the videos of \textit{Superhot VR}, 67 participants viewed \textit{Beat Saber}, and 61 viewed \textit{Stand Out}. However, the distribution of age, gender, and nationality is comparable among the three groups. About two thirds of the participants in the \textit{Beat Saber} group knew the game before (\textit{N}~=~44) and nearly half of the group had played the game themselves (\textit{N}~=~29). \textit{Superhot VR} was known to about half of the participants (\textit{N}~=~40) and 33 participants had played the game. \textit{Stand Out} was less known by our participants, with only 13 persons being familiar with the game, of whom 8 had played it. Asked how much they liked the game, participants in all three game groups rated the games slightly positively on average on a scale from 0 to 6 (\textit{Beat Saber}: \textit{M}~=~4.19, \textit{SD}~=~1.79; \textit{Superhot VR}: \textit{M}~=~3.53, \textit{SD}~=~2.07; \textit{Stand Out}: \textit{M}~=~3.15, \textit{SD}~=~1.99). Similarly, participants indicated that they rather liked the genre of the game they watched in general (\textit{Beat Saber}: \textit{M}~=~4.39, \textit{SD}~=~1.65; \textit{Superhot VR}: \textit{M}~=~3.73, \textit{SD}~=~2.17; \textit{Stand Out}: \textit{M}~=~3.30, \textit{SD}~=~2.13). \section{Results} In the first step of our data analysis, we look at participants' general motivation to watch videos of VR games. After that, we address our main research question by comparing participants' evaluations of both video perspectives and their preferences. \subsection{Motivation to Watch VR Game Videos} To examine participants' general motivation to watch VR game videos, we analyzed their answers to the uses and gratifications question.
Table~\ref{tab:UGOverview} shows how often participants selected each reason to watch a VR game video. While all motivations received some votes, the distribution indicates that affective and cognitive gratifications were most prevalent among our participants. The majority of participants would watch videos of VR games to seek information about the game (\textit{N}~=~113) or because they enjoy watching them and feel entertained (\textit{N}~=~119). Learning about game strategies was also mentioned often (\textit{N}~=~84). On the other hand, fewer participants saw tension release, both in terms of distraction (\textit{N}~=~62) and relaxation (\textit{N}~=~46), as a motivation to watch videos of VR games. Finally, personal and social integrative motives were least prevalent (\textit{N}~=~24 and \textit{N}~=~26). \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/fpstps.jpg} \caption{Distribution of the preferred perspective votes of our participants for the three games \textit{Beat Saber}, \textit{Superhot VR}, and \textit{Stand Out}.} \label{fig:piechart} \Description[Superhot VR and Stand Out show clear preferences for the first-person perspective, whereas for Beat Saber votes are more evenly distributed]{Three pie charts show which perspective was preferred by the participants of our study in the three different games Beat Saber, Superhot VR and Stand Out. There were three possible answers, namely 'first-person', 'third-person', and 'no favorite'. In the Beat Saber condition, the first-person and the third-person perspective received comparable numbers of votes. In contrast, Superhot VR and Stand Out show clear preferences for the first-person perspective.} \end{figure*} \subsection{Evaluation of First- and Third-Person Perspectives} With regard to our main research question, we analyze how participants perceived both perspectives and which one they preferred.
Overall, the voting shows a recognizable preference for the first-person perspective: 134 of all 217 participants preferred the first-person perspective, whereas only 60 voted for the third-person perspective. Twenty-three participants stated that they had no favorite view. We performed Pearson chi-square tests to investigate whether there are significant associations between the preferred perspective and certain characteristics of participants that might influence their vote, namely their gender, whether or not they were familiar with the game that was shown, and their general motivations to watch VR game videos. Regarding gender, the vote distribution is very similar between male and female participants, and there is no significant association, ${\chi}^2$(2)~=~2.31, \textit{p}~=~.316. Participants' familiarity with the game does not seem to have affected their voting either, ${\chi}^2$(2)~=~0.35, \textit{p}~=~.839. To test the influence of the different general motivational aspects, we performed chi-square tests for each of the statements shown in Table~\ref{tab:UGOverview}. The results indicate that whether or not participants selected a particular motivation is not related to their preferred perspective, as no significant associations could be found (all \textit{p}~>~.348). As there might be differences with regard to the three games we tested, we further investigated participants' preferences in the three subgroups for \textit{Beat Saber}, \textit{Superhot VR}, and \textit{Stand Out}. We therefore split our data for the following analysis and report results for each game individually. Figure~\ref{fig:piechart} shows participants' preferred perspective in the three study conditions. In line with the overall result, the first-person perspective received the most votes in all cases.
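As an illustration, chi-square tests of this kind can be sketched with SciPy's \texttt{chi2\_contingency}. The per-cell counts below are hypothetical: only the margins (134/60/23 preference votes; 125 male and 92 female participants) come from the study, and the true cross-tabulation is not reported, so the resulting statistic will not match the reported ${\chi}^2$(2)~=~2.31.

```python
# Sketch of the gender-by-preference chi-square test. The cell counts are
# HYPOTHETICAL: only the margins (134/60/23 votes; 125 male, 92 female)
# come from the study, so this does not reproduce the reported statistic.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: male, female; columns: first-person, third-person, no favorite.
observed = np.array([
    [77, 35, 13],   # male, sums to 125
    [57, 25, 10],   # female, sums to 92
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```

The same call applies unchanged to the familiarity and per-motivation tables, which are 2-by-3 and 2-by-3 contingency tables as well.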
However, there is a noticeable difference regarding the distribution of votes: whereas there is a clear preference for the first-person perspective in the \textit{Stand Out} group (48 out of 61 participants) and the \textit{Superhot VR} group (55 out of 89 participants), the votes in the \textit{Beat Saber} group are almost evenly distributed, with 31 participants preferring the first-person perspective and 27 participants preferring the third-person perspective. A chi-square test confirms that the game that was shown had a significant influence on the vote for the preferred perspective, ${\chi}^2$(4)~=~14.48, \textit{p}~=~.006, Cramer's \textit{V}~=~0.183 (no expected cell frequencies were below 5). Though the effect size is only small (\textit{V}~<~0.3), the result indicates that the two video perspectives were perceived differently in \textit{Beat Saber} than in the other games. To investigate why participants preferred one view to the other, we compared the viewing experiences between both perspectives and tested for significant differences. Table~\ref{tab:ViewExperience} shows all mean values for both perspectives in the three game conditions. For each game and each dimension of the viewing experience, we performed a repeated measures analysis of variance (ANOVA) with perspective as a within-subjects variable and the order of the two perspectives as a between-subjects factor to test for potential sequence effects. In the following, we report the results of these analyses for each game. In the interest of better legibility, we only report sequence effects if they are significant. If not mentioned otherwise, the analysis did not show a significant interaction effect between the experience dimension and the order of the two perspectives.
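The reported effect size can be verified directly from the chi-square statistic: for an $r \times c$ contingency table, Cramer's $V = \sqrt{\chi^2 / (N\,(\min(r,c)-1))}$. A minimal sketch with the values reported above (3 games by 3 vote options, $N = 217$):

```python
import math

def cramers_v(chi2: float, n: int, n_rows: int, n_cols: int) -> float:
    """Cramer's V effect size for an r x c contingency table."""
    return math.sqrt(chi2 / (n * (min(n_rows, n_cols) - 1)))

# Values reported above: chi2(4) = 14.48, N = 217, 3 games x 3 vote options.
v = cramers_v(chi2=14.48, n=217, n_rows=3, n_cols=3)
print(f"Cramer's V = {v:.3f}")  # 0.183, matching the reported value
```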
\begin{table*} \caption{Mean values and standard deviations (\textit{M}(\textit{SD})) of different aspects of the viewing experience in the three study conditions (games) \textit{Beat Saber}, \textit{Superhot VR}, and \textit{Stand Out}, comparing the first-person and the third-person perspectives. Each item was rated on a 7-point scale ranging from 0 to 6. Significant differences between two perspectives are indicated in bold print.} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{4.5cm} >{\raggedleft\arraybackslash}X >{\raggedright\arraybackslash}X >{\raggedleft\arraybackslash}X >{\raggedright\arraybackslash}X >{\raggedleft\arraybackslash}X >{\raggedright\arraybackslash}X} \toprule \addlinespace & \multicolumn{2}{c}{\textbf{Beat Saber} (\textit{N}=67)} & \multicolumn{2}{c}{\textbf{Superhot VR} (\textit{N}=89)} & \multicolumn{2}{c}{\textbf{Stand Out} (\textit{N}=61)} \\ & 1st person & 3rd person & 1st person & 3rd person & 1st person & 3rd person \\ \midrule \addlinespace \textbf{IMI} \\ \ \ \ \ \ \ Enjoyment & 3.18 (1.47) & 3.26 (1.44) & 3.03 (1.70) & 3.10 (1.64) & \textbf{3.21} (1.52) & \textbf{2.43} (1.73)\\ \addlinespace \textbf{Focus and Clear View} \\ F1) I saw well how the player & 4.24 (1.61) & 4.57 (1.46) & 4.28 (1.60) & 4.10 (1.62) & \textbf{4.54} (1.21) & \textbf{3.74} (1.70)\\ \ \ \ \ \ \ interacted with objects in the \\ \ \ \ \ \ \ game world. \\ F2) While watching, I had the & 2.58 (1.83) & 2.19 (1.78) & \textbf{2.31} (1.97) & \textbf{3.03} (2.07) & \textbf{2.70} (1.80) & \textbf{3.93} (1.89)\\ \ \ \ \ \ \ feeling of missing important \\ \ \ \ \ \ \ things in the game because I \\ \ \ \ \ \ \ couldn't see them. \\ F3) I had a good view of the game & 4.15 (1.49) & 4.24 (1.46) & \textbf{4.18} (1.47) & \textbf{3.65 }(1.72) & \textbf{4.14} (1.38) & \textbf{3.13} (1.94)\\ \ \ \ \ \ \ world. 
\\ \addlinespace \textbf{Comprehensibility} \\ C1) I always understood what & 4.52 (1.58) & 4.75 (1.47) & \textbf{4.33} (1.43) & \textbf{3.80} (1.63) & \textbf{4.23} (1.33) & \textbf{3.64} (1.89)\\ \ \ \ \ \ \ happened in the game. \\ C2) At any time I could & 3.93 (1.64) & 4.42 (1.63) & 4.17 (1.51) & 4.06 (1.74) & \textbf{4.44} (1.35) & \textbf{3.78} (1.75) \\ \ \ \ \ \ \ comprehend what the player \\ \ \ \ \ \ \ was doing in the VR world. \\ C3) I was able to understand how & 4.33 (1.66) & 4.61 (1.59) & \textbf{4.37} (1.58) & \textbf{3.85} (1.70) & \textbf{4.10} (1.47) & \textbf{3.16} (1.91) \\ \ \ \ \ \ \ successful the player was in \\ \ \ \ \ \ \ the game. \\ \addlinespace \textbf{Involvement} \\ I1) \ I felt like being part of the & \textbf{3.06} (1.88) & \textbf{2.46} (1.97) & \textbf{3.28} (1.85) & \textbf{2.35} (1.95) & \textbf{3.67} (1.88) & \textbf{2.54} (2.28)\\ \ \ \ \ \ \ game. \\ I2) \ I saw the virtual world as if I & \textbf{3.54} (1.99) & \textbf{2.70} (1.92) & \textbf{3.35 }(1.85) & \textbf{2.44} (1.89) & \textbf{3.93} (1.83) & \textbf{2.59} (2.03)\\ \ \ \ \ \ \ was there myself. \\ \bottomrule \end{tabularx} \label{tab:ViewExperience} \Description[While there are few significant differences in the viewing experience for the game Beat Saber, every aspect differs significantly between the two perspectives in the game Stand Out]{This table list all mean values and standard deviations of the different aspects of the viewing experience in the three study conditions (games) Beat Saber, Superhot VR, and Stand Out comparing the first-person and the third-person perspectives. Significant differences are highlighted in bold print. While there are few significant differences for the game Beat Saber, every aspect differs significantly between the two perspectives in the game Stand Out.} \end{table*} \subsubsection{Beat Saber} In the \textit{Beat Saber} group, most differences are not significant. 
Neither enjoyment nor any ratings of focus and clear view and comprehensibility were rated significantly different between the two perspectives (all \textit{p}~>~.05). In contrast, the two questions regarding the perceived involvement of the viewers show significant differences: in the first-person view, participants felt more like being part of the game (I1), \textit{F}(1,~65)~=~6.06, \textit{p}~=~.016, and like being in the virtual world (I2), \textit{F}(1,~65)~=~9.04, \textit{p}~=~.004. \subsubsection{Superhot VR} In the \textit{Superhot VR} condition, the repeated measures ANOVA revealed more significant differences. In the first-person perspective, the ratings regarding having a good view of the game world (F3) were higher than in the third-person perspective, \textit{F}(1,~87)~=~5.64, \textit{p}~=~.020. Additionally, the feeling of missing important things (F2) was significantly higher in the third-person perspective, \textit{F}(1,~87)~=~6.93, \textit{p}~=~.010. In terms of comprehensibility, participants had the feeling of significantly better understanding what happened in the game (C1), \textit{F}(1,~87)~=~7.47, \textit{p}~=~.008, and how successful the player was (C3), \textit{F}(1,~87)~=~7.93, \textit{p}~=~.006, in the first-person perspective. Similar to the results in the \textit{Beat Saber} group, both items regarding involvement (I1 and I2) were rated significantly higher in the first-person perspective, \textit{F}(1,~87)~=~17.19, \textit{p}~<~.001 (I1), and \textit{F}(1,~87)~=~15.91, \textit{p}~<~.001 (I2). However, in the \textit{Superhot VR} condition, there was also a significant interaction effect between the ratings for involvement and the order in which the two videos were watched, indicating sequence effects. 
The rating for being part of the game (I1) was particularly high for the first-person perspective if participants had viewed the third-person perspective video beforehand (\textit{M}~=~3.64 compared to \textit{M}~=~2.88), \textit{F}(1,~87)~=~6.27, \textit{p}~=~.014. The same pattern becomes apparent for the item \textit{"I saw the virtual world as if I was there myself"} (\textit{M}~=~3.98 compared to \textit{M}~=~2.64), \textit{F}(1,~87)~=~13.40, \textit{p}~<~.001. All other differences (IMI, F1, and C2) were not significant (all \textit{p}~>~.05). \subsubsection{Stand Out} In the \textit{Stand Out} group, enjoyment (IMI) was significantly higher in the first-person video, \textit{F}(1,~59)~=~13.68, \textit{p}~<~.001. Moreover, participants gave significantly better ratings regarding focus and clear view in the first-person perspective: they better saw how the player interacted with game objects (F1), \textit{F}(1,~59)~=~12.47, \textit{p}~=~.001, and had a better view of the game world (F3), \textit{F}(1,~59)~=~15.53, \textit{p}~<~.001. At the same time, the feeling of missing important things was lower (F2), \textit{F}(1,~59)~=~14.43, \textit{p}~<~.001. The three items regarding comprehensibility were also rated significantly higher in the first-person perspective: participants better understood what happened in the game (C1), \textit{F}(1,~59)~=~4.10, \textit{p}~=~.047, what the player was doing (C2), \textit{F}(1,~59)~=~6.98, \textit{p}~=~.011, and how successful the player was (C3), \textit{F}(1,~59)~=~16.60, \textit{p}~<~.001. In line with the other two groups, involvement (I1 and I2) was significantly higher in the first-person perspective: \textit{F}(1,~59)~=~16.39, \textit{p}~<~.001 (I1), and \textit{F}(1,~59)~=~26.23, \textit{p}~<~.001 (I2). 
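As a side note on the analysis: with only two within-subject conditions, a one-factor repeated measures ANOVA is equivalent to a paired-samples $t$-test, with $F(1, n-1) = t^2$. The sketch below illustrates this; the ratings are synthetic placeholders on the study's 0--6 scale, not the study data, and the reported analyses additionally include the viewing order as a between-subjects factor, which this sketch omits.

```python
# Sketch of the two-level repeated measures comparison via a paired t-test.
# The ratings are SYNTHETIC placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 61  # size of the Stand Out group
first_person = rng.integers(2, 7, size=n).astype(float)  # ratings on 0-6 scale
third_person = rng.integers(1, 6, size=n).astype(float)

res = ttest_rel(first_person, third_person)
f_stat = res.statistic ** 2  # equals the F(1, n - 1) an RM-ANOVA would report
print(f"F(1, {n - 1}) = {f_stat:.2f}, p = {res.pvalue:.3f}")
```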
In summary, all aspects of the viewing experience differ significantly between the first- and the third-person perspective in the \textit{Stand Out} group, with the first-person perspective being rated better in all cases. \begin{table*} \caption{Results of the thematic analysis with regard to reasons why participants preferred the first-person perspective. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e. how many single answers of participants were assigned to the respective topic.} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{0.1cm}>{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}p{6.7cm}>{\centering\arraybackslash}p{1.4cm}} \toprule \addlinespace \multicolumn{2}{l}{\textbf{Reasons to Prefer the }} & \textbf{Examples} & \textbf{Mentions}\\ \multicolumn{2}{l}{\textbf{First-Person Perspective}} & \\ \midrule \addlinespace \multicolumn{2}{l}{\textbf{Involvement}} & \\ & The viewers felt more immersed, they felt like being part of the game, being in the game world, or being the player. & \textit{I like the first-person perspective because it makes me feel like I'm playing the game, not someone else. It is more entertaining when I feel like I'm part of the game.} & 38\\ \addlinespace \multicolumn{2}{l}{\textbf{Focus}} \\ & The viewers think that the focus was better, because they were able to see all important things and did not miss something outside the viewport. & \textit{It gives me the ability to see the important parts of the game as they happen, rather than being stuck facing one direction, missing details that are behind my point of view.} & 12\\ \addlinespace \multicolumn{2}{l}{\textbf{Comprehensibility}} \\ & The viewers better understood what happened in the game and what the player was doing.
& \textit{First person (in this game at least) lets viewers understand what the player is doing.} & 10\\ \addlinespace \multicolumn{2}{l}{\textbf{Obstructive Player in Third Person}} \\ & The player in the third-person view was perceived as obstructive, because he obscured the view on the game world and did not fit to the environment. & \textit{First person allows for better visibility without obstruction by the player.} & 10\\ \addlinespace \multicolumn{2}{l}{\textbf{Original Game Perspective}} \\ & The first-person view corresponds with the original game perspective, hence viewers can better imagine how it would be to play the game. & \textit{I don't like the mixed reality view. I want to see exactly what the player sees.} &8 \\ \addlinespace \multicolumn{2}{l}{\textbf{Realism}} \\ & The experience felt more real to viewers. & \textit{Because it looks more real to me.} & 7 \\ \bottomrule \end{tabularx} \label{tab:ThematicAnalysis1st} \Description[Six identified reasons to prefer the first-person perspective: involvement, focus, comprehensibility, obstructive player, original game perspective, and realism]{The table lists the results of the thematic analysis with regard to reasons why participants preferred the first-person perspective. The first column shows the six identified topics involvement, focus, comprehensibility, obstructive player, original game perspective, and realism. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e., how many single answers of participants were assigned to the respective topic.} \end{table*} \begin{table*}[ht] \caption{Results of the thematic analysis with regard to reasons why participants preferred the third-person perspective. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e. 
how many single answers of participants were assigned to the respective topic.} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{0.1cm}>{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}p{6.7cm}>{\centering\arraybackslash}p{1.4cm}} \toprule \addlinespace \multicolumn{2}{l}{\textbf{Reasons to Prefer the }} & \textbf{Examples} & \textbf{Mentions} \\ \multicolumn{2}{l}{\textbf{Third-Person Perspective}} & \\ \midrule \addlinespace \multicolumn{2}{l}{\textbf{Player's Movements}} \\ & To see the player's movement and his interaction with the game world is more entertaining and interesting. & \textit{It was more interesting to see how the person was actually moving around and how it looked like he was actually in the game world.} & 19\\ \addlinespace \multicolumn{2}{l}{\textbf{Comprehensibility}} \\ & The viewers better understood what the player was doing. & \textit{It was easier to see what the player was doing in the game world.} & 11\\ \addlinespace \multicolumn{2}{l}{\textbf{Motion Sickness in First Person}} \\ & It was more comfortable, because in the first-person view viewers experienced dizziness or nausea. & \textit{Watching first person made me kind of dizzy so the third-person perspective was more interesting and more comfortable to watch.} & 6\\ \addlinespace \multicolumn{2}{l}{\textbf{View on Game World}} \\ & The viewers feel that they can see more of the game world. & \textit{The third-person perspective gave me a wider view of the world in which the game was taking place.} & 4\\ \bottomrule \end{tabularx} \label{tab:ThematicAnalysis3rd} \Description[Four identified reasons to prefer the third-person perspective: player's movements, comprehensibility, motion sickness, and view on game world]{The table lists the results of the thematic analysis with regard to reasons why participants preferred the third-person perspective. 
The first column shows the four identified topics player's movements, comprehensibility, motion sickness, and view on game world. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e., how many single answers of participants were assigned to the respective topic.} \end{table*} \subsection{Thematic Analysis: Reasons for Preferred Perspective} To gain further insight into the positive and negative qualities of the two perspectives, we performed a thematic analysis of the free-text answers to the question of why participants prefer one perspective to the other. For this purpose, two researchers looked at all answers independently and sorted them by recurring topics. We followed a deductive approach based on the reflexive thematic analysis described by Braun and Clarke~\cite{braun2006}, with an additional check of inter-rater agreement. After the first round of clustering, both researchers compared their lists and discussed all differences. Based on the discussion, a final clustering was agreed upon. We identified six clusters that describe reasons why participants preferred the first-person perspective, as shown in Table~\ref{tab:ThematicAnalysis1st}. Many participants (\textit{N}~=~38) highlighted the higher involvement perceived in the first-person perspective. They reported that this perspective made them feel like being part of the game or even being the player themselves. In addition, some participants (\textit{N}~=~12) pointed out that the focus was better in the first-person perspective because they were able to see the important things (such as enemies approaching). Participants also reported that the comprehensibility was higher, as they were better able to follow the game events (\textit{N}~=~10). In the third-person perspective, the player was perceived as an obstacle by some participants (\textit{N}~=~10), as he covered parts of the game and interfered with immersion.
Apart from the higher involvement, some participants (\textit{N}~=~8) also emphasized that they prefer the first-person perspective because it is the \textit{"original game perspective"}. This way, they experience how the game looks to the player and can better imagine how it would feel to play the game. Finally, some participants (\textit{N}~=~7) pointed out that the experienced realism was higher in the first-person perspective. For the third-person perspective, we identified four categories of reasons to prefer it to the first-person perspective, as shown in Table~\ref{tab:ThematicAnalysis3rd}. The most frequently mentioned reason was that participants (\textit{N}~=~19) liked to see the player and his movements. They reported that it was more interesting and entertaining to focus on the player and to be able to observe the direct interaction between the player and the game's environment. Related to seeing the player's movements, some participants (\textit{N}~=~10) also highlighted that they gained a better understanding of how the game is played and how the interaction works. Hence, they stated that the comprehensibility was better in the third-person perspective. In addition, some participants (\textit{N}~=~6) preferred the third-person view because they experienced some form of motion sickness in the first-person perspective. They reported that it was more comfortable to watch the game in third person. Finally, some participants (\textit{N}~=~4) preferred the third-person perspective because they felt it enabled them to see more of the game world. Some of the identified reasons to prefer one perspective to the other were mentioned comparably often in all three game groups. More precisely, participants in each group addressed the topics higher involvement, realism, and the original game perspective of the first-person perspective, as well as less motion sickness and a better view on the game world in the third-person perspective.
In contrast, some topics were more prevalent for specific games. The better focus and the better comprehensibility of the first-person perspective were predominantly mentioned by participants who had watched the videos of \textit{Superhot VR}: we counted focus eight times and comprehensibility six times in the \textit{Superhot VR} condition, while both topics appeared only twice in each of the other two conditions. However, we note that the \textit{Superhot VR} group was also larger than the other two groups (\textit{N}~=~89 vs. 67 and 61), which might account for such differences. In the \textit{Stand Out} condition, participants complained more about the obstructive player in the third-person perspective (\textit{N}~=~8) than participants in the other two groups. Moreover, two reasons to prefer the third-person perspective---seeing the player's movements and comprehensibility---were mentioned for both \textit{Beat Saber} and \textit{Superhot VR}, but not for the game \textit{Stand Out}. Even though the \textit{Stand Out} group was a bit smaller than the other two study groups, the difference is still noticeable. \section{Discussion} We observed an overall preference for the first-person perspective in our study. However, we found significant differences between the three games. This result confirms our assumption that the choice of an appropriate perspective depends on the particular game. Moreover, the perceived benefits and shortcomings of both perspectives as reported by our participants indicate that personal preferences and the motivation of the viewer also play an important role. \subsection{Influence of Game Characteristics on Perspective Preferences} We received the most homogeneous feedback for the game \textit{Stand Out}. Very few participants preferred the third-person perspective, and it also performed significantly worse regarding all measured aspects of the viewing experience, including overall enjoyment.
Many participants mentioned a feeling of confusion and the impression of missing essential parts of the gameplay. Whereas viewers of the other two games rather appreciated seeing the player in action according to our thematic analysis, viewers of \textit{Stand Out} experienced the player as obstructive in the third-person view. We assume that this issue is caused by a mismatch between the focus of the viewer in the third-person perspective, which lies on the player, and the location of the important game events: in \textit{Stand Out}, the main actions---such as approaching enemies, the search for cover, or gun fights---are not centered around the player's position, but unfold further away in the surroundings. A first-person perspective better matches this game characteristic and, thus, seems to be more appropriate for this kind of game. For the game \textit{Superhot VR}, the participants were able to perceive the player's actions and interactions with objects equally well in both views, and preferences were less clearly distributed. In contrast to \textit{Stand Out}, there is no significant difference in the IMI enjoyment subscale: both perspectives induced similar levels of enjoyment. Since entertainment was the most commonly mentioned motivator to watch VR game videos, we can assume that at least some participants preferred the third-person view in \textit{Superhot VR} for enjoyment reasons. Nevertheless, many participants still disliked the third-person perspective due to the feeling of having a limited view and missing important game events. Comments of some participants point towards a possible explanation: these viewers explicitly stated problems with situations where the player reacted to opponents that were not visible on the screen. This issue seems similar to the problems reported for the third-person perspective in \textit{Stand Out}. Yet, the problem is less prominent in \textit{Superhot VR} and only applies to certain situations.
In contrast to \textit{Stand Out}, \textit{Superhot VR} also contains important game events that are directly centered around the player, such as dodging attacks in slow-motion. Such events might account for the fact that still 27\% of participants preferred the third-person perspective, which offers a good view of the player. We assume that a more dynamic third-person camera could reduce the issue of the limited view and increase approval of the third-person view to a certain extent. The most inconclusive results are the ones for the game \textit{Beat Saber}. In this case, our sample shows no clear preference for one perspective. Regarding the spectator experience, only involvement---measuring the feeling of being part of the game---was rated higher for the first-person view. All other subscales, namely enjoyment, comprehensibility, and seeing everything that is important, do not indicate any difference between the two perspectives. These results indicate that the third-person view has advantages for this particular game and that the first-person alternative is not preferable in every case. \textit{Beat Saber} seems to be more suitable for the third-person perspective than the other two games. Considering the identical study conditions and similar audiences, the reasons for the measured differences between all three games have to reside within the particular game characteristics. Our results indicate that the focus on the player's bodily interaction in the third-person perspective is more compelling for spectators of \textit{Beat Saber} than for viewers of the other two games. In contrast to \textit{Stand Out} and \textit{Superhot VR}, \textit{Beat Saber} requires very fast and coordinated movements of the players. All relevant game events (approaching blocks and hits of the player) are tightly coupled to these movements both temporally and visually.
Watching this type of experience is likely more interesting if the viewers can see the players and their movements, as they have an immediate effect on the gameplay. Spectators of the other two games might prefer the first-person perspective, because the player's bodily interaction looks less intriguing and, hence, a clear focus on the in-game events is more interesting. In addition, the overall high pace of the players' movements in \textit{Beat Saber} makes it hard for new viewers to understand and follow the gameplay. In this case, the third-person perspective could help the audience to gain a better understanding of the game and its goals. For the other two games, it is more important to see the players' view and their interactions with the weaponry to understand the overall gameplay and the players' strategies. Moreover, we assume that the fixed viewing direction of \textit{Beat Saber} contributes to the success of the third-person view. During the game, the player's view is mostly fixed in one direction, which makes it easy to align the third-person camera with the main course of action. As a result, the viewers' impression of missing essential aspects is reduced. For comparison, \textit{Superhot VR} features a more dynamic environment where enemies approach the player from multiple directions. \textit{Stand Out} provides the most dynamic locomotion system that combines virtual motion and rotation with real movements. Additionally, it relies heavily on long travel distances. In summary, we assume that the key difference between the three games is the visual coupling of the main game actions and the player's position and movement. In games like \textit{Beat Saber} all relevant game events are centered directly around the player and, thus, emphasized by the third-person-perspective. In games like \textit{Stand Out} most events dynamically evolve in the wider surroundings. 
In the latter case, the first-person perspective is more appropriate, as it better guides the focus of the spectator towards the important game events. Despite the discussed reasons that explain the usefulness of the third-person perspective for games like \textit{Beat Saber}, a considerable number of our participants still favored the first-person view for this game. This preference hints at certain desires of the spectators---such as experiencing the game from the player's view---that require a first-person perspective and are less linked to characteristics of the game. This finding is especially interesting considering that using a mixed-reality third-person view is a widespread approach in current \textit{Beat Saber} videos and streams. \subsection{Distilling the Perceived Strengths and Weaknesses of Both Perspectives} Our analysis of the three VR games has shown that certain game characteristics seem to influence the suitability of the two different streaming perspectives. However, we also consider spectators' personal preferences and motives to be a relevant factor. The UG results revealed two primary motivators of our participants for watching VR game videos: entertainment and information seeking. This finding fits the most frequently mentioned reasons for choosing one perspective over the other: involvement and comprehensibility. Whereas we could not identify significant correlations between participants' general motives and the perspective they preferred, our thematic analysis of participants' reasons to prefer a certain perspective further helps to understand the perceived strengths and weaknesses of both views. The first-person view is preferred by spectators who want to feel like they are playing the game themselves and who want to see \textit{"through the player's eyes"}. Some participants explicitly mentioned a \textit{"preference for the original perspective"}.
An increased involvement was the most prevalent reason of our participants to prefer the first-person perspective. This finding of the thematic analysis is underlined by our questions concerning the feeling of \textit{"being part of the game"}. For all three games, participants felt significantly more as a part of the game in the first-person view. Hence, the first-person perspective fosters immersive experiences for spectators better than the third-person view. On the downside, some participants indicated that \textit{"seeing through the player's eyes"} made them dizzy and motion sick. In these cases, participants preferred the third-person view, which seems to be less prone to motion sickness. Interestingly, a better \textit{"comprehensibility"} is mentioned as a perceived advantage for both views. Participants disagreed which perspective provides a better understanding. This feedback might be the result of the different foci of both perspectives: In the first-person perspective, viewers experience the game exactly as it would look when playing it. Hence, some participants might have the feeling that this view provides a better overall impression of the game. In the third-person view, the spectators see the player's movements and the resulting actions in the virtual environment. They might feel that this matching between manipulations and effects provides a better understanding. In this case, the focus does not lie on the original perspective and game events, but on the player and their interaction with the game. We can summarize the feedback of our participants into three main perceived advantages of the third-person perspective: (1) providing an entertaining experience by showing the player in action, (2) giving a good impression of the VR experience by revealing the player's full-body movement and the relation between the player's manipulations and the effects in the game world, and (3) avoiding motion sickness.
Consequently, the mixed-reality third-person approach is particularly promising for games in which the player performs interesting, distinctive movements in real life. However, the overall preference for the first-person view and the aforementioned concerns demonstrate that the third-person perspective introduces challenges that need to be taken into account. It is essential that the spectator's view is not noticeably limited: spectators must not feel that they miss important events due to a static third-person view or that the player's body might cover significant parts of the environment. Besides the more immersive experience, these were the most prevalent reasons speaking in favor of the first-person alternative. Hence, the chosen point of view in the third-person perspective must be considered carefully. Especially in the case of dynamic viewing directions, content creators should consider integrating more dynamic solutions, such as aligning the third-person camera with the player's head rotations. \subsection{Limitations and Future Work} Our work presents the first step towards a better understanding of the preferences and experiences of VR game spectators with regard to the streaming perspective. While the study provides valuable insights, there are also some limitations leading to the need for further research. First of all, we point out that our choice of games does not represent the full VR gaming landscape. Hence, our results are limited to comparable game content. We will consider other genres (e.g., RPGs like \textit{Asgard's Wrath}~\cite{AsgardsWrath}) in the future to see which features of the two perspectives become particularly prevalent in other scenarios. Furthermore, our study does not take into account other common streaming approaches that integrate the player into the stream in other ways, such as picture-in-picture modes.
While this is a limitation of our current work and should be considered in the future, our focus on the comparison of the first-person and the third-person perspectives promotes our understanding of spectators' basic preferences and desires. Another limitation concerns the concrete implementation of the third-person perspective used in our study, in particular regarding the game \textit{Superhot VR}. As already explained in Section~\ref{sec:implementation}, the static mixed-reality approach is just one possibility to create a third-person view. Other possibilities include the use of a dynamic camera and the replacement of the real player footage by a virtually created avatar. Some of the perceived shortcomings of the third-person view might diminish when using a different method, in particular a dynamic camera. For instance, the missing focus on current game events or the occlusion of relevant game objects can be reduced by automatically adapting the camera to the player's viewing direction or by enabling spectators to control the viewing angle. While we are convinced that our choice of a static mixed-reality view is appropriate to investigate basic differences between a focus on the game (first-person) and a focus on the player (third-person), future studies with alternative implementations such as using a dynamic camera or a virtual avatar should complement and refine the findings. Our design decisions regarding the concrete positions of the static third-person viewpoint in the three games (i.e., the position of the spectator's camera) might have influenced the viewing experience, as well. During the design process, we experienced that some positions are better suited than others, though no position seems to be optimal in every game situation, because a static view does not adapt during the course of the game. Hence, we informally tested different positions while implementing the third-person views to find an appropriate position for each game.
Another important constituent of the third-person perspective is the player's persona. We used the same player in all our videos to preserve comparability. Nevertheless, the specific choice introduces possible effects arising from participants' personal preferences regarding gender or appearance of the player. Hence, future studies should include other types of players to investigate potential impacts. Furthermore, preferences may change if the viewers have some kind of relationship with the content producer, for instance, if the player is their favorite streamer. One participant explicitly stated: \textit{"If it's my usual go-to streamer on Twitch, then I would probably like the third-person perspective better because it'd be funnier."}. In such cases, the focus of interest is more on the player and less on the game, which is much better supported by a mixed-reality perspective. Previous research also indicates that the social interaction between the player and the audience can be an important motivator for spectators to follow game streamers~\cite{hamilton2014streaming, sjoblom2017people}. Particularly in a live streaming context, viewers' social motives become more prevalent, as they have the possibility to interact with the streamer or other viewers while the action takes place. In our study, we presented prerecorded videos and decided to not include social features (i.e., the player did not speak to the audience) to reduce the potential interference effects caused by our specific player. This approach increased the controllability of the study procedure, but limits the direct transferability of our results to live streaming. We assume that the pros and cons of the different perspectives found in our study also apply to live streaming contexts and that our results can also inform live streaming design choices. 
For instance, streamers using a first-person perspective can further increase comprehensibility by verbally describing which movements they are performing (because these are not visible to the audience). However, our study does not provide direct indications on how the different perspectives support or interfere with the viewers' need for social interaction. It might become more important to see the player, as visual cues are a central aspect in human communication. On the other hand, a first-person view might provide a closer connection to the player, because this perspective fosters a shared focus and attentional allocation. Future research is needed to test such assumptions. Hence, as a complement to our current work, we recommend conducting in-the-wild studies on streaming platforms with actual streamers and their audiences to capture this important social aspect with regard to the preferences of different perspectives. This would also enable a more sophisticated investigation of the correlations between spectators' motives to watch a certain VR game stream and their preferred view. Another interesting direction for future research in the area of VR spectatorship would be to investigate the experience of spectating VR game streams using HMDs. If the viewer is equipped with an immersive HMD, there are different possibilities to present VR content and the viewing experience will probably differ from 2D displays. \section{Conclusion} Delivering the highly immersive experience of VR games to a broad audience via common 2D video streams is a challenge for VR content providers, such as streamers, advertisers, and game developers. This work offers support by giving advice on the choice of an appropriate spectator perspective to foster a positive viewing experience.
Based on our study results, we identified two key factors that need to be considered when deciding between a first-person and a mixed-reality third-person perspective: first, the characteristics of the game, in particular the location of game events in relation to the player's position; and second, the motives and expectations of the audience. While the first-person perspective puts the focus on the game and resembles the player's view, the mixed-reality third-person perspective shifts the focus to the player and the player-game interaction. For games in which most game events evolve directly around the player, the third-person perspective provides viewers with unique insights by revealing the player's real movements and their effects in the game world. This positive effect of the third-person perspective particularly applies to games that require the player to perform interesting, distinctive movements. In contrast, if the main game action is distributed over the game environment and not centered around the player, a first-person perspective is more appropriate due to its immersive quality and a clear focus on relevant game events. Apart from the game characteristics, content providers should also consider their audience. If spectators are supposed to be mainly interested in gaining an impression of the game and less in the player's persona, the first-person view provides the desired information better than the third-person perspective. On the other hand, if spectators have a keen interest in a specific streamer, their preference might be biased towards a third-person view, which highlights this person. This work presents the first step towards a comprehensive VR streaming guideline. As discussed above, there are some limitations, and follow-up studies with more VR games and different settings are needed to extend our current knowledge.
In particular, other implementations of the third-person perspective, for instance with a dynamic camera, need to be investigated to test our hypothesis about the importance of player centricity and visual coupling. Our work paves the way for further research on the spectators' experiences and expectations in the context of VR content. In the long term, understanding how different perspectives contribute to different demands of VR spectators will foster informed design decisions in diverse application areas such as game streaming, VR training and mixed-reality multi-user scenarios. \balance \bibliographystyle{ACM-Reference-Format} \section{Introduction} Ever since the establishment of live streaming platforms such as Twitch~\cite{twitch}, watching others play has become a popular spare-time activity across all ages. The digital audience tunes in for various reasons, be it to follow an esports tournament or to check out a newly released, trending game. This curiosity also applies to recent virtual reality (VR) titles (cf. \textit{Half-Life: Alyx}~\cite{Alyx}), forcing streamers to adapt their content creation and delivery pipeline to the specifics of VR. The main difference is that VR games heavily rely on the high degree of immersion provided by such stereoscopic setups. The player experiences a feeling of being in the virtual world, which is usually achieved by the head-orientation-dependent view in combination with realistic full-body interactions. Clearly, that impression cannot be easily transferred to the audience, because most spectators use 2D displays, e.g., mobile devices or PC/TV screens. Hence, streamers seek viable non-VR workarounds to deliver the immersive VR gaming content. One prominent way to convey the experienced presence to the audience is to provide a mixed-reality view of the player/streamer. By switching to a third-person perspective, the player blends into the surrounding virtual environment.
The spectators can see the player's full-body movements and interactions in context, enabling a better understanding of the actual gameplay experience (see Figure~\ref{fig:teaser}). On the other hand, the traditional first-person perspective offers a unique advantage: seeing the game from the player's perspective brings the spectators as close to the in-game action as possible. Due to the same point of view, the spectators obtain identical visual information. This similar visual perception of the virtual world can potentially evoke the viewers' feeling of playing on their own. So, which perspective is best for the streaming of VR games? Given the aforementioned conceptual differences and the fact that both perspectives are widely used and accepted, we do not expect a definite answer to this question. The right choice of perspective seems to depend on different contextual factors, such as the type of the game or the purpose of watching. Hence, there is a need to study spectators' preferences and motives in different contexts to support the creation of compelling audience experiences. With our work, we lay the foundations for research on spectator experiences and perspectives in VR settings. The choice of perspective significantly frames the viewing experience. While researchers agree that immersion is an important part of interactive VR experiences, it remains unclear how important immersion is for spectators watching VR players compared to other factors such as contextual understanding and player centricity. Aiming towards a comprehensive VR streaming guideline, our work is the first to contribute relevant insights into spectators' opinions. We present an online survey (\textit{N}~=~217), which covered three different VR games: \textit{Beat Saber}~\cite{BeatSaber}, \textit{Superhot VR}~\cite{Superhot}, and \textit{Stand Out: VR Battle Royale}~\cite{StandOut}.
For each game, the spectators watched a first-person and a third-person video and finally shared their impressions. The results obtained in this way allow us to discuss each perspective's particular strengths and weaknesses and to formulate preliminary design considerations, which are meant to provide a starting point for VR content creators. We have to understand how different perspectives (such as first-person and third-person) contribute to different demands of spectators to be able to make informed design choices. This research is not only relevant in the context of game streaming. It also applies to related VR setups that include some kind of spectator. In particular, the choice of perspective is important in multi-user scenarios that combine VR and non-VR users. Example scenarios include VR training applications (surgical training, rehabilitation games) where the perspective choice is crucial for supervisors to be able to adequately monitor and evaluate the trainee's performance. Hence, apart from giving practical advice to VR streamers and content providers, our work creates the basis for more sophisticated choices of perspective and paves the way for future research on audience experiences in VR. \section{Related Work} Today, online video and streaming platforms such as YouTube~\cite{youtube} and Twitch~\cite{twitch} enable a globally distributed audience to watch their favorite players and games at any time. Former consumers can now easily produce user-generated content (UGC) and are challenging the traditional media~\cite{cha2007tube}. So-called \textit{Let's Play} videos have become increasingly popular~\cite{glas2015vicarious} and game live streaming has become a cultural phenomenon comparable to sports events~\cite{hamilton2014streaming, pires2015youtube, smith2013live}.
Apart from casual gaming videos, competitive gaming events, commonly referred to as esports~\cite{hamari2017esports, rambusch2017pre}, are taking a growing share of the overall streaming landscape~\cite{cryan2014esports}. Considering the overall popularity of game streaming, the motives and experiences of spectators have been of ongoing interest to the games user research community~\cite{kappen2014engaged, taylor2016play, wehbe2015towards}. Instead of just focusing on the experience of the active player, a variety of work has broadened the research scope by explicitly investigating the spectator experience~\cite{carlsson2015designing, drucker2003spectator, frome2004helpless, maurer2015gaze, tekin2017ways} and the motivations for viewers to spend their free time watching others play~\cite{downs2014audience, hamilton2014streaming, kaytoue2012watch}. The spectator's experience is influenced by the game content, but also by the spectator interface~\cite{reeves2005designing}, that is, the available information and the perspective from which they view both the game and the player. Moreover, spectators---no matter if co-located or mediated---are part of the social play setting and often engage in some form of interaction with the player, which further shapes their experience~\cite{Emmerich.2019, tekin2017ways, voida2009wii, hamilton2014streaming}. This social interaction is also a strong motivator for watching game streams~\cite{hamilton2014streaming}. Other vital factors are enjoyment, information seeking, and distraction~\cite{cheung2011starcraft, hamilton2014streaming, kaytoue2012watch}. For esports, research has revealed two additional motivators: the general competitive atmosphere and the opportunity to share emotional connections~\cite{hamari2017esports, lee2011comparison, shaw2014sport, weiss2013virtual, weiss2011fulfilling}. Hence, the reasons why spectators watch Let's Play videos and game live streams seem manifold.
A commonly used framework to assess the motivation behind media usage is the \textit{uses and gratification} (UG) model~\cite{katz1973uses, katz1973use, rubin2009uses, ruggiero2000uses}. UG is based on the assumption that users actively choose certain media with the motivation to achieve a particular gratification. The available media has to compete constantly with other sources of gratification, and personal reasoning is considered individually for every user. UG typically classifies the user needs into the categories \textit{cognitive, affective, personal integrative, social integrative, and tension release}. In an empirical study, Sjöblom et al.~\cite{sjoblom2017people} revealed that all five classes of gratification are associated with the motivations of twitch users watching game live streams. The previous research on game videos and streams is mainly based on the footage of common, non-VR games. VR games have just recently gained a foothold in the consumer market. New hardware and the release of sophisticated AAA-games, such as \textit{Half-Life: Alyx}~\cite{Alyx} and \textit{Asgard's Wrath}~\cite{AsgardsWrath}, have led to regular media coverage and an increased interest of the broad gamer community. In contrast to non-VR games, the special setup of VR games including head-mounted displays (HMDs) and movement tracking makes it more challenging to convey the entire immersive experience to spectators. In particular, the player's body movements used to control the game become an integral part of gameplay. Research dealing with the audience of VR games remains sparse and is mainly focused on local spectatorship~\cite{gugenheimer2017sharevr, hartmannRealityCheck, jones2014roomalive, welsfordAsymmetric, krekhov2020silhouette}. Hence, the question remains how the game experience can best be delivered in videos and streams to a broad online audience.
One possibility is to use the same approach as with non-VR games: players directly broadcast the game view that is displayed on the HMD, so that the audience sees the same as the player. This first-person perspective ensures that the spectator's focus always matches the player's current focus and that the spectators see the game world as if they were playing themselves. Research on different player perspectives indicates that a first-person view makes it easier for players to focus on the action and benefits immersion~\cite{denisova2015first, voorhees2012guns}. These effects might also apply to the spectator's perspective. On the other hand, a first-person streaming perspective does not show the player's bodily interaction with the VR game. This might impede a full understanding of what is happening, as the manipulations conducted by the player are partly hidden from the spectator and only the effects in the game are revealed~\cite{reeves2005designing}. Tekin and Reeves~\cite{tekin2017ways} point out that seeing the game on screen and at the same time seeing the player's bodily actions---resulting in a ``dual vision''---are important parts of spectating. Therefore, many VR game streamers follow a different approach by providing a third-person perspective: they use a green screen and external cameras to blend themselves into the virtual world (similar to the original trailer of the HTC Vive~\cite{viveTrailer}). This mixed-reality perspective enables spectators to see the game world and the player at the same time, and might thus enhance their experience. At the same time, this approach creates a mismatch between the spectator's view and the player's view. Consequently, the third-person perspective underlines the difference between player and spectator and shifts the focus of spectating from events in the game world to the player.
As both perspectives seem to have advantages and shortcomings, the question arises how spectators experience and evaluate the different views and which perspective is superior in certain contexts. To date, there is no other work investigating this open research question. While there are some related studies on the use of different user perspectives in VR environments~\cite{cmentowski2019outstanding, gorisse2017first, salamin2006benefits, slater2010first}, these are not directly applicable to the experience of spectators. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/beat_saber.jpg} \caption{\textit{Beat Saber}~\cite{BeatSaber} is one of the three games used in the online survey. Participants compared two perspectives: first-person (top) and third-person view (bottom).} \label{fig:beatsaber} \Description[Comparison of the first-person and the third-person perspective for the game Beat Saber]{Two in-game screenshots of the game Beat Saber show the two video perspectives used in the survey. The upper screenshot shows the game world from the first-person perspective, depicting colored blocks approaching the camera. The screenshot below shows the same game scene from a different angle and additionally includes a recording of the real player who is blended into the game environment, swinging two lightsabers.} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/superhot.jpg} \caption{\textit{Superhot VR}~\cite{Superhot} is one of the three games used in the online survey. Participants compared two perspectives: first-person (top) and third-person view (bottom).} \label{fig:superhot} \Description[Comparison of the first-person and the third-person perspective for the game Superhot VR]{Two in-game screenshots of the game Superhot VR show the two video perspectives used in the survey. 
The upper screenshot shows the game world from the first-person perspective, depicting a kitchen room with two enemies and several objects such as weapons and pans. The screenshot below shows the same game scene from a different angle and additionally includes a recording of the real player who is blended into the game environment.} \end{figure} \section{Online Survey} We conducted an online survey to assess spectators' preferences and opinions on the different perspectives in VR game videos. More precisely, we compared the first-person perspective, which directly displays the in-game view of the player, with a mixed-reality third-person perspective, in which the player is captured and directly cut into the game world (see Figure \ref{fig:teaser}). The goal of the study was to gain insights about the advantages and disadvantages of both approaches regarding different aspects of the viewing experience such as comprehensibility, entertainment, and involvement. Hence, our main research question is how spectators experience the two perspectives, which differences can be found, and which perspective is preferable in certain settings. \subsection{Selection of Three Exemplary VR Games} We decided to compare the two perspectives using different commercial VR games, as viewers' preferences and experiences may also depend on certain characteristics of the game. Our game selection process was based on several criteria. First, the games should be popular and positively rated, to ensure that they provide an interesting experience and successfully make use of VR headsets. Second, the games had to support the software tool LIV~\cite{LIV}, which enabled us to create the mixed reality third-person perspective. Finally, the games should represent different game genres, which feature different core mechanics and controls. 
Following these criteria, we reviewed the rankings of current VR games on the online gaming platform Steam~\cite{steam} and analyzed viewer numbers on Twitch to identify popular games. We chose three games that match all criteria: \textit{Beat Saber}~\cite{BeatSaber}, \textit{Superhot VR}~\cite{Superhot}, and \textit{Stand Out: VR Battle Royale}~\cite{StandOut} (hereafter abbreviated as \textit{Stand Out}). All games had more than 25,000 peak viewers on Twitch and more than 1,000 mainly positive reviews on Steam, indicating their popularity. \textit{Beat Saber} is a music-based VR game. The player chooses a song and then swings two colored lightsabers to cut blocks of the same color, which represent the beats of the music and quickly approach the player (see Figure~\ref{fig:beatsaber}). Hence, the main focus of the game is on the player's quick gestural reactions to the fast-paced blocks. There is a direct mapping between the player's hand movement and the movement of the lightsabers in the game. Apart from single steps to the side to avoid an obstacle wall, no locomotion is needed. As the blocks always approach on fixed paths in front of the player, the view orientation in \textit{Beat Saber} is rather fixed. We chose this game due to its remarkably high popularity, and because in current \textit{Beat Saber} streams, both perspectives we want to compare (first-person and third-person mixed reality) are commonly used. In \textit{Superhot VR}, the player has to complete short levels by destroying all enemies and dodging their attacks (see Figure~\ref{fig:superhot}). For this purpose, the player can use various objects lying around, such as pistols and bottles. The unique twist of this game is that time progresses only at the speed at which the player moves. That means if the player moves slowly, the enemies also move slowly, and vice versa. This way, the player has to consider every movement, resulting in rather slow-paced gameplay.
Though the player can move in room scale, the enemies then approach quickly. So in most levels, there is not much locomotion happening, and the focus is on the opponents. However, enemies approach from different sides, so that the perspective is not fixed, in contrast to \textit{Beat Saber}. \textit{Stand Out} is a first-person shooter in which the player plays online against a large group of other players (see Figure~\ref{fig:standout}). Following the battle royale principle, the goal is to be the last survivor on the island where the game takes place. To win, the player has to collect weapons and ammunition, shoot other players, and move across the island. The player can travel larger distances using the control stick. Hence, in contrast to the other two games, locomotion is very prevalent in \textit{Stand Out}. As in other first-person shooters, the gameplay is rather fast-paced in general. The focus of attention is very dynamic, since the player has to react quickly in case of an attack. All in all, the three games differ mainly concerning pace, focus, and locomotion. We assume that these characteristics can potentially influence the spectators' experience in the two perspectives under investigation, as these provide different main focal points. For instance, quick game events might be more comprehensible for spectators if they see both the player and the game world in the third-person mixed-reality perspective. On the other hand, the first-person perspective might be more suitable for games with a dynamic focus and much locomotion. Therefore, we included all three games in both perspectives in our study to investigate potential differences. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/standout.jpg} \caption{\textit{Stand Out: VR Battle Royale}~\cite{StandOut} is one of the three games used in the survey.
Participants compared two perspectives: first-person (top) and third-person view (bottom).} \label{fig:standout} \Description[Comparison of the first-person and the third-person perspective for the game Stand Out: VR Battle Royale]{Two in-game screenshots of the game Stand Out: VR Battle Royale show the two video perspectives used in the survey. The upper screenshot shows the game world from the first-person perspective, depicting a room with a table and an open door that offers a view of a wide outdoor area. The screenshot below shows the same game scene, but additionally includes a recording of the real player who is blended into the game environment.} \end{figure} \subsection{Implementation of the Different Perspectives} \label{sec:implementation} Overall, there are several possibilities to compile a stream of a VR gaming session. We decided to compare two basic approaches which are commonly used by streamers and at the same time differ significantly regarding their main focal point: the first-person view and a mixed-reality third-person perspective. While some streamers also use a combination of different views by compiling picture-in-picture modes, we focus on the two main approaches, as we are particularly interested in how spectators evaluate the possibility of seeing the player integrated into the game world in the third-person view and the lack thereof in the first-person view. As stated above, we recorded two gameplay videos for each of the three games: one with the first-person perspective and one with the third-person perspective. In all cases, the same player (male, 26 years old) played the game, and all videos are about three minutes long. The first-person view was simply a screen recording of the game from the player's point of view. To create the third-person mixed-reality views, we used the software \textit{LIV}~\cite{LIV}, which allows integrating a green-screen recording of the player into the game world.
We used an additional, static game camera to capture the game scene from behind the player (cf. Figures~\ref{fig:beatsaber}, \ref{fig:superhot}, and \ref{fig:standout}). We also considered rotating the mixed-reality camera dynamically based on the player's actions. While this approach is technically possible using \textit{LIV}, it requires a far more sophisticated setup. Since a dynamic camera was not required to address our main research question and since we wanted to stick to the most commonly used techniques in the gaming community, we discarded this option. Instead, we tested different positions while implementing the third-person views to find an appropriate static camera position for each game. \textit{LIV} also makes it possible to replace the real player with an avatar model, a feature used by some streamers as well. However, such a third-party avatar is not visually matched to the game and thus introduces an additional source of interference. While the real player's appearance also mismatches the game world, a mixed-reality view best reveals the manipulations conducted by the player in the direct context of the game. For these reasons, we decided not to use a virtual avatar. \subsection{Study Plan and Survey Structure} We conducted a mixed-design online study with the game shown in the videos as a between-subjects variable and the video perspective as a within-subjects variable. That means each participant was randomly assigned to one game and watched both videos of that game. The order of the two perspectives was counterbalanced to avoid bias due to potential sequence effects. The survey started with a short introduction, informing participants about the goal, procedure, and anonymity of the study. Then we asked for basic demographic data, including age, gender, and nationality. Additionally, we requested some information about participants' familiarity with VR headsets and VR games, as well as their digital gaming and streaming habits.
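To make the mixed design concrete, the assignment scheme described above (game as a randomly assigned between-subjects factor, perspective order counterbalanced within subjects) can be sketched as follows. The names and the alternation rule are illustrative assumptions, not the authors' actual survey tooling:

```python
import random

# Illustrative condition labels; the real survey platform is not described in code form.
GAMES = ["Beat Saber", "Superhot VR", "Stand Out"]
ORDERS = [("first-person", "third-person"), ("third-person", "first-person")]

def assign_condition(participant_id: int, rng: random.Random) -> dict:
    """Between-subjects: one random game per participant.
    Within-subjects: both perspectives, with presentation order
    counterbalanced by alternating over participants."""
    return {
        "game": rng.choice(GAMES),
        "order": ORDERS[participant_id % len(ORDERS)],
    }

rng = random.Random(0)
conditions = [assign_condition(i, rng) for i in range(4)]
for c in conditions:
    print(c["game"], "->", " then ".join(c["order"]))
```

Alternating the order by participant index guarantees an exactly balanced split of the two presentation orders even for small samples, whereas purely random order assignment would only balance in expectation.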
As we were also interested in viewers' general motivations to watch videos or streams of VR games, we compiled a list of possible motives based on the uses and gratifications theory. More precisely, we derived our items from the work of Sjöblom et al.~\cite{sjoblom2017people}, who investigated the motivation of Twitch users. Although this motivational model does not explicitly refer to VR streaming content, we believe that the general types of viewer motives are largely independent of the platform used by the streamer (including VR setups). Hence, we think the model includes all high-level motivations that are relevant in the context of our study. The question and the final list of answers can be found in Table~\ref{tab:UGOverview}. Participants were asked to select all reasons that apply (multiple answers were possible). We also included the option \textit{"None (I would not watch a video of a VR game)"} to be able to identify participants having no interest in the study's topic.
\begin{table*}[ht] \caption{Overview of the different motivational aspects related to the viewing of VR game videos (participants were asked to select all answers that apply).} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{3.2cm}>{\raggedright\arraybackslash}X >{\centering\arraybackslash}p{1cm}} \toprule \addlinespace \textbf{Class of Gratification (based on Sjöblom et al.~\cite{sjoblom2017people})} & \textbf{Which of the following reasons could motivate you to watch a video where a player is playing a VR game?} & \textbf{Votes} (\textit{N}=217) \\ \midrule \addlinespace cognitive & to inform me about the game or to get an impression of it (\textit{information seeking}) & 113 \\ cognitive & to learn new game strategies or how to master the game (\textit{learning game strategies}) & \ 84 \\ affective & because it is entertaining and/or exciting (\textit{enjoyment}) & 119 \\ personal integrative & to be able to comment and have a say (\textit{recognition}) & \ 24 \\ social integrative & in order not to feel alone (\textit{companionship}) & \ 26 \\ tension release & to distract me and pass the time (\textit{distraction}) & \ 62 \\ tension release & in order to relax (\textit{relaxation}) & \ 46 \\ \bottomrule \end{tabularx} \label{tab:UGOverview} \Description[Cognitive and affective gratifications are voted most often]{Table 1 gives an overview of the different motivational aspects related to the viewing of VR game videos. The most prominent gratifications relate to the categories cognitive (information seeking and learning game strategies) and affective (enjoyment).} \end{table*} Following this first, general part of the questionnaire, we asked participants to ensure that their speakers or headphones were active so that they could hear the sound of the videos, and then showed them the first video. To verify that the video was not fast-forwarded or skipped, we measured the time participants spent with the video.
This way, we were able to identify participants who skipped (parts of) the videos and label their data as invalid. After the video, we administered the enjoyment subscale of the Intrinsic Motivation Inventory (IMI)~\cite{ryan2000self} to assess how much participants enjoyed watching the video. To further investigate the viewing experience, we asked additional custom questions about the view and the comprehensibility of the video, as well as the perceived involvement. The full list of questions can be found in Table~\ref{tab:ViewExperience}. Then the second video was shown, again followed by the IMI and the custom questions. Afterwards, we asked whether participants knew or had played the game shown in the videos before, how much they liked the game, and how much they liked the genre it belongs to in general. Finally, participants were asked which of the two perspectives they preferred. There was also the option to indicate that they did not have a preference. In a free-text form, we asked participants to give reasons for their decision. Moreover, participants could provide any additional notes. Considering that we cannot completely control the setting and conditions under which participants take part in an online study, we increased the validity of the data by including sanity check questions. For this purpose, we asked the same question twice with reversed scales to ensure that participants had read the question text and did not select random answers. \subsection{Recruitment and Sample} We were interested in the opinion of potential spectators of VR game videos and aimed at improving their viewing experience. Thus, we defined all persons who have at least some interest in VR technology and digital games as our target group, with no further restrictions regarding demographic data or prior experience with VR. We promoted the survey on different online channels, both in English and in German.
That included several Reddit communities and Facebook groups related to the topics game streaming or VR games. Moreover, we also used more general groups that are aimed at the recruitment of online survey participants. In total, 316 participants completed the survey. Sixty-nine of these cases had to be excluded from the analysis because participants failed the sanity check questions or did not watch the videos completely. Moreover, we excluded 30 additional participants, who stated that they had no interest in VR games and would never view videos of such games voluntarily. Those participants do not match our target group. Hence, our final sample contains 217 participants. The sample includes a wide variety of nationalities (27 different countries), with 63 German and 77 American participants being the majority. The mean age of participants was 28 (\textit{SD}~=~8.69), with a range from 16 to 64. Regarding gender, the sample included 125 male and 92 female participants. About three-quarters of all participants (\textit{N}~=~172) reported that they regularly played digital games. Many participants also had prior experience with VR games, with only 58 persons stating that they had not yet used a VR headset. Regarding the question of how often they watched gaming videos/streams on average, most participants (\textit{N}~=~189) reported that they watched game streams at least once a month. Concerning our three game subgroups, the distribution is a bit uneven: 89 participants viewed the videos of \textit{Superhot VR}, 67 participants viewed \textit{Beat Saber}, and 61 \textit{Stand Out}. However, the distribution of age, gender, and nationality is comparable among the three groups. About two-thirds of the participants in the \textit{Beat Saber} group knew the game before (\textit{N}~=~44) and nearly half of the group had played the game themselves (\textit{N}~=~29).
\textit{Superhot VR} was known to half of the participants (\textit{N}~=~40) and 33 participants had played the game. \textit{Stand Out} was less known by our participants, with only 13 persons being familiar with the game, of whom 8 had played it. Asked how much they liked the game, participants in all three game groups rated the games slightly positively on average on a scale from 0 to 6 (\textit{Beat Saber}: \textit{M}~=~4.19, \textit{SD}~=~1.79; \textit{Superhot VR}: \textit{M}~=~3.53, \textit{SD}~=~2.07; \textit{Stand Out}: \textit{M}~=~3.15, \textit{SD}~=~1.99). Similarly, participants stated that they rather liked the genre of the game they watched in general (\textit{Beat Saber}: \textit{M}~=~4.39, \textit{SD}~=~1.65; \textit{Superhot VR}: \textit{M}~=~3.73, \textit{SD}~=~2.17; \textit{Stand Out}: \textit{M}~=~3.30, \textit{SD}~=~2.13). \section{Results} In the first step of our data analysis, we have a look at participants' general motivation to watch videos of VR games. After that, we address our main research question by comparing participants' evaluations of both video perspectives and their preferences. \subsection{Motivation to Watch VR Game Videos} To examine participants' general motivation to watch VR game videos, we analyzed their answers to the uses and gratifications question. Table~\ref{tab:UGOverview} shows how often participants selected each reason to watch a VR game video. Whereas all different motivations received some votes, the distribution of votes indicates that affective and cognitive gratifications were most prevalent among our participants. The majority of participants would watch videos of VR games if they seek information about the game (\textit{N}~=~113) or because they enjoy watching them and feel entertained (\textit{N}~=~119). Learning about game strategies was also mentioned often (\textit{N}~=~84).
On the other hand, fewer participants see tension release, both in terms of distraction (\textit{N}~=~62) and relaxation (\textit{N}~=~46), as a motivation to watch videos of VR games. Finally, personal and social integrative motives were least prevalent (\textit{N}~=~24 and \textit{N}~=~26). \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/fpstps.jpg} \caption{Distribution of the preferred perspective votes of our participants for the three games \textit{Beat Saber}, \textit{Superhot VR}, and \textit{Stand Out}.} \label{fig:piechart} \Description[Superhot VR and Stand Out show clear preferences for the first-person perspective, whereas for Beat Saber votes are more evenly distributed]{Three pie charts show which perspective was preferred by the participants of our study in the three different games Beat Saber, Superhot VR and Stand Out. There were three possible answers, namely 'first-person', 'third-person', and 'no favorite'. In the Beat Saber condition, the first-person and the third-person perspective received comparable numbers of votes. In contrast, Superhot VR and Stand Out show clear preferences for the first-person perspective.} \end{figure*} \subsection{Evaluation of First- and Third-Person Perspectives} With regard to our main research question, we analyze how participants perceived both perspectives and which one they preferred. Overall, the voting shows a recognizable preference for the first-person perspective: 134 of all 217 participants preferred the first-person perspective, whereas only 60 voted for the third-person perspective. Twenty-three participants stated that they had no favorite view. We performed Pearson chi-square tests to investigate whether there were significant relations between the preferred perspective and certain characteristics of participants that might influence their vote, namely their gender, whether or not they were familiar with the game that was shown, and their general motivations to watch VR game videos.
Regarding gender, vote distribution is very similar between male and female participants, and there is no significant correlation, ${\chi}^2$(2)~=~2.31, \textit{p}~=~.316. Participants' familiarity with the game seemed to have no effect on their voting, either, ${\chi}^2$(2)~=~0.35, \textit{p}~=~.839. To test the influence of different general motivational aspects, we performed chi-square tests for each of the statements shown in Table~\ref{tab:UGOverview}. The results indicate that whether or not participants selected a particular motivation is not related to their preferred perspective, as no significant correlations could be found (all \textit{p}~>~.348). As there might be differences with regard to the three games we tested, we further investigate participants' preferences in the three subgroups for \textit{Beat Saber}, \textit{Superhot VR}, and \textit{Stand Out}. Therefore, we split our data for the following analysis and report results for each game individually. Figure~\ref{fig:piechart} shows participants' preferred perspective in the three study conditions. In line with the overall result, the first-person perspective received the most votes in all cases. However, there is a noticeable difference regarding the distribution of votes: whereas there is a clear preference for the first-person perspective in the \textit{Stand Out} group (48 out of 61 participants) and the \textit{Superhot VR} group (55 out of 89 participants), the votes in the \textit{Beat Saber} group are almost evenly distributed, with 31 participants preferring the first-person perspective and 27 participants preferring the third-person perspective. A chi-square test underlines that the game that was shown had a significant influence on the vote of the preferred perspective, ${\chi}^2$(4)~=~14.48, \textit{p}~=~.006, Cramer's \textit{V}~=~0.183 (no expected cell frequencies were below 5).
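As a quick consistency check on the reported numbers (a minimal sketch; the helper function is ours, not part of the analysis pipeline), Cramer's $V$ follows directly from the chi-square statistic via $V = \sqrt{\chi^2 / (N \cdot (\min(r, c) - 1))}$ for an $r \times c$ contingency table:

```python
import math

def cramers_v(chi2: float, n: int, rows: int, cols: int) -> float:
    """Cramer's V effect size for an r x c chi-square test of independence."""
    return math.sqrt(chi2 / (n * (min(rows, cols) - 1)))

# Reported test: game (3 levels) x preferred perspective (3 levels),
# chi2(4) = 14.48 with N = 217 participants.
v = cramers_v(14.48, 217, rows=3, cols=3)
print(round(v, 3))  # 0.183, matching the reported effect size
```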
Though the effect size is only small (\textit{V}~<~0.3), the result indicates that the two video perspectives were perceived differently in \textit{Beat Saber} than in the other games. To investigate the reasons why participants prefer one view to the other, we compared the viewing experiences between both perspectives and tested for significant differences. Table~\ref{tab:ViewExperience} shows all mean values for both perspectives in the three game conditions. For each game and each dimension of the viewing experience, we performed a repeated measures analysis of variance (ANOVA) with perspective as a within-subjects variable and the order of the two perspectives as a between-subjects factor to test for potential sequence effects. In the following, we report the results of these analyses for each game. In the interest of better legibility, we only report on sequence effects if they are significant. If not mentioned, the analysis did not show a significant interaction effect between the experience dimension and the order of the two perspectives. \begin{table*} \caption{Mean values and standard deviations (\textit{M}(\textit{SD})) of different aspects of the viewing experience in the three study conditions (games) \textit{Beat Saber}, \textit{Superhot VR}, and \textit{Stand Out}, comparing the first-person and the third-person perspectives. Each item was rated on a 7-point scale ranging from 0 to 6.
Significant differences between two perspectives are indicated in bold print.} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{4.5cm} >{\raggedleft\arraybackslash}X >{\raggedright\arraybackslash}X >{\raggedleft\arraybackslash}X >{\raggedright\arraybackslash}X >{\raggedleft\arraybackslash}X >{\raggedright\arraybackslash}X} \toprule \addlinespace & \multicolumn{2}{c}{\textbf{Beat Saber} (\textit{N}=67)} & \multicolumn{2}{c}{\textbf{Superhot VR} (\textit{N}=89)} & \multicolumn{2}{c}{\textbf{Stand Out} (\textit{N}=61)} \\ & 1st person & 3rd person & 1st person & 3rd person & 1st person & 3rd person \\ \midrule \addlinespace \textbf{IMI} \\ \ \ \ \ \ \ Enjoyment & 3.18 (1.47) & 3.26 (1.44) & 3.03 (1.70) & 3.10 (1.64) & \textbf{3.21} (1.52) & \textbf{2.43} (1.73)\\ \addlinespace \textbf{Focus and Clear View} \\ F1) I saw well how the player & 4.24 (1.61) & 4.57 (1.46) & 4.28 (1.60) & 4.10 (1.62) & \textbf{4.54} (1.21) & \textbf{3.74} (1.70)\\ \ \ \ \ \ \ interacted with objects in the \\ \ \ \ \ \ \ game world. \\ F2) While watching, I had the & 2.58 (1.83) & 2.19 (1.78) & \textbf{2.31} (1.97) & \textbf{3.03} (2.07) & \textbf{2.70} (1.80) & \textbf{3.93} (1.89)\\ \ \ \ \ \ \ feeling of missing important \\ \ \ \ \ \ \ things in the game because I \\ \ \ \ \ \ \ couldn't see them. \\ F3) I had a good view of the game & 4.15 (1.49) & 4.24 (1.46) & \textbf{4.18} (1.47) & \textbf{3.65 }(1.72) & \textbf{4.14} (1.38) & \textbf{3.13} (1.94)\\ \ \ \ \ \ \ world. \\ \addlinespace \textbf{Comprehensibility} \\ C1) I always understood what & 4.52 (1.58) & 4.75 (1.47) & \textbf{4.33} (1.43) & \textbf{3.80} (1.63) & \textbf{4.23} (1.33) & \textbf{3.64} (1.89)\\ \ \ \ \ \ \ happened in the game. \\ C2) At any time I could & 3.93 (1.64) & 4.42 (1.63) & 4.17 (1.51) & 4.06 (1.74) & \textbf{4.44} (1.35) & \textbf{3.78} (1.75) \\ \ \ \ \ \ \ comprehend what the player \\ \ \ \ \ \ \ was doing in the VR world. 
\\ C3) I was able to understand how & 4.33 (1.66) & 4.61 (1.59) & \textbf{4.37} (1.58) & \textbf{3.85} (1.70) & \textbf{4.10} (1.47) & \textbf{3.16} (1.91) \\ \ \ \ \ \ \ successful the player was in \\ \ \ \ \ \ \ the game. \\ \addlinespace \textbf{Involvement} \\ I1) \ I felt like being part of the & \textbf{3.06} (1.88) & \textbf{2.46} (1.97) & \textbf{3.28} (1.85) & \textbf{2.35} (1.95) & \textbf{3.67} (1.88) & \textbf{2.54} (2.28)\\ \ \ \ \ \ \ game. \\ I2) \ I saw the virtual world as if I & \textbf{3.54} (1.99) & \textbf{2.70} (1.92) & \textbf{3.35 }(1.85) & \textbf{2.44} (1.89) & \textbf{3.93} (1.83) & \textbf{2.59} (2.03)\\ \ \ \ \ \ \ was there myself. \\ \bottomrule \end{tabularx} \label{tab:ViewExperience} \Description[While there are few significant differences in the viewing experience for the game Beat Saber, every aspect differs significantly between the two perspectives in the game Stand Out]{This table list all mean values and standard deviations of the different aspects of the viewing experience in the three study conditions (games) Beat Saber, Superhot VR, and Stand Out comparing the first-person and the third-person perspectives. Significant differences are highlighted in bold print. While there are few significant differences for the game Beat Saber, every aspect differs significantly between the two perspectives in the game Stand Out.} \end{table*} \subsubsection{Beat Saber} In the \textit{Beat Saber} group, most differences are not significant. Neither enjoyment nor any ratings of focus and clear view and comprehensibility were rated significantly different between the two perspectives (all \textit{p}~>~.05). In contrast, the two questions regarding the perceived involvement of the viewers show significant differences: in the first-person view, participants felt more like being part of the game (I1), \textit{F}(1,~65)~=~6.06, \textit{p}~=~.016, and like being in the virtual world (I2), \textit{F}(1,~65)~=~9.04, \textit{p}~=~.004. 
\subsubsection{Superhot VR} In the \textit{Superhot VR} condition, the repeated measures ANOVA revealed more significant differences. In the first-person perspective, the ratings regarding having a good view of the game world (F3) were higher than in the third-person perspective, \textit{F}(1,~87)~=~5.64, \textit{p}~=~.020. Additionally, the feeling of missing important things (F2) was significantly higher in the third-person perspective, \textit{F}(1,~87)~=~6.93, \textit{p}~=~.010. In terms of comprehensibility, participants had the feeling of significantly better understanding what happened in the game (C1), \textit{F}(1,~87)~=~7.47, \textit{p}~=~.008, and how successful the player was (C3), \textit{F}(1,~87)~=~7.93, \textit{p}~=~.006, in the first-person perspective. Similar to the results in the \textit{Beat Saber} group, both items regarding involvement (I1 and I2) were rated significantly higher in the first-person perspective, \textit{F}(1,~87)~=~17.19, \textit{p}~<~.001 (I1), and \textit{F}(1,~87)~=~15.91, \textit{p}~<~.001 (I2). However, in the \textit{Superhot VR} condition, there was also a significant interaction effect between the ratings for involvement and the order in which the two videos were watched, indicating sequence effects. The rating for being part of the game (I1) was particularly high for the first-person perspective if participants had viewed the third-person perspective video beforehand (\textit{M}~=~3.64 compared to \textit{M}~=~2.88), \textit{F}(1,~87)~=~6.27, \textit{p}~=~.014. The same pattern becomes apparent for the item \textit{"I saw the virtual world as if I was there myself"} (\textit{M}~=~3.98 compared to \textit{M}~=~2.64), \textit{F}(1,~87)~=~13.40, \textit{p}~<~.001. All other differences (IMI, F1, and C2) were not significant (all \textit{p}~>~.05). \subsubsection{Stand Out} In the \textit{Stand Out} group, enjoyment (IMI) was significantly higher in the first-person video, \textit{F}(1,~59)~=~13.68, \textit{p}~<~.001. 
Moreover, participants gave significantly better ratings regarding focus and clear view in the first-person perspective: they better saw how the player interacted with game objects (F1), \textit{F}(1,~59)~=~12.47, \textit{p}~=~.001, and had a better view of the game world (F3), \textit{F}(1,~59)~=~15.53, \textit{p}~<~.001. At the same time, the feeling of missing important things was lower (F2), \textit{F}(1,~59)~=~14.43, \textit{p}~<~.001. The three items regarding comprehensibility were also rated significantly higher in the first-person perspective: participants better understood what happened in the game (C1), \textit{F}(1,~59)~=~4.10, \textit{p}~=~.047, what the player was doing (C2), \textit{F}(1,~59)~=~6.98, \textit{p}~=~.011, and how successful the player was (C3), \textit{F}(1,~59)~=~16.60, \textit{p}~<~.001. In line with the other two groups, involvement (I1 and I2) was significantly higher in the first-person perspective: \textit{F}(1,~59)~=~16.39, \textit{p}~<~.001 (I1), and \textit{F}(1,~59)~=~26.23, \textit{p}~<~.001 (I2). In summary, all aspects of the viewing experience differ significantly between the first- and the third-person perspective in the \textit{Stand Out} group, with the first-person perspective being rated better in all cases. \begin{table*} \caption{Results of the thematic analysis with regard to reasons why participants preferred the first-person perspective. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e.,
how many single answers of participants were assigned to the respective topic.} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{0.1cm}>{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}p{6.7cm}>{\centering\arraybackslash}p{1.4cm}} \toprule \addlinespace \multicolumn{2}{l}{\textbf{Reasons to Prefer the }} & \textbf{Examples} & \textbf{Mentions}\\ \multicolumn{2}{l}{\textbf{First-Person Perspective}} & \\ \midrule \addlinespace \multicolumn{2}{l}{\textbf{Involvement}} & \\ & The viewers felt more immersed, they felt like being part of the game, being in the game world, or being the player. & \textit{I like the first-person perspective because it makes me feel like I'm playing the game, not someone else. It is more entertaining when I feel like I'm part of the game.} & 38\\ \addlinespace \multicolumn{2}{l}{\textbf{Focus}} \\ & The viewers think that the focus was better, because they were able to see all important things and did not miss something outside the viewport. & \textit{It gives me the ability to see the important parts of the game as they happen, rather than being stuck facing one direction, missing details that are behind my point of view.} & 12\\ \addlinespace \multicolumn{2}{l}{\textbf{Comprehensibility}} \\ & The viewers better understood what happened in the game and what the player was doing. & \textit{First person (in this game at least) lets viewers understand what the player is doing.} & 10\\ \addlinespace \multicolumn{2}{l}{\textbf{Obstructive Player in Third Person}} \\ & The player in the third-person view was perceived as obstructive, because he obscured the view on the game world and did not fit to the environment. & \textit{First person allows for better visibility without obstruction by the player.} & 10\\ \addlinespace \multicolumn{2}{l}{\textbf{Original Game Perspective}} \\ & The first-person view corresponds with the original game perspective, hence viewers can better imagine how it would be to play the game. 
& \textit{I don't like the mixed reality view. I want to see exactly what the player sees.} &8 \\ \addlinespace \multicolumn{2}{l}{\textbf{Realism}} \\ & The experience felt more real to viewers. & \textit{Because it looks more real to me.} & 7 \\ \bottomrule \end{tabularx} \label{tab:ThematicAnalysis1st} \Description[Six identified reasons to prefer the first-person perspective: involvement, focus, comprehensibility, obstructive player, original game perspective, and realism]{The table lists the results of the thematic analysis with regard to reasons why participants preferred the first-person perspective. The first column shows the six identified topics involvement, focus, comprehensibility, obstructive player, original game perspective, and realism. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e., how many single answers of participants were assigned to the respective topic.} \end{table*} \begin{table*}[ht] \caption{Results of the thematic analysis with regard to reasons why participants preferred the third-person perspective. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e. how many single answers of participants were assigned to the respective topic.} \begin{tabularx}{\textwidth} {>{\raggedright\arraybackslash}p{0.1cm}>{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}p{6.7cm}>{\centering\arraybackslash}p{1.4cm}} \toprule \addlinespace \multicolumn{2}{l}{\textbf{Reasons to Prefer the }} & \textbf{Examples} & \textbf{Mentions} \\ \multicolumn{2}{l}{\textbf{Third-Person Perspective}} & \\ \midrule \addlinespace \multicolumn{2}{l}{\textbf{Player's Movements}} \\ & To see the player's movement and his interaction with the game world is more entertaining and interesting. 
& \textit{It was more interesting to see how the person was actually moving around and how it looked like he was actually in the game world.} & 19\\ \addlinespace \multicolumn{2}{l}{\textbf{Comprehensibility}} \\ & The viewers better understood what the player was doing. & \textit{It was easier to see what the player was doing in the game world.} & 11\\ \addlinespace \multicolumn{2}{l}{\textbf{Motion Sickness in First Person}} \\ & It was more comfortable, because in the first-person view viewers experienced dizziness or nausea. & \textit{Watching first person made me kind of dizzy so the third-person perspective was more interesting and more comfortable to watch.} & 6\\ \addlinespace \multicolumn{2}{l}{\textbf{View on Game World}} \\ & The viewers feel that they can see more of the game world. & \textit{The third-person perspective gave me a wider view of the world in which the game was taking place.} & 4\\ \bottomrule \end{tabularx} \label{tab:ThematicAnalysis3rd} \Description[Four identified reasons to prefer the third-person perspective: player's movements, comprehensibility, motion sickness, and view on game world]{The table lists the results of the thematic analysis with regard to reasons why participants preferred the third-person perspective. The first column shows the four identified topics player's movements, comprehensibility, motion sickness, and view on game world. The middle column contains exemplary quotes of participants which were assigned to the topics. The right column shows the number of mentions, i.e., how many single answers of participants were assigned to the respective topic.} \end{table*} \subsection{Thematic Analysis: Reasons for Preferred Perspective} To gain further insight into the positive and negative qualities of the two perspectives, we performed a thematic analysis of the free text answers to the question of why participants prefer one perspective to the other. 
For this purpose, two researchers looked at all answers independently and sorted them by recurring topics. We followed a deductive approach based on the reflexive thematic analysis described by Braun and Clarke~\cite{braun2006}, with an additional check of inter-rater agreement. After the first round of clustering, both researchers compared their lists and discussed all differences. Based on the discussion, a final clustering was agreed upon. We identified six clusters that describe reasons why participants preferred the first-person perspective, as shown in Table~\ref{tab:ThematicAnalysis1st}. Many participants (\textit{N}~=~38) highlighted a higher involvement perceived in the first-person perspective. They reported that this perspective made them feel like being part of the game or even being the player themselves. In addition, some participants (\textit{N}~=~12) pointed out that the focus was better in the first-person perspective because they were able to see the important things (such as enemies approaching). Participants also reported that the comprehensibility was higher, as they were better able to follow the game events (\textit{N}~=~10). In the third-person perspective, the player was perceived as an obstacle by some participants (\textit{N}~=~10), covering parts of the game and interfering with immersion. Apart from the higher involvement, some participants (\textit{N}~=~8) also emphasized that they prefer the first-person perspective because it is the \textit{"original game perspective"}. This way, they experience how the game looks to the player and can better imagine how it would feel to play the game. Finally, some participants (\textit{N}~=~7) pointed out that the experienced realism was higher in the first-person perspective. For the third-person perspective, we identified four categories of reasons to prefer it to the first-person perspective, as shown in Table~\ref{tab:ThematicAnalysis3rd}.
The most frequently mentioned reason was that participants (\textit{N}~=~19) liked to see the player and his movements. They reported that it was more interesting and entertaining to focus on the player and to be able to observe the direct interaction between the player and the game's environment. Related to seeing the player's movement, some participants (\textit{N}~=~10) also highlighted that they gained a better understanding of how the game is played and how the interaction works. Hence, they stated that the comprehensibility was better in the third-person perspective. Furthermore, some participants (\textit{N}~=~6) preferred the third-person view, because they experienced some form of motion sickness in the first-person perspective. They reported that it was more comfortable to watch the game in third person. Finally, some participants (\textit{N}~=~4) preferred the third-person perspective, because they felt that it enabled them to see more of the game world. Some of the identified reasons to prefer one perspective to the other were mentioned comparably often in all three game groups. More precisely, participants in each group addressed the topics higher involvement, realism, and the original game perspective of the first-person perspective, as well as less motion sickness and a better view on the game world in the third-person perspective. In contrast, some topics were more prevalent for specific games. The better focus and the better comprehensibility of the first-person perspective were predominantly mentioned by participants who had watched the videos of \textit{Superhot VR}: we counted focus eight times and comprehensibility six times in the \textit{Superhot VR} condition, while both topics appeared only two times in each of the other two conditions. However, it should be noted that the \textit{Superhot VR} group was also bigger than the other two groups (\textit{N}~=~89 vs. 67 and 61), which might account for such differences.
In the \textit{Stand Out} condition, participants complained more about the obstructive player in the third-person perspective (\textit{N}~=~8) than participants in the other two groups. Moreover, two reasons to prefer the third-person perspective---seeing the player's movements and comprehensibility---were mentioned for both \textit{Beat Saber} and \textit{Superhot VR}, but not for the game \textit{Stand Out}. Even though the \textit{Stand Out} group was a bit smaller than the other two study groups, the difference is still noticeable. \section{Discussion} We observed an overall preference for the first-person perspective in our study. However, we found significant differences between the three games. This result confirms our assumption that the choice of an appropriate perspective is dependent on the particular game. Moreover, the perceived benefits and shortcomings of both perspectives as reported by our participants indicate that personal preferences and the motivation of the viewer also play an important role. \subsection{Influence of Game Characteristics on Perspective Preferences} We received the most homogeneous feedback for the game \textit{Stand Out}. Very few participants preferred the third-person perspective, and it also performed significantly worse regarding all measured aspects of the viewing experience, including overall enjoyment. Many participants mentioned a feeling of confusion and the impression of missing essential parts of the gameplay. Whereas, according to our thematic analysis, viewers of the other two games appreciated seeing the player in action, viewers of \textit{Stand Out} experienced the player as obstructive in the third-person view.
We assume that this issue is caused by a mismatch between the focus of the viewer in the third-person perspective, which lies on the player, and the location of the important game events: in \textit{Stand Out}, the main actions---such as approaching enemies, the search for cover or gun fights---are not centered around the player's position, but evolve further away in the surroundings. A first-person perspective better matches this game characteristic and, thus, seems to be more appropriate for games of this kind. For the game \textit{Superhot VR}, the participants were able to perceive the player's actions and interactions with objects in both views equally well, and preferences are less clearly distributed. In contrast to \textit{Stand Out}, there is no significant difference in the IMI enjoyment subscale: both perspectives induced similar levels of enjoyment. Since entertainment was the most commonly mentioned motivator to watch VR game videos, we can assume that at least some participants preferred the third-person view in \textit{Superhot VR} for enjoyment reasons. Nevertheless, many participants still disliked the third-person perspective due to the feeling of having a limited view and missing important game events. Comments of some participants point towards a possible explanation: these viewers explicitly stated problems with situations where the player reacted to opponents that were not visible on the screen. This issue seems similar to the problems reported for the third-person perspective in \textit{Stand Out}. Yet, the problem is less prominent in \textit{Superhot VR} and only applies to certain situations. In contrast to \textit{Stand Out}, \textit{Superhot VR} also contains important game events that are directly centered around the player, such as dodging attacks in slow-motion. Such events might account for the fact that 27\% of participants still preferred the third-person perspective, which offers a good view on the player.
We assume that a more dynamic third-person camera could reduce the issue of the limited view and increase approval of the third-person view to a certain extent. The most inconclusive results are the ones for the game \textit{Beat Saber}. In this case, our sample shows no clear preference for one perspective. Regarding the spectator experience, only involvement---measuring the feeling of being part of the game---was rated higher for the first-person view. All other subscales, namely enjoyment, comprehensibility, and seeing everything that is important, do not indicate any difference between the two perspectives. These results indicate that the third-person view seems to have advantages for this particular game and that the first-person alternative is not preferable in every case. \textit{Beat Saber} seems to be better suited to the third-person perspective than the other two games. Considering the identical study conditions and similar audiences, the reasons for the measured differences between all three games have to reside within the particular game characteristics. Our results indicate that the focus on the player's bodily interaction in the third-person perspective is more compelling for spectators of \textit{Beat Saber} than for viewers of the other two games. In contrast to \textit{Stand Out} and \textit{Superhot VR}, \textit{Beat Saber} requires very fast and coordinated movements of the players. All relevant game events (approaching blocks and hits of the player) are tightly coupled to these movements both temporally and visually. Watching this type of experience is likely more interesting if the viewers can see the players and their movements, as they have an immediate effect on the gameplay. Spectators of the other two games might prefer the first-person perspective, because the player's bodily interaction looks less intriguing and, hence, a clear focus on the in-game events is more interesting.
In addition, the overall high pace of the players' movements in \textit{Beat Saber} makes it hard for new viewers to understand and follow the gameplay. In this case, the third-person perspective could help the audience to gain a better understanding of the game and its goals. For the other two games, it is more important to see the players' view and their interactions with the weaponry to understand the overall gameplay and the players' strategies. Moreover, we assume that the fixed viewing direction of \textit{Beat Saber} contributes to the success of the third-person view. During the game, the player's view is mostly fixed in one direction, which makes it easy to align the third-person camera with the main course of action. As a result, the viewers' impression of missing essential aspects is reduced. For comparison, \textit{Superhot VR} features a more dynamic environment where enemies approach the player from multiple directions. \textit{Stand Out} provides the most dynamic locomotion system that combines virtual motion and rotation with real movements. Additionally, it relies heavily on long travel distances. In summary, we assume that the key difference between the three games is the visual coupling of the main game actions and the player's position and movement. In games like \textit{Beat Saber} all relevant game events are centered directly around the player and, thus, emphasized by the third-person perspective. In games like \textit{Stand Out} most events dynamically evolve in the wider surroundings. In the latter case, the first-person perspective is more appropriate, as it better guides the focus of the spectator towards the important game events. Despite the discussed reasons that explain the usefulness of the third-person perspective for games like \textit{Beat Saber}, a considerable number of our participants still favored the first-person view for this game.
This preference hints towards certain desires of the spectators---such as experiencing the game from the player's view---that require a first-person perspective and are less linked to characteristics of the game. This finding is especially interesting considering that using a mixed-reality third-person view is a widespread approach in current \textit{Beat Saber} videos and streams. \subsection{Distilling the Perceived Strengths and Weaknesses of Both Perspectives} Our analysis of the three VR games has shown that certain game characteristics seem to influence the suitability of the two different streaming perspectives. However, we also consider spectators' personal preferences and motives to be a relevant factor. The UG results revealed two primary motivators of our participants for watching VR game videos: entertainment and information seeking. This finding fits the most frequently mentioned reasons for choosing one perspective over the other: involvement and comprehensibility. Whereas we could not identify significant correlations between participants' general motives and the perspective they preferred, our thematic analysis of participants' reasons to prefer a certain perspective further helps to understand the perceived strengths and weaknesses of both views. The first-person view is preferred by spectators who want to feel like they are playing the game themselves and who want to see \textit{"through the player's eyes"}. Some participants explicitly mentioned a \textit{"preference for the original perspective"}. Increased involvement was the most prevalent reason our participants gave for preferring the first-person perspective. This finding of the thematic analysis is underlined by our questions concerning the feeling of \textit{"being part of the game"}. For all three games, participants felt significantly more like a part of the game in the first-person view.
Hence, the first-person perspective fosters immersive spectator experiences better than the third-person view. On the downside, some participants indicated that \textit{"seeing through the player's eyes"} made them dizzy and motion sick. In these cases, participants preferred the third-person view, which seems to be less likely to cause motion sickness. Interestingly, a better \textit{"comprehensibility"} is mentioned as a perceived advantage for both views. Participants disagreed about which perspective provides a better understanding. This feedback might be the result of the different foci of both perspectives: in the first-person perspective, viewers experience the game exactly as it would look when playing it. Hence, some participants might have the feeling that this view provides a better overall impression of the game. In the third-person view, the spectators see the player's movements and the resulting actions in the virtual environment. They might feel that this matching between manipulations and effects provides a better understanding. In this case, the focus does not lie on the original perspective and game events, but on the player and their interaction with the game. We can summarize the feedback of our participants into three main perceived advantages of the third-person perspective: (1) providing an entertaining experience by showing the player in action, (2) giving a good impression of the VR experience by revealing the player's full-body movement and the relation between the player's manipulations and the effects in the game world, and (3) avoiding motion sickness. Consequently, the mixed-reality third-person approach is particularly promising for games in which the player performs interesting, distinctive movements in real life. However, the overall preference for the first-person view and the aforementioned concerns demonstrate that the third-person perspective introduces challenges that need to be taken into account.
It is essential that the spectator's view is not noticeably limited: spectators must not feel that they miss important events due to a static third-person view or that the player's body might cover significant parts of the environment. Besides the more immersive experience, these were the most prevalent reasons speaking in favor of the first-person alternative. Hence, the chosen point of view in the third-person perspective must be considered carefully. Especially in the case of dynamic viewing directions, content creators should consider integrating more dynamic solutions, such as aligning the third-person camera with the player's head rotations. \subsection{Limitations and Future Work} Our work presents the first step towards a better understanding of the preferences and experiences of VR game spectators with regard to the streaming perspective. While the study provides valuable insights, there are also some limitations that call for further research. First of all, we point out that our choice of games does not represent the full VR gaming landscape. Hence, our results are limited to comparable game content. We will consider other genres (e.g., RPGs like \textit{Asgard's Wrath}~\cite{AsgardsWrath}) in the future to see which features of the two perspectives become particularly prevalent in other scenarios. Furthermore, our study does not take into account other common streaming approaches that integrate the player into the stream in other ways, such as picture-in-picture modes. While this is a limitation of our current work and should be considered in the future, our focus on the comparison of the first-person and the third-person perspectives promotes our understanding of spectators' basic preferences and desires. Another limitation concerns the concrete implementation of the third-person perspective used in our study, in particular regarding the game \textit{Superhot VR}.
As already explained in Section~\ref{sec:implementation}, the static mixed-reality approach is just one possibility to create a third-person view. Other possibilities include the use of a dynamic camera and the replacement of the real player footage by a virtually created avatar. Some of the perceived shortcomings of the third-person view might diminish when using a different method, in particular a dynamic camera. For instance, the missing focus on current game events or the occlusion of relevant game objects can be reduced by automatically adapting the camera to the player's viewing direction or by enabling spectators to control the viewing angle. While we are convinced that our choice of a static mixed-reality view is appropriate to investigate basic differences between a focus on the game (first-person) and a focus on the player (third-person), future studies with alternative implementations such as using a dynamic camera or a virtual avatar should complement and refine the findings. Our design decisions regarding the concrete positions of the static third-person viewpoint in the three games (i.e., the position of the spectator's camera) might have influenced the viewing experience, as well. During the design process, we found that some positions are better suited than others, though no position seems to be optimal in every game situation, because a static view does not adapt during the course of the game. Hence, we informally tested different positions while implementing the third-person views to find an appropriate position for each game. Another important constituent of the third-person perspective is the player's persona. We used the same player in all our videos to preserve comparability. Nevertheless, the specific choice introduces possible effects arising from participants' personal preferences regarding gender or appearance of the player. Hence, future studies should include other types of players to investigate potential impacts.
Furthermore, preferences may change if the viewers have some kind of relationship with the content producer, for instance, if the player is their favorite streamer. One participant explicitly stated: \textit{"If it's my usual go-to streamer on Twitch, then I would probably like the third-person perspective better because it'd be funnier."} In such cases, the focus of interest is more on the player and less on the game, which is much better supported by a mixed-reality perspective. Previous research also indicates that the social interaction between the player and the audience can be an important motivator for spectators to follow game streamers~\cite{hamilton2014streaming, sjoblom2017people}. Particularly in a live streaming context, viewers' social motives become more prevalent, as they have the opportunity to interact with the streamer or other viewers while the action takes place. In our study, we presented prerecorded videos and decided not to include social features (i.e., the player did not speak to the audience) to reduce the potential interference effects caused by our specific player. This approach increased the controllability of the study procedure, but limits the direct transferability of our results to live streaming. We assume that the pros and cons of the different perspectives found in our study also apply to live streaming contexts and that our results can also inform live streaming design choices. For instance, streamers using a first-person perspective can further increase comprehensibility by verbally describing which movements they are performing (because these are not visible to the audience). However, our study does not provide direct indications on how the different perspectives support or interfere with the viewers' need for social interaction. It might become more important to see the player, as visual cues are a central aspect of human communication.
On the other hand, a first-person view might provide a closer connection to the player, because this perspective fosters a shared focus and attentional allocation. Future research is needed to test such assumptions. Hence, as a complement to our current work, we recommend conducting in-the-wild studies on streaming platforms with actual streamers and their audiences to capture this important social aspect with regard to the preferences of different perspectives. This would also enable a more sophisticated investigation of the correlations between spectators' motives to watch a certain VR game stream and their preferred view. Another interesting direction for future research in the area of VR spectatorship would be to investigate the experience of spectating VR game streams using HMDs. If the viewer is equipped with an immersive HMD, there are different possibilities to present VR content, and the viewing experience will probably differ from that on 2D displays. \section{Conclusion} Delivering the highly immersive experience of VR games to a broad audience via common 2D video streams is a challenge for VR content providers, such as streamers, advertisers, and game developers. This work offers support by giving advice on the choice of an appropriate spectator perspective to foster a positive viewing experience. Based on our study results, we identified two key factors that need to be considered when deciding between a first-person and a mixed-reality third-person perspective: first, the characteristics of the game, in particular the location of game events in relation to the player's position; and second, the motives and expectations of the audience. While the first-person perspective puts the focus on the game and resembles the player's view, the mixed-reality third-person perspective shifts the focus to the player and the player-game interaction.
For games in which most game events evolve directly around the player, the third-person perspective provides viewers with unique insights by revealing the player's real movements and their effects in the game world. This positive effect of the third-person perspective particularly applies to games that require the player to perform interesting, distinctive movements. In contrast, if the main game action is distributed over the game environment and not centered around the player, a first-person perspective is more appropriate due to its immersive quality and a clear focus on relevant game events. Apart from the game characteristics, content providers should also consider their audience. If spectators are expected to be mainly interested in gaining an impression of the game and less in the player's persona, the first-person view provides the desired information better than the third-person perspective. On the other hand, if spectators have a keen interest in a specific streamer, their preference might be biased towards a third-person view, which highlights this person. This work presents the first step towards a comprehensive VR streaming guideline. As discussed above, there are some limitations, and follow-up studies with more VR games and different settings are needed to extend our current knowledge. In particular, other implementations of the third-person perspective, for instance with a dynamic camera, need to be investigated to test our hypothesis about the importance of player centricity and visual coupling. Our work paves the way for further research on the spectators' experiences and expectations in the context of VR content. In the long term, understanding how different perspectives contribute to different demands of VR spectators will foster informed design decisions in diverse application areas such as game streaming, VR training and mixed-reality multi-user scenarios. \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Three-dimensional reconstruction has a wide range of applications (e.g.~virtual reality, robot navigation or self-driving cars), and therefore is an output of many algorithms such as Structure from Motion (SfM), Simultaneous Localization and Mapping (SLAM) or Multi-view Stereo (MVS). Recent work in SfM and SLAM has demonstrated that the geometry of a three-dimensional scene can be obtained from a large number of images~\cite{agarwal2011building},\cite{heinly2015_reconstructing_the_world},\cite{ila2017slam++}. Efficient non-linear refinement~\cite{ceres-solver} of camera and point parameters has been developed to produce optimal reconstructions. The uncertainty of detected points in images can be efficiently propagated, in the case of SLAM~\cite{ila2017slam++},\cite{polok2016}, into the uncertainty of the three-dimensional scene parameters thanks to fixing the first camera pose and scale. In the SfM framework, however, we often allow for gauge freedom~\cite{kanatani2001gauges}, and therefore practical computation of the uncertainty~\cite{forstner2016photogrammetric} is mostly missing in state-of-the-art pipelines~\cite{openMVG},\cite{schoenberger2016sfm},\cite{theia}. In SfM, reconstructions are in general obtained up to an unknown similarity transformation, i.e., a rotation, translation, and scale. The backward uncertainty propagation~\cite{hartley2003multiple} (the propagation from detected feature points to the parameters of the reconstruction) requires the ``inversion'' of a Fisher information matrix, which is rank deficient~\cite{forstner2016photogrammetric},\cite{hartley2003multiple} in this case. Naturally, we want to compute the uncertainty of the inner geometry~\cite{forstner2016photogrammetric} and ignore the infinite uncertainty of the free choice of the similarity transformation.
This can be done by the Moore-Penrose (M-P) inversion of the Fisher information matrix~\cite{forstner2016photogrammetric},\cite{hartley2003multiple},\cite{kanatani2001gauges}. However, the M-P inversion is a computationally challenging process. It has cubic time and quadratic memory complexity in the number of columns of the information matrix, i.e., the number of parameters. Fast and numerically stable uncertainty propagation has numerous applications~\cite{polic2017uncertainty3DV}. We could use it for selecting the next best view~\cite{frahm2010building} from a large collection of images~\cite{agarwal2011building},\cite{heinly2015_reconstructing_the_world}, for detecting cameras wrongly added to existing partial reconstructions, for improving fitting to the control points~\cite{maurer2012geo}, and for filtering the mostly unconstrained cameras in the reconstruction to speed up the bundle adjustment~\cite{ceres-solver} by reducing the size of the reconstruction. It would also help to improve the accuracy of the iterative closest point (ICP) algorithm~\cite{besl1992method}, by using the precision of the camera poses, and to provide the uncertainty of the points in 3D~\cite{polic2017uncertainty}. \section{Contribution} We present the first algorithm for uncertainty propagation from input feature points to camera parameters that works without any approximation of the natural form of the covariance matrix on thousands of cameras. It is about ten times faster than the state-of-the-art algorithms~\cite{lhuillier2006},\cite{polic2017uncertainty3DV}. Our approach builds on top of Gauss-Markov estimation with constraints by Rao~\cite{rao1973linear}. The novelty is in a new method for nullspace computation in SfM. We introduce a fast sparse method, which is independent of the chosen parametrization of rotations.
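As a dense point of reference (not the fast sparse method contributed here), the gauge directions of a rank-deficient Jacobian can be obtained generically with an SVD. The sketch below uses illustrative dimensions and a toy matrix with a two-dimensional nullspace standing in for the seven-dimensional similarity gauge of a real SfM Jacobian; all names are ours.

```python
import numpy as np

def nullspace(J, rtol=1e-10):
    """Orthonormal basis of the right nullspace of J via dense SVD."""
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > rtol * s[0]))
    return Vt[rank:].T  # columns span {x : J x = 0}

# Toy rank-deficient system: a 6x5 "Jacobian" whose last two columns are
# linear combinations of the first three, giving a 2-dimensional nullspace.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
J = np.hstack([A, A @ rng.standard_normal((3, 2))])

N = nullspace(J)
print(N.shape[1])                        # dimension of the nullspace
print(np.allclose(J @ N, 0, atol=1e-8))  # J annihilates the basis
```

The dense SVD costs cubic time in the number of parameters, which is exactly why a structure-exploiting sparse construction is preferable at scale.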
Further, we combine the fixation of gauge freedom by the nullspace, from F\"orstner and Wrobel~\cite{forstner2016photogrammetric}, with methods applied in SLAM, i.e., the block matrix inversion~\cite{eves1966elementary} and the Woodbury matrix identity~\cite{hager1989updating}. Our main contribution is a clear formulation of the nullspace construction, which is based on the similarity transformation between parameters of the reconstruction. Using the nullspace and the normal equation from~\cite{forstner2016photogrammetric}, we correctly apply the block matrix inversion, which has been done only approximately before~\cite{polic2017uncertainty3DV}. This brings an improvement in accuracy as well as in speed. We also demonstrate that our approach can be effectively used for reconstructions of any size by applying it to smaller sub-reconstructions. We show empirically that our approach is valid and practical. Our algorithm is faster, more accurate and more stable than any previous method~\cite{lhuillier2006},\cite{polic2017uncertainty3DV},\cite{polic2017uncertainty}. The output of our work is publicly available as source code which can be used as an external library in nonlinear optimization pipelines, like Ceres Solver~\cite{ceres-solver}, and reconstruction pipelines like~\cite{openMVG},\cite{schoenberger2016sfm},\cite{theia}. The code, datasets, and detailed experiments will be available online at \url{https://michalpolic.github.io/usfm.github.io}. \section{Related work} Uncertainty propagation is a well-known process~\cite{forstner2016photogrammetric},\cite{hartley2003multiple},\cite{kanatani2001gauges},\cite{polic2017uncertainty3DV}. Our goal is to propagate the uncertainties of input measurements, i.e.\ feature points in images, into the parameters of the reconstruction, e.g.\ poses of cameras and positions of points in 3D, by using the projection function~\cite{hartley2003multiple}.
For the purpose of uncertainty propagation, a non-linear projection function is in practice often replaced by its first-order approximation using its Jacobian matrix~\cite{forstner2005uncertainty},\cite{hartley2003multiple}. For the propagation using higher-order approximations of the projection function, as described in F\"orstner and Wrobel~\cite{forstner2016photogrammetric}, higher-order estimates of uncertainties of feature points are required. Unfortunately, these are difficult to estimate reliably~\cite{forstner2016photogrammetric,polic2017thesisproposal}. In the case of SfM, the uncertainty propagation is called the {\em backward propagation of a non-linear function in an over-parameterized case}~\cite{hartley2003multiple} because the projection function does not fully constrain the reconstruction parameters~\cite{morris2001gauge}, i.e., the reconstruction can be shifted, rotated and scaled without any change of the image projections. We are primarily interested in estimating the {\em inner geometry}, e.g.\ angles and ratios of distances, and its {\em inner precision}~\cite{forstner2016photogrammetric}. Inner precision is invariant to changes of gauge, i.e.\ to similarity transformations of the cameras and the scene~\cite{kanatani2001gauges}. A natural choice of the fixation of gauge, which leads to the inner precision of the inner geometry, is to fix the seven degrees of freedom caused by the invariance of the projection function to the similarity transformation of space~\cite{forstner2016photogrammetric},\cite{hartley2003multiple},\cite{kanatani2001gauges}. One way to do this is to use the Moore-Penrose (M-P) inversion~\cite{nashed2014generalized} of the Fisher information matrix~\cite{forstner2016photogrammetric}. Recently, several works on speeding up the M-P inversion of the information matrix for SfM frameworks have appeared. Lhuillier and Perriollat~\cite{lhuillier2006} used the block matrix inversion of the Fisher information matrix.
They performed M-P inversion of the Schur complement matrix~\cite{schur2005} of the block related to point parameters and then projected the results to the space orthogonal to the similarity transformation constraints. This approach allowed working with much larger scenes because the square Schur complement matrix has dimension equal to the number of camera parameters, i.e., at least six times the number of cameras, whereas the dimension of the full square Fisher information matrix additionally grows with about three times the number of points, which is usually much larger. However, it is not clear whether the decomposition of the Fisher information matrix holds for the M-P inversion without fulfilling the rank additivity condition~\cite{tian1998moore}, and it was shown in~\cite{polic2017uncertainty3DV} that the approach of~\cite{lhuillier2006} is not always sufficiently accurate. Polic et al.~\cite{polic2017uncertainty3DV} evaluated the state-of-the-art solutions against more accurate results computed in high-precision arithmetic, i.e.\ using 100 digits instead of the 15 significant digits of double precision. They compared the influence of several fixations of the gauge on the output uncertainties and found that fixing three points that are far from each other, together with a clever approximation of the inversion, leads to a good approximation of the uncertainties. The most closely related work is~\cite{rao1973linear}, which contains the uncertainty formulation for the Gauss-Markov model with constraints. We combine this result with our new approach to nullspace computation to fix the gauge freedom. Finally, let us mention work on fast uncertainty propagation in SLAM. The difference between SfM and SLAM is that in SLAM we know, and fix, the first camera pose and the scale of the scene, which makes the information matrix full rank. Thus one can use a fast Cholesky decomposition to invert a Schur complement matrix as well as other techniques for fast covariance computation~\cite{ila2017slam++,kaess2009covariance}.
Polok, Ila~et~al.~\cite{ila2017sfm},\cite{polok2016} claim to address uncertainty computation in SfM but actually assume a full-rank Fisher information matrix and hence do not deal with gauge freedom. In contrast, we solve the full SfM problem, which requires dealing with gauge freedom. \section{Problem formulation} In this section, we describe the basic notions of uncertainty propagation in SfM and provide the problem formulation. The set of parameters of a three-dimensional scene $\theta = \{ P, X \}$ is composed of $n$ cameras $P = \{ P_1, P_2, ... , P_n \}$ and $m$ points $X = \{X_1, X_2, ..., X_m \}$ in 3D. The $i$-th camera is a vector $P_i \in \mathbb{R}^{8}$, which consists of internal parameters (i.e.~focal length $c_i \in \mathbb{R}$ and radial distortion $k_{i} \in \mathbb{R}$) and external parameters (i.e.~rotation $r_i \in SO(3)$ and camera center $C_i \in \mathbb{R}^3$). Estimated parameters are labelled with the hat $\hat{~}$. We assume that the parameters $\hat{\theta}$ were estimated by a reconstruction pipeline using a vector of $t$ observations $u \in \mathbb{R}^{2t}$. Each observation is a 2D point $u_{i,j} \in \mathbb{R}^{2}$ in the image $i$ detected up to some uncertainty that is described by its covariance matrix $\Sigma_{u_{i,j}} = \Sigma_{\epsilon_{i,j}}$. It characterizes the Gaussian distribution assumed for the detection error $\epsilon_{i,j}$ and can be computed from the structure tensor~\cite{foerstner93:image} of the local neighbourhood of $u_{i,j}$ in the image $i$. The vector $\hat{u}_{i,j}=p(\hat{X}_j,\hat{P}_i)$ is a projection of point $\hat{X}_j$ into the image plane described by camera parameters $\hat{P}_i$.
All pairs of indices $(i,j)$ are in the index set $S$ that determines which point is seen by which camera \begin{eqnarray} \hat{u}_{i,j} &=& u_{i,j} - \epsilon_{i,j} \\ \hat{u}_{i,j} &=& p(\hat{X}_j,\hat{P}_i) \quad \quad \forall (i,j) \in S \end{eqnarray} Next, we define the function $f(\hat{\theta})$ and the vector $\epsilon$ as a composition of all projection functions $p(\hat{X}_j,\hat{P}_i)$ and related detection errors $\epsilon_{i,j}$ \begin{equation} u = \hat{u} + \epsilon = f(\hat{\theta}) + \epsilon \end{equation} This function is used in the non-linear least squares optimization (Bundle Adjustment~\cite{ceres-solver}) \begin{equation} \label{equ:optimization-residual-function} \hat{\theta} = \argmin_{\theta} \norm{f(\theta) - u}^2 \end{equation} which minimises the sum of squared differences between the measured feature points and the projections of the reconstructed 3D points. We assume that $\Sigma_u$ is a block-diagonal matrix composed of the blocks $\Sigma_{u_{i,j}}$. The optimal estimate $\hat{\theta}$, minimising the Mahalanobis norm of the residual $r(\theta) = f(\theta) - u$, is \begin{equation} \hat{\theta} = \argmin_{\theta} r^{\top}(\theta) \Sigma_{u}^{-1} r(\theta) \end{equation} To find the formula for uncertainty propagation, the non-linear projection function $f$ can be linearized by the first-order term of its Taylor expansion \begin{eqnarray} \label{eqn:linearization-of-projection-fun} f(\theta) &\approx& f(\hat{\theta}) + J_{\hat{\theta}}(\theta - \hat{\theta}) \\ f(\theta) &\approx& \hat{u} + J_{\hat{\theta}}\Delta\theta \end{eqnarray} which leads to the estimated correction of the parameters \begin{equation} \widehat{\Delta \theta} = \argmin_{\Delta\theta} (J_{\hat{\theta}}\Delta\theta + \hat{u}- u)^{\top} \Sigma_{u}^{-1} (J_{\hat{\theta}}\Delta\theta + \hat{u}- u) \end{equation} The partial derivatives of the objective function must vanish at the optimum \begin{equation} \label{eqn:partial-derivative} \frac{1}{2} \dfrac{\partial (r^{\top}(\theta) \Sigma_{u}^{-1} r(\theta))}{\partial\theta^{\top}} = J_{\hat{\theta}}^{\top} \Sigma_{u}^{-1} ( J_{\hat{\theta}}\widehat{\Delta \theta} + \hat{u} - u) = J_{\hat{\theta}}^{\top} \Sigma_{u}^{-1} r(\hat{\theta}) = 0 \end{equation} which defines the {\em normal equation system} \begin{eqnarray} \label{eqn:normal-equation-system} M \widehat{\Delta \theta} &=& \bm{m} \\ M = J_{\hat{\theta}}^{\top} \Sigma_{u}^{-1} J_{\hat{\theta}} & ,\quad & \bm{m} = J_{\hat{\theta}}^{\top} \Sigma_{u}^{-1} ( u - \hat{u} ) \end{eqnarray} The normal equation system has seven degrees of freedom and therefore requires fixing seven parameters, called the gauge~\cite{kanatani2001gauges}, namely the scale, translation and rotation. Any choice of fixing these parameters leads to a valid solution. The natural choice of covariance, which is unique, has zero uncertainty in the scale, translation, and rotation of all cameras and scene points. It can be obtained by the M-P inversion of the Fisher information matrix $M$ or by the Gauss-Markov model with constraints~\cite{forstner2016photogrammetric}.
If we assume constraints $h(\hat{\theta}) = 0$, which fix the scene scale, translation and rotation, we can write their derivatives, i.e.\ the nullspace $H$, as \begin{equation} \label{eqn:additional-constraints-definition} H^{T} \Delta \theta = 0 \quad \quad H = \dfrac{\partial h(\hat{\theta})}{\partial \hat{\theta}} \end{equation} Using Lagrange multipliers $\lambda$, we are minimising the function \begin{equation} g(\Delta \theta,\lambda) = \frac{1}{2}(J_{\hat{\theta}}\Delta\theta + \hat{u}- u)^{\top} \Sigma_{u}^{-1} (J_{\hat{\theta}}\Delta\theta + \hat{u}- u) + \lambda^{\top}(H^{\top}\Delta \theta) \end{equation} whose partial derivative with respect to $\lambda$ equals zero at the optimum (as in Eqn.~\ref{eqn:partial-derivative}) \begin{equation} \dfrac{\partial g(\Delta \theta,\lambda)}{\partial \lambda} = H^{T} \Delta \theta = 0 \end{equation} These constraints lead to the {\em extended normal equations} \begin{equation} \mat{cc}{M & H \\ H^{\top} & 0 }\mat{c}{\widehat{\Delta \theta}\\ \lambda} = \mat{c}{J_{\hat{\theta}}^{\top} \Sigma_{u}^{-1} (u - \hat{u}) \\ 0} \end{equation} and allow us to compute a regular inversion instead of the M-P inversion \begin{equation} \label{eqn:inversion-of-extended-information-matrix} \mat{cc}{\Sigma_{\hat{\theta}} & K \\ K^{\top} & T } = \mat{cc}{M & H \\ H^{\top} & 0 }^{-1} \end{equation} \section{Solution method} \label{sec:solution-method} We next describe how to compute the nullspace $H$ and decompose the original Eqn.~\ref{eqn:inversion-of-extended-information-matrix} by a block matrix inversion. The proposed method assumes that the Jacobian of the projection function is provided numerically and computes the nullspace independently of the representation of the camera rotation.
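As a numerical sanity check of Eqn.~\ref{eqn:inversion-of-extended-information-matrix}, the following sketch (Python/NumPy; a toy problem of our own construction, not part of the pipeline) verifies that the top-left block of the inverse of the bordered matrix coincides with the M-P inverse of $M$ whenever the columns of $H$ span the nullspace of $M$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the SfM setting: a symmetric PSD "information matrix" M
# with a known 2-dimensional nullspace spanned by the columns of H
# (the analogue of the 7-dimensional gauge nullspace).
n, k = 8, 2
H, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal nullspace basis
J = rng.standard_normal((20, n))
J = J - (J @ H) @ H.T                              # enforce J H = 0
M = J.T @ J                                        # rank n - k, singular

# The bordered (extended) system is regular although M is not.
Q = np.block([[M, H], [H.T, np.zeros((k, k))]])
Sigma = np.linalg.inv(Q)[:n, :n]                   # top-left block

# It reproduces the Moore-Penrose pseudoinverse of M.
assert np.allclose(Sigma, np.linalg.pinv(M))
```

The identity holds for any basis of the nullspace, not only an orthonormal one, which is what makes the regular inversion of the extended system a drop-in replacement for the M-P inversion.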
\subsection{The nullspace of the Jacobian} The scene can be transformed by a similarity transformation\footnote{The variable $\lS{s}\theta$ is a function of $\theta$ and $q$} \begin{equation} \label{eqn:similarity-equality} \lS{s}\theta = {\lS{s}\theta}(\theta,q) \end{equation} depending on seven parameters $q=[T, s, \mu]$ for translation, rotation, and scale without any change of the projection function $f(\theta)-f(\lS{s}\theta(\theta,q))=0$. If we consider a differential similarity transformation, we obtain from the total derivative \begin{equation} \label{eqn:jacobian-nullspace-condition} J_\theta \Delta \theta - (J_\theta \Delta \theta + J_\theta J_q \Delta q) = -J_\theta J_q \Delta q = 0 \end{equation} Since this needs to hold for any $\Delta q$, the matrix \begin{equation} H = \frac{\partial \lS{s}\theta}{\partial q}= J_q \end{equation} is the nullspace of $J_\theta$. Next, consider an order of parameters such that the 3D point parameters follow the camera parameters \begin{equation} \hat{\theta} = \{P,X\} = \{P_1, \dots P_n, X_1, \dots X_m\} \end{equation} The cameras have parameters ordered as $P_i = \{r_i, C_i, c_i, k_{i}\}$ and the projection function equals \begin{eqnarray} p(\hat{X}_j,\hat{P}_i) = \Phi_i(c_i R(\hat{r}_i) ( \hat{X}_j - \hat{C}_i )) \quad \quad \forall (i,j) \in S \end{eqnarray} where $\Phi_i$ projects vectors from $\mathbb{R}^3$ to $\mathbb{R}^2$ by (i) first dividing by the third coordinate, and (ii) then applying image distortion with parameters $\hat{P}_i$. Note that the function $\Phi_i$ can be chosen quite freely, e.g.\ adding a tangential distortion or accounting for a rolling shutter projection model~\cite{albl2015r6p}.
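For concreteness, a minimal sketch of the projection function $p$ in Python/NumPy (an illustration only; the placement of the focal length and the particular distortion model inside $\Phi_i$ vary between pipelines):

```python
import numpy as np

def rotation(r):
    """R(r): rotation matrix from an angle-axis vector r (Rodrigues' formula)."""
    angle = np.linalg.norm(r)
    if angle < 1e-12:
        return np.eye(3)
    k = r / angle
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def project(X, r, C, c, k1):
    """p(X, P): rotate and translate into the camera frame, apply the
    perspective division (step i of Phi_i), then the focal length and a
    single radial distortion coefficient k1 (step ii of Phi_i)."""
    x = rotation(r) @ (X - C)
    m = x[:2] / x[2]
    return c * m * (1.0 + k1 * (m @ m))
```

For $k_1 = 0$ the sketch reduces to the pinhole model.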
Using Eqn.~\ref{eqn:similarity-equality}, we obtain for all $(i,j) \in S$ \begin{eqnarray} \label{eqn:transformation-conditions} p(\hat{X}_j,\hat{P}_i) &=& p(\lS{s}{\!\hat{X}}_j(q),\lS{s}{\!\hat{P}}_i(q)) \\ p(\hat{X}_j,\hat{P}_i) &=& \Phi_i(c_i \, \lS{s}{R}(\!\hat{r}_i,s) (\lS{s}{\!\hat{X}}_j(q) - \lS{s}{\!\hat{C}}_i(q) )) \\ p(\hat{X}_j,\hat{P}_i) &=& \Phi_i(c_i \, (R(\!\hat{r}_i) R(s)^{-1}) \, ((\mu R(s) \hat{X}_j + T) - (\mu R(s) \hat{C}_i + T) )) \label{eqn:transformation-conditions-3} \end{eqnarray} Note that for any parameters $q$, the projection remains unchanged, which can be checked by expanding the equation above. Eqn.~\ref{eqn:transformation-conditions-3} is linear in $T$ and $\mu$. The differences of $\hat{X}_j$ and $\hat{C}_i$ are as follows \begin{eqnarray} \Delta \hat{X}_j(\hat{X}_j,q) &=& \hat{X}_j - \lS{s}{\!\hat{X}}_j(q) = \hat{X}_j - (\mu R(s) \hat{X}_j + T) \\ \Delta \hat{C}_i(\hat{C}_i,q) &=& \hat{C}_i - \lS{s}{\!\hat{C}}_i(q) = \hat{C}_i - (\mu R(s) \hat{C}_i + T) \end{eqnarray} The Jacobian $J_{\hat{\theta}}$ and the nullspace $H$ can be written as \begin{equation} J_{\hat{\theta}} = \dfrac{\partial f(\hat{\theta})}{\partial \hat{\theta}} = \mat{cccccc}{ \dfrac{\partial p_1}{\partial \hat{P}_1} & \dots & \dfrac{\partial p_1}{\partial \hat{P}_n} & \dfrac{\partial p_1}{\partial \hat{X}_1} & \dots & \dfrac{\partial p_1}{\partial \hat{X}_m}\\ \vdots & & \vdots & \vdots & & \vdots \\ \dfrac{\partial p_t}{\partial \hat{P}_1} & \dots & \dfrac{\partial p_t}{\partial \hat{P}_n} & \dfrac{\partial p_t}{\partial \hat{X}_1} & \dots & \dfrac{\partial p_t}{\partial \hat{X}_m} }, \quad H = \mat{ccc}{ H_{\hat{P}_1}^{T} & H_{\hat{P}_1}^{s} & H_{\hat{P}_1}^{\mu} \\ \vdots & \vdots & \vdots \\ H_{\hat{P}_n}^{T} & H_{\hat{P}_n}^{s} & H_{\hat{P}_n}^{\mu}\\ H_{\hat{X}_1}^{T} & H_{\hat{X}_1}^{s} & H_{\hat{X}_1}^{\mu}\\ \vdots & \vdots & \vdots \\ H_{\hat{X}_m}^{T} & H_{\hat{X}_m}^{s} & H_{\hat{X}_m}^{\mu}} \end{equation} where $p_t$ is the $t^{th}$
observation, i.e.\ the pair $(i,j) \in S$. The columns of $H$ are related to transformation parameters $q$. The rows are related to parameters $\hat{\theta}$. The derivatives of differences of scene parameters $\Delta \hat{P_i} = [\Delta \hat{r}_i, \Delta \hat{C}_i, \Delta \hat{c}_i, \Delta \hat{k}_i]$ and $\Delta \hat{X}_j$ with respect to the transformation parameters $q=[T, s, \mu]$ are exactly the blocks of the nullspace \begin{equation} \quad \quad H = \mat{ccc}{ \dfrac{\partial \Delta r_1}{\partial T} & \dfrac{\partial \Delta r_1}{\partial s} & \dfrac{\partial \Delta r_1}{\partial \mu} \\ \dfrac{\partial \Delta C_1}{\partial T} & \dfrac{\partial \Delta C_1}{\partial s} & \dfrac{\partial \Delta C_1}{\partial \mu} \\ \dfrac{\partial \Delta c_1}{\partial T} & \dfrac{\partial \Delta c_1}{\partial s} & \dfrac{\partial \Delta c_1}{\partial \mu} \\ \dfrac{\partial \Delta k_1}{\partial T} & \dfrac{\partial \Delta k_1}{\partial s} & \dfrac{\partial \Delta k_1}{\partial \mu} \\ \vdots & \vdots & \vdots \\ \dfrac{\partial \Delta X_1}{\partial T} & \dfrac{\partial \Delta X_1}{\partial s} & \dfrac{\partial \Delta X_1}{\partial \mu} \\ \vdots & \vdots & \vdots \\ \dfrac{\partial \Delta X_m}{\partial T} & \dfrac{\partial \Delta X_m}{\partial s} & \dfrac{\partial \Delta X_m}{\partial \mu} } = \mat{ccc}{ 0_{3 \times 3} & H_{r_1} & 0_{3 \times 1} \\ I_{3 \times 3} & [C_1]_x & C_1 \\ 0_{1 \times 3} & 0_{1 \times 3} & 0 \\ 0_{1 \times 3} & 0_{1 \times 3} & 0 \\ \vdots & \vdots & \vdots \\ I_{3 \times 3} & [X_1]_x & X_1 \\ \vdots & \vdots & \vdots \\ I_{3 \times 3} & [X_m]_x & X_m } \end{equation} where $[v]_x$ is the skew symmetric matrix such that $[v]_x\, y = v \times y$ for all $v, y \in \mathbb{R}^3$. Eqn.~\ref{eqn:transformation-conditions-3} is not linear in the rotation $s$. To deal with any rotation representation, we can compute the values of $H_{\hat{r}_i}$ for all $i$ using Eqn.~\ref{eqn:jacobian-nullspace-condition}.
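Assembling $H$ row by row can be sketched as follows (Python/NumPy; hypothetical helper names, assuming the parameter ordering $\{r_i, C_i, c_i, k_i\}$ per camera followed by the 3D points; the rotation blocks $H_{\hat{r}_i}$ are computed separately as described next):

```python
import numpy as np

def skew(v):
    """[v]_x such that skew(v) @ y == np.cross(v, y)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def nullspace(camera_centers, points, H_r):
    """Stack the seven gauge columns [T | s | mu] of H for parameters ordered
    as (r_i, C_i, c_i, k_i) per camera, followed by the 3D points.
    H_r holds the stacked 3x3 rotation blocks H_r_i."""
    I3, rows = np.eye(3), []
    for i, C in enumerate(camera_centers):
        rows.append(np.hstack([np.zeros((3, 3)), H_r[3*i:3*i+3], np.zeros((3, 1))]))
        rows.append(np.hstack([I3, skew(C), C.reshape(3, 1)]))
        rows.append(np.zeros((2, 7)))        # focal length and distortion rows
    for X in points:
        rows.append(np.hstack([I3, skew(X), X.reshape(3, 1)]))
    return np.vstack(rows)
```

For $n$ cameras and $m$ points the result has the expected size $(8n + 3m) \times 7$.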
The columns, which contain the blocks $H_{\hat{r}_i}$, are orthogonal to the rest of the nullspace and to the Jacobian $J_{\hat{\theta}}$. \begin{figure}[!t] \centering \begin{subfigure}[t]{0.7\textwidth} \centering \includegraphics[height=4.5cm]{J.pdf} \caption{The Jacobian $J_{\hat{\theta}}$} \end{subfigure} \hbox{\begin{subfigure}[t]{0.28\textwidth} \centering \includegraphics[height=4.4cm]{H.pdf} \caption{The nullspace $H$} \end{subfigure}} \caption{The structure of the matrices $J_{\hat{\theta}}$ and $H$ for the Cube dataset, using, for clarity, 6 parameters per camera $\hat{P}_i$ (focal length and lens distortion not shown). The matrices $J_{\hat{r}}$ and $H_{\hat{r}}$ are composed from the red submatrices of $J$ and $H$. The multiplication of the green submatrices equals $-B$, see Eqn.~\ref{eqn:rotation-nullspace-equation}.} \label{fig:nulspaceJH-structure} \end{figure} The system of equations $J_{\hat{\theta}} H = 0$ can be rewritten as \begin{equation} \label{eqn:rotation-nullspace-equation} J_{\hat{r}} H_{\hat{r}} = B \end{equation} where $J_{\hat{r}} \in \mathbb{R}^{3n \times 3n}$ is composed as a block-diagonal matrix from the red submatrices (see Fig.~\ref{fig:nulspaceJH-structure}) of $J_{\hat{\theta}}$. The matrix $H_{\hat{r}} \in \mathbb{R}^{3n \times 3}$ is composed from the red submatrices $H_{\hat{r}_i} \in \mathbb{R}^{3 \times 3}$ as \begin{equation} H_{\hat{r}} = \mat{ccc}{H_{\hat{r}_1}^{\top} & \dots & H_{\hat{r}_n}^{\top}}^{\top} \end{equation} The matrix $B \in \mathbb{R}^{3n \times 3}$ is composed of the green submatrices (see Fig.~\ref{fig:nulspaceJH-structure}) of $J_{\hat{\theta}}$ multiplied by the negated green submatrices of $H$. The solution of this system is \begin{equation} H_{\hat{r}} = J_{\hat{r}}^{-1} B \end{equation} where $B$ is computed by a sparse multiplication, see Fig.~\ref{fig:nulspaceJH-structure}.
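Because $J_{\hat{r}}$ is block-diagonal, the solve reduces to $n$ independent $3 \times 3$ systems; a minimal sketch (Python/NumPy, hypothetical interface):

```python
import numpy as np

def solve_rotation_nullspace(J_r_blocks, B):
    """Solve J_r H_r = B for a block-diagonal J_r.
    J_r_blocks: (n, 3, 3) diagonal blocks of J_r; B: (3n, 3) right-hand side."""
    n = J_r_blocks.shape[0]
    H_r = np.empty((3 * n, 3))
    for i in range(n):                      # n independent 3x3 solves
        H_r[3*i:3*i+3] = np.linalg.solve(J_r_blocks[i], B[3*i:3*i+3])
    return H_r
```

The cost is linear in the number of cameras, instead of the cubic cost of a dense solve.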
The inversion of $J_{\hat{r}}$ is therefore the inversion of a sparse matrix with $n$ blocks of size $3 \times 3$ on the diagonal. \subsection{Uncertainty propagation to camera parameters} The propagation of uncertainty is based on Eqn.~\ref{eqn:inversion-of-extended-information-matrix}. The extended Fisher information matrix is first conditioned for better numerical accuracy as follows \begin{figure}[bt] \centering \includegraphics[height=3.7cm]{Qp.pdf} \caption{The structure of the matrix $Q_p$ for the Cube dataset and $\hat{P}_i \in \mathbb{R}^6$.} \label{fig:Qp} \end{figure} \begin{eqnarray} \mat{cc}{\Sigma_{\hat{\theta}} & K \\ K^{\top} & T } &=& \mat{cc}{S_a & 0 \\ 0 & S_b } \left( \mat{cc}{S_a & 0 \\ 0 & S_b } \mat{cc}{M & H \\ H^{\top} & 0 } \mat{cc}{S_a & 0 \\ 0 & S_b } \right)^{-1} \mat{cc}{S_a & 0 \\ 0 & S_b} \\ \mat{cc}{\Sigma_{\hat{\theta}} & K \\ K^{\top} & T } &=& \mat{cc}{S_a & 0 \\ 0 & S_b } \mat{cc}{M_s & H_s \\ H_s^{\top} & 0 }^{-1} \mat{cc}{S_a & 0 \\ 0 & S_b} \\ \mat{cc}{\Sigma_{\hat{\theta}} & K \\ K^{\top} & T } &=& S Q^{-1} S \end{eqnarray} by the diagonal matrices $S_a$, $S_b$ which condition the columns of the matrices~$J$,~$H$. Secondly, we permute the rows and columns of $Q$ to have the point parameters followed by the camera parameters \begin{equation} \mat{cc}{\Sigma_{\hat{\theta}} & K \\ K^{\top} & T } = S \widetilde{P}^{\top} (\widetilde{P} Q \widetilde{P}^{\top})^{-1} \widetilde{P} S = S \widetilde{P}^{\top} Q_p^{-1} \widetilde{P} S \end{equation} where $\widetilde{P}$ is an appropriate permutation matrix.
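Only the camera block of $Q_p^{-1}$ has to be formed explicitly; since the point block of $Q_p$ is block-diagonal with $3 \times 3$ blocks, the computation can be sketched as follows (Python/NumPy; a hypothetical toy interface, with `A_blocks`, `B`, `D` standing for the point, cross and camera blocks of the permuted system):

```python
import numpy as np

def camera_covariance_block(A_blocks, B, D):
    """Invert Q_p = [[A, B], [B^T, D]] for the camera block only.
    A is block-diagonal (one 3x3 block per 3D point), so A^{-1} B is a cheap
    per-block solve; the Schur complement Z = D - B^T A^{-1} B is the only
    dense matrix that has to be inverted."""
    m = A_blocks.shape[0]
    AinvB = np.empty_like(B)
    for j in range(m):
        AinvB[3*j:3*j+3] = np.linalg.solve(A_blocks[j], B[3*j:3*j+3])
    Z = D - B.T @ AinvB           # Schur complement of the point block
    return np.linalg.inv(Z)       # bottom-right block of Q_p^{-1}
```

The returned matrix equals the bottom-right block of the full inverse, which is what the camera covariances are read from.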
The matrix $Q_p = \widetilde{P} Q \widetilde{P}^{\top}$ is a full-rank matrix which can be decomposed and inverted using a block matrix inversion \begin{equation} \label{eqn:block-inversion-Qp} Q_p^{-1} = \mat{cc}{A_p & B_p \\ B_p^{\top} & D_p}^{-1} = \mat{cc}{A_p^{-1} + A_p^{-1} B_p Z_p^{-1} B_p^{\top} A_p^{-1} & -A_p^{-1} B_p Z_p^{-1} \\ -Z_p^{-1} B_p^{\top} A_p^{-1} & Z_p^{-1}} \end{equation} where $Z_p$ is the symmetric Schur complement matrix of the point parameters block~$A_p$ \begin{equation} \label{eqn:schur-complement-inversion} Z_p^{-1} = (D_p - B_p^{\top} A_p^{-1} B_p)^{-1} \end{equation} The matrix $A_p \in \mathbb{R}^{3m \times 3m}$ is a sparse symmetric block-diagonal matrix with $\mathbb{R}^{3 \times 3}$ blocks on the diagonal, see Fig.~\ref{fig:Qp}. The covariances of the camera parameters are computed using the inversion of $Z_p$, which has size $(8n+7) \times (8n+7)$ for our model of cameras (i.e.,~$P_i \in \mathbb{R}^{8}$) \begin{equation} \Sigma_{\hat{P}} = S_{P} Z_s S_{P} \end{equation} where $Z_s \in \mathbb{R}^{8n \times 8n}$ is the top-left submatrix of $Z_p^{-1}$ and $S_{P}$ is the corresponding sub-block of the scale matrix $S_a$. \section{Uncertainty for sub-reconstructions} The algorithm based on the Gauss-Markov estimate with constraints, which is described in Section~\ref{sec:solution-method}, works in principle for thousands of cameras. However, large-scale reconstructions with thousands of cameras would require a large amount of memory, e.g.\ 131GB for the Rome dataset~\cite{li2010location}, to store the matrix $Z_p$ for our camera model $\hat{P}_i \in \mathbb{R}^8$, and its inversion might be inaccurate due to rounding errors. Fortunately, it is possible to evaluate the uncertainty of a camera $\hat{P}_i$ from only a partial sub-reconstruction comprising cameras and points in the vicinity of $\hat{C}_i$. Using sub-reconstructions, we can approximate the uncertainty computed from the complete reconstruction.
The error of our approximation decreases with the increasing size of a sub-reconstruction. If we add a camera to a reconstruction, we add at least four observations which influence the Fisher information matrix $M_i$ as \begin{equation} M_{i+1} = M_i + M_{\Delta} \end{equation} where the matrix $M_{\Delta}$ is the Fisher information matrix of the added observations. We can propagate this update using the equations in Section~\ref{sec:solution-method} to the Schur complement matrix \begin{equation} Z_{i+1} = Z_i + Z_{\Delta} \end{equation} which has full rank. Using the Woodbury matrix identity \begin{equation} (Z_i + J_{\Delta}^{\top} \Sigma_{\Delta}^{-1} J_{\Delta})^{-1} = Z_i^{-1} - Z_i^{-1} J_{\Delta}^{\top} (\Sigma_{\Delta} + J_{\Delta} Z_i^{-1} J_{\Delta}^{\top})^{-1} J_{\Delta} Z_i^{-1} \end{equation} we can see that a positive semi-definite matrix is subtracted from the covariance after adding observations, i.e.\ the uncertainty decreases. We show empirically that the error decreases with the increasing size of the reconstruction (see Fig.~\ref{fig:precision}). We have found that for 100--150 neighbouring cameras, the error is usually small enough to be used in practice. Each evaluation of a sub-reconstruction produces an upper bound on the uncertainty of the cameras involved in the sub-reconstruction. The accuracy of the upper bound depends on the particular decomposition of the complete reconstruction into sub-reconstructions. To get reliable results, it is useful to decompose the reconstruction several times and choose the covariance matrix with the smallest trace. The theoretical proof of the quality of this approximation and the selection of the optimal decomposition is an open question for future research. \section{Experimental evaluation} \label{sec:experiments} We use synthetic as well as real datasets (Table~\ref{table:datasets}) to test and compare the algorithms (Table~\ref{table:algorithms}) with respect to accuracy (Fig.~\ref{fig:precision}) and speed (Fig.~\ref{fig:speed}).
The evaluations on sub-reconstructions are shown in Figs.~\ref{fig:cam_cov_approx100},~\ref{fig:relative-approx-view-graph-error},~\ref{fig:absolute-approx-view-graph-error}. All experiments were performed on a single computer with one 2.6GHz Intel Core i7-6700HQ with 32GB RAM running a 64-bit Windows 10 operating system. \setlength{\tabcolsep}{4pt} \begin{table}[bt] \begin{center} \caption{Summary of the datasets: $N_{P}$ is the number of cameras, $N_{X}$ is the number of points in 3D and $N_{u}$ is the number of observations. Datasets 1 and 3 are synthetic, 2, 9 from COLMAP~\cite{schoenberger2016sfm}, and 4-8 from Bundler~\cite{snavely2006photo}} \label{table:datasets} \begin{tabular}{lllll} \hline\noalign{\smallskip} \# & Dataset & $N_P$ & $N_X$ & $N_u$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & Cube & 6 & 15 & 60 \\ 2 & Toy & 10 & 60 & 200 \\ 3 & Flat & 30 & 100 & 1033 \\ 4 & Daliborka & 64 & 200 & 5205 \\ \hline 5 & Marianska & 118 & 80 873 & 248 511 \\ 6 & Dolnoslaskie & 360 & 529 829 & 2 260 026 \\ 7 & Tower of London & 530 & 65 768 & 508 579 \\ 8 & Notre Dame & 715 & 127 431 & 748 003 \\ 9 & Seychelles & 1400 & 407 193 & 2 098 201 \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \paragraph{\bf Compared algorithms} are listed in Table~\ref{table:algorithms}. The standard way of computing the covariance matrix $\Sigma_{\hat{P}}$ is by the M-P inversion of the information matrix using the Singular Value Decomposition (SVD), with the last seven singular values set to zero and the rest inverted, as in~\cite{polic2017uncertainty3DV}. There are many implementations of this procedure that differ in numerical stability and speed. We compared three of them.
Alg.~1 uses a high precision number representation in Maple (runs 22~hours on the Daliborka dataset), Alg.~2 denotes the implementation in Ceres~\cite{ceres-solver}, which uses the Eigen library~\cite{eigen} internally (runs 25.9~minutes on the Daliborka dataset), and Alg.~3 is our Matlab implementation, which internally calls the LAPACK library~\cite{anderson1990lapack} (runs 0.45~seconds on the Daliborka dataset). Further, we compared the Lhuillier~\cite{lhuillier2006} and Polic~\cite{polic2017uncertainty3DV} approaches, which approximate the uncertainty propagation, with our algorithm denoted as {\em Nullspace bounding uncertainty propagation}~(NBUP). \\ \setlength{\tabcolsep}{4pt} \begin{table}[bt] \begin{center} \caption{Summary of the compared algorithms} \label{table:algorithms} \begin{tabular}{ll} \hline\noalign{\smallskip} \# & Algorithm \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1. & M-P inversion of $M$ using Maple (Kanatani~\cite{kanatani2001gauges}) (\textbf{Ground Truth})\\ 2. & M-P inversion of $M$ using Ceres (Kanatani~\cite{kanatani2001gauges}) \\ 3. & M-P inversion of $M$ using Matlab (Kanatani~\cite{kanatani2001gauges}) \\ 4. & M-P inversion of Schur complement matrix with correction term (Lhuillier~\cite{lhuillier2006})\\ 5. & TE inversion of Schur complement matrix with three points fixed (Polic~\cite{polic2017uncertainty})\\ 6. & \textbf{Nullspace bounding uncertainty propagation (NBUP)} \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} \paragraph{\bf The accuracy} of all algorithms is compared against the Ground Truth (GT) in Fig.~\ref{fig:precision}. The evaluation is performed on the first four datasets, which have a reasonably small number of 3D points. The computation of GT for the fourth dataset took about 22~hours and larger datasets were uncomputable because of time and memory requirements. We decomposed the information matrix using SVD, set the last seven singular values exactly to zero, and inverted the rest.
We also used 100 significant digits instead of the 15 digits of a double number representation. The GT computation follows the approach from~\cite{polic2017uncertainty3DV}. The covariance matrices for our camera model (comprising rotation, camera center, focal length and radial distortion) contain a large range of values. Some parameters, e.g.\ rotations represented by the Euler vector, are in units while other parameters, such as the focal length, are in thousands of units. Moreover, the rotation is in all tested examples better constrained than the focal length. This fact leads to approximately $\num{6e-5}$ mean absolute value in the rotation part of the covariance matrix and approximately $\num{3e4}$ mean value for the focal length variance. Standard deviations for datasets 1-4 are about $\num{8e-3}$ for rotations and $\num{2e3}$ for focal lengths. To obtain comparable standard deviations for different parameters, we can divide the mean values of rotations by $\pi$ and focal lengths by $\num{2e3}$. We used the same approach for the comparison of the measured errors \begin{equation} \label{eqn:relative-error} err_{\hat{P_i}} = \frac{1}{64} \sum_{l=1}^8 \sum_{m=1}^8 \left( \sqrt{|\widetilde{\Sigma}_{\hat{P_i}(l,m)} - \widehat{\Sigma}_{\hat{P_i}(l,m)} |} \oslash O_{(l,m)} \right) \end{equation} The error $err_{\hat{P_i}}$ shows the differences between the GT covariance matrices $\widetilde{\Sigma}_{\hat{P_i}}$ and the computed ones~$\widehat{\Sigma}_{\hat{P_i}}$. The matrix \begin{equation} O = \sqrt{E(|\hat{P}_i|) \, E(|\hat{P}_i|)^{\top}} \end{equation} has dimension $O \in \mathbb{R}^{8\times 8}$ and normalises the error to percentages of the absolute magnitude of the original units. The symbol $\oslash$ stands for the element-wise division of matrices (i.e.\ $\bar{C} = \bar{A} \oslash \bar{B}$ equals $\bar{C}_{(i,j)} = \bar{A}_{(i,j)} / \bar{B}_{(i,j)}$ for $\forall (i,j)$). Fig.~\ref{fig:precision} shows the comparison of the mean of the errors for all cameras in the datasets.
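The error metric of Eqn.~\ref{eqn:relative-error} can be sketched as follows (Python/NumPy; hypothetical names, with $E(|\hat{P}_i|)$ passed as a vector of mean absolute parameter values):

```python
import numpy as np

def camera_error(sigma_gt, sigma, p_abs):
    """err_P: mean of the element-wise normalized differences between the
    ground-truth and the computed 8x8 camera covariance blocks."""
    O = np.sqrt(np.outer(p_abs, p_abs))            # normalisation matrix O
    return np.mean(np.sqrt(np.abs(sigma_gt - sigma)) / O)
```

The division by $O$ makes errors of parameters with very different magnitudes (rotations vs.\ focal lengths) comparable.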
We see that our new method, NBUP, delivers the most accurate results on all datasets. \begin{figure}[tb] \centering \includegraphics[width=.7\linewidth]{precision2.pdf} \caption{The mean error $err_{\hat{P_i}}$ of all cameras $\hat{P_i}$ and Alg.~2-6 on datasets 1-4. Note that Alg.~3, leading to the natural form of the covariance matrix, is numerically much more sensitive. It sometimes produces completely wrong results even for small reconstructions.} \label{fig:precision} \end{figure} \paragraph{\bf Speed} of the algorithms is shown in Fig.~\ref{fig:speed}. Note that the M-P inversion (i.e.\ Alg.~1-3) cannot be evaluated on the medium and larger datasets 5-9 because of the memory required for storing the dense matrix $M$. We see that our new method NBUP is faster than all other methods. A considerable speedup is obtained on datasets 7-9, where our NBUP method is about 8 times faster. \begin{figure}[t] \centering \begin{minipage}[t]{0.48\textwidth} \includegraphics[width=0.99\linewidth]{speed_intel_small.pdf} \caption{The speed comparison. A full comparison against Alg.~2, 3 was not possible because of the memory complexity. Alg.~3 failed, see Fig.~\ref{fig:precision}.} \label{fig:speed} \end{minipage}~ \begin{minipage}[t]{0.48\textwidth} \includegraphics[width=0.99\linewidth]{cam_cov_approx100_small.pdf} \caption{The relative error for approximating camera covariances by one hundred of their neighbours from the view-graph.} \label{fig:cam_cov_approx100} \end{minipage} \end{figure} \begin{figure}[b!]
\centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[height=5.4cm]{relative_mean_error.pdf} \caption{Mean of relative error $err_{\hat{P}_i}$} \label{fig:relative-approx-view-graph-error} \end{subfigure}~ \begin{subfigure}[t]{0.48\textwidth} \includegraphics[height=5.4cm]{absolute_mean_error.pdf} \caption{Median of absolute error} \label{fig:absolute-approx-view-graph-error} \end{subfigure} \caption{The error of the uncertainty approximation using sub-reconstructions as a function of the number of cameras in the sub-reconstruction.} \label{fig:sub-reconstruction-view-graph} \end{figure} \paragraph{\bf Uncertainty approximation} on sub-reconstructions was tested on datasets 5-9. We decomposed the reconstructions several times using a different number of cameras $\bar{k} = \{5,10,20,40,80,160,320\}$ inside smaller sub-reconstructions, and measured the relative and absolute errors of the approximated covariances of the camera parameters. Fig.~\ref{fig:sub-reconstruction-view-graph} shows the decrease of the error for larger sub-reconstructions. There were 25 sub-reconstructions for each $\bar{k}_i$, with the set of neighbouring cameras randomly selected using the view graph. Note that Fig.~\ref{fig:relative-approx-view-graph-error} shows the mean of the relative errors given by Eqn.~\ref{eqn:relative-error}. Fig.~\ref{fig:absolute-approx-view-graph-error} shows that the absolute covariance error decreases significantly with an increasing number of cameras in a sub-reconstruction. Fig.~\ref{fig:cam_cov_approx100} shows the error of the simplest approximation of covariances used in practice. For every camera, one hundred of its neighbours in the view graph were used to build a sub-reconstruction for evaluating the uncertainties. This produces upper-bound estimates of the covariances for each camera, from which we selected the smallest one, i.e.\ the covariance matrix with the smallest trace, and evaluated the mean of the relative error $err_{\hat{P_i}}$. 
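The selection of the tightest bound described above, i.e.\ keeping the covariance estimate with the smallest trace among a camera's upper-bound estimates, can be sketched as follows (Python/NumPy, with illustrative matrices):

```python
import numpy as np

def tightest_estimate(covariances):
    """From several upper-bound covariance estimates of the same camera,
    keep the one with the smallest trace (the tightest bound)."""
    return min(covariances, key=np.trace)

# two illustrative upper-bound estimates for one camera
estimates = [np.diag([4.0, 4.0]), np.diag([1.0, 2.0])]
best = tightest_estimate(estimates)
```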
\section{Conclusions} Current methods for evaluating the uncertainty~\cite{lhuillier2006,polic2017uncertainty3DV} in SfM rely 1) either on imposing the gauge constraints by using a few parameters as observations, which does not lead to the natural form of the covariance matrix, or 2) on the Moore-Penrose inversion~\cite{ceres-solver}, which cannot be used for medium and large-scale datasets because of its cubic time and quadratic memory complexity. We proposed a new method for the nullspace computation in SfM and combined it with the Gauss-Markov estimate with constraints~\cite{rao1973linear} to obtain a full-rank matrix~\cite{forstner2016photogrammetric} allowing robust inversion. This allowed us to use efficient methods from SLAM, such as block matrix inversion or the Woodbury matrix identity. Our approach is the first that allows the computation of the natural form of the covariance matrix on scenes with more than a thousand cameras, e.g.\ 1400 cameras, with affordable computation time, e.g.\ 60 seconds, on a standard PC. Further, we show that sub-reconstructions of roughly 100-300 cameras provide reliable estimates of the uncertainties for arbitrarily large scenes. \section{Acknowledgement} This work was supported by the European Regional Development Fund under the project IMPACT (reg. no. CZ.02.1.01/0.0/0.0/15\_003/0000468), EU-H2020 project LADIO no. 731970, and by Grant Agency of the CTU in Prague projects SGS16/230/OHK3/3T/13, SGS18/104/OHK3/1T/37. \clearpage \bibliographystyle{splncs04}
\subsection{Amazon Web Services} Amazon's Elastic Compute Cloud (EC2) allows users to rent virtual computers on which to run their own computer applications. EC2 allows the deployment of applications by providing a web service through which a user can boot an Amazon Machine Image to create a virtual machine, which Amazon calls an ``instance,'' containing any software desired. A user can create, launch, and terminate server instances as needed, paying by the hour for active servers, hence the term ``elastic.'' EC2 provides users with control over the geographical location of instances, which allows for latency optimization and high levels of redundancy. For example, to minimize downtime, a user can set up server instances in multiple zones that are insulated from each other for most causes of failure, such that one backs up the other. Amazon Simple Storage Service (S3) is a web service that enables users to store data in the cloud. Users can then download the data or use the data with other Amazon Web Services (AWS), such as EC2, Amazon Elastic MapReduce, and Amazon Import/Export. With Amazon S3, a user can charge others who download data the user makes available. A user can store up to 5 TB of data in one object but can store as many objects as desired. The path to the data is a URL, which makes accessing the data easy. \minihead{History} Amazon announced a limited public beta of EC2 in 2006. Access to EC2 was granted on a first-come, first-served basis. Amazon added two new instance types (Large and Extra-Large) in October 2007. Before EC2, Amazon launched S3, its first publicly available web service, in the United States in March 2006 and in Europe in November 2007. S3 initially allowed storage of objects up to 5~GB (increased to 5~TB in December 2010). In May 2008, two more instance types were added, High-CPU Medium and High-CPU Extra Large. 
Currently nine types of instances are available, including Compute-Cluster instances that serve high-end CPU and interconnect requirements. Compute-Cluster instances use a 10-Gbps interconnect. Amazon continuously adds features to its portfolio; these features have included static IP addresses, Availability Zones (specified datacenters), and user-selectable kernels. Amazon added Elastic Block Store (EBS) in August 2008. EBS allows the user to create storage volumes that can be mounted by EC2 instances. EBS also allows these volumes to be backed up to S3, providing persistent storage. EC2 moved from beta to full production in October 2008. \subsubsection{EC2 Characteristics} Amazon EC2 presents a virtual computing environment, allowing a user to use web service interfaces to launch instances (virtual machines) with a variety of operating systems, load them with a custom application environment, manage the network's access permissions, and run the image using as many or few systems as desired. EC2 is intended to have the following characteristics~\cite{ec2-web}: \begin{shortlist} \item Elastic: Capacity can be increased or decreased within minutes. A user can commission one to thousands of server instances simultaneously. Because this is all controlled with web service APIs, an application can automatically scale itself up and down depending on its needs. \item Completely controlled: Users have complete control of their instances. The user has root access to each instance; thus, the user can stop an instance while retaining the data on a boot partition and then subsequently restart the same instance using web service APIs. Instances can be rebooted remotely by using web service APIs. \item Flexible: The user has the choice of multiple instance types, operating systems, and software packages. 
Amazon EC2 allows the user to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for the choice of operating system and application. Operating systems include numerous Linux distributions, Microsoft Windows Server, and OpenSolaris. \item Designed for use with other Amazon Web Services: EC2 works in conjunction with S3, Amazon Relational Database Service (Amazon RDS), Amazon SimpleDB, and Amazon Simple Queue Service (Amazon SQS) to provide a complete solution for computing, query processing, and storage across a wide range of applications. \item Reliable: Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned. The service runs within Amazon's proven network infrastructure and datacenters. The Amazon EC2 Service Level Agreement commitment is 99.95\% availability for each Amazon EC2 Region. To ensure a higher level of availability, users can deploy EC2 instances across different EC2 Regions and Availability Zones. \item Secure: Amazon EC2 provides numerous mechanisms for securing the user's compute resources, including customizable firewall settings that control network access to and between groups of instances and isolation of compute instances by using the Virtual Private Cloud (VPC) service. \item Inexpensive: Amazon EC2 passes on some of the financial benefits of Amazon's scale. Users pay only for the resources that they consume. \end{shortlist} EC2 features include Amazon Elastic Block Store (EBS), which offers persistent storage for Amazon EC2 instances; Amazon CloudWatch, a web service that provides monitoring for AWS cloud resources; Amazon Virtual Private Cloud (VPC), a set of isolated compute resources accessible via a Virtual Private Network (VPN) connection; and high-performance computing clusters, tightly coupled computer resources with high-performance network capability. 
\subsubsection{S3 Characteristics} S3 is based on the idea that Internet storage should be taken for granted. It is intended to free developers from worrying about how they will store their data, whether it will be safe and secure, or whether they will have enough storage available. There are no upfront costs for setting up a storage solution. The operational costs can be managed by using Amazon's tools and generally depend on the storage usage. However, depending on the overall amount of storage and usage, the operational costs can be higher than for an on-premise solution. The functionality of S3 is simple and robust: store any amount of data while ensuring that the data will always be available when needed. S3 enables developers to focus on innovating with data, rather than figuring out how to store it. A forcing-function for the S3 design was that a single S3 distributed system was needed that supported the requirements of both internal Amazon applications and external developers of any application. This meant that S3 had to be fast and reliable enough to run Amazon.com's websites, while flexible enough that any developer could use it for any data storage need. S3 was built to fulfill the following design requirements~\cite{s3-web}: \begin{shortlist} \item Scalable: S3 can scale in terms of storage, request rate, and users to support an unlimited number of web-scale applications. It uses scale as an advantage: adding nodes to the system increases, not decreases, its availability, speed, throughput, capacity, and robustness. \item Reliable: Amazon provides different kinds of SLAs: the best SLA ensures 99.999999999\% durability with 99.99\% availability. Lower-level SLAs, offering, for example, less durability, are available. The overall architecture avoids single points of failure. If a failure occurs, the system attempts to repair itself without any downtime. \item Fast: S3 must be fast enough to support high-performance applications. 
Generally, S3 storage can be collocated in the same datacenter as the compute instances. Using CloudFront---Amazon's content-delivery network---S3 content can be efficiently distributed. \item Inexpensive: S3 is inexpensive because it is built from inexpensive commodity hardware components as well as open source software such as Linux and Xen. It is also hardware-agnostic, so the price decreases as Amazon continues to drive down infrastructure costs. \item Simple: Building highly scalable, reliable, fast, and inexpensive storage is difficult. S3 offers the user a service that fulfills these properties using an easy-to-use REST-based interface. \end{shortlist} We note that these are design requirements, not necessarily operational characteristics. Amazon has recently had some very public failures that resulted in the unavailability of a large number of applications that depended on AWS~\cite{aws-failure-2010,aws-failure-2011}. The S3 architecture is designed to be programming language-neutral, using Amazon's supported interfaces to store and retrieve objects. S3 provides both a REST and a SOAP interface. Buckets are the fundamental container in S3 for data storage, and they provide a unique namespace for the management of objects contained in the bucket. Objects (which are stored in buckets) consist of object data and metadata and can range in size from 1~B to 5~TB. The data portion is opaque to S3, and the metadata is a set of name-value pairs that describe the object. Each object is uniquely identified by a key. Together, a bucket name and a key uniquely identify an object in Amazon S3. 
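Since a bucket name and a key uniquely identify an object, the object's URL can be formed directly. A minimal sketch (plain Python; it assumes the common virtual-hosted-style \texttt{<bucket>.s3.amazonaws.com/<key>} addressing and omits regional endpoints and authentication):

```python
from urllib.parse import quote

def s3_object_url(bucket, key):
    """Virtual-hosted-style URL for an S3 object.
    '/' in keys is kept as a path separator; other characters are percent-encoded."""
    return "https://{}.s3.amazonaws.com/{}".format(bucket, quote(key, safe="/"))

url = s3_object_url("my-data", "experiments/run 1/results.csv")
```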
\subsubsection{Usage Modes} Amazon describes the usage of Amazon Web Services as application hosting, backup and storage, content delivery, and so forth~\cite{AWS_case_studies}. Most usage of AWS has been built around the ``usage pattern'' of using EC2 as an infrastructure that is available on demand, either as a complete substitute for in-house computing (e.g., hosting web services) or as a resource that can handle excess demand (e.g., cloudbursting); in both cases S3 is used to store the required images and data. In addition, several companies use EC2 or S3 to, in turn, host PaaS-like or SaaS-like capabilities, as described under Successes and Limitations. In addition, S3 is often used simply as distributed storage (e.g., for content storage and distribution; for backup, archiving, and disaster recovery). Given that a common usage of AWS is as an on-demand pool of resources (spare and instantaneously available), most applications have developed ``glue code'' that directly spins up instances as needed. Many applications, however, have made use of other features of AWS, for example, Amazon Simple Queue Service (SQS), Elastic Beanstalk, and Elastic MapReduce. In general, many services with well-defined APIs are emerging that provide easier ways to do more than just stand up an image instance (a characteristic of IaaS clouds); they extend the basic IaaS capability to provide SaaS-like capabilities. It is likely that an increasing number of data analytics services will be provided at this level. \subsubsection{Successes and Limitations} Amazon has been successfully used by a wide variety of users~\cite{AWS_customer_apps}. As of November 2010, Amazon was publicizing 569 customer applications that had been built on top of EC2 and S3. In addition, a variety of companies are using EC2 and S3 for application hosting, backup and storage, content delivery, e-commerce, high performance computing, media hosting, on-demand workforce, search engines, and web hosting. 
Some well-known examples of such companies are DropBox, Facebook, Guardian News \& Media, Playfish, Salesforce.com, and the MathWorks~\cite{AWS_solution_providers, AWS_case_studies}. Examples of academic and scientific projects using EC2 include the Belle high energy physics experiment's use of the DIRAC framework to process data using EC2 to supplement existing resources~\cite{belleDIRAC} and NASA's use of the Polyphony framework~\cite{polyphony} to execute large workflows on EC2 in conjunction with existing supercomputers, with Amazon's Simple Queue Service used for coordination. As previously mentioned, Amazon has also had very public failures that primarily impacted public-facing companies, making their products temporarily unavailable~\cite{aws-failure-2010,aws-failure-2011}. These issues are probably less important for most scientific applications. On the whole, AWS has been successful and has been a pioneer in the development of cloud computing; most limitations of AWS are probably limitations of current virtualization and cloud technology, for example, the limited support for applications that require tightly coupled parallelism. Currently, the cost of data movement into and out of AWS is sufficiently expensive that academic data-intensive applications (e.g., decadal astronomy surveys, bioinformatics projects that analyze data from next generation sequencers) are unable to utilize AWS as a production alternative to campus cyberinfrastructure. \subsection{Microsoft Azure} Azure~\cite{win_azure} is an emerging cloud platform developed and operated by Microsoft. Azure follows the PaaS paradigm, offering an integrated solution for managing compute and data-intensive tasks as well as web applications. The platform is able to dynamically scale applications without the need to manually manage tasks and deployments on the virtual-machine level. 
In contrast to traditional IaaS clouds (e.g., EC2 and S3), Azure provides different benefits: first, it operates on a higher level of abstraction and removes the need to manage details, such as configuration and patching of the operating system; and second, Azure applications are declaratively described and packaged and are automatically mapped to available hardware by the fabric controller (which generally manages the lifecycle of all VMs, monitors them, automatically reacts to hardware and software failures, and manages application upgrades). \minihead{History} The foundation for Azure was laid by a memo from Microsoft's chief software architect Ray Ozzie on the Internet service disruption in 2005~\cite{Ozzie:2005fk}. Azure was first announced at the Microsoft Professional Developer Conference in October 2008. The initial customer preview included the Azure Storage services: Blob, Queue, and Table storage, as well as the two kinds of hosted services for web applications and for general compute tasks. Gradually, new features (e.g., support for native code, development tools for Java/PHP, a content delivery network) have been added to the platform. The latest addition is a generic Windows VM hosting service. Azure went into production in January 2010. \subsubsection{Characteristics} Azure is a group of cloud-related technologies. Parts of these technologies, such as the Windows Azure storage and compute services, have been specifically designed for cloud environments, while other services are mainly ports of Microsoft's existing in-house products, for example, SQL Azure from Microsoft's SQL Server. Windows Azure can be used for different types of on-demand computing and for hosting generic server-side applications. Azure was designed to address the following principles: \begin{shortlist} \item Simplicity: Azure provides a set of well-defined services that are accessible via standard protocols, such as HTTP. 
\item Strong consistency: In contrast to other storage services such as S3, Azure storage provides strong consistency guarantees. \item Failure tolerance: Failures of the system are handled by the Azure fabric controller, which monitors all applications running in a role environment and restarts them if necessary. Each fabric controller is redundantly deployed to a cluster of five to seven machines using a master-based replication algorithm. Paxos~\cite{279229} is used for master election. \item Caching: Using standard HTTP header mechanisms such as ETag and If-Match, client-side caching of requests is supported. \item Autonomy: Azure utilizes a hierarchical management structure consisting of the fabric controllers and agents that operate according to a set of specified objectives. \end{shortlist} Azure provides different abstractions as building blocks for creating scalable and reliable scientific applications, including web and worker roles for compute services and blob, table, and queue services for storage. Windows Azure formalizes different types of virtual machines into {\em roles}. {\em Web} roles are used to host web applications and frontend code; {\em worker} roles are well suited for background processing. While these roles target specific scenarios, they are also customizable. Worker roles can, for example, run native code. The application must implement a defined entry point, which is then called by Azure. The newest addition is {\em Azure VM} roles, which essentially allow users to run Windows Server 2008 VMs on Azure. VM roles give users more control over the environment than do worker roles but still maintain PaaS benefits, such as automatic operating system updates, fault tolerance, and automatic load balancing. VM roles can also be accessed via the Remote Desktop Protocol and are particularly well suited for running more complex applications. 
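The ETag-based caching listed among the design principles above follows standard HTTP conditional-request semantics. A toy simulation (plain Python, not the Azure SDK: a client that resends the last ETag via If-None-Match receives a 304 instead of the body):

```python
import hashlib

class TinyBlobStore:
    """Toy store illustrating ETag caching: get() returns (status, etag, body)
    and answers 304 Not Modified when the client's cached ETag is still current."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, body):
        self._blobs[key] = body

    def get(self, key, if_none_match=None):
        body = self._blobs[key]
        # the ETag is just a content fingerprint here
        etag = hashlib.md5(body.encode()).hexdigest()
        if if_none_match == etag:
            return 304, etag, None      # client's cache is still valid
        return 200, etag, body

store = TinyBlobStore()
store.put("config", "hello")
status1, etag, body = store.get("config")                  # full response
status2, _, _ = store.get("config", if_none_match=etag)    # served from cache
```

(If-Match plays the complementary role for conditional updates, rejecting a write when the stored ETag has changed.)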
For storing large amounts of data, the Azure storage platform provides three key services: \emph{Azure Blob Storage} for storing large objects of raw data, \emph{Azure Table Storage} for semi-structured data, and \emph{Azure Queue Storage} for implementing message queues. The data is replicated across multiple data centers to protect it against hardware and software failures. In contrast to other cloud offerings (e.g., S3), the Azure Storage Services provide strong consistency guarantees, i.e., all changes are immediately visible to all future calls. While eventual consistency as implemented by S3~\cite{1294281} usually offers better performance and scalability, it has some disadvantages, mainly caused by the fact that the complexity is moved to the application space. The Blob Storage Service can store files up to 1~TB, which makes it particularly well suited for data-intensive applications. Further, access to the blob storage can be optimized for certain usage modes. \emph{Block blobs} can be split into chunks that can be uploaded and downloaded separately and in parallel. They are well suited for uploading and streaming large amounts of data. \emph{Page blobs} manage the storage as an array of pages. Each of these pages can be addressed individually, making page blobs a good tool for random read/write scenarios. \emph{Azure XDrive} provides a durable NTFS volume that is backed by a page blob. In particular, legacy applications that heavily utilize file-based storage can simply be ported to Azure using XDrive. The Azure Queue Service provides reliable storage for the delivery of messages within distributed applications. The queue service is ideal for orchestrating various components of a distributed application, for example, by distributing work packages or collecting results, which could be running on Azure or on another resource (e.g., a science cloud). Azure Table Storage is designed for storing structured data. 
Unlike traditional relational database systems, the table storage is designed for scale-out, low cost, and high performance, similar to Google's BigTable~\cite{1267323} system. For legacy applications, Azure also provides an SQL server-based relational datastore called SQL Azure. In contrast to Azure tables, SQL storage supports common relational database features, such as foreign keys, joins, and SQL as the query language. \subsubsection{Usage Modes} Azure provides several core services supporting various application characteristics and patterns. Compute-intensive tasks naturally map to worker roles. The communication and coordination between multiple role instances are commonly done via the Azure storage services or defined communication endpoints. Worker roles can run either .NET code or native code. The Azure Queue Service can support batch queue-style operations, namely, the subsequent execution of a set of tasks. The VMs containing Azure-based applications can be started on demand or in time to meet a deadline. More resources can be added at any time to meet a deadline. Azure resources can be accessed via a user portal as well as by different command line and GUI utilities, for example, Visual Studio and Eclipse. Many applications deploy custom portal applications that provide a domain-specific entry point. Other applications utilize just Azure resources. In some cases, Azure resources are also used in conjunction with grid/HPC resources, for example, to offload computation in order to meet a deadline. Loosely coupled, ensemble-based applications (e.g., parameter sweeps) that demand a large number of processors but do not require a low-latency interconnect are particularly well suited for Azure. Workflow-type applications, including applications based on the Windows Workflow Foundation, can be easily supported on top of Azure. Data-intensive applications are particularly well supported. 
Affinity groups \ifreport \else (discussed in Chapter~\ref{Chap:dataIntense}) \fi are used as abstraction for supporting the colocation of data storage and compute instances. On a more fine-grained level, data stored in Azure storage can be grouped by using a partitioning key. Entities that have the same partitioning key are guaranteed to be stored on the same server. Further, Azure provides direct access to a set of public data through the Azure Data Market~\cite{azure-data-market}. Scientific problems that do not require high-end HPC hardware and interconnects can be easily ported and scaled out on Azure. These applications can benefit from the ability to acquire and release resources on demand. An increasing number of applications therefore directly target distributed infrastructures such as Azure instead of high-end machines. For example, ensemble-based molecular dynamics approaches utilize multiple sets of simulations of shorter duration instead of a single, longer simulation to support a more efficient phase-space sampling. Single ensemble runs spanning up to 8 cores can be encapsulated into an Azure worker role. Such simulations often need to acquire additional resources---for example, if a certain simulation event occurs that requires the spawning of an additional replica. This type of application can greatly benefit from Azure's capability to dynamically allocate resources on demand. For this purpose, Azure provides a Service Management API, which gives applications programmatic access for acquiring and releasing resources. This capability is also useful for applications where the execution time and resource requirements cannot be determined exactly in advance, because of changes in runtime requirements or changes in application structure. 
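The demand-driven allocation described above boils down to a simple scaling rule. A sketch (plain Python; the per-worker throughput and bounds are illustrative, and the actual acquisition and release of worker-role instances would go through the Service Management API):

```python
def target_workers(backlog, tasks_per_worker=10, min_workers=1, max_workers=50):
    """Scaling rule: one worker per `tasks_per_worker` queued tasks,
    clamped to [min_workers, max_workers]."""
    needed = -(-backlog // tasks_per_worker)   # ceiling division
    return max(min_workers, min(max_workers, needed))
```

A monitoring loop would periodically read the queue length and request or release instances until the pool size matches `target_workers(backlog)`.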
For data-intensive applications, Azure provides several interesting storage options: XDrive offers file system access to the Azure Storage service, which is particularly relevant for applications that manage file-based data flows. Blob storage can store large amounts of data: a page blob, for example, can store files up to 1~TB. Blob storage supports two different data access patterns: block blobs are designed for continuous access, such as data streaming, while page blobs can address each of their constituent pages individually and are particularly well suited for random access. These properties can be mapped to the characteristics of the respective application; for example, a MapReduce application usually accesses data in large chunks, which is well supported by the block blob. \subsubsection{Successes and Limitations} A number of scientific applications that use worker roles for compute and/or data-intensive tasks have been ported to Azure. AzureBlast~\cite{azure_blast}, for example, relies on worker roles for computing bio-sequences. Lately, applications with more demanding coordination methods have also been ported to Azure; for example, the Replica-Exchange algorithm has been successfully ported to Azure using the BigJob framework~\cite{Luckow:2010uq}. The MODISAzure framework~\cite{Microsoft:2010fk} implements a four-step image pipeline, including a user portal for analyzing environmental sensor data from NASA satellites on top of Azure. Azure imposes scaling limitations. The largest supported VM has 8 cores, 14~GB of memory and 2~TB of disk space. Further, MPI applications currently cannot be run on Azure. Other clouds can run MPI jobs, but performance usually degrades significantly when jobs run across multiple VMs. \subsection{DAS\label{Sec:DAS}} The Distributed ASCI Supercomputer (DAS) is a Dutch distributed-computing platform aimed at computer science research. 
\minihead{History} The Dutch research school ASCI (Advanced School for Computing and Imaging) has set up four generations of the DAS system over the past 14 years. Each incarnation consisted of four to six clusters located at different universities, integrated into a single system. The systems have been used for over 60 PhD theses and for numerous large collaborations, including the 30-40M EURO knowledge infrastructure projects VL-e~\cite{VL-e} and MultimediaN~\cite{MultimediaN} and dozens of large national and European projects. The computer science research done using DAS has shifted focus over time, from cluster computing in DAS-1 starting in 1997, to distributed computing in DAS-2 starting in 2002, to grid computing and e-Science in DAS-3 starting in 2006, to hardware diversity and green IT in DAS-4 starting in 2010. \minihead{Mission/Vision} The purpose of DAS is to allow computer science experiments, for example, distributed experiments that use multiple clusters simultaneously; experiments that need high-speed optical networks; and experiments for which accurate, reproducible performance measurements are required. \minihead{Management} The DAS project is managed by a steering committee with staff members from all participating sites. The committee is in charge of making overall decisions about the infrastructure. In addition, ASCI has set up a team of highly skilled people (mostly scientific programmers) from all sites who are in charge of systems management. An attempt is made to simplify systems management as much as possible, which has proven to be a successful strategy, resulting in a stable and reliable environment. \minihead{Roadmap/Future} The most recent system, DAS-4, has been operational since October 2010 and will allow experiments with various types of accelerators such as GPUs, FPGAs, multiprocessor system-on-chip (MP-SoC), and many-core processors. DAS-4 consists of six largely homogeneous clusters extended with a variety of such accelerators. 
ASTRON (Netherlands Institute for Radio Astronomy) is a new partner in DAS-4 and brings in data-intensive astronomy applications. \subsubsection{Characteristics} DAS differs from production systems in many aspects. Foremost, it is designed to allow clean, laboratory-like experiments, as opposed to running large production jobs. The system therefore is largely homogeneous and uses the same processor type and operating system on all nodes. Also, nearly all clusters have the same local network (Myrinet in DAS-1 to DAS-3, InfiniBand in DAS-4). This simple design results in a reliable, easy-to-maintain system with reasonably reproducible performance. DAS is designed to allow distributed experiments that run on multiple clusters at the same time. Therefore, the load of the clusters is deliberately kept low: only short-running jobs (less than 15 minutes) are allowed during daytime. The system is optimized for the usefulness of the computer science research that can be done with it; the degree of utilization is not maximized (and to some extent is even ``minimized''). DAS-3 and DAS-4 have an optical private network interconnect called StarPlane, provided by SURFnet, linking the different sites with multiple, dedicated 10-Gbps light paths. An important goal of DAS-3 was to investigate how the topology of such an optical network can be changed dynamically. A photonic switch is being designed for DAS-4 that will allow topology changes within seconds. \subsubsection{Usage Modes} Over the years, three broad categories of patterns of usage that scale well on DAS have been identified: \begin{shortlist} \item \textit{Master-worker} or \textit{divide-and-conquer} patterns scale well because they generally have good locality and thus relatively little wide-area communication. Examples that have been investigated include medical image analysis, N-body simulations, SAT-solvers, gene sequence analysis, and automatic grammar learning. 
\item Applications with \textit{asynchronous high-throughput communication} perform well because they can do latency-hiding on the wide-area networks. The bandwidth of the wide-area network usually is less of a problem (especially given our optical interconnect). Examples include distributed model checking and search applications. Many measurements have been done with the DiVinE model checking system~\cite{divine} on wide-area DAS-3, demonstrating that much larger models can be validated on a grid than on a single cluster. Also, the Awari solver~\cite{awari} has been implemented on wide-area DAS-3~\cite{DAS3}. \item Applications with mixed task parallelism and data parallelism often also scale well because they can use (often fine-grained) data parallelism within a cluster and (more coarse-grained) task parallelism between clusters. The best DAS example is multimedia content analysis, with which many (award-winning) large-scale grid experiments have been done. \end{shortlist} DAS has developed its own programming systems, including Ibis, Satin, JavaGAT, SmartSockets, and KOALA: \begin{shortlist} \item Ibis~\cite{ibis2010} aims to dramatically simplify the programming and deployment process of high-performance grid applications. Its philosophy (``grids as promised'') is that grid applications should be developed on a local workstation and simply be launched from there on hostile grid environments that are dynamic and heterogeneous and suffer from connectivity problems. As an example, the CCGrid'08 Scalable Computing Challenge was won using Ibis to create ``scalable wall-socket multimedia grid computing.'' \item Satin is a programming system based (like Cilk~\cite{cilk}) on divide-and-conquer parallelism, which transparently handles resource failures and malleability. 
\item The Java Grid Application Toolkit (JavaGAT)~\cite{javagat} offers a set of coordinated, generic, and flexible APIs for accessing grid services from application codes, portals, data management systems, and so on. JavaGAT sits between grid applications and numerous types of grid middleware. \item The SmartSockets communication library~\cite{smartsockets} automatically discovers connectivity problems (due to firewalls, network address translation, nonrouted networks, multihoming) and solves them with as little support from the user as possible. \item KOALA~\cite{koala} is a grid scheduler that supports co-allocation of multiple clusters at the same time. Most DAS applications may run on multiple clusters simultaneously, over a short period. They need an efficient scheduler and support for I/O to stage the input and result files in and out. \end{shortlist} \subsubsection{Successes and Limitations} Several applications were described above. In addition, DAS-3 has been used for collaborations between computer scientists and application scientists, for example: \begin{shortlist} \item DAS-3 was used to analyze the computational characteristics of the multiphysics simulations published in \emph{Nature}~\cite{DASsupernova}. It was discovered that the brightest supernova ever recorded, SN2006gy, was the result of emergent behavior in a dense star cluster. \item The MultimediaN project has used DAS-3 to make a giant leap forward in the automatic analysis of multimedia data, resulting in multiple ``best performances'' in the international TRECVID benchmark evaluation for content-based video retrieval. Furthermore, MultimediaN researchers using DAS-2 and DAS-3 have earned a ``most visionary research award'' at AAAI 2007 and a ``best technical demo award'' at ACM Multimedia 2005. \item The HiRLAM weather forecast model has been run experimentally on wide-area DAS-3. This model is used by several European meteorological institutes for their daily weather forecasts.
For very high-resolution forecasts, which will need many processors from multiple clusters, the results are promising. \end{shortlist} The main limitation with DAS-3 was that it was difficult to do large-scale experiments with more clusters and nodes and to do experiments that need slower (long-haul) networks and more heterogeneity. For this reason, the DAS project collaborated with the French Grid'5000 project. The two systems were connected by a dedicated 10-Gbps light path, aiming to create a European-scale computer science grid testbed~\cite{bal2007}. Currently, hardware heterogeneity is being tackled with the introduction of various HPC accelerators in DAS-4. \subsection{DEISA} Resources from a distributed set of European HPC centers are integrated in the Distributed European Infrastructure for Supercomputing Applications (DEISA) \cite{deisa} to provide a common set of services for HPC users primarily in Europe. \minihead{History} The DEISA Consortium deployed and operated DEISA, cofunded through the EU FP6 DEISA project, from 2004 to 2008. The consortium has continued to support and further develop the distributed high-performance computing infrastructure and its services through the EU FP7 DEISA2 project with funds for another three years until 2011. \minihead{Mission/Vision} The mission of DEISA is to support European scientists through an integrated and unified infrastructure with remote, user-friendly, secure access to a European HPC service to solve big-science (grand challenge) problems. \minihead{Management} DEISA supports and enhances activities and services relevant to enabling applications, operations, and technologies, as these are needed to effectively support computational sciences in the HPC area. The DEISA Extreme Computing Initiative (DECI), launched in 2005, has regularly supported grand challenge projects to enhance DEISA's impact on the advancement of computational sciences. 
By selecting the most appropriate supercomputer architectures for each project, DEISA has opened up the most powerful HPC architectures available in Europe for the most challenging projects. This service provisioning model has been extended from single-project support to supporting virtual European communities. Collaborative activities have been carried out with new European and other international initiatives. \minihead{Roadmap/Future} Of strategic importance has been the cooperation with PRACE, the Partnership for Advanced Computing in Europe~\cite{prace}. PRACE has first prepared for the installation of a limited number of leadership-class Tier-0 supercomputers in Europe and is now building an ecosystem of Tier-0 and national Tier-1 resources.\footnote{In Europe, Tier-1 is the term used for national centers, and Tier-0 is the term used for pan-European centers. The use of these terms implies a pyramid, with a small number of Tier-0 centers at the top, a larger number of Tier-1 centers below, and possibly more tiers below that.} The key role and aim of DEISA has been to deliver a turnkey operational solution for such a persistent European HPC service, as suggested by ESFRI, the European Strategy Forum on Research Infrastructures (a strategic instrument to develop the scientific integration of Europe and to strengthen its international outreach). \subsubsection{Characteristics} DEISA has operated on top of national services. It includes the most powerful supercomputers in Europe with an aggregated peak performance of over 2 petaflops in 2010. The supercomputers are interconnected with a dedicated 10-Gbps network, based on GEANT2 and the National Research and Education Networks. DEISA has operated a high-performance global file system, facilitating data management and community access to data repositories.
Core DEISA services include single sign-on based on common authorization, authentication, and accounting; the provision and maintenance of the DEISA Common Production Environment (DCPE); and various middleware stacks. As a principle, all DEISA partners provide about 5\% of their national HPC resources for DEISA projects. In 2008, 66 proposals were submitted to DECI, from 15 European countries, involving co-investigators from North and South America, Asia, and Australia. A total of 134 million normalized CPU-hours\footnote{DEISA normalizes CPU-hours so that resource requirements can be compared with systems with CPUs of varying capability. DEISA has chosen to use an IBM P4+ CPU-hour as its normalized unit.} were requested. Of these, 42 proposals were accepted, using 48 million normalized CPU-hours. In addition, 8 million CPU-hours were awarded to science communities. These projects were executed in 2009. In the next DECI call (DECI-5 for 2010 access), 69 million CPU-hours were awarded to 50 projects; and in DECI-6 (for access in 2010 and 2011), 91 million CPU-hours were awarded to 56 projects, in addition to another 12 million CPU-hours awarded to science communities. \subsubsection{Usage Modes} The DEISA infrastructure essentially supports large, single-site capability computing through highly parallel batch jobs. Proposals for grand challenge computational projects are peer reviewed for scientific excellence, innovation potential, international aspects, and national preferences. The best suited, and, when required, most powerful supercomputer architectures are selected for each project. DEISA also supports multisite supercomputing for many independent supercomputer jobs (e.g., parameter sweeps) through various technical means (e.g., UNICORE~\cite{unicore}, DESHL, Globus~\cite{globus}, Application Hosting Environment~\cite{ahe}), using the DEISA global file system and its single name space. Data management is also supported via GridFTP. 
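The normalization idea behind these figures can be made concrete with a small sketch. The per-system scaling factors below are purely illustrative (DEISA's actual factors, relative to an IBM P4+ CPU-hour, are system-specific and not given here); the point is only that raw hours on machines of different capability become comparable after scaling.

```python
# Illustrative sketch of CPU-hour normalization as described above.
# The factors are hypothetical: normalized (IBM P4+ equivalent) hours
# obtained per raw CPU-hour on each system type.
P4_PLUS_FACTORS = {
    "ibm-p4+": 1.0,    # the reference unit itself
    "cray-xt": 1.8,    # hypothetical relative capability
    "bluegene": 0.4,   # hypothetical relative capability
}

def normalized_cpu_hours(raw_hours: float, system: str) -> float:
    """Convert raw CPU-hours on `system` into IBM P4+ equivalent hours."""
    return raw_hours * P4_PLUS_FACTORS[system]

# A proposal's requirements on different systems become directly comparable:
request = [("cray-xt", 1_000_000), ("ibm-p4+", 500_000)]
total = sum(normalized_cpu_hours(hours, system) for system, hours in request)
print(total)  # 2300000.0 normalized CPU-hours
```

With such a scheme, a single DECI call can rank proposals targeting heterogeneous machines against one common budget, which is what the normalized totals quoted above express.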
DEISA supports mainly four application usage modes: single-job parallel programs for efficient usage of thousands of processor-cores (including ensembles, namely, multiple copies of one application with different input parameters), data-intensive applications with distributed file system support, workflow applications to combine several compute tasks (simulation, pre- and post-processing steps), and coupled applications. The DEISA system addresses these modes by job management and data management services developed with the distributed nature of the system in mind. The job management service is realized by a user interface for submitting jobs and workflows to distributed compute systems. Alternatively, users can log in to a target system and submit jobs directly, which is what is done for the vast majority of DEISA jobs. Workflow management, currently based on UNICORE, enables the coordinated execution of multiple interdependent subjobs running on various platforms. The data management service has been based on IBM's Global Parallel File System (GPFS). DEISA members provide access to this DEISA-wide shared file system to enable users to access their data transparently from every partner site. In addition, for systems not capable of attaching to GPFS directly, GridFTP, a component of the Globus Toolkit, is used to transfer data. \subsubsection{Successes and Limitations} DEISA has had three strong successes. First, it has created a unified infra\-structure for accessing the most powerful European HPC systems, using grid middleware such as Globus and UNICORE (single sign-on). The same middleware is also being used in EGI's HTC systems (see~\S\ref{egee}), and it thus allows users to satisfy their growing computer needs from HTC to HPC without having to change their access methods. 
Second, through DEISA, the consequences of Moore's law have been mitigated for many countries with only one or no national supercomputer center, since a supercomputer at the end of its roughly five-year productive lifetime is hardly still usable for leading-edge computational science projects. Third, DECI has proven to be successful, and a large amount of science has been done~\cite{DEISA_DIGEST_2010}. Additionally, DEISA has been a single contact point for supercomputer time allocation all over Europe, which simplifies proposals for users and allows centers to optimize computer time usage (aiming to direct projects to the ``best suited execution site''). This is a success, as it gives access to HPC resources to researchers who would otherwise not have access to these computers, but it is also a limitation in some sense, as it makes it difficult to use multiple systems or to metaschedule applications for best time to solution. Another limitation of DEISA is that there is no coscheduling service (also, no advance reservations) for all sites. Some tools (e.g., HARC~\cite{harc}) have been evaluated, but none are widely deployed in DEISA. This is not a technical problem but has to do with the way the HPC resources are used, namely, for rather long-running, large jobs (which is different from HTC resources). Furthermore, HPC resources are often overbooked (i.e., loaded to close to 100\%); using advance reservation would cause lost time by having to block resources to satisfy the advance reservation. This is not as problematic in an HTC setting, where small jobs can be used for backfilling. The limited usage of the component resources as part of DEISA is also a problem. The hardware resources for the European supercomputing infrastructure are funded from national budgets of the member states; Europe does not provide central funding for supercomputers or ensure persistence for a European HPC infrastructure.
Therefore, DEISA includes only a fraction of these nationally funded resources. \subsection{EGEE and EGI\label{egee}} The Enabling Grids for e-Science \cite{egee} project supported a multidisciplinary research community that primarily performs high-throughput data analysis using a distributed storage and computing infrastructure built from multiple resource providers operating in different administrative domains, including supporting the Worldwide LHC Computing Grid in Europe. Access to this infrastructure was provided through a software layer (middleware) that abstracted the distributed resources through a service-oriented architecture (SOA) into an environment that could be used as a platform for high-throughput data analysis. The middleware distribution used within the EGEE project was gLite~\cite{gLite}, an assembly of software components developed within the project and by its collaborators. EGEE is no longer active; it was recently replaced through the European Grid Initiative (a community-driven process with the aim of establishing a sustainable European infrastructure) to provide the European Grid Infrastructure (EGI). The EGI-InSPIRE project has supported the EGI~\cite{egi,egiURL} since May 2010, and during its first year has focused on the transition from a regional to a national operational structure. This section mainly describes EGEE. \minihead{History} EGEE had its origins in the European Data Grid (EDG) project that ran between 2001 and 2004. EDG's main role was to prototype the technologies needed to build a European grid infrastructure and to bring together the groups providing the resources, constituting the user community, and building the technology components. As a result of this successful prototyping activity, the EGEE projects (EGEE-I and EGEE-II ran between 2004 and 2008) funded by the European Commission's Framework Programs were established to move the experimental grid infrastructure to production quality. 
This goal was successfully achieved, and the EGEE-III project continued operating the production infrastructure and preparing for its transition to a sustainable structure for future production operation (EGI), while supporting a multidisciplinary community of 13,000 users across the high energy physics, life sciences, astronomy, astrophysics, computational chemistry, Earth sciences, fusion, and computer science domains. \minihead{Mission/Vision} EGEE's mission was twofold: (1) to provide a generic production-quality grid infrastructure that was continuously available to reliably support multiple user communities and (2) to provide an integrated pool of resources to researchers in Europe and their international collaborators. The focus in EGI now is primarily on the operational infrastructure delivered in collaboration with national grid initiatives and European intergovernmental research organizations, which are seen as the main building blocks of long-term sustainability. \minihead{Management} EGEE's management structures were focused on two issues: the overall direction and management of the project, which had activities beyond just running the infrastructure, and the delivery of the production grid infrastructure itself. EGEE was managed on a daily basis by the managers of each activity within the project, encompassing dissemination, training, user community activities, operations, networking, software integration, and software development. This approach ensured regular coordination among all the activities at a managerial level to resolve any technical issues. The delivery of the operational production infrastructure was managed through regional operational centers (ROCs). ROCs integrated the resources within a single country (e.g., Italy) or across a large number of countries (e.g., central Europe or southeast Europe).
Within each ROC, operational teams monitored the state of their federated resources, identified performance or failed services, and raised ``trouble tickets'' with the relevant resource providers to trigger resolution of these problems. Within EGI, these management structures have evolved to clearly defined coordination functions established within a new dedicated organization~\cite{egiURL} that federates an operation infrastructure contributed by over 35 European national resource providers that have replaced the regional model established within EGEE. \minihead{Roadmap/Future} EGI, a collaboration rooted in the EGEE community and related regional infrastructure projects such as BalticGrid, SEE-Grid, and the Nordic DataGrid Facility, is now coordinating the provision of a European-wide production infrastructure integrated with production infrastructures around the world as required by its user community, open to all disciplines. This moves the support of the infrastructure from a series of short-term projects to a model that is more sustainable long-term by leveraging established national and domain-specific infrastructures. Part of the goal of EGI is to provide greater integration between high-performance, commodity computing (grids) and volunteer desktop resources and to include new resources such as cloud computing, as increasingly demanded by its users. Ideally, a single authentication token and interoperable software distributions (coordinated by the European Middleware Initiative, or EMI) will eventually provide secure, controlled, integrated access to all resources regardless of type and irrespective of the provider being run by a local, national or international body. Progress on these two aspects will provide the integrated e-infrastructure (or cyberinfrastructure) that has been the vision of this community over the past decade. 
The choice of a set of interoperable middleware stacks (gLite, UNICORE, and ARC) that are supported by EMI and by the Initiative for Globus in Europe (IGE), rather than a single, monolithic distribution, was made because different user communities (including new communities that EGI wants to attract) have different needs that can best be met through different technologies. Additionally, while some solutions are comparable, they may be adopted by different sites or countries for nontechnical reasons. For sustainability, most of the larger EGI sites will likely end up having to support multiple communities and so will have to support multiple stacks. The integration and harmonization activities being undertaken within the EMI project may reduce the number of stacks that eventually need to be deployed. \subsubsection{Characteristics} EGEE supported a user community that ran applications from research domains as diverse as multimedia, finance, archaeology, and civil protection. The users benefited from a federated distributed computing infrastructure that operated around the clock across approximately 300 sites in 50 countries, encompassing 140,000 CPU cores and many petabytes of on- and near-line storage. The applications run on EGEE focused on the computational analysis and generation of data stored within the EGEE infrastructure. In some cases, this data was stored remotely from where the analysis was performed; mechanisms were provided to move the data or place the computational analysis near to the data location. In other cases, data was replicated throughout the grid, allowing jobs to retrieve data from or locate themselves near a particular copy. While many of the applications were executed on a single core, support was also provided for parallel applications (MPI) on resources that were enabled to support this workload. 
The following were key aspects of EGEE: \begin{shortlist} \item Exposing the grid resources: computing and storage elements hosted by the resource providers that were part of EGEE advertised their resources in the information index. \item Controlled access: not every community or project, represented by one or more virtual organizations, had access to every resource within the grid. An individual's role in a virtual organization was managed through a service (VOMS) that specified the roles a user had within that organization. \item Consistent availability: the grid fabric was monitored to ensure its availability through tests that were, in addition, able to determine the version of the installed software. \end{shortlist} \subsubsection{Usage Modes} The key function of EGEE was to manage data files located on storage elements throughout the grid. Data files could be registered in a file catalogue where their physical location could be mapped from a logical name. Multiple physical copies of a data file could be distributed within the grid, mapped from a single logical name. Physical data files could be moved between storage elements, which could encompass temporary or permanent disk caches or near-line tape storage, as part of the data analysis. Applications were then deployed on EGEE and used to analyze the data. The EGEE Grid infrastructure (using the gLite middleware) was developed to support high-throughput computing, where the work could be based around the movement of data (files) as part of a computational analysis workflow. Work could be submitted directly to a computational element by a user or through the Workload Management Service (WMS) that selected a resource according to the requirements of the application specified by the user and the then available resources that the user had access to through virtual organization membership(s). Physical copies of a logical data file could be located through a file catalogue. 
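The logical-to-physical mapping at the heart of this file catalogue can be sketched in a few lines. The names and structure below are illustrative only, not the actual gLite catalogue interface; the sketch shows just the core idea: one logical file name mapping to several physical replicas on different storage elements.

```python
# Minimal sketch of a replica catalogue: a logical file name (LFN)
# maps to the list of physical copies registered for it.
catalogue = {}  # LFN -> list of physical replica URLs

def register(lfn: str, replica_url: str) -> None:
    """Register a physical replica under a logical file name."""
    catalogue.setdefault(lfn, []).append(replica_url)

def replicas(lfn: str) -> list:
    """Return all physical copies mapped from the logical name."""
    return catalogue.get(lfn, [])

# Multiple physical copies of one logical file, distributed in the grid
# (hypothetical storage-element URLs):
register("lfn:/grid/lhcb/run42.dat", "srm://se1.cern.ch/data/run42.dat")
register("lfn:/grid/lhcb/run42.dat", "srm://se2.in2p3.fr/data/run42.dat")

# A job resolves the logical name and can then pick a nearby replica:
print(replicas("lfn:/grid/lhcb/run42.dat")[0])
```

Because jobs refer only to logical names, copies can be added, moved, or removed without touching the analysis code, which is what allows jobs to locate themselves near a particular copy of the data.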
The movement of files was coordinated through a file transfer service that enabled policy to be imposed on the use of dedicated network paths linking the transfer sites. \subsubsection{Successes and Limitations} EGEE provided a production-quality infrastructure to its community. It supported the four experiments using the Large Hadron Collider, the life sciences community through medical imaging, bioinformatics and drug discovery,\footnote{See, for example, the recent WISDOM experiments \url{http://www.isgtw.org/?pid=1000993}.} and many other application communities.\footnote{See ISGTW for other examples: \url{http://www.isgtw.org/}.} EGEE collaborated with OSG to provide interoperating federated infrastructures that could be used transparently by the LHC experiments' software. EGEE found limitations in scalability, reliability, and efficiency, which it worked to overcome during its seven-year, multinational development effort. \subsection{FutureGrid} \minihead{History} FutureGrid was funded by NSF's Office of Cyberinfrastructure as a result of a proposal submitted in November 2008. It started October 2009 with a four-year budget of \$15M. \minihead{Mission/Vision} The goal of FutureGrid is to support research on the future of distributed, grid, and cloud computing by building a robustly managed simulation environment or testbed to support the development and early use in science of new technologies at all levels of the software stack: from networking to middleware to scientific applications. The environment will mimic TeraGrid and/or general parallel and distributed systems. This testbed will succeed if it enables major advances in science and engineering through collaborative development of science applications and related software. FutureGrid can be considered as a small science/computer science cloud, but it is more accurately a virtual-machine-based simulation environment. 
In many ways, it was conceptually based upon Grid'5000 but is not encumbered by the requirement and responsibility to support production usage. Consequently, FutureGrid is unusual among the infrastructures that we discuss in this \ifreport report. \else chapter. \fi Although experimental in its early stages, there is a clear trajectory to making FutureGrid a part of the US national cyberinfrastructure. Specifically, it is planned that the FutureGrid research testbed will ``open up'' and become part of XSEDE in fall 2011. \minihead{Management} FutureGrid is a partnership of Indiana University (lead, architecture, core software, support), Purdue University (HTC hardware), San Diego Supercomputer Center at the University of California San Diego (monitoring), University of Chicago/Argonne National Laboratory (Nimbus), University of Florida (ViNE, education and outreach), University of Southern California Information Sciences Institute (Pegasus to manage experiments), University of Tennessee Knoxville (benchmarking), University of Texas at Austin/Texas Advanced Computing Center (portal), University of Virginia (OGF, advisory board and allocations), and Center for Information Services and GWT-TUD from Technische Universit\"{a}t Dresden, Germany (VAMPIR). FutureGrid hardware totals about 5,000 cores, located at Indiana, Purdue, Chicago, Florida, Texas, and San Diego. It has a dedicated network (except to Texas) that can be isolated, and it features a programmable network fault generator. In the initial phase, high-level decisions are made by the co-PIs. There are seven working groups covering operations and change management; performance and monitoring; software; system administration and networking; training, education, and outreach services; user requirements; and user support. These groups report biweekly to NSF and the co-PIs. There is a weekly phone call between all collaborators.
Since this is an experimental/research testbed, the focus is less on supporting all users (as TeraGrid does) and more on specific requirements and understanding the limitations of existing capabilities to support these requirements. The management structure reflects this design feature. \minihead{Roadmap/Future} Formal early use of FutureGrid started in April 2010, and it remained in early usage mode for much of 2010. However, experimental usage has been increasing, with the number of supported projects crossing 25. Standalone production began in November 2010, and FutureGrid is planned to be integrated with XSEDE's other systems in late 2011. \subsubsection{Characteristics} The system mimics TeraGrid, with a distributed set of conventional clusters as well as systems specific to TeraGrid. Currently the clusters are four IBM iDataPlex systems and a Dell cluster at Texas. There is also a small Cray XT5 and an HTC Condor pool; other specialized systems will be added. Users can request environments that are either VM-based or bare-metal, with both Linux and Windows. \subsubsection{Usage Modes} FutureGrid can be used for developing new applications and software systems probing a variety of interests, including distributed or parallel systems, multicore technologies, and cloud and MapReduce programming paradigms. Users can request a distributed collection of resources that can be dynamically configured using IBM's xCAT software. In general, FutureGrid will allow repeatable system experiments and reliable performance measurements comparing VM and bare-metal environments. FutureGrid will support both research and education. Early uses are expected to include new computer science and computational science classes that can exploit the special features (e.g., the isolatable network and cloud architecture) of FutureGrid.
\subsubsection{Successes and Limitations} A major goal and success of FutureGrid is the support of cyberinfrastructure developers and users who traditionally have not been major users of TeraGrid/XSEDE. Over half the projects on FutureGrid have a computer science focus, while computational biology~\cite{futuregrid1} is the most frequent domain science focus for the other projects. Project goals cover interoperability~\cite{futuregrid2} (including standards-based approaches such as Genesis and SAGA), technology evaluation (e.g., for adoption of tested technologies by TeraGrid/XSEDE), programming models (e.g., iterative MapReduce), education~\cite{lsu_scicomp} (with semester-long classes), and computer science and domain sciences. The richness and novelty of FutureGrid offerings created unexpectedly large demands on systems management and user support, leading to staffing shortfalls. User support for FutureGrid projects is often end-to-end, not simply issue- or ticket-based; this was reflected in changes to the user and project-support structure in late 2010. Additionally, the original architecture for FutureGrid was developed based on initial and predicted use-cases; the actual uptake has been somewhat different, and several original features have not been exploited, such as the network interrupt capability. It is ironic that a technology aimed at giving large data centers efficient operation and reduced support costs itself needs above-average support. The FutureGrid project also did not take advantage of the drastic decrease in disk cost between preparing the FutureGrid proposal and placing system orders, so the FutureGrid systems are underprovisioned in disk space per node. \subsection{Grid'5000} Grid'5000~\cite{grid5000} has been designed as a highly reconfigurable experimental testbed for large-scale distributed systems.
It includes more than 5,000 cores in clusters at nine sites across France, connected by a network with dedicated capacity. \minihead{History} Preparation for Grid'5000 began in 2003 with a series of interviews of 10 research groups active in grid computing in France. These 10 groups described 100 potential experiments. In general, the experiments were diverse in their infrastructure needs, a situation that was reflected in the design of the infrastructure, which entered production in 2005. \minihead{Mission/Vision} Grid'5000 is designed to support experiment-driven research in all areas of computer science related to parallel, large-scale, or distributed computing and networking. Experiments that use Grid'5000 should lead to results in those research fields and use the resources as a model for the use of nonacademic resources. Available resources can be used in a low-priority mode to generate useful results for other communities, especially if this generates results that are also relevant to the main research fields of Grid'5000. The initial Grid'5000 machines are distributed across nine sites, a side effect of the way the construction of the Grid'5000 was funded. Because securing resources for large-scale experiments (at least three sites and 1,000 CPUs) can be difficult in the absence of specific rules and because these experiments are a driving factor for a multisite instrument, such experiments are favored by Grid'5000. Nevertheless, research at a smaller (local) scale is also welcome. \minihead{Management} The Grid'5000 executive committee (the scientific director, the deputy scientific director, the technical director, representatives from each Grid'5000 site, and a representative of RENATER, the French National Research and Education Network provider) meets once a month by teleconference. 
Directions for the technical team's work are laid out in a document written in 2008 for the next four years under the technical director's leadership and reviewed by the executive committee. This document allocates resources to the technical team, and an updated workplan is submitted every year using the same process. A steering committee, representing the funding institutions, meets once a year to review the board of directors' actions and to give recommendations on the directions to take. \minihead{Roadmap/Future} Grid'5000 has become a tool for everyday work for the research community in France, and it has been classified as a very large research infrastructure by the French Ministry of Research. The institutional context of Grid'5000 is evolving to ensure the sustainability of Grid'5000 and especially the renewal of the hardware used to run some sites. Specifically, three major activities are under way. The first is work on the network links between sites, to enable bandwidth reservation and measurement at a fine-grained level. The second is extending Grid'5000 to new sites. A memorandum of understanding has been signed with Porto Alegre, Brazil, and additional sites are in preparation. The third is development of an API to improve the scriptability of working on Grid'5000. \subsubsection{Characteristics} Grid'5000 comprises a number of sites interconnected by a dedicated network. A Grid'5000 site has two attributes: (1) a single LAN with a frontend and (optionally) an access machine, a server for the resource scheduler and a server for deployment, an NFS server and a DNS server, and a route to the interconnect network; and (2) one or more clusters of machines. The objective is to have at least 128 nodes per site. A site manages Grid'5000 machines and possibly other resources. These other resources are considered to be outside Grid'5000 but are integrated in the site (with the same accounts and same resource scheduler) because they are useful to the community.
The resources of a site are static and are described in the resource scheduler's database. Thus, volatile sites are excluded from Grid'5000; sites are either available or undergoing maintenance operations. Requiring that resources on a site be static avoids having to manage dynamic addition and removal of resources and limits the complexity of the testbed for users. For sites that want to share resources with other projects, the periods during which the resources are made available to those projects must appear as reservations, and their existence must be negotiated with Grid'5000. A Grid'5000 system has the following properties: it is exclusively available in the Grid'5000 context; it can be allocated to users without requiring the use of specific properties during job submission; and it is managed by kadeploy~\cite{kadeploy} in a reliable way---that is, it can be managed remotely (reboot, power-off, power-on, etc.). Some systems can have unusual properties, and therefore the resource scheduler can be configured so that these systems are allocated last by default. Moreover, some users can be given higher priority to access these specific systems if required for day-to-day work. Accounts are requested by users at one of the sites participating in Grid'5000 and are approved by the site's chief scientist. This approach gives users complete access to all the resources of Grid'5000 at all sites without any usage quotas, as well as disk space on the NFS server serving home directories on each site. A tool tracks usage and relates it to reports that users have to update regularly. The reports describe planned usage, current usage, and results obtained using Grid'5000 and are published on the website. In 2009, Grid'5000 was used by 572 different people, with an average of 272 different users over a three-month period. Of these 272 unique users, one-third used three or more sites on the same day.
{\it Network}: Because reproducibility of experiments is a goal for Grid'5000, the network interconnect is dedicated, ensuring that the only perturbations seen in the interconnect links are those generated by the testbed. Because experimenting with the network layer of large-scale distributed systems, including testing new protocols, is a goal for the testbed, the interconnection provides a layer 2 service. The first generation of interconnect used Ethernet over MPLS-VPNs between all sites. It was a full mesh topology based on MPLS tunnels established between the RENATER POPs and the Grid'5000 sites. In practice, sites were interconnected through 1-Gbps VLANs. The current version uses a dark fiber infrastructure allowing for 10-Gbps links. With this infrastructure, Grid'5000 sites are directly connected to switches inside RENATER POPs and see each other inside the same VLAN. {\it Site independence}: Grid'5000 systems do not have special provisions to guarantee high availability. Demands on electricity, network, and cooling equipment are such that machines at any one site can remain unavailable or unconnected to the others for a few consecutive days every year because of maintenance or upgrade operations. Such operations should have only minimal impact on the availability of resources. Of particular importance are the following: \begin{enumerate} \item Machines hosted on a site that has no Grid'5000 network connection to the other sites should still be usable by all users who have access to the site using an out-of-band network connection. This particularly concerns users from the site hosting the machines. \item If any site has no Grid'5000 network connection, the other Grid'5000 resources should remain usable, even to users at that site who have access to other sites using an out-of-band network connection, through public access points for example. \end{enumerate} This design decision has proven valuable for the day-to-day availability of Grid'5000 resources.
It also has a profound impact on account and resource management. For account management, this design decision implies the following: \begin{itemize} \item A distributed architecture is needed for authentication and authorization. A master LDAP server holds all account information and is replicated on a slave server on each site. This slave server remains functional even after having lost its connection to the master server for a few days. \item A different home directory must exist on each site for a given user. No automatic synchronization is provided to users, but one of the first tutorials explains to users how they can synchronize their data. \end{itemize} For resource management, this design decision implies the use of an independent resource scheduler on each site. This, in turn, leads to co-allocation problems when users try to run experiments spanning multiple sites. To handle this problem, advance reservation of resources is required from the resource scheduler. {\it Reconfiguration}: At the core of the Grid'5000 concept is the reconfiguration of resources by users. The motivation is to have an instrument on which all existing grid middleware can be deployed by users and therefore compared. In the current iteration of the infrastructure, only reconfiguration of the software stack of nodes is possible. Users have to choose one node on each site they use to act as a head node for their experiment, if applicable, to mimic a classical grid environment. The concept is to give users complete control of Grid'5000 nodes by allowing them to deploy their complete environment, including the operating system or hypervisor, on the nodes they are allocated. This is done by changing the contents of the hard drive on these nodes, as well as the nodes' PXE directives, used to boot using kadeploy.
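The co-allocation problem mentioned above under resource management, finding a start time at which every independently scheduled site can honor an advance reservation, can be illustrated with a small sketch. The data structures and function below are hypothetical (actual Grid'5000 reservations go through each site's own resource scheduler); the sketch only shows why advance reservation makes multisite experiments schedulable:

```python
from dataclasses import dataclass

@dataclass
class Slot:
    start: int  # hour at which the site's nodes become free
    end: int    # hour until which they stay free

def earliest_common_start(site_slots, duration):
    """Return the earliest start time at which every site has a free
    window of at least `duration` hours, or None if none exists.
    `site_slots` maps a site name to its list of free Slots."""
    candidates = sorted(s.start for slots in site_slots.values() for s in slots)
    for t in candidates:
        if all(any(s.start <= t and t + duration <= s.end for s in slots)
               for slots in site_slots.values()):
            return t
    return None

# Free windows advertised by three independent (hypothetical) site schedulers.
slots = {
    "rennes":   [Slot(8, 12), Slot(18, 24)],
    "grenoble": [Slot(10, 23)],
    "sophia":   [Slot(9, 14), Slot(19, 23)],
}
print(earliest_common_start(slots, 3))  # → 19
```

A real scheduler must also account for node counts and competing reservations; the point here is only that without advance reservations at each site, no such common window can be guaranteed.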
As user-deployed environments are, by definition, not controlled, the reconfiguration tool cannot make assumptions about the deployed operating system and might not even be able to log into the environment at the end of a job to restore the node to a default state. Therefore, reconfiguration requires hardware support, in the form of management cards on the nodes. Grid'5000 has found that these management cards need their own independent access to the network. Grid'5000 provides either a default environment or a seed environment to users that boots on all Grid'5000 hardware. Users can customize the seed environment according to their needs. \subsubsection{Usage Modes} Grid'5000 was built to be used for a wide variety of experiments on large-scale distributed systems. (An experiment is typically composed of one or more jobs running on Grid'5000's clusters.) One of the key issues is how experiments are prepared and run. In Grid'5000, users develop and debug their experiments during normal work hours, with all the resources available for large-scale experiments during the night and on weekends. These resources can be viewed as a network of workstations in reverse: all machines are part of a local cluster but are made available to individual users during work hours. This approach helps build an infrastructure with a very large number of nodes. The policy applies locally to each cluster, which shortens the window during which all resources can be used at the same time for a single experiment. The target life cycle of an experiment involves three phases: (1) develop the software stack to experiment on a few machines on one or more sites, in interactive mode during normal work hours; (2) automate the running of the experiment by developing ad hoc scripts; and (3) run the experiment in batch mode using an increasingly large number of resources, outside normal work hours.
This life cycle has proven difficult to promote, however, because the resource scheduler implements both interactive jobs and advance reservations, and users tend to skip the second step. Users tend to simply reserve in advance an increasingly large number of nodes to run their experiments interactively, preferring to stay long hours rather than scripting the experiments. Grid'5000 has two modes: submission, in which the user submits an experiment and lets the scheduler decide when to run it, and reservation, in which the user makes a reservation to run an experiment at a specific time and then, at that time, launches the experiment interactively. A second class of experiments has also emerged: CPU-hungry users who are eager to fill any gap in the scheduling of resources to run a specific experiment. Those users are allowed to use resources in best-effort mode, where their job will be killed if anybody requests the resources. No infrastructure has been built to cater to their specific needs, and this situation could be problematic for fair sharing between these experiments. For the time being, this is handled in an ad hoc fashion, where users must seek approval of the experiment in advance to discuss the way this sharing will be implemented. {\it Principles}: Grid'5000 is a shared system, used by many people with different needs. The administrators pursue two objectives. First and most important, they want to make Grid'5000 available to experiments involving a significant number of nodes (in the thousands). In order to make this possible, reservation fragmentation must be avoided as much as possible. Second, they seek to keep Grid'5000 available during the day for the development of experiments. Therefore, reservations using all the nodes available on one site during work hours for that site should generally be avoided. \subsubsection{Successes and Failures} More than 600 experiments have been executed on the platform since it was made available to the community.
These have led to more than 400 publications in international journals and conferences and over 30 PhD theses defended. Many experiments used more than five sites and more than 1,000 nodes. From low-level network protocols to ``classical'' application parallelization and large-scale validation of grid middleware, Grid'5000 has become a highly valued evaluation platform for computer science. Some records were broken using Grid'5000. For example, the prime factors of the RSA challenge number RSA-768 were obtained, using the Number Field Sieve, by an international team of scientists from EPFL (Switzerland), INRIA (France), NTT (Japan), CWI (the Netherlands), and Bonn University (Germany). The calculation took less than 2,000 core-years on modern CPUs (including the nodes from the Grid'5000 platform). {\it Diversity of sites and cultures}: As many grid projects have found out, one of the most difficult tasks when building a distributed architecture is to get local cultures to converge. This is especially true for system administrators. Ideally, all system administrators should be able to help manage distant sites. But this approach can be efficient only if all sites share a common architecture and server distribution, which is not possible because each site depends on an independent administration and local funding. This in turn clashes with local strategies, as Grid'5000 site administrators also split their time administering the other machines of their laboratories. Efficient support, in terms of manpower use or of quality of service, for a distributed testbed remains an open issue. In the first Grid'5000 phase, every site had to find local manpower to manage the Grid'5000 machines hosted locally. All system administrators had to install and configure every needed service for their own site and often applied local strategies for administration.
It could therefore take time for an update to be applied on all sites, thus reducing the coherence of Grid'5000 as experienced by its users. Moreover, this organization encourages system administrators to think locally. In the second phase, a dedicated system administration team of five people was created, with access to eight of the nine sites. This eases the quick deployment of updates as well as a ``think global'' attitude to system administration. Nevertheless, physical access to the machines is frequently needed; and for sites with no member of the team present, complex interactions with local staff are necessary. This second phase has increased the automation of tasks and could lead to a third phase where system administration tasks are automated using a central configuration management tool. One could then imagine part-time system administrators on all sites and a core team to manage all sites. The drawback of this strategy is that it could kill local knowledge of cluster and experimental machine administration on sites not hosting the core team. {\it About usage patterns}: Because one of the aims of Grid'5000 is large-scale experiments, resource fragmentation into many small experiments is a major concern that is exacerbated by long-running jobs. As people attempt to share the cost of clusters between Grid'5000 and other projects, incompatible usage patterns are often an issue. The policy of Grid'5000 is to have frequent but short periods of time when all the resources can be given to a single user. This can prevent the effective sharing of resources with users who want to run jobs that last a week or more. Understanding the expected usage patterns of the target users and setting up rules to enforce them have proved crucial for the success of Grid'5000. \section*{Context} The material in this report is a draft of a large part of Chapter 3 of ``Abstractions for Distributed Applications and Systems,'' a book being written by Shantenu Jha, Daniel S.
Katz, Manish Parashar, Omer Rana, and Jon Weissman, to be published by Wiley in 2012. This report primarily covers production distributed computing infrastructures that have been used to develop and deploy large-scale scientific applications. We define a production distributed computing infrastructure as a set of computational hardware and software, in multiple locations, intended for use by multiple people who are not the developers of the infrastructure. We observe that the time scales over which scientific applications are developed and used are typically qualitatively longer than the time scales over which the underlying infrastructure tends to evolve. For instance, the middleware used and the services and interfaces offered by many distributed computing infrastructures have changed over recent years because of changes in providers and other technical, political, and funding reasons. Additionally, some of the commercial infrastructures themselves have developed relatively recently. However, one component of this landscape has essentially remained the same: scientific applications and the methods most commonly used to develop them. The relatively slow evolution of scientific applications is both an opportunity and a challenge. It is a challenge in that once applications are developed, they are hard to modify and adapt to changes in infrastructure. It is an opportunity in the sense that if we can design and architect scientific applications correctly, they will be immune to shifts in the underlying infrastructures! Given the many changes in academic computing infrastructures the world over, and the fast evolution of commercial infrastructures, this report is an attempt to provide a topical and focused analysis of distributed computing infrastructures.
The book from which this report originated provides: (i) a critical assessment of a number of existing scientific applications and infrastructures, to identify gaps between application requirements and the abstractions and capabilities provided by the current generation of systems and infrastructure; (ii) a survey of 13 application case studies; (iii) a survey of coordination abstractions and infrastructures currently employed by distributed applications, in particular identifying mechanisms that may have benefit for future applications (in addition to those surveyed); and (iv) a survey and assessment of abstractions and infrastructures within the emerging area of data-intensive applications. The book is, in part, a consequence of what we perceive to be a lack of sufficient connection between (i) the theory of scientific application development and (ii) the theory and practice of deployment over distributed systems. To write this report, we posed the following questions: \begin{enumerate} \item What is the purpose of your system? \item What are the main characteristics of your system? \item What common patterns and usage modes does your system support? \item What are the common usage modes for applications that use (or will use) your system? \item How does your system address the usage modes that you have identified? \item What types of applications and users have been successful in using your system? \item What are the limitations in the use of your system (i.e., where has your system not been successful)?
\end{enumerate} \noindent to a set of contributors who were knowledgeable about the various infrastructures (Paul Avery, Henri Bal, Geoffrey Fox, Wolfgang Gentzsch, Helmut Heller, Adriana Iamnitchi, Scott Lathrop, Hermann Lederer, Andre Luckow, David Margery, Steven Newhouse, Ruth Pordes, and David Wallom), and then adapted their responses as the starting point for the text in sections \ref{sec:pdis}, \ref{sec:rdis}, and \ref{sec:cdis} of the report. (Of course, any errors are our responsibility, not the responsibility of the contributors.) We then wrote the other sections of the report to analyze and integrate the sections based on the contributed material. \input{infrastructures} \newpage \subsection*{Objectives} This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. A specific infrastructure we do not discuss is that of the US Department of Energy, because it is not really a unified infrastructure in the same sense as those we do discuss. Rather, it is a set of independently managed resources, connected by a high-bandwidth network. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications.
The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures. \subsection*{Motivation} To analyze why an infrastructure was put together and made available, we need to understand the overall design decisions and design considerations. We know that these are driven by several factors, including politics and funding, expectations of which applications will be run on the infrastructure and of who the users will be, and the desire of the infrastructure providers to try out new technologies. To describe how an infrastructure is used, we consider its usage modes. These can be described as combinations of a set of modalities (based on those previously published in~\cite{usage_modalities}): \begin{itemize} \item User intent: production, exploration/porting, education \item When to run: batch (normal); interactive (when the user is ready); urgent (immediate); urgent (not immediate, but high priority); reservation (at a set time) \item Submission mechanism: command line; grid tools; science gateway; metascheduler (automatically selected) \item Targeted resources: use of multiple resources of the same type\footnote{Type here is used to mean HPC compute, HTC compute, storage, visualization, etc.} within the infrastructure; use of multiple types of resources within the infrastructure; coupling of these resources with other resources that are not part of the infrastructure \item Job/resource coupling: independent; independent but related (e.g., ensemble); tightly coupled (e.g., must be coscheduled with low-latency, high-bandwidth network connection); dependent (e.g., workflow) \end{itemize} For example, one usage mode could be when a user runs
an MPIg\footnote{MPIg~\cite{mpig} is a tool that allows one to run an MPI application across more than one system.} application, as part of a set of production runs, using a reservation, submitted through grid tools, on a pair of HPC systems, where the two applications are tightly coupled. Another example might involve a user running a production workflow for a hurricane forecast, using urgent scheduling, submitted through a metascheduler, targeting multiple HPC resources and storage resources, with dependent coupling between jobs. \subsection*{Overview} Many production distributed-computing infrastructures are now available. These can be classified into three categories: science, research, and commercial. TeraGrid (now transitioned into XSEDE) and DEISA are two roughly similar science infrastructures, the former based in the US and the latter in Europe. Each is intended to ``unify'' activities involving multiple large-scale parallel systems across the geographical area it covers. OSG, EGEE (now transitioned into EGI), and NGS are roughly similar science infrastructures that are more oriented to high-throughput computing, in the United States, Europe, and the United Kingdom, respectively. All five of these science infrastructures are primarily intended to be used to achieve research results in application science. Grid'5000, in France, and DAS, in the Netherlands, are research infrastructures aimed more at computer science research. PlanetLab is a worldwide research infrastructure aimed at computer science research, and FutureGrid is an emerging experimental testbed that will transition into being part of the US national cyberinfrastructure. The commercial Amazon Web Services and Microsoft Azure infrastructures are a mixture of commercial usage, science, and research. From the points of view of Amazon and Microsoft, these infrastructures are products that support their respective companies.
Unlike the science infrastructures, they are not open, meaning that users cannot easily interact with the infrastructure providers to ask for new features. The sections of this report describe a number of science, research, and commercial infrastructures, prior to a discussion and comparison of the various infrastructures. Each infrastructure description in the next three sections is laid out as follows: an introduction to the infrastructure, generally including history, source of funding, mission and vision, management, and a roadmap of where the infrastructure is going; the characteristics of the infrastructure, often including the resource provisioning or aggregation model; the patterns and usage modes employed in the infrastructure; and the successes and limitations of the infrastructure. Please note that the infrastructures described were chosen as representative of the infrastructure landscape at the time of writing, and we recognize that these infrastructures are quite disparate in goals, scope, scale, and targeted user communities. \subsubsection{Issues related to the timing of this report} Most of this report was completed at the end of 2010, with some additions made in mid-2011. It provides a snapshot of the state of the infrastructures discussed and gives an outline of where we think the infrastructures are heading, based on discussions, our own knowledge, and assorted public material. During the writing of this report, EGEE transitioned into EGI, TeraGrid transitioned into XSEDE, and Open Science Grid will transition into a new program. Infrastructures are always changing. \section{Science Production Distributed Infrastructures\label{sec:pdis}} In this section, we discuss five national and international science production distributed infrastructures. 
\inputc{teragrid_edited} \inputc{deisa_edited} \inputc{osg_edited} \inputc{egee} \inputc{ngs_edited} \section{Research Production Distributed Infrastructures\label{sec:rdis}} In this section we discuss four national and international research production distributed infrastructures. \inputc{grid5000} \inputc{planetLab} \inputc{das} \inputc{futuregrid} \section{Commercial Production Distributed Infrastructures\label{sec:cdis}} Currently, most commercial production distributed infrastructures are clouds. Clouds can be characterized in a number of ways~\cite{nist_cloud,you08,Armbrust:EECS-2009-28}, including by which layer of services they offer, as shown in Figure~\ref{fig:cloud-ontology}. The commonly accepted layers are infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). At each layer, both public and private clouds can be offered, and each cloud typically uses a set of tools and infrastructure. Here, we discuss two public clouds: an example of IaaS, Amazon Web Services (EC2/S3), and an example of PaaS, Microsoft Azure. We have selected these two because they are the commercial infrastructures on which, to our knowledge, science is being carried out. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.77]{cloud_ontology} \end{center} \vskip -0.7 cm \caption{Taxonomy of cloud systems, showing the infrastructure-as-a-service, platform-as-a-service, and software-as-a-service layers, as well as examples of both public and private versions of each layer~\cite{cloud_grid_saga}. \label{fig:cloud-ontology}} \end{figure} Note that the sections that follow describing the commercial infrastructures differ slightly from the previous sections because the goal of these infrastructures is a combination of direct and indirect profit, and neither the management nor the roadmaps for future development are publicly known.
\inputc{amazon} \inputc{azure} \section{Summary and Conclusions} Having discussed the infrastructures individually, we now consider them together, looking at their history and evolution, the usage modalities they support, and how their resources are allocated to users. We conclude the chapter with some observations about abstract models and interoperability. \subsection{The Infrastructures and Their Evolution} TeraGrid began as an infrastructure to explore grid computing for compute-intensive tasks, mostly HPC applications, and like DEISA it became a collection of mostly HPC systems tied together by common services. Both OSG and EGEE started as infrastructures to support data-intensive tasks, where loosely coupled HTC computing could be run on the distributed datasets. Although on a smaller scale than TeraGrid/DEISA or EGEE/EGI/OSG, the NGS initially focused on both data-intensive computing (HTC) and HPC. Most of the research infrastructures (Grid'5000, PlanetLab, DAS) were bottom-up developments that grew out of computer science research needs; they were collaborations of groups of computer scientists who realized their research would benefit from larger-scale platforms that could be developed and supported only by such collaborations. FutureGrid, on the other hand, was a top-down project, which came from the US NSF deciding to build and support a grid for such research and issuing a call for proposals. The commercial infrastructures appear to have dual motivations, though understanding the internal decisions within the corporations that have built them is not easy. EC2 and S3 are widely thought to have been an effort by Amazon to sell spare capacity, as the company's own operations require its peak capacity only for short periods each year. Azure has been developed by Microsoft as a way to adapt to a new business model comprising advertising-supported services and software, with the expectation that this model will lead to increased revenue.
Some of the technological advances and economic trends behind EC2/S3 and Azure, and cloud computing in general, relate to advantages arising from the economies of scale achieved by large data centers: the lowering of data center energy and management costs along with the increasing scale and efficiency of operation. Others arise from requirements such as aggregation and dealing with large volumes of data or from the energy costs of data movement. In general, the rise of the data center to support web-scale computing requirements has been an important driver for the recent advances in cloud computing. \subsubsection{Evolution and Supported Capabilities} Understanding the evolution of certain infrastructure capabilities in response to application and user needs is both instructive and interesting. Given OSG's need to support HTC, Condor has evolved from a scavenging system in the late 1980s to become the basic building block for OSG's infrastructure in the 2000s. Condor Flocking, which provides aggregation of resources, is a fine example of continuous, rather than discontinuous, transition. Similarly, experiences from SETI@home led to BOINC, which was then used for other @home applications, such as climate{\em prediction}.net. Gateways on TeraGrid emerged when a number of computationally savvy application developers realized that simplifying the process for using TeraGrid resources (identification and authorization of the user as well as the submission mechanisms for the work to be done) would allow other people in their communities to benefit from those resources. The gateways that have been developed often use a graphical user interface to hide complexity, and provide capabilities such as workflows, visualization software and hardware, resource discovery, job execution services, access to data collections, applications, and data analysis and movement tools. The number of cycles used through science gateways increased by a factor of 5 from 2007 to 2008.
By working with some of the initial gateway developers, TeraGrid has developed capabilities that can be used by other developers to build new gateways. However, in several cases the requirements of a class of distributed applications are out of phase with the deployed capabilities of the infrastructure. One example is the requirement of distributed pilot-job applications~\cite{novel_submission_mode} to simultaneously use multiple resources on production grids to obtain results more quickly by using a coscheduling capability. This is an interesting case study because it involves both policy and technical challenges. The policy issues have been a barrier because HPC centers are unwilling to relinquish the batch-queue mode of operation on individual systems. Technically, while methods other than coscheduling can clearly meet this requirement, such as statistical/probabilistic approaches to co-allocation or best-effort co-allocation, these have not been made available on production resources. The emphasis on batch-queue mode, corresponding to an emphasis on overall utilization of an HPC resource, has inhibited other modes of computing, such as urgent computing, ensembles, and quality-of-service-based (QoS-based) computing (e.g., user {\it x} will be allowed {\it y} jobs over period {\it z}). Another example of a new type of application is found in dynamic data-driven distributed application systems (DDDAS). The growth of DDDAS applications has been driven by the emergent abundance of accessible sensor data and the desirability of coupling real-time simulations to live sensor data, combined with the maturity of workflow tools. Currently, none of the science infrastructures can support large-scale DDDAS out of the box and without significant customization.
For applications such as LEAD~\cite{lead} and SCOOP~\cite{scoop} there is a need for guaranteed throughput, which could be supported by coscheduling, high-priority mechanisms, or QoS-based computing, none of which are generally available. Beyond this, OSG and EGEE/EGI support HTC but not large-scale HPC, while TeraGrid, DEISA, and NGS support HPC but do not natively support dynamic requirements. Other external factors will cause new types of distributed applications to come of age. Anticipating these trends and supporting them on science infrastructures would benefit the wider scientific community. As new types of applications appear, the underlying infrastructure and capabilities also change, often more quickly than the timescale on which previously developed scientific distributed applications were expected to remain usable. For example, clouds have rather suddenly emerged and become prominent. However, the basic principles and requirements of distribution have not changed; the fundamental problem of coordinating distributed data and computation remains. Therefore, it is imperative that distributed application developers consider developing their applications using programming systems, tools, and interfaces that provide immunity from the natural evolution of infrastructure and capabilities. Well-designed distributed programming abstractions can be critical in supporting these requirements~\cite{grid_cloud_interop}. \subsection{Usage Modalities Supported} Usage modalities can be classified as user-intent, when-to-run, submission-mech\-anism, targeted resources, and job/resource coupling modalities. In this subsection, we discuss each, including which infrastructures support them. \subsubsection{User-Intent} The user-intent modalities are production, exploration/porting, and education. Of the infrastructures we have examined, all the science infrastructures (TeraGrid/XSEDE, DEISA, OSG, EGEE/EGI, and NGS) support all of the user-intent modalities. 
The research infrastructures (Grid'5000, PlanetLab, DAS, FutureGrid) generally do not support science production, although they do support computer science experiments. They also support exploration/porting and education. The commercial infrastructures (AWS, Azure) support all three user-intent modalities, but these modalities generally are not considered separately; rather, they are all just usage, and the intent of usage is not the concern of the commercial infrastructures. \subsubsection{When-to-Run} When-to-run modalities include batch, interactive, urgent (immediate), urgent (high-priority), and reservation. Batch is not the primary usage mode on clouds, but it can easily be supported on clouds. For example, Azure queues can be used simply as submission queues for worker roles. The interactive modality is supported on the commercial infrastructures. On some TeraGrid/XSEDE resources, it is supported when prearranged with the resource owner. On DEISA, it is supported only for setup, test, and development, not for production. On visualization resources within TeraGrid/XSEDE and NGS, it is supported. Note that in most cases, a clever job (such as a shell) submitted to a batch queue can support an interactive session. The research infrastructures all support interactive usage, although on DAS it is (by default) limited to 15 minutes during the daytime to allow quick access to a large portion of the resources. In some situations, some TeraGrid/XSEDE resources support urgent usage and reservations, as do OSG and EGEE/EGI, in all cases subject to advance discussion with the infrastructure. DEISA and NGS do not support urgent usage, though NGS does support reservations, again under some circumstances. 
Of the research infrastructures, Grid'5000 and DAS can support urgent usage and reservations in some situations, PlanetLab does support urgent usage in general but has limited support for reservations, and FutureGrid does not yet have a determined policy on urgent computing or reservations. The ideas of urgent computing and reservations are not directly supported on the research infrastructures, but the basic ideas can be supported by clever use of applications. \subsubsection{Submission-Mechanism} Four submission-mechanism modalities exist: command lines, grid tools, science gateways, and metaschedulers. In science infrastructures, TeraGrid and DEISA support the first three, and XSEDE aims to develop metascheduling, which exists under some tools for single-processor jobs. OSG does not allow user login to compute nodes and therefore does not allow command-line submission, but it does support the other three modalities. EGEE/EGI, while generally a partner with OSG, supports all four modalities. NGS also supports all four modalities. Of the research infrastructures, Grid'5000 supports the first three modalities, while DAS supports command-line and grid tool submission and is experimenting with metaschedulers. PlanetLab supports only the command-line modality, but users can add other layers once they have the resources. It is not yet clear which of these FutureGrid will support. Of the commercial infra\-structures, EC2, similarly to PlanetLab, allows users to manage resources using a web portal, a command-line client, and various other client applications. Having started a resource, users can log into these resources using SSH or the remote desktop protocol for Windows resources. Similarly, Azure provides a portal application for managing resources and deployment of applications. In addition, Azure resources can be managed from within Visual Studio and Eclipse. Direct access to resources is possible by using the Remote Desktop Protocol. 
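Several of the submission mechanisms above reduce to a common pattern: work items are placed on a queue and pulled by workers, as in the Azure worker-role usage noted earlier, where queues serve simply as submission queues. The following is a minimal, purely illustrative sketch of that pattern using only the Python standard library; no real cloud API is involved, and the names (\verb|worker_role|, \verb|run_batch|) are invented for this example.

```python
# Illustrative sketch only: a submission queue feeding worker roles,
# mimicking the queue-as-batch-submission pattern described in the text.
# Uses only the Python standard library; names are hypothetical.
import queue
import threading

def worker_role(tasks: queue.Queue, results: list, lock: threading.Lock):
    """A worker role drains the submission queue until it is empty."""
    while True:
        try:
            job = tasks.get_nowait()
        except queue.Empty:
            return
        out = job * job  # "run" the job: here, just square the payload
        with lock:
            results.append(out)
        tasks.task_done()

def run_batch(jobs, n_workers=4):
    """Submit jobs by enqueuing them; no batch scheduler is involved."""
    tasks = queue.Queue()
    for j in jobs:
        tasks.put(j)
    results, lock = [], threading.Lock()
    workers = [threading.Thread(target=worker_role, args=(tasks, results, lock))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(results)

print(run_batch(range(5)))  # -> [0, 1, 4, 9, 16]
```

The point of the sketch is that submission is decoupled from execution: a user only enqueues work, and whatever worker capacity happens to be running drains the queue.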
\subsubsection{Targeted Resources} Targeted resource modalities include the coupling of multiple resources of the same type within the infrastructure, multiple types of resources within the infrastructure, and the coupling of these resources with other resources that are not part of the infrastructure. To understand these, we also need to know what types of resources are in the infrastructure and whether there are infrastructure-wide policies that support tools and services to enable the coupling, either concurrent or sequential, of multiple distinct resources, even if they are not coupled by the resource providers. TeraGrid resources included HPC, HTC, storage, and data analysis and visualization resources. On TeraGrid, one could use multiple TeraGrid resources together. One could also, with a fair amount of work, use TeraGrid resources and non-TeraGrid resources together; however, with the exception of science gateways, this was not directly supported by TeraGrid. DEISA resources are strictly HPC and storage resources. On DEISA, the use of multiple DEISA resources together is a supported usage modality, but the use of DEISA resources with other resources is not supported. OSG resources are primarily HTC and storage resources, which can be used together and can also be used with HTC resources from other infrastructures. EGEE (and EGI) and NGS are also primarily HTC and storage resources. They are designed to be used together and with other standards-compliant resources. The research infrastructures (Grid'5000, PlanetLab, DAS, and FutureGrid) are designed primarily for coordinated use of the resources within their own infrastructure. This does not mean that they cannot be used with other resources, only that this is not the primary concern of the infrastructure developers. We note, however, that FutureGrid is specifically designed to use standards-compliance to allow external resources to be used together with internal resources. 
EC2 and Azure resources can easily be combined with other types of resources, such as grid resources, using tools and capabilities such as SAGA~\cite{saga}. In general, several factors influence the use of resources. Where there are a small number of resources with large individual capacity (e.g., TeraGrid, DEISA), there is less incentive, and perhaps less user need, to use multiple resources together. In many of the current infrastructures, it is also more difficult to use multiple resources together than to use a single resource. Similarly, where an infrastructure has a large capacity internally, there can be less incentive and less user need to use this infrastructure with resources from another infrastructure. Furthermore, using multiple infrastructures together inevitably involves extra work. In both cases, nontechnical issues also are at play, such as the incentive of the resource owners or infrastructure partners to work with other resource owners or infrastructures, who may see advantages in having a captive market, may have to support multiple sets of users with different expectations and requirements, or may feel as though they are competing against the others. Our analysis reveals a spectrum of infrastructure types. At one end of the spectrum is a small number (O(10)) of large resources, such as TeraGrid and DEISA. In the middle of the spectrum is a moderate number (O(100)) of smaller resources, such as OSG and EGEE/EGI. And at the far end of the spectrum is a large number (O(10000+)) of small resources, such as volunteer computing grids. Unsurprisingly, most infrastructures are built around roughly ``equal'' styles and types of resources, and so there remains a challenge for applications or users that might want or need to span different infrastructures. \subsubsection{Job/Resource Coupling} The job/resource coupling modalities are independent, independent but related, tightly coupled, and dependent. 
(Note: an infrastructure might support the running of MPI jobs on a particular resource within that infrastructure, but tightly coupled is used here in the distributed context, meaning across multiple resources.) TeraGrid, NGS, Grid'5000, and DAS support all four, as will FutureGrid. DEISA, OSG, and EGEE/EGI support all but the tightly coupled modality. PlanetLab and EC2 support none of these; they provide resource slices and resources respectively, which the user can then use as desired. Azure supports all four, with the limitation that tightly coupled jobs are best when the VM is constrained to a node/processor, and MPI jobs in particular are supported only on a single VM instance, not across multiple instances, because of a limitation of the communication endpoint model that is used, which does not support dynamic port ranges. \subsection{Allocations and Usage} The methods for obtaining the ability to use resources on the infrastructures also vary. Four basic paradigms exist. In all cases, the infrastructure owners have some process for deciding who is eligible to use the resources. For example, TeraGrid/XSEDE can be used by researchers led by a person affiliated with a US institution who intends to do research that will be published in the open literature. Similarly, DAS can be used by researchers within or collaborating with the five organizations that own and host the DAS resources. In the first paradigm, as on TeraGrid/XSEDE, DEISA, and FutureGrid, individual users write proposals (that may be for themselves or a team) for resources, and these proposals are peer reviewed. On TeraGrid/XSEDE, a proposal can also represent a community account, such as for a science gateway, where the proposer will reallocate the resources among a community. These proposals effectively return an allocation of the resources over a period of time. For both TeraGrid/XSEDE and DEISA, allocation decisions are made by the project. 
FutureGrid currently uses a review by the FutureGrid project to provide access to the grid; but as FutureGrid becomes a production element in XSEDE, this process will be incorporated in XSEDE's regular review process. Once the allocation decisions are made, a queuing system is used on most resources, where users submit jobs and the system maps the queued jobs to the resources over time. In the second paradigm, as on OSG and EGEE/EGI, decisions about which users can use which resources are made by the resource owners, in contrast to the central decisions made in the first paradigm. On OSG, the resource owners generally reserve some fraction for their own use and offer unused resources to others in one or more virtual organizations (VOs). EGEE/EGI resource owners simply offer their resources to one or more VOs. All users are members of at least one VO, and through their VO they have the opportunity to compete for use of the resources where their VO is able to run. In the third paradigm, as on the NGS, PlanetLab, Grid'5000, and DAS, no process exists for allocating the resources, and all the users fight for them through batch queues or other mechanisms, possibly with first-come, first-served or fair-share policies. In the fourth paradigm, in use on the commercial grids EC2/S3 and Azure, usage is simply paid for. There are no batch queues; when a user requests resources, they are either available or not. \subsection{Applications Use of Infrastructures} Our discussion of applications and infrastructures and our own experience in developing applications for both parallel and distributed infrastructures point to certain barriers in the effective development and deployment of distributed applications. When developing an application, the developer has to frame the potential application in terms of functions that can be implemented on the infrastructure on which the application will run. 
In parallel computing, there has been an approximately 20-year span during which the abstract infrastructure model has been well known: a set of interconnected nodes, each with a processor and a memory. The MPI standard assumes this model. As multiprocessor nodes and multicore processors have appeared, however, this model is no longer sufficient to write optimal code, though it is still sufficient to write portable code. For distributed applications, however, no abstract model of the potential infrastructures seems sufficient. Not only do all the hardware- and system-level issues that challenge a parallel program developer also challenge a distributed application developer, but one can argue that issues of policy, deployment, execution tools, and environment make distributed applications even more complicated. Additionally, the lack of an abstract model of potential infrastructures is coupled with an empirical observation that similar ``functionality'' has been provided by using very different tools and capabilities. For example, two of the most popular and large high-throughput distributed-computing infrastructures---OSG and EGEE/EGI---have very different environments for data management and managing jobs or tasks, thus creating a barrier to interoperability. Developers who use a model of a volunteer computing grid and want to run on some of the DEISA systems are not making good use of the systems and will likely not successfully pass through the review process to obtain an allocation to run on such systems. On the other hand, an application written to run well on DEISA probably will not run at all on a volunteer computing grid. Additionally, there is no equivalent to the MPI standard for distributed computing, although of course it would be hard to have such a standard without first having a common abstract infrastructure model on which to think about and design a standard. Infrastructure providers have a similar problem.
They need to design and provide an infrastructure that meets a set of user needs, so that users can build applications that run on the infrastructure. But users generally state their needs in terms of what they think is feasible: what they think the infrastructures can provide. In some cases, the providers and the users can work things out. For example, the EGEE/EGI and OSG infrastructures have been driven by a specific model of mostly sequential jobs, originally coming from the high energy physics community. These infrastructure providers have been able to build an infrastructure that meets this need, and application developers in other science domains have used this model and built new applications that work. Perhaps the answer is that there is no single abstract infrastructure model for distributed applications, but rather there are a number of distinct models, and application developers need to choose one of them and then use the infrastructures that match their model. If this is so, there could be a standard for each model, similar to the MPI standard that has been used for the model of parallel nodes, each with CPUs and associated memory. In some ways, EGEE/EGI and OSG use a model similar to this, one of distributed slots of computing, each with some associated storage. But TeraGrid has an implicit variety of models, one of which is the distributed set of parallel computers that is the main model in DEISA. In general, some standards are important in all the infrastructures we have discussed. For example, GridFTP is supported by all the infrastructures. In other areas, standards are used in some of the infrastructures, particularly where they provide a needed capability. For example, many OGF standards are supported by EGEE/EGI because this is really a federated infrastructure, where different providers choose different software on different parts of the infrastructure. Standards allow these different choices to work together.
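The ``distributed slots of computing'' model just described is the kind of abstraction that MapReduce, discussed later in this section, captures for data-parallel work. As a purely illustrative sketch of the pattern itself (not any particular framework's API), a word count expressed as map, shuffle, and reduce phases in plain Python:

```python
# Minimal, illustrative MapReduce-style word count in plain Python.
# This sketches only the abstraction; real frameworks distribute the
# map and reduce phases across many independent resource slots.
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # map: emit (key, value) pairs independently per input record
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # shuffle: group all values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # reduce: combine the values for one key
    return key, sum(values)

def mapreduce(documents):
    pairs = chain.from_iterable(map_phase(d) for d in documents)
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

print(mapreduce(["a b a", "b c"]))  # -> {'a': 2, 'b': 2, 'c': 1}
```

Because the map and reduce functions carry no knowledge of where they run, the same application logic can be deployed unchanged on whatever infrastructure implements the abstraction, which is precisely the decoupling argued for here.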
On the other hand, TeraGrid does not use many of the OGF standards. Instead, the project requires all parts of the infrastructure to use the Globus Toolkit, which becomes a de facto standard. This requirement obviously leads to difficulties if a user wants to use EGEE/EGI and TeraGrid together; but because there are not many such users, they can deal with this situation by writing custom adaptors or using tools that have already developed adaptors, such as SAGA-based tools or AHE~\cite{ahe}. A final issue for the use of science grid infrastructures is the timescale of change. Currently, the infrastructures are changing faster than the applications. This situation is partly because distributed infrastructures and their provisioning are correlated to existing and emerging technologies; distributed applications are not easy to reformulate or refactor. For example, infrastructures generally appear to last for three to seven years. But applications often take years of development and then are expected to last for 20 or more years. Currently, there appears to be no satisfying solution to this discrepancy, but perhaps the use of a small number of distributed abstractions that enable the decoupling of applications from infrastructures will help. For example, given that a large number of applications now use MapReduce, infrastructure providers will likely continue to support this abstraction as they change their infrastructure. And thus there will emerge MapReduce the pattern, MapReduce the programming model and execution environment, and finally specific implementations of MapReduce on different infrastructures. Identifying such abstractions is one of the goals of the book from which this report is derived. \subsection{The UK National Grid Service} The UK National Grid Service (NGS) is a national consortium of computational and data resources that use defined, open standard grid interfaces to provide services to academia. 
The NGS Collaboration is made up of the universities of Belfast, Birmingham, Bristol, Brunel, Cardiff, Durham, Edinburgh, Glasgow, Imperial College, Keele, Lancaster, Leeds, Liverpool, Manchester, Oxford, Reading, Royal Holloway (University of London), Sheffield, Southampton, Westminster, and York; Rutherford Appleton \& Daresbury Laboratories (STFC); HPCx and HECToR (supercomputers); and the Wellcome Trust Sanger Centre. \minihead{History} The NGS has been operational for over four years, providing the underpinnings of a national e-infrastructure (for the UK) through the establishment of the solid management methods and interface definitions necessary to allow users to access the available resources. The first phase (2004--2006) funded four sites that each hosted resources, split into two ``compute'' systems and two ``data'' systems. These were complemented by a support center that administered systems for managing user access, information aggregation, and monitoring, as well as providing end-user support and training. The basic technical coordination and management structure of the project was also defined at this point. The core resources were upgraded during phase two (starting 2006) of the project, with identical compute systems at each site, in addition to the existing data systems. \minihead{Mission/Vision} The UK NGS mission statement is ``to enable coherent electronic access for {\em all} UK researchers to {\em all} computational and data based resources and facilities required to carry out their research, independent of resource or researcher location.'' The goals are as follows: \begin{shortlist} \item Enable a production-quality e-infrastructure. \item Deliver core services and support. \item Integrate with international infrastructures following user community demand. \end{shortlist} This final goal links to the NGS becoming the UK representative within the European distributed e-infrastructure project, EGI.
\minihead{Management} Following the successful commissioning of the core resources, the NGS has expanded significantly through contributions from partner institutions. This expansion has led to the development of policies and procedures to ensure the consistent quality of resources that are attached to the NGS. A resource can join the NGS at two levels: partner or affiliate. A partner should provide a `significant resource or service' for NGS users. The procedure for joining is defined such that after notifying the NGS of their intention to join, sites gain assistance in installing the necessary software interfaces. Once installed, these must complete a full week of compliance testing without error before being certified as a recognized NGS resource. Additionally, they must complete a service-level description (SLD), detailing the resource and level of support they intend to offer users. They are also eligible to nominate a representative from their organization to attend the NGS Technical Board. In contrast, an affiliate, while still having to pass interface and service tests, does not have to provide an SLD and may maintain control over the user community that is being served. As of November 1, 2010, 30 institutions were members of the NGS, with 10 partners and 18 affiliates including the national HPC resources and 7 institutions that are community members. \minihead{Roadmap/Future} The UK NGS has been nominated by its funding agency as the UK representative within the European EGI project. The result has been a closer integration between the NGS and the GridPP\footnote{GridPP is a collaboration of particle physicists and computer scientists from the UK and CERN who have built a distributed computing grid across the UK for particle physicists.} project, which has until recently been the main method of UK engagement with European grids. 
(GridPP is an example of a community self-organizing to provide resources that their users need; and as more large research infrastructures are built throughout Europe to which UK researchers will need access, other communities may follow suit.) Through the alignment of core functions, shared services are provided to the two grids. The NGS will continue to provide the central services that are shared by communities and, as such, is aligning with the services required for performing its nominated EGI central functions, as well as national versions of other services. \subsubsection{Characteristics} The NGS provides a single-point-of-contact HelpDesk for support and queries, for example, digital certificate issues and requests for new application software. The NGS has a set of interfaces defined through a ``core software stack'' developed with a desire to maintain compatibility with other large infrastructures such as GridPP and EGEE. This has meant defining an interface for which a number of software solutions can be used. The solution chosen by the core nodes has included the usage of the pre-Web Services version of the Globus Toolkit~\cite{GT} (GT 2) packaged within the Virtual Data Toolkit~\cite{VDT}. The interfaces provided include job submission, information publishing, data movement, and grid security infrastructure-based (GSI-based) secure shell access. This is one of a number of solutions that the NGS has documented and that are available for a site to install. Other middleware that can be installed before obtaining NGS membership includes GridSAM~\cite{gridsam} and Globus Toolkit version 4 (GT 4, with Web Services). Each NGS installation is tested at regular intervals to ensure compliance, using an INCA-based~\cite{INCA} monitoring framework. Building on the lower-level services provided by the middleware, the NGS also has a number of different services that provide higher-level functionality.
These include resource brokering, preconfigured application portals, and resource information publishing. Overall, the NGS has a managed approach to change, providing stable, robust services and supporting them over a reasonable period. At the same time, it is recognized that new services need to be developed, deployed, and supported for the future growth of the NGS; therefore, the communities can depend on the services NGS provides in the longer term. Although paid-for, or subscription, services are possible, current NGS services are free at the point of use, funded by the UK funding agencies EPSRC, JISC, and CCLRC (now STFC). As of late 2010, the NGS had about 1,200 users, about 80 of whom submit jobs in any given month, and with significant usage in computer science, chemistry, physics, biology, engineering, biochemistry, informatics, mathematics, and medicine. In 2010, the NGS supplied about 600 CPU-days of computing per day, about one fourth of which were submitted through Globus~\cite{ngs-data}. \subsubsection{Usage Modes} The NGS user communities operate on the system in a number of different modes depending on the type of resources they use and the type of underlying problem they are working on. Those using the resources primarily for computing can submit a task to a resource, either a prechosen system or one automatically selected through a resource-matching or brokering functionality. These jobs may be one of a number of independent tasks or a single parallel job that is run on a single ``MPI or OpenMP''-capable resource. A number of different types of portal systems also are available. The first of these is the NGS Application Repository, which makes a number of preconfigured applications available through a JSR 168 compliant portal framework. Here, the NGS installs the applications and sets up the appropriate pages within the portal for the application. 
An alternative is the Application Hosting Environment (AHE)~\cite{ahe}, which assumes that a research group or community has an expert who is able to configure and install the applications needed and will then make them available to the rest of the group. AHE can submit jobs to a number of the resources on the NGS and is currently being used by groups in biochemistry, chemistry, and materials science. NGS users also use other systems for automating job submission and management, including the parameter sweep managing Nimrod/G system~\cite{nimrodG} and the GridBS~\cite{gridBS} resource broker. These are intended primarily for users or institutional communities to submit tasks to the full range of resources to which they have access. The NGS is also deploying further high-level services, such as programming abstractions, that users can use in their software systems. In addition to using grid-type interfaces for job submission, a significant number of users access NGS resources by logging in to the end system using a single sign-on-enabled version of SSH and interact directly with the local distributed resource manager. These users often come from institutions with overloaded HPC resources, HPC resources with charges for usage, or no HPC system. The applications that run on the NGS are wide and diverse. Within the HPC communities, a significant number of users are using commercial or community codes. Within the HTC community of users, the situation is almost completely reversed, with the majority using their own developed codes, though these may depend on commercial or community libraries. Thus, these applications are more easily distributed around the grid systems, particularly when they are statically linked so that version interoperability difficulties for libraries are minimized. The communities that have used the NGS have been extremely broad, from STEM (Science, Technology, Engineering, and Mathematics) to art, humanities, and social sciences. 
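The parameter-sweep mode automated by systems such as Nimrod/G, mentioned above, amounts to expanding a grid of parameter values into many independent tasks that a broker can then submit to any available resource. The following minimal sketch is purely illustrative; the function name and parameter values are invented for this example.

```python
# Hypothetical sketch of parameter-sweep task generation, the usage
# mode automated by systems such as Nimrod/G. Each combination of
# parameter values becomes one independent task description.
from itertools import product

def expand_sweep(parameters):
    """Expand {name: [values]} into one task description per combination."""
    names = sorted(parameters)
    return [dict(zip(names, combo))
            for combo in product(*(parameters[n] for n in names))]

sweep = {"temperature": [280, 300, 320], "pressure": [1.0, 2.0]}
tasks = expand_sweep(sweep)
print(len(tasks))  # -> 6 independent tasks
print(tasks[0])
```

Because the resulting tasks are independent, they fit the HTC modality directly: a broker can scatter them across grid resources and collect results in any order.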
Well-known examples have been used to create case studies to publicize the ongoing user communities. This approach has been particularly effective because researchers are much more willing to listen to ``their own'' than to a set of service operators or even their own institutional computing services. Recent work has also included the provisioning of a test system for cloud-type services, with the intention that this technology will support user services such as clients, portals, and workflow engines. These will be installed, demonstrated, and used by communities who wish to have a unified software face to their collaborations and work but who feel that installations on desktops and local resources are too difficult or time consuming. They also may have licensing restrictions that limit how useful the software would be for a whole community. \subsubsection{Successes and Limitations} The NGS has attracted about 700 users from a wide variety of academic fields (e.g., biology, physics, and computing) with a variety of computational and data problems (e.g., simulation of UK population dynamics) and ranging from part of a large collaboration to the individual researcher. The enabling effect of NGS resources has been acknowledged in a significant number of academic publications. Overall, the NGS has been extremely successful, although the UK's development of two parallel grid systems has sometimes confused user communities. The GridPP system is a significant contributor to the EGEE system and, as such, has a large community of users. There are also a significant number of EC-funded projects that should be making use of e-infrastructure, but, possibly because of this duality, the UK contributors are not making use of the NGS. Also, a significant investment in the UK university sector in mid-range HPC systems has led to a number of NGS users moving to these systems.
Overall, it appears that while a significant number of users want to use HTC and grid-type computing, they often need significantly more user support than the NGS is funded to provide. To counter this situation, the NGS is engaging community and institutional champions to enable communities to support themselves. Additionally, licensing can be a significant impediment to the use of some applications on some NGS resources. Users can work around licensing issues, however, by binary building, distribution of runtime environments, and use of open source compatible equivalents. \subsection{Open Science Grid} Open Science Grid (OSG)~\cite{osg} is a US distributed-computing infrastructure for large-scale scientific research, primarily loosely coupled, high-throughput computing. OSG contributes to the Worldwide Large Hadron Collider (LHC) Computing Grid as the US shared distributed-computing facility used by the ATLAS and CMS experiments. OSG collaborated with the EGEE project in Europe to provide interoperating federated infrastructures that could be used transparently by the LHC experiments' software. \minihead{History} OSG began in 2005, with many roots that started before 2000, including the needs for computing from the Laser Interferometer Gravitational Wave Observatory (LIGO)~\cite{ligo} and US LHC~\cite{lhc} projects and three computer science/physics projects (PPDG, GriPhyN, and iVDGL) that came together as the Trillium and then Grid3 projects~\cite{avery}. \minihead{Mission/Vision} OSG, jointly funded by the US Department of Energy and NSF, is an open collaboration of scientific, research, and educational communities, including users and hardware and software providers, to build, operate, use, and evolve a shared, national high-throughput computational facility based on common concepts, technologies, and processes.
OSG staff maintain the distributed computational facility; provide support for the facility's (over 3,000) users, software, and services; and manage the interfaces to external contributors and peer grid infrastructures. \minihead{Management} OSG is defined by the set of operational services, software, and processes that enable the contributed resources to act as a coherent distributed system in support of the users. OSG extends the capabilities and capacities of the facility and enables and interfaces to campus and regional cyberinfrastructures. OSG does not own or maintain computer resources; these are maintained by the owners. Nor does OSG develop software (middleware and applications); these are acquired from external software development groups. OSG does support integrated software releases (based on the OSG's Virtual Data Toolkit (VDT)~\cite{VDT}, which includes Condor-G~\cite{condor-g}, Globus~\cite{globus}, VOMS~\cite{voms}, etc.) and works closely with software developers to ensure the current and future needs of the user communities will be met. \minihead{Roadmap/Future} The OSG facility expands continuously as a result of the integration of new sites, the installation of new resources, and the joining of new member communities together with new partnerships and collaborations. OSG's current funding is expected to end in mid-2011, and the OSG management team has proposed a new OSG to follow. \subsubsection{Characteristics of OSG} OSG is a sustained collaboration of domain researchers, computer scientists, computing resource administrators, software providers, and OSG staff. The time and effort needed to maintain this organization and manage the work are significant and receive ongoing attention. It took over a year to define consortium governance; and as the organization matures, the details have been revisited about every two years. 
OSG usage includes physics event simulation, molecular dynamics, protein structure prediction, biology, climate, text mining, and computer science. The user communities with the most challenging needs are the large physics collaborations in the US. The OSG provides the US computing infrastructure for the LHC ATLAS~\cite{atlas} and CMS~\cite{cms} experiments. Other major users are LIGO~\cite{ligo}, the Tevatron experiments (D0 and CDF)~\cite{d0}, and the STAR Relativistic Heavy Ion Experiment~\cite{star}. This diverse mix of (currently) more than 30 user communities and applications ensures the evolution of a generic, nationwide cyberinfrastructure, currently including more than 60 sites. In late 2010, OSG delivered on average more than 45,000 CPU-days of computing per day. The physics communities account for about 85\% of the usage. OSG's methods and processes are based on virtual organizations (VOs), ranging from dynamic, ad hoc collections with a specific short-term purpose to long-lived, stable collaborations with well-defined governances. VOs can contain other VOs; can interface with each other and share resources; and can have common services, common organizational policies and methods, and common members. Where communities layer their own grid over OSG, the community's VO registers with OSG to enable members of the user community to use additional OSG resources and services. This approach enables university, regional, research, and scientific communities with their own grid infrastructures to integrate with and/or rely on some or all of the OSG facility. OSG itself is a VO with people, resources, and services having a common purpose and governance. OSG provides access to and sharing of resources through common, shared services for monitoring, accounting, security, problem reporting, and tracking. Additionally, OSG provides a common, shared integration and validation facility and processes for testing new releases of software, services, and applications.
OSG packages, releases, documents, and supports a well-defined set of software to enable the interfaces to and use of the contributed resources. This software, including the VDT, provides technologies used by both OSG and other infrastructures, such as TeraGrid and EGEE. Each project, including OSG, augments the VDT with specific configuration scripts and utilities for its own environment and users. OSG works to bridge its infrastructure and services with other grids, enabling transparent access, movement of jobs, and management of data. This strategy is crucial for the main OSG stakeholders, such as LHC scientists for whom OSG is ``merely'' the US part of the larger Worldwide LHC Computing Grid (WLCG). \begin{table} \begin{tabular}{|l|} \hline \begin{minipage}{11.5 cm} \vspace{0.1cm} Key Principles \begin{shortlist} \item Phased deployment with clear operations model. \item OSG technologies and user needs allow for 100\% use of all resources. \item Services should work toward minimizing their impact on the hosting resource, while fulfilling their functions. \item Local rules come first: All services should support the ability to function and operate in the local environment when disconnected from the OSG environment. \item Supplementary services: OSG will provide baseline services and a reference implementation. Use of other services will be allowed. VOs can deploy additional services. \item Middleman: Users are not required to interact directly with resource providers. Users and consumers (possibly programs) will interact with the infrastructure and services. \item Inclusive participation: The requirements for participating in OSG should promote inclusive participation both horizontally (across a wide variety of scientific disciplines) and vertically (from small organizations such as high schools to large ones such as national labs). \end{shortlist} Best Practices \begin{shortlist} \item OSG's architecture is VO-based.
Most services are instantiated in the context of a VO. OSG's baseline services and reference implementation can support operations within and shared across multiple VOs. \item Resource providers should provide the same interface to local use of the resource as they do to use by the distributed services. \item Every service will maintain state sufficient to explain expected errors. There will be methods to extract this state. There will be a method to determine whether the service is up and usable. \item OSG's infrastructure will support development and execution of (user) applications in a local context, without an active connection to the distributed services. \end{shortlist} \vspace{0.1cm} \end{minipage} \\ \hline \end{tabular} \caption{Some key principles and best practices of OSG (paraphrased).\label{Tab:OSGPrinciplesPractices}} \end{table} Another characteristic of OSG is the set of underlying principles, listed in Table~\ref{Tab:OSGPrinciplesPractices}. In all OSG activities, these principles are applied to the implementation concepts and design and are measured against the practices and procedures. This approach contributes to a coherent, consistent technical path through a diverse set of developments. \subsubsection{Usage Modes} OSG offers a data center service relationship to its users as customers, including the standing operations, support, and organizational services that a user community can depend on and use with little overhead. The modes of use include ``guaranteed'' (where the resources are owned by the user community), ``agreed upon expectations'' (where there has been negotiation between the user and resource owner communities on the expected level of throughput and support), and ``opportunistic'' (where the users make use of available resources based on the standard policies of the owners as members in the OSG Consortium). 
OSG helps integrate and support the use of multiple infrastructures as needed by its members, through multiplexing software and services that hide differences in infrastructure, as well as bridges and gateways that transform and translate information and control to the interfaces and schema of the differing services of the production infrastructure and the resources accessible through it. Some of these services are defined as ``critical'' to the use of the infrastructure by one or more of the user communities. For example, the US LHC relies on the publishing of information about OSG resources to the WLCG. The availability of such services is measured, with the target availability being agreed to with the users. Critical services (e.g., the information publisher) are being made available. OSG is particularly effective for high-throughput, pleasingly parallel\footnote{We prefer the term ``pleasingly parallel'' to the somewhat more common ``embarrassingly parallel,'' since we don't find parallelism to be at all embarrassing.} applications; job runs of between one hour and several days; jobs that can be checkpointed; explicit management of large scale data movement and storage; and ensembles that can effectively run across a large number of resources. Table~\ref{Tab:OSG_apps} summarizes the types and characteristics of applications running on OSG. Any application may have one or more such characteristics. Applications are supported by OSG software, which provides capabilities for remote job scheduling, resource selection, and data movement and access. Particular aspects of support for the different application types are shown in Table~\ref{Tab:OSG_support}. 
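The high-throughput, pleasingly parallel pattern described above (many independent jobs with no communication between them) can be sketched in miniature. This is a generic illustration, not OSG software: the \texttt{simulate\_event} task is a made-up stand-in for a real job, and a real OSG ensemble would run as separate batch jobs on remote sites rather than local worker threads.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_event(seed):
    # Stand-in for one independent job, e.g., a physics Monte Carlo
    # event simulation parameterized only by its random seed.
    value = (seed * 1103515245 + 12345) % (2**31)
    return seed, value / float(2**31)

def run_ensemble(n_jobs):
    # Because the jobs share no state, the ensemble "scales out"
    # trivially: tasks can be scattered across any available resources
    # and the results gathered afterward.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(simulate_event, range(n_jobs)))

if __name__ == "__main__":
    results = run_ensemble(100)
    print(len(results))  # one result per independent job: 100
```

Jobs that can be checkpointed, or that run between one hour and several days, fit this mold equally well; the essential property is the absence of inter-job coupling.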
\begin{table} \begin{footnotesize} \begin{tabular}{|p{1.8cm}|p{9.6cm}|} \hline {\bf Application Type} & {\bf Characteristics and Examples} \\ \hline Simulation and modeling & CPU-intensive, large number of independent jobs, e.g., physics Monte Carlo event simulation \\ \hline Production processing & Significant I/O of data from remote sources and long sequences of similar jobs passing through data sets, e.g., processing of physics raw event data \\ \hline Complex workflow & Use of VO-specific higher-level services and dependencies between tasks, e.g., analysis, text mining \\ \hline Real time response & Short runs and semi-guaranteed response times, e.g., grid operations and monitoring \\ \hline Small-scale parallelism & Allocation of multiple CPUs simultaneously \& use of MPI libraries; e.g., protein analysis, MD \\ \hline \end{tabular} \end{footnotesize} \caption{Types of applications running on Open Science Grid\label{Tab:OSG_apps}} \end{table} \begin{table} \begin{footnotesize} \begin{tabular}{|p{1.8cm}|p{4.6cm}|p{4.6cm}|} \hline & Support & Challenges \\ \hline Simulation and Modeling & Batch-system services and prioritization policies; small amount of data storage & Ensuring full usage of dynamically available resources wherever they are located \\ \hline Production processing & Job and workload management tools; data placement and access management tools & \multirow{2}{*}{ \hspace{-0.3cm}\begin{tabular}{p{4.6cm}} Automation of conditional workflows, retries, etc.; common tools for efficient placement and co-location of data and jobs; support for VO-defined policies applied effectively across the autonomous resources \end{tabular} } \\ \cline{1-2} Complex workflow & Tools for managing workflows; pre-placement of application tools and databases at remote sites; tools for error reporting, response and tracking & \\ \hline Real time response & Prioritization services to allow immediate or minimum latency execution of jobs & Support for checkpointing and restart 
of other applications; dynamic nature of available set of resources precludes deterministic response times \\ \hline Small-scale parallelism & Local support for MPI; OSG support for publishing necessary information of site-specific configurations and software versions & Automated use across multiple MPI site configurations and implementations \\ \hline \end{tabular} \end{footnotesize} \caption{Support for OSG Application Types\label{Tab:OSG_support}} \end{table} The OSG provides resource information and matchmaking software for automated selection of remote sites on which to execute jobs. Users embed interfaces to this information and/or do manual selection of sites. Such selections are configured to match the processing and storage needs and timelines of the applications. \subsubsection{Successes and Limitations} OSG has successfully worked with the US high energy physics (HEP) community and EGEE to build an infrastructure that allows both HEP and processing for other science and research. The challenges OSG faces include meeting the planned (and anticipating the unplanned) capacity and capability needs of the current user communities; managing and accommodating heterogeneity across facilities that scale from small university department clusters to leadership-class computing facilities, with user communities that scale from individual PIs and students to very large collaborations; and developing and measuring an agreed-upon, sustainable economic model for growth that takes account of OSG's bartering and brokering approach. OSG best supports loosely coupled applications as well as small parallel applications that fit on a single multicore CPU. \subsection{PlanetLab} PlanetLab ~\cite{planetlab} is an open, globally distributed platform for developing, deploying, and accessing planetary-scale network services. It has been used primarily as a research and education testbed for distributed computing services and applications. 
\minihead{History} In March 2002, a small community of researchers interested in planetary-scale network services proposed PlanetLab as a community testbed. The initial participants were Berkeley, MIT, Washington, Rice, Princeton, Columbia, Duke, Carnegie Mellon, and Utah. Intel Research provided the initial 100 machines, which by October 2002 spanned 42 sites. In February 2003, PlanetLab nodes came online at three of the points of presence (PoPs, or access points for the network) on Internet2's Abilene backbone. All 11 Abilene PoPs were hosting PlanetLab nodes by the end of 2003. In 2003, NSF announced a \$4.5M award to Princeton, UC Berkeley, and Washington for supporting and enhancing PlanetLab. In January 2004, Princeton, Berkeley, and Washington formally created the PlanetLab Consortium, with Intel and HP as charter commercial members. Princeton began hosting the consortium, and operational responsibility for PlanetLab moved from Intel to Princeton. By June 2007, PlanetLab passed the 800-node mark. In July 2007, PlanetLab federated with the OneLab project, which began to support PlanetLab-Europe (PlanetLab-EU). As of mid-2010, PlanetLab had 1,132 nodes at 518 sites. \minihead{Mission/Vision} PlanetLab's goal is to support both experiments (short-term) and network services (continuously running) and ultimately to develop and demonstrate a new set of network services at planetary scale. \minihead{Management} The PlanetLab Consortium is a collection of academic, industrial, and government institutions cooperating to support and enhance the PlanetLab overlay network. It is responsible for overseeing the long-term growth of PlanetLab's hardware infrastructure, designing and evolving its software architecture, providing day-to-day operational support, and defining policies that govern appropriate use. Institutions join the consortium by signing a membership agreement and connecting two or more nodes to the PlanetLab infrastructure. 
A governance document describes how the consortium is organized. \minihead{Roadmap/Future} PlanetLab is in the early stages of federation; it is creating autonomous authorities that are responsible for subsets of the global slices and nodes. These authorities will then peer with each other to build a federated system. This effort is being done with an eye to eventual federation across a collection of testbeds. One of these autonomous authorities, the OneLab project, will create independent slice and management authorities spanning Europe. Universit\'{e} Pierre et Marie Curie (UPMC) will run a subauthority (PlanetLab-EU) under the PlanetLab root authority. PlanetLab-EU is expected to operate in a way that is consistent with the PlanetLab's primary mission as a global testbed for developing, deploying, and accessing planetary-scale network services, but it will otherwise be an independent management authority (responsible for the stability of a set of nodes) and slice authority (responsible for the behavior of a set of slices). PlanetLab and PlanetLab-EU will run independent operations teams, although the two teams will work to define a common response procedure and template messages. \subsubsection{Characteristics} PlanetLab is a collection of machines distributed over the globe. Most of the machines are hosted by research institutions, although some are in colocation and routing centers (e.g., on Internet2's Abilene backbone). PlanetLab has a common software package. All PlanetLab machines run this package, which includes a Linux-based operating system; mechanisms for bootstrapping nodes and distributing software updates; a collection of management tools that monitor node health, audit system activity, and control system parameters; and a facility for managing user accounts and distributing keys. PlanetLab supports running short-term experiments, as well as long-running services that support a client base. PlanetLab is a microcosm of the next Internet. 
Not only are researchers evaluating and deploying end-user services on top of PlanetLab, but they are also expected to develop foundational subservices that can be folded back into PlanetLab, thereby enhancing the facility for others. Researchers who make claims about protocols and services running on the Internet use PlanetLab to demonstrate how their designs hold up under realistic network conditions. PlanetLab has hosted over 4,700 users in its six-year history, approximately 3,700 of whom have been students. Whether these students are working on their PhD research or doing course assignments, they are gaining valuable experience with network systems running at a global scale---including coping with transient failures, differences in connectivity cliques, variations in latency and bandwidth, and abuses (some of which are malicious) inflicted by real users. Additionally, a set of graduate and undergraduate courses have been designed to take advantage of PlanetLab. \subsubsection{Usage Modes} PlanetLab is used primarily by systems researchers to understand the requirements for deploying network services (e.g., resource discovery, network protocols, content distribution, P2P routing). Many concurrent experiments are run across the shared infrastructure. PlanetLab applications typically exploit the wide-area connectivity provided by its many sites, for example, new peer-to-peer systems and applications. A user acquires a slice (a set of nodes), deploys onto the slice, then releases it when the experiment is done. Some applications run on a small slice to test small-scale services (tens to hundreds of nodes). Other applications, such as monitoring and content distribution, tend to run on all of the available nodes providing the largest degree of network coverage. These large applications tend to be long-running or persistent, while the smaller-scale services are generally used for short-term experiments and are transient. 
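The slice lifecycle described above (acquire a slice, deploy onto it, release it) can be illustrated schematically. The \texttt{Slice} class and node names below are hypothetical and do not reflect the actual PlanetLab programming interface; they only model the acquire/deploy/release pattern.

```python
class Slice:
    """Toy model of a PlanetLab-style slice: a named set of nodes
    on which an experiment is deployed and later torn down."""

    def __init__(self, name):
        self.name = name
        self.nodes = set()
        self.deployed = False

    def add_nodes(self, nodes):
        # Small slices (tens of nodes) suit short-term experiments;
        # long-running services tend to claim all available nodes.
        self.nodes.update(nodes)

    def deploy(self, experiment):
        # In the real system this step would distribute the
        # experiment's software to every node in the slice.
        self.deployed = True
        return {node: experiment for node in self.nodes}

    def release(self):
        # Releasing the slice returns its nodes to the shared pool.
        self.nodes.clear()
        self.deployed = False

# Short-term experiment: acquire, deploy, then release when done.
s = Slice("demo_latency_study")
s.add_nodes({"node1.example.edu", "node2.example.edu"})
placements = s.deploy("latency-probe")
s.release()
```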
\subsubsection{Successes and Limitations} PlanetLab has proven to be a valuable platform for learning about network-wide phenomena, creating new network protocols, evaluating new and existing network services, gaining experience with network systems running at a global scale, and deploying novel network services that enhance the capabilities of the Internet. PlanetLab also formed the basis for NSF's GENI initiative~\cite{GENI} exploring new Internet designs. Quantifying the broader impact of this research is difficult, but anecdotal evidence strongly indicates that research leveraging PlanetLab is having a far-reaching impact. The following is a small sample. \begin{shortlist} \item The iPlane~\cite{iPlane} and Hubble~\cite{hubble} network measurement projects have been a valuable resource for the networking research community, with more than 20 research projects using the structured network topology information produced by the systems. \item BitTyrant~\cite{BitTyrant} is a highly optimized and strategic BitTorrent client whose development was aided by extensive experimentation on PlanetLab. BitTyrant was publicly released in 2007 and was downloaded by more than a million users in its first year. \item PlanetLab was used for experimentation with localizing optimizations for peer-to-peer systems. Out of this work came a new proposal, P4P~\cite{P4P}, an interface that allows ISPs and peer-to-peer systems to coordinate and optimize for both network-level efficiency and application-level performance. \end{shortlist} In PlanetLab, however, it is difficult to run repeatable experiments because of the lack of resource guarantees. PlanetLab itself can be volatile, with machine availability fluctuating wildly. Moreover, little attempt has been made to maintain the health or uptime of PlanetLab nodes: these activities are left to the sites themselves. Thus, it can be difficult to get a global picture of the state of PlanetLab.
\subsection{TeraGrid} Funded by the US National Science Foundation (NSF), TeraGrid~\cite{teragrid,TeraGridScience} was an advanced, nationally distributed, open cyberinfrastructure that enabled and supported leading-edge scientific discovery and promoted science and technology education. TeraGrid included resources (supercomputers, experimental, storage, and visualization systems, data collections, and science gateways) connected by high-bandwidth networks and integrated by software and by coordinated policies and operations, all supported by computational leaders and technology experts. At the end of the TeraGrid project (June 2011), TeraGrid resources included more than 2 petaflops of computing capability and more than 60 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers could also access more than 100 discipline-specific databases. \minihead{History} In 2001 NSF made an award to four centers to establish a distributed terascale facility (DTF). The DTF became known to users as TeraGrid, a multiyear effort to build and deploy the world's largest, fastest, most comprehensive distributed infrastructure for general scientific research. The initial TeraGrid was homogeneous and ``griddy,'' with users foreseen to be running on multiple systems, both because their codes could run ``anywhere'' and because, in some cases, multiple systems would be needed to support the large runs that were desired. TeraGrid included a set of software packages that were identical on all systems. TeraGrid subsequently expanded in capability and number of resource providers. The expansion introduced heterogeneity and thus added complexity to the grid ideals of the initial DTF, since the common software no longer could be identical. This situation led to the concept of common interfaces, with potentially different software underneath the interfaces. 
Additionally, the users of the national centers' supercomputers were merged into TeraGrid, motivating TeraGrid to increase its focus on supporting these users and their traditional parallel/batch usage modes. \minihead{Mission/Vision} The TeraGrid mission was twofold: (1) to enable and support leading-edge computational research through the provision of an advanced, distributed, comprehensive, and open cyberinfrastructure and (2) to promote the use of this cyberinfrastructure in research and education. TeraGrid achieved its purpose and fulfilled its three goals: \begin{itemize} \item {\bf Deep}: let the most experienced users use the most powerful computational resources and advanced computational expertise/support to do their work; \item {\bf Wide}: find larger and more diverse communities of researchers and educators who can use the resources, including through science gateways; \item {\bf Open}: facilitate simple migration between TeraGrid and other resources through use of open interfaces, partnerships with other grids, and collaborations with research and education institutions. \end{itemize} \minihead{Management} TeraGrid's Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with eleven resource provider sites, coordinated the functions and operation of TeraGrid~\cite{teragrid}. \minihead{Roadmap/Future} The overall management of the TeraGrid changed in 2011, as the TeraGrid transitioned under a new NSF funding program called eXtreme Digital, or XD. The XD solicitation called for broadening access as the main new feature, and the new project that has replaced TeraGrid is called XSEDE. XSEDE intends to continue many of the successful parts of TeraGrid, and in general the features described here are intended to describe XSEDE as well as TeraGrid, unless otherwise mentioned. 
Since NSF awards to providers last for two to four years, TeraGrid resources and resource providers often changed; this process is also expected to continue with XSEDE. \subsubsection{Characteristics} The nodes of TeraGrid spanned a wide variety of architectures, sizes, and purposes: clusters, massively parallel systems, shared-memory systems, and systems dedicated to remote visualization, ranging from entry-level and experimental resources to a 1-petaflop system. The TeraGrid network provided high-capacity links among these resources. Each resource provider maintained at least 10 Gbps of connectivity to one of three TeraGrid hubs, which were interconnected via 10-Gbps fiber optic links. In 2009, TeraGrid delivered about 700 million core-hours to about 4,800 users. TeraGrid had a single allocations process with a national peer review; a single point of access via a user portal; a set of coordinated software and services kits based on GT4 technology deployed on each resource according to its architecture and purpose; and a unified user support, documentation, training, and educational system. The TeraGrid project introduced important new methods and tools, such as science gateways~\cite{TGSG}, for making high-end computing available and useful to a wide range of academic communities. The Campus Champions~\cite{TGCC} program actively spread news on campuses across the country about the availability of resources for research and education. \subsubsection{Usage Modes} TeraGrid users submitted jobs to the batch queues of the particular system on which they wanted to run their application, either directly from that system or indirectly from another system, using Grid software. Users were encouraged to use the TeraGrid User Portal to monitor the batch queues and to use the batch queue predictor to assist them in selecting the systems best suited to their needs.
Users could request special handling of jobs, including access to dedicated system time, to address special job-processing requirements. TeraGrid usage modes, as shown in Table~\ref{teragridusage}, can be divided in deep and wide categories, two of the three TeraGrid goals. Note that this table shows numbers of users, not the amount of usage. Deep users use far more of the resources, both per user and in sum, than do the wide users. In fact, in the third quarter of 2009, the top 24\% of the users used more than 80\% of the resources. The deep usage modes of TeraGrid resources, by experienced computational scientists and engineers, exploited TeraGrid's large-scale resources and the intellectual expertise of the staff at the resource providers. Included was the ability to run batch jobs on the high-end resources, as well as data storage, management, analysis, and transfer capabilities. Complex and heterogeneous work and data flows, urgent computing, and interactive computing are also being enabled. Moreover, as new methodologies for large-scale, data-intensive computational science (data mining, statistical analysis, etc.) continue to explode in popularity and importance, TeraGrid/XSEDE must support the high-end users in these modalities also. The wide usage modes of TeraGrid aimed to increase the overall impact of TeraGrid's advanced computational resources on larger and more diverse communities through user interfaces, domain-specific portals, and enhanced support that facilitate scientific discovery without requiring people to become high-performance computing experts. Features included the development and support of simpler and more powerful interfaces---ranging from common user environments to science gateways and portals, through more focused outreach and collaboration with science domain research groups---and educational and outreach efforts that will help inspire and educate future scientists. 
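The kind of guidance the batch queue predictor offered in the deep usage mode can be illustrated with a toy resource-selection routine. The system names, core counts, and predicted wait times below are invented for the example; a real predictor estimated queue waits from historical scheduler data.

```python
def pick_system(job_cores, job_hours, systems):
    """Choose the resource with the smallest predicted turnaround
    (queue wait plus run time) among those large enough for the job."""
    # Filter out systems that cannot fit the job at all.
    candidates = [s for s in systems if s["max_cores"] >= job_cores]
    if not candidates:
        return None
    # Pick the candidate minimizing total time to solution.
    best = min(candidates,
               key=lambda s: s["predicted_wait_h"] + job_hours)
    return best["name"]

systems = [
    {"name": "cluster-a", "max_cores": 1024, "predicted_wait_h": 6.0},
    {"name": "cluster-b", "max_cores": 8192, "predicted_wait_h": 20.0},
    {"name": "viz-node",  "max_cores": 64,   "predicted_wait_h": 0.5},
]

# A 256-core, 12-hour job fits on cluster-a and cluster-b;
# cluster-a wins on predicted turnaround (18 h vs. 32 h).
print(pick_system(256, 12.0, systems))  # cluster-a
```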
\begin{table} \footnotesize \begin{tabular}{|l|l|l|} \hline {\bf Usage Mode} & {\bf Type} & {\bf Number of Users}\\ \hline \hline Batch Computing on Individual Resource & mostly deep & 850 \\ \hline Exploratory and Application Porting & N/A & 650 \\ \hline Science Gateway Access & mostly wide & 500 \\ \hline Workflow, Ensemble and Parameter Sweep & deep \& wide & 250\\ \hline Remote Interactive Steering and Visualization & mostly deep & 35 \\ \hline Tight-Coupled Distributed Computation & deep & 10 \\ \hline \end{tabular} \caption{\em TeraGrid usage mode distribution for 2006, the latest year for which data is available. \label{teragridusage}} \normalsize \end{table} Additionally, within the open usage modes, TeraGrid wanted to enable, simplify, and encourage scaling into its large-scale resources. To this end, TeraGrid provided interfaces and APIs, and it went further to include appropriate policies, support, training, and community building. TeraGrid tried, with varying levels of success, to make its cyberinfrastructure accessible from, and even integrated with, cyberinfrastructure of all scales, including not just other grids but also campus cyberinfrastructures and even individual researcher laboratories and systems. Numerous commercial and academic applications were available across the various computing systems to support users from multiple domains. More than 75 software applications supported research across multiple domains, including molecular biosciences, chemistry, physics, astronomy, materials research, chemical and thermal systems, and atmospheric sciences. TeraGrid conducted surveys and interviews with the user community throughout the year to assess their needs and requirements, and it utilized this information to improve the resources and services offered to the user community. This process will be more formal in XSEDE. 
\subsubsection{Successes and Limitations} The success of TeraGrid was attested to by the impressive number of publications resulting from its use. Indeed, each year, the TeraGrid research community reported results in over 1,000 publications in various professional journals. TeraGrid evolved far beyond the scope and the architectural stages adopted in 2001 and 2005. As called for in the NSF ``eXtreme Digital'' (XD) solicitation, a new technological and organizational framework was needed, and XSEDE intends to provide this. TeraGrid's mission evolved from an infrastructure that supported distributed execution across multiple sites to a collection of mostly stand-alone HPC machines. A common complaint about TeraGrid was that it supported the use of individual resources well but did not focus on the challenge of collective utilization of multiple machines. In other words, TeraGrid addressed the requirements to enable applications to scale up well but did not address the requirements to scale out as much~\cite{dpa-grid2009}. It is unclear how this will change in XSEDE.
\section{Introduction} \label{sec:introduction} The quantum nature of the vacuum provides a variety of physically interesting phenomena, including the Casimir effect \cite{C48}. The so-called dynamical (nonstationary) Casimir effect (DCE), as well as the static force, has been investigated extensively \cite{P68-69,M70,FD76,RT85,JS96,BE93-BC95-CB95,LJR96, Law94,SPS98,D95-DK96,D98,CDM01-02,SH02,IT04-05, Y89,LTV95,CDLM04,UPSS04,DD05-06,R06,HE06-07,RITZ09} (and references therein), where photons are created from the vacuum fluctuation due to nonadiabatic change of the system such as vibration of a cavity or expansion of the universe. It is, however, difficult experimentally to realize the mechanical vibration of the cavity with a sufficient magnitude at the resonant frequency $ \sim $ GHz which is required to create a significant number of photons for detection. As a feasible alternative, it has been proposed recently that the oscillating wall can be simulated by a plasma mirror of a semiconductor slab which is irradiated by periodic laser pulses \cite{IT04-05} (see also Refs. \cite{Y89,LTV95}). In this paper, we investigate quantum mechanically the photon creation via the DCE and its detection with Rydberg atoms. We particularly intend to examine the experimental realization of DCE with a plasma mirror of a semiconductor slab \cite{IT04-05,RITZ09}. In Sec. \ref{sec:Hamiltonian}, the canonical Hamiltonian for DCE is derived in terms of the creation and annihilation operators, where the field operators are expanded simply with the initial modes. Then, in Sec. \ref{sec:mirror} the time-varying frequencies and squeezing couplings of the Hamiltonian are calculated in an effective 1+1 dimensional scalar field model with a plasma mirror. They exhibit the enhancement of effective wall oscillation for the DCE which is simulated by the nonstationary plasma mirror. In Sec. 
\ref{sec:creation}, the number of photons created via the DCE is evaluated as squeezing from the Heisenberg equations for the creation and annihilation operators. The results appear to agree essentially with those obtained by the usual instantaneous-mode approach. In Sec. \ref{sec:detection}, we investigate the excitation process of Rydberg atoms through the atom-field interaction, which is utilized to detect the created photons. Some conditions on the physical parameters are clarified for the efficient photon detection. In Sec. \ref{sec:realization}, the experimental realization of DCE with a semiconductor plasma mirror is discussed. Section \ref{sec:summary} is devoted to a summary. \section{Canonical Hamiltonian} \label{sec:Hamiltonian} We consider a scalar field in 3+1 space-time dimensions as an effective description of the electromagnetic field in a resonant cavity. The Lagrangian is given by \begin{equation} {\cal L} = \frac{1}{2} \epsilon ({\bf x},t) ( {\dot \phi} )^2 - \frac{1}{2} ( \nabla \phi )^2 - \frac{1}{2} m^2 ({\bf x},t) \phi^2 \end{equation} ($ \hbar = c = 1 $) \cite{Law94,SPS98,UPSS04,CDLM04,BE93-BC95-CB95}. Here, $ \epsilon ({\bf x},t) $ and $ m^2 ({\bf x},t) $ represent the dielectric permittivity and conductivity (effective ``mass" term), respectively, in the matter region such as a semiconductor slab. As specified later, they are space-time dependent, simulating the boundary oscillation. 
Conventionally, the instantaneous modes $ {\bar f}_\alpha ({\bf x},t) $ (real, orthonormal and complete) at each time $ t $ with time-varying frequencies $ {\bar \omega}_\alpha (t) $ are adopted according to the boundary oscillation: \begin{eqnarray} [ - \nabla^2 + m^2 ({\bf x},t) ] {\bar f}_\alpha ({\bf x},t) = \epsilon ({\bf x},t) {\bar \omega}_\alpha^2 (t) {\bar f}_\alpha ({\bf x},t) \end{eqnarray} with the orthonormalization \begin{eqnarray} \int_V \epsilon ({\bf x},t) {\bar f}_\alpha ({\bf x},t) {\bar f}_\beta ({\bf x},t) d^3 x = \delta_{\alpha \beta} / [ 2 {\bar \omega}_\alpha (t) ] . \end{eqnarray} Instead, we here specify the particle representation simply in terms of the initial modes \begin{eqnarray} f^0_\alpha ({\rm x}) = {\bar f}_\alpha ({\bf x},t=0) , \ \omega^0_\alpha = {\bar \omega}_\alpha (t=0) . \end{eqnarray} The canonical field operators in the Heisenberg picture are expanded with the creation and annihilation operators $ a_\alpha^\dagger (t) $ and $ a_\alpha (t) $ as \begin{eqnarray} \phi ({\bf x},t) &=& \sum_\alpha [ a_\alpha (t) + a_\alpha^\dagger (t) ] f^0_\alpha ({\bf x}) , \\ \Pi ({\bf x},t) &=& \epsilon ({\bf x},0) \sum_\alpha i \omega^0_\alpha [ - a_\alpha (t) + a_\alpha^\dagger (t) ] f^0_\alpha ({\bf x}) , \end{eqnarray} where $ \Pi ({\bf x},t) = \partial {\cal L} / \partial {\dot \phi} = \epsilon ({\bf x},t) {\dot \phi} ({\bf x},t) $. 
Then, the canonical Hamiltonian is presented by the usual procedure as \begin{eqnarray} H_{\rm F} (t) &=& \int_V \frac{1}{2} \left\{ \frac{\Pi^2}{\epsilon ({\bf x},t)} + \phi [ - \nabla^2 + m^2 ({\bf x},t) ] \phi \right\} d^3x \nonumber \\ &=& \sum_{\alpha} \omega_{\alpha} (t) \left( a_\alpha^\dagger a_\alpha + \frac{1}{2} \right) + \sum_{\alpha \not= \beta} \mu_{\alpha \beta} (t) a_\alpha^\dagger a_\beta \nonumber \\ &{}& + \sum_{\alpha , \beta} i \left[ g_{\alpha \beta} (t) a_\alpha^\dagger a_\beta^\dagger - g_{\alpha \beta}^* (t) a_\beta a_\alpha \right] , \label{eqn:Ht} \end{eqnarray} where the space-integral is taken over the whole region $ V $ which is fixed suitably (not time-dependent) according to the physical setup, as illustrated later for the case of a cavity with a nonstationary plasma mirror. [The usual oscillating boundary may also be described as a periodic shift of the region of a high potential wall represented by $ m^2 ({\bf x},t) $.] The explicit time-dependence of the Hamiltonian $ H_{\rm F} (t) $ in Eq. (\ref{eqn:Ht}) represents the variation of the couplings which originates from the nonstationary behavior of the c-number external quantities $ \epsilon ({\bf x},t) $ and $ m^2 ({\bf x},t) $. The second-order field equation (Klein-Gordon equation) is derived from the Heisenberg equations for $ \phi ({\bf x},t) $ and $ \Pi ({\bf x},t) $. 
The mode frequencies $ \omega_{\alpha} (t) $, intermode couplings $ \mu_{\alpha \beta} (t) $ and squeezing terms $ g_{\alpha \beta} (t) $ are calculated by considering the orthonormality of $ f^0_\alpha ({\bf x}) $ which obey the wave equation with $ \epsilon ({\bf x},0) $ and $ m^2 ({\bf x},0) $: \begin{eqnarray} \omega_{\alpha} (t) &=& \omega^0_\alpha + \mu_{\alpha \alpha} (t) \equiv \omega^0_\alpha + \delta \omega_{\alpha} (t) , \label{eqn:omt} \\ \mu_{\alpha \beta} (t) &=& 2 G^\epsilon_{\alpha \beta} (t) + 2 G^m_{\alpha \beta} (t) , \label{eqn:mut} \\ g_{\alpha \beta} (t) &=& - i [ - G^\epsilon_{\alpha \beta} (t) + G^m_{\alpha \beta} (t) ] , \label{eqn:gt} \\ G^\epsilon_{\alpha \beta} (t) &=& \frac{1}{2} \omega^0_\alpha \omega^0_\beta \int_{\delta V (t)} \frac{\epsilon^2 ({\bf x},0)}{\epsilon_\Delta ({\bf x},t)} f^0_\alpha ({\bf x}) f^0_\beta ({\bf x}) d^3 x , \label{eqn:Ge} \\ G^m_{\alpha \beta} (t) &=& \frac{1}{2} \int_{\delta V (t)} m_\Delta^2 ({\bf x},t) f^0_\alpha ({\bf x}) f^0_\beta ({\bf x}) d^3 x . \label{eqn:Gm} \end{eqnarray} The space-integrals for $ G^{\epsilon,m}_{\alpha \beta} (t) $ are evaluated actually in the subregion $ \delta V (t) $ ($ \subseteq V $), possibly time-dependent when a moving boundary is considered, where $ \epsilon ({\bf x},t) $ and $ m^2 ({\bf x},t) $ vary in time as \begin{eqnarray} \epsilon_\Delta^{-1} ({\bf x},t) & \equiv & \epsilon^{-1} ({\bf x},t) - \epsilon^{-1} ({\bf x},0) , \\ m_\Delta^2 ({\bf x},t) & \equiv & m^2 ({\bf x},t) - m^2 ({\bf x},0) . \end{eqnarray} Here, $ G^{\epsilon,m}_{\alpha \beta} (0) = 0 $ with $ \epsilon_\Delta^{-1} ({\bf x},0) = 0 $ and $ m_\Delta^2 ({\bf x},0) = 0 $ at $ t = 0 $, as the Hamiltonian $ H_{\rm F} (0) $ is diagonalized in terms of the initial modes $ f^0_\alpha ({\bf x}) $. Similar formulas are presented for the effective Hamiltonian with the instantaneous modes \cite{Law94,SPS98}. 
This effective Hamiltonian involves even the time derivatives of the mode functions, since the quantum time evolution is traced along the instantaneous modes. In the present approach, by contrast, the time evolution is viewed on the initial modes according to the Heisenberg equations. The canonical Hamiltonian is then calculated without time derivatives of the mode functions, and is readily applicable to various physical setups, e.g., the case of a plasma mirror, clarifying its dependence on the experimental parameters. One might object that the particle representation and photon number are ambiguous since the basis modes change during the DCE. This ambiguity is, however, physically spurious (though it might be essential for the case of the expanding universe, which is beyond the present scope). In fact, the instantaneous modes return to the initial modes at each period of the oscillation, where the photon number operators of the two descriptions coincide by definition. We can check explicitly that when the mode functions are not deformed largely in time, as usually considered, this canonical treatment provides essentially the same result for the DCE as the instantaneous-mode approach. The effects of the intermode couplings will be less significant in the instantaneous-mode approach, where the Hamiltonian is diagonalized at each time. In any case, the intermode couplings are usually off resonant, providing subleading contributions to the DCE. \section{Vibration with a plasma mirror} \label{sec:mirror} We next calculate the time-varying frequencies and squeezing couplings of the Hamiltonian for DCE in an effective 1+1 dimensional scalar field model with a nonstationary plasma mirror, which is realized with a semiconductor slab irradiated by periodic laser pulses \cite{IT04-05}.
The dielectric response of the plasma is given by $ \epsilon ( \omega ) = \epsilon_1 [ 1 - ( \omega_p^2 / \omega^2 ) ] $ with the plasma frequency $ \omega_p = ( n_e e^2 / \epsilon_1 m_* )^{1/2} $ in terms of the effective electron mass $ m_* $ and the conduction electron number density $ n_e $ proportional to the laser power $ W_{\rm laser}/{\rm pulse} $. This response for the dispersion relation, $ \epsilon ( \omega ) \omega^2 = \epsilon_1 \omega^2 - ( n_e e^2 / m_* ) $, can be taken into account in the slab region $ [ l , l + \delta ] $ around $ x = l $ with a thickness $ \delta ( \ll L ) $ as \begin{eqnarray} \epsilon (x,t) = \epsilon_1 (t) , m^2 (x,t) = m_p^2 (t) \equiv n_e (t) e^2 / m_* , \label{eqn:em-xt} \end{eqnarray} where $ m_p^2 (0) = 0 $ for $ W_{\rm laser} (0) = 0 $. (The spatial distribution of the conduction electrons along the $ x $ direction may also be considered readily.) The instantaneous mode functions are given as \begin{equation} {\bar f}_k (x,t) = \left\{ \begin{array}{ll} D \sin k x & [ 0 , l ) \\ B {\rm e}^{i k^\prime x} + C {\rm e}^{- i k^\prime x} & [ l , l + \delta ]:{\rm slab} \\ A \sin k [ x - \delta + \xi (t) ] & ( l + \delta , L ] \end{array} \right. \label{eqn:fkb-pm} \end{equation} with the dispersion relations \begin{eqnarray} {\bar \omega}_k^2 = ( k^2 + {\bf k}_\bot^2 ) / \epsilon_0 = ( {k^\prime}^2 + {\bf k}_\bot^2 + m_p^2 ) / \epsilon_1 \end{eqnarray} ($ k^\prime = i | k^\prime | $ for $ {k^\prime}^2 < 0 $ with large $ m_p^2 $), where $ {\bf k}_\bot $ is the momentum in the orthogonal spatial two dimensions (not shown explicitly) \cite{D98,CDM01-02,R06}. The Dirichlet boundary condition is adopted at $ x = 0 , L $ with $ \sin k [ L - \delta + \xi (t) ] = 0 $, corresponding to the case of TE modes. The case of TM modes can be treated in a similar way by adopting $ m^2 (x,t) = [ ( \partial n_e / \partial x ) e^2 / ( {\rm k}_\bot^2 m_* ) ] $ \cite{RITZ09}. 
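As a rough numerical orientation, the plasma frequency $ \omega_p = ( n_e e^2 / \epsilon_1 m_* )^{1/2} $ above is easy to evaluate. The sketch below uses hypothetical GaAs-like values for $ \epsilon_1 $ and $ m_* $; these are illustrative assumptions, not parameters taken from this paper:

```python
# Illustrative estimate: plasma frequency omega_p = sqrt(n_e e^2 / (eps_1 m_*))
# for a photoexcited semiconductor slab, with GaAs-like parameters (assumed).
import math

e    = 1.602e-19         # elementary charge [C]
eps0 = 8.854e-12         # vacuum permittivity [F/m]
m_e  = 9.109e-31         # electron mass [kg]

# Hypothetical slab parameters (GaAs-like, for illustration only)
eps1  = 11.7 * eps0      # dielectric permittivity of the slab
m_eff = 0.067 * m_e      # effective electron mass m_*

def omega_p(n_e):
    """Plasma frequency [rad/s] for a conduction-electron density n_e [m^-3]."""
    return math.sqrt(n_e * e**2 / (eps1 * m_eff))

for n_e in (1e20, 1e21, 1e22):
    print(f"n_e = {n_e:.0e} m^-3  ->  omega_p/2pi = {omega_p(n_e)/(2*math.pi):.2e} Hz")
```

Already at carrier densities of order $ 10^{21} \, {\rm m}^{-3} $ the plasma frequency lies far above the $ \sim $GHz cavity scale, which is why the irradiated slab can act as an effective mirror ($ m_p \gg \omega_k^0 $) for the cavity mode.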
The diagonal couplings $ \delta \omega_k (t) $ and $ g_{kk} (t) $ are specifically calculated in Eqs. (\ref{eqn:omt})--(\ref{eqn:Gm}) with Eq. (\ref{eqn:fkb-pm}) for $ f^0_k (x) $ at $ t = 0 $ as \begin{eqnarray} \delta \omega_k (t) &=& \omega^0_k [ \delta_\epsilon (t) + \delta_m (t) ] / L , \label{eqn:dom-pm} \\ g_{kk} (t) &=& - (i/2) \omega^0_k [ - \delta_\epsilon (t) + \delta_m (t) ] / L . \label{eqn:g-pm} \end{eqnarray} Here, the effective wall oscillation is enhanced as \begin{eqnarray} \delta_\epsilon (t) / \delta & \simeq & - [ \epsilon_1 (0) / \epsilon_0 ] [ 1 - \epsilon_1 (0) / \epsilon_1 (t)] \sin^2 kl , \label{eqn:dlt-e} \\ \delta_m (t) / \delta & \simeq & [ n_e (t) e^2 / m_* \epsilon_0 ( \omega^0_k )^2 ] \sin^2 kl . \label{eqn:dlt-m} \end{eqnarray} This effect is almost proportional to the square of the mode function around the slab, $ [ f^0_k (l) ]^2 \propto \sin^2 kl $, since $ \int_l^{l+\delta} [ f^0_k (x) ]^2 dx \simeq [ f^0_k (l) ]^2 \delta $ for $ k^\prime \delta \sim [ \epsilon_1 (0) / \epsilon_0 ]^{1/2} ( \delta / L ) \ll 1 $ at $ t = 0 $. If the slab is placed at the boundary $ x = l = 0 $, $ \sin^2 kl $ is replaced with $ ( k \delta )^2 / 3 \sim ( \delta / L )^2 \ll 1 $, as observed in Ref. \cite{UPSS04} claiming that the DCE is suppressed in the TE mode. The significant photon creation, however, can take place even in the TE mode if the slab is placed apart from the boundaries $ x = 0 , L $ which are the nodes of $ f^0_k (x) $ \cite{CDLM04,RITZ09}. The shift $ \xi (t) $ in the instantaneous mode of Eq. (\ref{eqn:fkb-pm}) is determined mainly proportional to $ \delta $ to give the frequency modulation $ \delta {\bar \omega}_k (t) $. The diagonal squeezing coupling $ {\bar g}_{kk} (t) $ is then calculated with the formulas for the effective Hamiltonian \cite{Law94,SPS98}. 
After some calculations we find the relations \begin{eqnarray} \delta {\bar \omega}_k (t) \simeq \delta \omega_k (t) , \ {\bar g}_{kk} (t) \simeq [ i / 2 {\bar \omega}_k (t) ] {\dot g}_{kk} (t) , \label{eqn:dom-g} \end{eqnarray} where the change of dielectric is assumed to be small, $ | \epsilon_1 (t) - \epsilon_1 (0) | \ll \epsilon_1 (0) $ as usual \cite{UPSS04}. These relations in Eq. (\ref{eqn:dom-g}) ensure almost the same result for the DCE in the canonical and instantaneous-mode approaches (except for the small contribution of the off-resonant intermode couplings). This will be checked numerically in the next section. The above calculations of $ \delta \omega_k (t) $ and $ g_{kk} (t) $ are valid up to $ | \delta \omega_k (t) | / \omega_k^0 = | \delta_\epsilon (t) + \delta_m (t) | / L \sim 0.1 $, which is still a significant enhancement of the effective displacement $ | \delta_{\epsilon , m} | \gg \delta $ for the DCE. The present approach on the fixed basis, however, does not work effectively in an extreme situation where the mode functions are largely deformed in time with $ | \delta \omega_k (t) | \sim \omega_k^0 $. In such a case the instantaneous-mode approach is rather suitable though the deformation of the mode functions cannot be treated perturbatively \cite{RITZ09}. Anyway, as seen in the following, a reasonable deformation to induce $ | \delta \omega_k (t) | / \omega_k^0 \sim 0.01 - 0.1 $ is sufficient to create a significant number of photons for detection with atoms. \section{Photon creation as squeezing} \label{sec:creation} Once the Hamiltonian is presented in terms of the creation and annihilation operators, the time evolution for the DCE is determined by the Heisenberg equations $ {\dot a}_\alpha (t) = i[ H_{\rm F} (t) , a_\alpha (t) ] $ and $ {\dot a}_\alpha^\dagger (t) = i[ H_{\rm F} (t) , a_\alpha^\dagger (t) ] $. 
It is described as the Bogoliubov transformation, \begin{eqnarray} a_\alpha (t) &=& A_{\alpha \beta} (t) a_\beta + B_{\alpha \beta}^* (t) a_\beta^\dagger , \\ a_\alpha^\dagger (t) &=& A_{\alpha \beta}^* (t) a_\beta^\dagger + B_{\alpha \beta} (t) a_\beta . \end{eqnarray} The master equations for the Bogoliubov transformation are derived from the Heisenberg equations as \begin{eqnarray} {\dot A}_{\alpha \beta} &=& - i \omega_\alpha (t) A_{\alpha \beta} - i \mu_{\alpha \gamma} (t) A_{\gamma \beta} + 2 g_{\alpha \gamma} B_{\gamma \beta} , \label{eqn:master-A} \\ {\dot B}_{\alpha \beta} &=& i \omega_\alpha (t) B_{\alpha \beta} + i \mu_{\alpha \gamma}^* (t) B_{\gamma \beta} + 2 g_{\alpha \gamma}^* A_{\gamma \beta} , \label{eqn:master-B} \end{eqnarray} where the intermode couplings are renamed suitably as $ \mu_{\alpha \gamma} ( 1 - \delta_{\alpha \gamma}) \rightarrow \mu_{\alpha \gamma} $ with $ \mu_{\alpha \alpha} \equiv 0 $. In the following, we illustrate the characteristic features of DCE by concentrating on a single resonant mode with time-varying frequency $ \omega (t) = \omega_0 + \delta \omega (t) $ and squeezing coupling $ g(t) $ (the mode index ``$ k $" omitted). The intermode couplings will not provide significant contributions since they are fairly off resonant generally for the nonequidistant frequency differences \cite{D95-DK96,CDM01-02,R06}. The master equations read \begin{eqnarray} {\dot A} = - i \omega (t) A + 2 g (t) B , \ {\dot B} = i \omega (t) B + 2 g^* (t) A \label{eqn:master} \end{eqnarray} for the Bogoliubov transformation, \begin{eqnarray} a(t) = A(t) a + B^* (t) a^\dagger , a^\dagger (t) = A^* (t) a^\dagger + B (t) a . \label{eqn:squeeze} \end{eqnarray} The solution is expressed as squeezing and phase rotation \cite{P68-69}, \begin{eqnarray} A(t) = \cosh r(t) {\rm e}^{i \phi_A (t)} , B(t) = \sinh r(t) {\rm e}^{i \phi_B (t)} , \end{eqnarray} with the initial condition $ A(0) = 1 , B(0) = 0 $, ensuring $ | A(t) |^2 - | B(t) |^2 = 1 $. 
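As a consistency check, the normalization $ | A(t) |^2 - | B(t) |^2 = 1 $ follows directly from the master equations in Eq. (\ref{eqn:master}): \begin{eqnarray} \frac{d}{dt} | A |^2 &=& {\dot A} A^* + A {\dot A}^* = 2 g A^* B + 2 g^* A B^* , \nonumber \\ \frac{d}{dt} | B |^2 &=& {\dot B} B^* + B {\dot B}^* = 2 g^* A B^* + 2 g A^* B , \nonumber \end{eqnarray} so that $ d ( | A |^2 - | B |^2 ) / dt = 0 $ for arbitrary $ \omega (t) $ and $ g (t) $, reflecting the unitarity of the Bogoliubov transformation; the initial condition then fixes the constant to unity.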
An analytic solution for $ A(t) $ and $ B(t) $ is obtained in the RWA (rotating-wave approximation) by replacing \begin{eqnarray} \omega (t) & \rightarrow & \omega_0 + \langle \delta \omega \rangle ({\mbox{average}}) , \\ g (t) & \rightarrow & \langle g \rangle_\Omega {\rm e}^{-i \Omega t} ({\mbox{Fourier component}}) , \end{eqnarray} where $ \omega_0 = \omega (0) $. By noting the time-evolution of the number operator $ a^\dagger (t) a(t) = | B(t) |^2 a a^\dagger + \ldots $, we obtain the photon creation via DCE (vacuum squeezing) as \begin{eqnarray} n_\gamma (t) &=& \langle 0 | a^\dagger (t) a(t) | 0 \rangle = | B(t) |^2 \nonumber \\ & \simeq & \left| \frac{2 \langle g \rangle_\Omega}{\chi} \right|^2 \times \left\{ \begin{array}{ll} \sinh^2 \chi t & ( | \Delta | < | 2 \langle g \rangle_\Omega | ) \\ | \chi |^2 t^2 & ( | \Delta | = | 2 \langle g \rangle_\Omega | ) \\ \sin^2 | \chi | t & ( | \Delta | > | 2 \langle g \rangle_\Omega | ) \end{array} \right. \label{eqn:ngamma} \end{eqnarray} with the effective squeezing rate \begin{eqnarray} \chi = {\sqrt{| 2 \langle g \rangle_\Omega |^2 - \Delta^2}} . \label{eqn:chi} \end{eqnarray} Here, the detuning $ \Delta $ is introduced for the frequency $ \Omega $ of laser pulses \cite{D98,CDM01-02} as \begin{eqnarray} \Omega = 2 ( \omega_0 + \langle \delta \omega \rangle + \Delta ) . \label{eqn:Omega} \end{eqnarray} The resonance condition for DCE is then given by \begin{eqnarray} \Omega ({\rm resonance}) = 2 ( \omega_0 + \langle \delta \omega \rangle ) , \label{eqn:Omega-resonance} \end{eqnarray} involving the average shift of the frequency $ \langle \delta \omega \rangle $ \cite{CDLM04,RITZ09}, rather than the naive condition $ \Omega = 2 \omega_0 $. If $ \Omega = 2 \omega_0 $ is taken with $ \Delta = - \langle \delta \omega \rangle $, the squeezing rate $ \chi $ is significantly reduced, even possibly becomes imaginary with $ n_\gamma (t) \lesssim 1 $ oscillating as $ \sin^2 | \chi | t $. 
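For orientation, the piecewise RWA solution in Eq. (\ref{eqn:ngamma}) is straightforward to evaluate numerically. The sketch below is an illustration only, with $ g2 = | 2 \langle g \rangle_\Omega | $ and all rates measured in units of $ \omega_0 $:

```python
# Sketch of the RWA photon number of Eq. (ngamma).
# g2 = |2 <g>_Omega|, Delta = detuning, both in units of omega_0.
import math

def n_gamma(t, g2, Delta):
    """Photon number |B(t)|^2 in the RWA, from the piecewise solution."""
    chi2 = g2**2 - Delta**2           # chi^2 (may be negative)
    if chi2 > 0:                      # |Delta| < g2: exponential squeezing
        chi = math.sqrt(chi2)
        return (g2 / chi)**2 * math.sinh(chi * t)**2
    elif chi2 == 0:                   # threshold: power-law growth
        return (g2 * t)**2
    else:                             # |Delta| > g2: bounded oscillation
        chi = math.sqrt(-chi2)
        return (g2 / chi)**2 * math.sin(chi * t)**2
```

On resonance ($ \Delta = 0 $) this reduces to $ n_\gamma = \sinh^2 ( | 2 \langle g \rangle_\Omega | t ) $, the familiar exponential vacuum squeezing; beyond the threshold $ | \Delta | > | 2 \langle g \rangle_\Omega | $ the growth degrades to an oscillation bounded by $ | 2 \langle g \rangle_\Omega / \chi |^2 $.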
The photon damping with the factor $ e^{- \Gamma t} $ due to the cavity loss should further be taken into account, where \begin{eqnarray} \Gamma = \omega_0 / Q \end{eqnarray} with the cavity quality factor $ Q $. Hence, the threshold condition for the squeezing by DCE is placed as \begin{eqnarray} \chi > \Gamma / 2 , \end{eqnarray} which is readily satisfied with a large enough $ Q $. We have solved numerically the master equations in Eq. (\ref{eqn:master}) without the RWA. The time-varying couplings are taken typically as $ \omega (t) = \omega_0 + \langle \delta \omega \rangle ( 1 - \cos \Omega t ) $ and $ g(t) = 2 \langle g \rangle_\Omega ( 1 - \cos \Omega t ) $, where $ | 2 \langle g \rangle_\Omega | \sim | \langle \delta \omega \rangle |/2 $ as indicated in Eqs. (\ref{eqn:dom-pm}) and (\ref{eqn:g-pm}) for the plasma mirror. The instantaneous-mode solution has also been obtained by considering the relations $ \delta {\bar \omega} (t) = \delta \omega (t) $ and $ {\bar g} (t) = [ i / 2 {\bar \omega} (t) ] {\dot g} (t) $ in Eq. (\ref{eqn:dom-g}). In Fig. \ref{fig:Npnps}, the photon creation $ n_\gamma (t) $ in the early stage of DCE is plotted for $ N_{\rm pulse} = t ( \Omega / 2 \pi ) \leq 30 $ (the number of periodic laser pulses). The results of the canonical and instantaneous-mode approaches are shown with the solid and dotted curves, respectively. Here, the parameters are taken typically as $ \langle \delta \omega \rangle = 0.02 \omega_0 $, $ 2 \langle g \rangle_\Omega = i 0.01 \omega_0 $, and $ \Delta = 0 $ (upper curves), $ - \langle \delta \omega \rangle $ (lower curves) for $ \Omega $ in Eq. (\ref{eqn:Omega}). 
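The numerical solution just described can be reproduced in outline as follows. This is a minimal sketch (plain fixed-step RK4, $ \omega_0 = 1 $, couplings as quoted above), not the authors' actual code:

```python
# Sketch: integrate the single-mode master equations (eqn:master) without the
# RWA, with omega(t) = omega0 + d*(1 - cos(Omega t)), g(t) = g2*(1 - cos(Omega t)).
import math

omega0 = 1.0
d  = 0.02 * omega0            # <delta omega>
g2 = 0.01j * omega0           # 2 <g>_Omega (imaginary, as in the text)

def rhs(t, A, B, Omega):
    w = omega0 + d * (1 - math.cos(Omega * t))
    g = g2 * (1 - math.cos(Omega * t))
    return (-1j * w * A + 2 * g * B,
            1j * w * B + 2 * g.conjugate() * A)

def n_photons(Omega, n_pulse, steps_per_pulse=2000):
    """Return (|B|^2, |A|^2 - |B|^2) after n_pulse laser pulses (RK4)."""
    A, B = 1.0 + 0j, 0.0 + 0j
    h = (2 * math.pi / Omega) / steps_per_pulse
    t = 0.0
    for _ in range(n_pulse * steps_per_pulse):
        k1 = rhs(t, A, B, Omega)
        k2 = rhs(t + h/2, A + h/2 * k1[0], B + h/2 * k1[1], Omega)
        k3 = rhs(t + h/2, A + h/2 * k2[0], B + h/2 * k2[1], Omega)
        k4 = rhs(t + h,   A + h * k3[0],   B + h * k3[1],   Omega)
        A += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        B += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return abs(B)**2, abs(A)**2 - abs(B)**2

n_res, norm = n_photons(Omega=2 * (omega0 + d), n_pulse=30)   # Delta = 0
n_det, _    = n_photons(Omega=2 * omega0,       n_pulse=30)   # Delta = -<dw>
print(n_res, n_det, norm)  # resonant case grows, detuned case stays bounded
```

The resonant run grows as $ \sinh^2 \chi t $ with $ \chi \approx 0.01 \omega_0 $, while the run at the naive condition $ \Omega = 2 \omega_0 $ remains bounded; the conserved quantity $ | A |^2 - | B |^2 $ monitors the integration accuracy.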
We can see that $ n_\gamma (t) $ increases rapidly via the DCE on the resonance with $ \Omega = 2 ( \omega_0 + \langle \delta \omega \rangle ) $ ($ \Delta = 0 $), while $ n_\gamma (t) $ does not grow for $ \Omega = 2 \omega_0 $ ($ \Delta = - \langle \delta \omega \rangle $) due to the effective detuning brought by the average shift $ \langle \delta \omega \rangle $. In Fig. \ref{fig:Npnp}, the photon creation $ n_\gamma (t) $ is plotted through the DCE period for $ N_{\rm pulse} = t ( \Omega / 2 \pi ) \leq 300 $. The squeezing rate is determined from this plot to be $ \chi \simeq 0.01 \omega_0 $, as indicated in Eq. (\ref{eqn:chi}) with $ \Delta = 0 $. This result confirms that a large number of photons can be created via the DCE with a reasonable squeezing rate $ \chi \sim 0.01 \omega_0 $ when the laser pulses are applied many times. It is also found that the canonical and instantaneous-mode approaches provide almost the same result (except for the small contribution of the off-resonant intermode couplings). The analytic solution under the RWA in Eq. (\ref{eqn:ngamma}) overlaps almost with the instantaneous-mode result though it is not plotted explicitly in Figs. \ref{fig:Npnps} and \ref{fig:Npnp}. \begin{figure} \includegraphics[width=.8\linewidth]{Npnps.eps} \caption{Photon creation $ n_\gamma (t) $ (linear plot) in the early stage of DCE for $ N_{\rm pulse} = t ( \Omega / 2 \pi ) \leq 30 $ (the number of periodic laser pulses). The results of the canonical and instantaneous-mode approaches are shown with the solid and dotted curves, respectively. 
The parameters are taken typically as $ \langle \delta \omega \rangle = 0.02 \omega_0 $, $ 2 \langle g \rangle_\Omega = i 0.01 \omega_0 $, and $ \Delta = 0 $ (on-resonance: upper curves), $ - \langle \delta \omega \rangle $ (off-resonance: lower curves) for $ \Omega $.} \label{fig:Npnps} \end{figure} \begin{figure} \includegraphics[width=.8\linewidth]{Npnp.eps} \caption{Photon creation $ n_\gamma (t) $ (log plot) through the DCE period for $ N_{\rm pulse} = t ( \Omega / 2 \pi ) \leq 300 $. The results of the canonical and instantaneous-mode approaches are shown with the solid and dotted curves, respectively. The parameters are taken typically as $ \langle \delta \omega \rangle = 0.02 \omega_0 $, $ 2 \langle g \rangle_\Omega = i 0.01 \omega_0 $, and $ \Omega = 2.04 \omega_0 $ ($ \Delta = 0 $).} \label{fig:Npnp} \end{figure} We briefly discuss the effect of the intermode couplings. Specifically, the coupling $ \mu_{12} a_1^\dagger a_2 + \mu_{12}^* a_2^\dagger a_1 $ between the modes 1 and 2 becomes resonant under a condition $ \omega_2^0 = 3 \omega_1^0 $ $ \rightarrow $ $ \omega_2^0 - \omega_1^0 = 2 \omega_1^0 \approx \Omega $ for the case of the $ {\rm TE}_{111} $ and $ {\rm TE}_{115} $ modes in a cubic cavity due to the relation $ ( 1^2 + 1^2 + 5^2 )^{1/2} = 3 ( 1^2 + 1^2 + 1^2 )^{1/2} $. Then, through this resonant intermode coupling the significant photon creation occurs in both the modes 1 and 2 as $ n_{\gamma 1} (t) \sim n_{\gamma 2} (t) $, increasing the total of photon numbers \cite{CDM01-02,R06,RITZ09}. The photons of the mode 2 are, however, fairly off resonant with the Rydberg atoms tuned to detect the photons of the mode 1. Hence, they cannot be detected efficiently. \section{Detection with Rydberg atoms} \label{sec:detection} The photons created via the DCE are detected suitably by Rydberg atoms with principal quantum number $ n \approx 100 $ and transition frequency $ \sim {\rm GHz} $ \cite{D95-DK96,RITZ09}. 
Rydberg atoms may be treated as a two-level system with a transition frequency $ \omega_e $ for the resonant photon absorption with $ \omega_e \approx \omega_0 $. They are initially prepared in the lower level $ | g \rangle $, and injected into the cavity. Some of these atoms are excited to the upper level $ | e \rangle $ by absorbing the photons, and are detected outside the cavity as the photon signal. Recently, a high-sensitivity measurement of blackbody radiation has been performed at a frequency of 2.527 GHz and low temperatures of 67 mK -- 1 K by employing a Rydberg-atom cavity detector with a newly developed selective field ionization scheme for $ n \approx 100 $ (the atoms excited by absorbing photons are selectively ionized by applying an electric field) \cite{SPD06}. Here, we note that in order to observe purely the vacuum squeezing via DCE, the cavity should be cooled well below 100 mK to suppress the thermal photons as $ n_\gamma ({\rm thermal}) \ll 1 $. In fact, if photons are present initially with an expectation value $ \langle a^\dagger a \rangle $, they are also amplified by the DCE as $ ( 1 + 2 | B(t) |^2 ) \langle a^\dagger a \rangle $. For simplicity of argument, consider that $ N_{\rm Ryd} $ Rydberg atoms (actually $ N_{\rm Ryd} \sim 100 - 1000 $ \cite{SPD06}), all prepared in the lower level $ | g \rangle $, are injected into the cavity to detect the created photons after the period of DCE. (The following features for the photon detection are essentially valid even if the atomic beam is injected continuously during and after the DCE, as discussed later.) The $ n_\gamma $ photons and $ N_{\rm Ryd} $ atoms (all located at the same position for simplicity) are coupled with the Jaynes-Cummings Hamiltonian under the RWA as \begin{eqnarray} H_{\rm AF} = \kappa \sqrt{N_{\rm Ryd}} ( a D_+ + a^\dagger D_- ) . \label{eqn:HAF} \end{eqnarray} (The effect of the counter-rotating terms is negligible near the resonance.)
Here, the collective atomic spin-like operators are defined (in the Schr{\"o}dinger picture) \cite {HR84} by \begin{eqnarray} D_+ & \equiv & \sum_{i=1}^{N_{\rm Ryd}} | e \rangle \langle g |_{(i)} / \sqrt{N_{\rm Ryd}} , \label{eqn:D+} \\ D_- & \equiv & \sum_{i=1}^{N_{\rm Ryd}} | g \rangle \langle e |_{(i)} / \sqrt{N_{\rm Ryd}} , \label{eqn:D-} \end{eqnarray} and the complex phase for $ \kappa $ is absorbed in the atomic levels. The single atom-photon coupling $ \kappa $ is explicitly given by \begin{eqnarray} \kappa = d \sqrt{\omega_0 / 2 \epsilon_0 V} ( | f^0 ({\bf x}_1 ) | / | f^0 ({\bf x}_0 ) | ) \end{eqnarray} in terms of the magnitude of the electric dipole transition matrix element $ d $, the cavity volume $ V $ and the mode function $ f^0 ({\bf x}) $, where $ {\bf x}_1 $ and $ {\bf x}_0 $ represent the atomic position and the antinode, respectively. The collective atom-photon coupling is suitably defined by \begin{eqnarray} {\bar \kappa} = \kappa \sqrt{N_{\rm Ryd}} . \end{eqnarray} The single atom-field coupling is typically $ \kappa \sim 3 \times 10^3 {\rm s}^{-1} $ at the antinode for the Rydberg atom of principal quantum number $ n \approx 100 $ with $ \omega_e \approx \omega_0 \sim 1.5 \times 10^{10} {\rm s}^{-1} $ ($ 2.4 {\rm GHz} \times 2 \pi $) and $ V \sim ( 0.1 {\rm m} )^3 $ \cite{SPD06,HR84}. Then, the collective coupling amounts to $ {\bar \kappa} \sim 10^5 {\rm s}^{-1} \sim 10^{-5} \omega_0 $ for $ N_{\rm Ryd} \sim 10^3 $, which is still much smaller than the resonant frequency $ \omega_e \approx \omega_0 $. The commutation relations among the collective operators are given by \begin{eqnarray} [ D_+ , D_- ] &=& D_z \nonumber \\ & \equiv & \sum_{i=1}^{N_{\rm Ryd}} [| e \rangle \langle e |_{(i)}-| g \rangle \langle g |_{(i)}]/N_{\rm Ryd} , \label{eqn:Dz} \\ {[ D_z , D_\pm ]} &=& \pm ( 2 / N_{\rm Ryd} ) D_\pm . 
\label{eqn:Dpm} \end{eqnarray} The operators $ {\hat N}_e $ and $ {\hat N}_g $ to represent the populations of the upper and lower levels $ | e \rangle $ and $ | g \rangle $, respectively, are given by \begin{eqnarray} {\hat N}_e &=& \sum_{i=1}^{N_{\rm Ryd}} | e \rangle \langle e |_{(i)} = ( N_{\rm Ryd} / 2 ) ( 1 + D_z ) , \\ {\hat N}_g &=& \sum_{i=1}^{N_{\rm Ryd}} | g \rangle \langle g |_{(i)} = ( N_{\rm Ryd} / 2 ) ( 1 - D_z ) , \end{eqnarray} satisfying the completeness \begin{eqnarray} {\hat N}_e + {\hat N}_g = \sum_{i=1}^{N_{\rm Ryd}} [ | e \rangle \langle e |_{(i)}+| g \rangle \langle g |_{(i)} ] \equiv N_{\rm Ryd} . \end{eqnarray} The created photons are detected by counting the number of excited atoms which is represented by $ {\hat N}_e $ with eigenvalues $ 0 , 1 , \ldots , N_{\rm Ryd} $. The initial atomic state is prepared as \begin{eqnarray} | 0_e \rangle = | g_1, g_2, \ldots, g_{N_{\rm Ryd}} \rangle , \end{eqnarray} which is an eigenstate of $ {\hat N}_e $ with zero atomic excitation satisfying $ D_- | 0_e \rangle = 0 $. The one-excitation state is generated as \begin{eqnarray} | 1_e \rangle &=& D_+ | 0_e \rangle \nonumber \\ &=& \frac{1}{\sqrt{N_{\rm Ryd}}} \sum_{i=1}^{N_{\rm Ryd}} | g_1 , \ldots, e_i , g_{i+1} , \ldots , g_{N_{\rm Ryd}} \rangle , \end{eqnarray} and so on for the multi-excitation states. The Heisenberg equations are derived by taking the total Hamiltonian $ H_{\rm A} + H_{\rm AF} + H_{\rm F} $ with $ H_{\rm A} = ( N_{\rm Ryd} / 2 ) \omega_e D_z $ for the free atomic system: \begin{eqnarray} {\dot a} &=& - i \omega_0 a - i {\bar \kappa} D_- , \label{eqn:a-eq} \\ {\dot D}_- &=& - i \omega_e D_- + i {\bar \kappa} a D_z , \label{eqn:D-eq} \\ {\dot D}_z &=& - i ( 2 / N_{\rm Ryd} ) {\bar \kappa} ( a D_+ - a^\dagger D_- ) . \label{eqn:Dz-eq} \end{eqnarray} We solve these equations perturbatively to see the evolution of the atomic excitation $ N_e (t) = \langle {\hat N}_e (t) \rangle $. First, Eqs. 
(\ref{eqn:a-eq}) and (\ref{eqn:D-eq}) for $ a(t) $ and $ D_- (t) = D_+^\dagger (t) $ are integrated up to the first order of $ {\bar \kappa} $ with the initial atomic operators $ D_\pm ( t_1 ) $ in Eqs. (\ref{eqn:D+}) and (\ref{eqn:D-}) and the photon operator $ a( t_1 ) $ at $ t = t_1 $ after the DCE with one sequence of $ N_{\rm pulse} $ laser pulses for the duration \begin{eqnarray} t_1 = N_{\rm pulse} ( 2 \pi / \Omega ) . \end{eqnarray} Then, the results are applied to Eq. (\ref{eqn:Dz-eq}) to obtain $ D_z (t) $ up to the second order of $ {\bar \kappa} $. This determines the atomic excitation as \begin{eqnarray} N_e (t) &=& \langle {\hat N}_e (t) \rangle = ( N_{\rm Ryd} / 2 ) [ 1 + \langle D_z (t) \rangle ] \nonumber \\ & \simeq & n_\gamma ( 2 {\bar \kappa} / \Delta_e )^2 \sin^2 [ \Delta_e ( t - t_1 ) / 2 ] , \label{eqn:net} \end{eqnarray} where the atomic detuning is given by \begin{eqnarray} \Delta_e = \omega_e - \omega_0 . \end{eqnarray} In these calculations, the following relations are considered: $ \{ a , a^\dagger \} D_z + \{ D_+ , D_- \} = 2 ( a^\dagger a D_z + D_+ D_- ) $, $ [ a , a^\dagger ] D_z - [ D_+ , D_- ] = 0 $, $ \langle 0_e | D_\pm ( t_1 ) | 0_e \rangle = 0 $, $ \langle 0_e | D_+ ( t_1 ) D_- ( t_1 ) | 0_e \rangle = 0 $, $ \langle 0_e | D_z ( t_1 ) | 0_e \rangle = - 1 $, and $ \langle 0 | a^\dagger ( t_1 ) a ( t_1 ) | 0 \rangle = n_\gamma $ (the photons created via the DCE). Note here that $ N_e (t) \ll N_{\rm Ryd} $ with $ \langle D_z \rangle \approx - 1 $ in the early epoch of photon detection (the linear regime). Although it is difficult in practice to trace exactly the time evolution beyond the linear regime for the system of the many atoms interacting with the resonant cavity mode, we may survey the essential features for the atomic excitation to detect the photons as follows. Suppose that $ n_\gamma \gg N_{\rm Ryd} $, namely the photons are created much more than the Rydberg atoms, as desired and feasible experimentally. 
Then, the atomic excitation is eventually saturated as $ N_e (t) \sim ({\bar \kappa} t)^2 n_\gamma \sim N_{\rm Ryd} $ for $ t \sim 1 / ( \kappa \sqrt{n_\gamma} ) $, which is expected by extrapolating Eq. (\ref{eqn:net}) roughly up to $ {\bar \kappa} t \sim \sqrt{N_{\rm Ryd} / n_\gamma} \ll 1 $ near the resonance $ \Delta_e \approx 0 $ (henceforth $ t - t_1 \rightarrow t $). This excitation process may be viewed as the onset of Rabi oscillation between $ | g \rangle $ and $ | e \rangle $ at a rate \begin{eqnarray} \Omega_e \sim \kappa \sqrt{n_\gamma} , \end{eqnarray} which takes place almost independently for the $ N_{\rm Ryd} $ atoms in the presence of the large field (many photons with $ n_\gamma \gg N_{\rm Ryd} $). On the other hand, if $ n_\gamma < N_{\rm Ryd} $ though less interesting experimentally, the excitation is exchanged between the atoms and field as $ N_e (t) \sim n_\gamma / 2 $ on average for $ {\bar \kappa} t \sim 1 $. This may be understood from the fact that the interaction Hamiltonian $ H_{\rm AF} $ in Eq. (\ref{eqn:HAF}) describes the oscillation with a rate $ \Omega_e \sim {\bar \kappa} = \kappa \sqrt{N_{\rm Ryd}} $ between the atomic and field operators in the linear regime. The collective atomic excitation can be treated as a quantum oscillator, satisfying approximately the bosonic commutation relation $ [ D_- , D_+ ] \approx - \langle D_z \rangle \approx 1 $ with $ n_\gamma \ll N_{\rm Ryd} $ in Eq. (\ref{eqn:Dz}), that is $ D_+ $ and $ D_- $ act as the creation and annihilation operators, respectively \cite{HR84}. The cavity loss eventually becomes significant for $ t \gtrsim 1 / \Gamma $. Then, the atomic excitation is also relaxed with a rate \begin{eqnarray} \Gamma_e \sim \left\{ \begin{array}{ll} 4 ( {\bar \kappa} / \Gamma )^2 \Gamma & ( {\bar \kappa} < \Gamma / 4 ) \\ \Gamma / 2 & ( {\bar \kappa} \geq \Gamma / 4 ) \end{array} \right. 
\end{eqnarray} through the transition $ | e \rangle \rightarrow | g \rangle + \gamma $ and the loss of the emitted photon in the cavity \cite{HR84}. We also note that the atom-field interaction terminates when the atoms transit through the cavity. The atomic transit time is given by \begin{eqnarray} t_{\rm tr} = L/v \equiv \Gamma_{\rm tr}^{-1} , \end{eqnarray} where $ v $ and $ L $ are the atomic velocity and the cavity length, respectively. We have typically \begin{eqnarray} \Gamma_{\rm tr} \sim \frac{300 {\rm m}/{\rm s}}{0.1 {\rm m}} = 3 \times 10^3 {\rm s}^{-1} , \end{eqnarray} which is comparable to the single atom-field coupling $ \kappa $. By considering these damping effects, we realize that the created photons are detected efficiently with the atoms under the conditions, \begin{eqnarray} \Omega_e & \gtrsim & \Gamma , \Gamma_{\rm tr} , \\ \Gamma_{\rm tr} & \gtrsim & \Gamma_e . \end{eqnarray} That is, the atomic excitation should take place for $ t \sim \Omega_e^{-1} $ before the significant loss of the created photons due to the cavity damping ($ \Gamma \geq 2 \Gamma_e $), and the actual cutoff of the atom-field interaction by the atomic transit ($ \Gamma_{\rm tr} $). It is also required that the excitation damping ($ \Gamma_e $) induced by the cavity loss does not become significant before the atoms transit through the cavity ($ \Gamma_{\rm tr} $). As investigated so far, if the photons are created copiously via the DCE with $ n_\gamma \gg N_{\rm Ryd} $, they are detected by the atomic excitation as \begin{eqnarray} N_e ( t_{\rm tr} ) \sim N_{\rm Ryd} / 2 . \end{eqnarray} Here, the condition $ \Omega_e \gtrsim \Gamma_{\rm tr} $ is less restrictive, requiring merely $ n_\gamma \gtrsim ( \Gamma_{\rm tr} / \kappa )^2 \sim 1 $ for $ \Gamma_{\rm tr} \sim \kappa $. 
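As a rough numerical sanity check of these detection estimates, the following Python sketch evaluates the linear-regime excitation $ N_e (t) $ of Eq. (\ref{eqn:net}), the rates $ \Omega_e $ and $ \Gamma_e $, and the bounds of Eq. (\ref{eqn:Q1}) for the illustrative orders of magnitude quoted in the text ($ \kappa \sim 3 \times 10^3 \, {\rm s}^{-1} $, $ \Gamma_{\rm tr} \sim \kappa $, $ N_{\rm Ryd} \sim 10^3 $, $ n_\gamma \sim 10^6 $, $ Q \sim 5 \times 10^3 $). All parameter values, including the detuning $ \Delta_e $, are assumptions for illustration, and the $ \gtrsim $ conditions are checked only up to $ O(1) $ factors.

```python
import math

# Illustrative orders of magnitude quoted in the text (assumptions, not fits)
kappa = 3e3                 # single atom-field coupling [1/s]
Gamma_tr = 3e3              # inverse atomic transit time [1/s]
N_Ryd = 1e3                 # number of Rydberg atoms
n_gamma = 1e6               # photons created via the DCE
omega0 = 5e6 * kappa        # mode frequency, omega0/kappa ~ 5e6
Q = 5e3                     # cavity quality factor (Eq. (Q1) regime)
Gamma = omega0 / Q          # cavity damping rate

kappa_bar = kappa * math.sqrt(N_Ryd)   # collective coupling

# Linear-regime excitation, N_e(t) = n_gamma (2 kbar/Delta_e)^2 sin^2(Delta_e t/2)
Delta_e = 1e8               # atomic detuning [1/s] (assumed, off resonance)
N_e = lambda t: n_gamma * (2 * kappa_bar / Delta_e)**2 \
                * math.sin(Delta_e * t / 2)**2
assert max(N_e(t * 1e-9) for t in range(1000)) < N_Ryd  # linear regime: N_e << N_Ryd

# Rabi rate for n_gamma >> N_Ryd, and excitation damping (weak-coupling branch)
Omega_e = kappa * math.sqrt(n_gamma)
Gamma_e = (4 * (kappa_bar / Gamma)**2 * Gamma if kappa_bar < Gamma / 4
           else Gamma / 2)

# Detection conditions, checked only to order of magnitude
print(Omega_e >= Gamma, Omega_e >= Gamma_tr, Gamma_tr >= 0.1 * Gamma_e)

# Bounds of Eq. (Q1) and the resulting photon-number requirement
Q_low = (omega0 / kappa) / math.sqrt(n_gamma)
Q_high = (omega0 / kappa) * (Gamma_tr / kappa) / N_Ryd
n_gamma_min = (kappa / Gamma_tr)**2 * N_Ryd**2
print(Q_low, Q_high, n_gamma_min)   # ~5e3, ~5e3, ~1e6
```

With these numbers both bounds of Eq. (\ref{eqn:Q1}) collapse to $ Q \sim 5 \times 10^3 $ and the photon-number requirement gives $ n_\gamma \gtrsim 10^6 $, consistent with the estimates above.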
The atomic detuning may be suppressed readily as $ \Delta_e < \Omega_e $, e.g., for $ \Omega_e \sim 3 \times 10^6 {\rm s}^{-1} $ with $ \kappa \sim 3 \times 10^3 {\rm s}^{-1} $ and $ n_\gamma \sim 10^6 $. The conditions $ \Omega_e \gtrsim \Gamma $ and $ \Gamma_{\rm tr} \gtrsim \Gamma_e \sim 2 ( {\bar \kappa} / \Gamma )^2 \Gamma $ ($ {\bar \kappa} < \Gamma / 4 $) imply lower and upper bounds, respectively, on the cavity quality factor, \begin{eqnarray} ( \omega_0 / \kappa ) / \sqrt{n_\gamma} \lesssim Q \lesssim ( \omega_0 / \kappa ) ( \Gamma_{\rm tr} / \kappa ) / N_{\rm Ryd} , \label{eqn:Q1} \end{eqnarray} where $ \omega_0 / \kappa \sim 5 \times 10^6 $. These bounds are combined as a requirement for the number of created photons, \begin{eqnarray} n_\gamma \gtrsim ( \kappa / \Gamma_{\rm tr} )^2 N_{\rm Ryd}^2 \gg N_{\rm Ryd} . \end{eqnarray} For example, we estimate $ Q \sim 5 \times 10^3 $ and $ n_\gamma \sim 10^6 $ for $ \Gamma_{\rm tr} \sim \kappa $ and $ N_{\rm Ryd} \sim 10^3 $. This range of $ Q $ meets consistently the condition $ {\bar \kappa} < \Gamma / 4 $ for $ \Gamma_e $. On the other hand, if $ \Gamma_e = \Gamma / 2 $ ($ {\bar \kappa} \geq \Gamma / 4 $) the condition $ \Gamma_{\rm tr}\gtrsim \Gamma_e $ places a significant bound \begin{eqnarray} Q \gtrsim \omega_0 / \Gamma_{\rm tr} \sim 5 \times 10^6 . \label{eqn:Q2} \end{eqnarray} This range of $ Q $ meets consistently the condition $ {\bar \kappa} \geq \Gamma / 4 $ for $ \Gamma_e $. We also note that $ N_e ( t_{\rm tr} ) \sim n_\gamma / 2 $ for $ n_\gamma < N_{\rm Ryd} $. In this case with $ \Omega_e \sim {\bar \kappa} $, the condition $ \Omega_e \gtrsim \Gamma $ implies $ {\bar \kappa} \geq \Gamma / 4 $. Hence, the above range of $ Q $ in Eq. (\ref{eqn:Q2}) is effective either for $ n_\gamma \gtrsim N_{\rm Ryd} $ or $ n_\gamma < N_{\rm Ryd} $. The atomic beam may be injected continuously through the period of DCE. 
Then, we can show that the atomic excitation grows along with the squeezing of the field as $ N_e (t) \sim ( {\bar \kappa} / \omega_0 )^2 n_\gamma (t) $ during the DCE. This atomic excitation is usually smaller than $ N_{\rm Ryd} \sim 100 - 1000 $, e.g., for $ {\bar \kappa} / \omega_0 \sim 10^{-5} $ and $ n_\gamma < 10^{10} $. In any case, the created photons are efficiently detected with the atoms after the DCE. \section{Experimental realization} \label{sec:realization} We now discuss a feasible experimental realization of the DCE with a semiconductor plasma mirror \cite{IT04-05,RITZ09}. Based on the analyses presented so far for the DCE and photon detection, we can find the desired values for the physical parameters. The photons are created as \begin{eqnarray} n_\gamma \sim \frac{1}{4} e^{2 \chi t_1} \sim \frac{1}{4} e^{2 \pi ( \chi / \omega_0 ) N_{\rm pulse}} \end{eqnarray} with the squeezing rate $ \chi $ for the resonant mode, where $ t_1 = N_{\rm pulse} ( 2 \pi / \Omega ) $ and $ \Omega \simeq 2 \omega_0 $ (see also Fig. \ref{fig:Npnp}). Hence, the desired number $ n_\gamma $ of created photons places a requirement on the squeezing rate as \begin{eqnarray} \chi / \omega_0 \sim \frac{\ln ( 4 n_\gamma )}{2 \pi N_{\rm pulse}} . \end{eqnarray} Typically, $ \chi \sim 0.01 \omega_0 $ to obtain $ n_\gamma \sim 10^6 - 10^8 $ with $ N_{\rm pulse} = 300 $ laser pulses, where the threshold condition $ \chi > \Gamma / 2 $ for the DCE is also satisfied sufficiently with $ Q \gtrsim 10^3 $. The effective displacement in Eq. (\ref{eqn:dlt-m}) is achieved by applying a laser power $ W_{\rm laser}/{\rm pulse} $ for the period $ T = 2 \pi / \Omega \sim 0.2 {\rm ns} $: \begin{eqnarray} \delta_m / L \sim ( n_{\rm s} e^2 / \epsilon_0 m_* ) L / \pi^2 , \end{eqnarray} where $ \sin^2 kl = 1 $ for definiteness (the slab is placed in the middle of the cavity, $ l = L/2 $), $ \omega_0 L \sim \pi $, and $ n_{\rm s} = n_e \delta $ ($ \propto W_{\rm laser} $) is the surface number density of electrons.
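The quoted photon numbers can be reproduced directly. The short Python sketch below evaluates $ n_\gamma \sim \frac{1}{4} e^{2 \pi ( \chi / \omega_0 ) N_{\rm pulse}} $ and the inverse relation for the required squeezing rate; the input values $ \chi / \omega_0 = 0.01 $ and $ N_{\rm pulse} = 300 $ are the illustrative choices made in the text.

```python
import math

chi_over_omega0 = 0.01   # squeezing rate relative to the mode frequency (assumed)
N_pulse = 300            # laser pulses per sequence, as in the text

# n_gamma ~ (1/4) exp(2 chi t_1) with t_1 = N_pulse (2 pi / Omega), Omega ~ 2 omega0
n_gamma = 0.25 * math.exp(2 * math.pi * chi_over_omega0 * N_pulse)
print(f"n_gamma ~ {n_gamma:.1e}")          # a few 1e7, inside the 1e6 - 1e8 window

# Inverse relation: squeezing rate needed for a target photon number
n_target = 1e6
chi_required = math.log(4 * n_target) / (2 * math.pi * N_pulse)
print(f"chi/omega0 ~ {chi_required:.4f}")  # ~0.008, i.e. chi ~ 0.01 omega0
```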
We may readily obtain $ ( n_{\rm s} e^2 / \epsilon_0 m_* ) L \sim 1 $ with a reasonable laser power $ W_{\rm laser}/{\rm pulse} \sim 0.01 \mu {\rm J}/{\rm pulse} $ \cite{RITZ09}, achieving a significant displacement $ \delta_m \sim 0.1 L $. In this case, the conductivity effect $ \delta_m $ in Eq. (\ref{eqn:dlt-m}) dominates over the dielectric effect $ \delta_\epsilon $ in Eq. (\ref{eqn:dlt-e}) for $ \epsilon_1 (0) \sim 1 - 10 $ and $ \epsilon_1 (0) \leq | \epsilon_1 (t) | $ [the photon damping by the complex $ \epsilon_1 (t) $ does not exceed the squeezing by the DCE mainly with $ \delta_m $]. We estimate the variation of the mode frequency as \begin{eqnarray} \delta \omega \simeq ( \delta_m / L ) \omega_0 \sim 0.1 \omega_0 ( W_{\rm laser} / 0.01 \mu {\rm J} ) . \end{eqnarray} By noting the relation $ | \delta \omega | \simeq | 2 g | $, the desired squeezing rate for the DCE can be obtained in Eq. (\ref{eqn:chi}) with $ \Delta = 0 $ as \begin{eqnarray} \chi = | 2 \langle g \rangle_\Omega | \sim 0.01 \omega_0 ( r_\Omega / 0.1 ) ( W_{\rm laser} / 0.01 \mu {\rm J} ) . \end{eqnarray} Here, the factor $ r_\Omega $ represents the Fourier component $ \langle g \rangle_\Omega {\rm e}^{- i \Omega t} $ of $ g(t) $, which may be optimized by suitably designing the time-profile $ W_{\rm laser} (t) $ of laser pulse. As seen in Eqs. (\ref{eqn:Omega}) and (\ref{eqn:Omega-resonance}), the tuning of $ \Omega $ is required for the resonance by taking into account the average shift $ \langle \delta \omega \rangle / \omega_0 \sim 0.01 - 0.1 $. As for the photon detection, the analyses in Sec. \ref{sec:detection} indicate that roughly $ N_{\rm Ryd}/2 \sim 100 $ atomic excitations are detected per mean atomic transit time $ t_{\rm tr} \sim 0.1 {\rm ms} $ for the creation of $ n_\gamma \sim 10^6 - 10^8 $ photons via the DCE. The quality factor of cavity should be chosen suitably to ensure the efficient atomic excitation and detection. Specifically, $ Q \sim 5 \times 10^3 $ in Eq. 
(\ref{eqn:Q1}) or $ Q \gtrsim 5 \times 10^6 $ in Eq. (\ref{eqn:Q2}). We note that even if an excessive number of photons ($ n_\gamma \gg 10^8 $) is created, their detection is actually limited by the number of Rydberg atoms $ N_{\rm Ryd} \sim 100 - 1000 $. After the detection, the photons remaining in the cavity finally relax as $ n_\gamma \rightarrow 0 $ for $ t \gtrsim 10 {\rm ms} \gg \Gamma^{-1} , t_{\rm tr} $; namely the field returns to the vacuum. Then, the subsequent rounds of photon creation and detection are performed repeatedly. \section{Summary} \label{sec:summary} We have investigated quantum mechanically the photon creation via the DCE and its detection with Rydberg atoms, specifically considering the experimental realization in a resonant cavity with a plasma mirror of a semiconductor slab irradiated by laser pulses. The canonical Hamiltonian for the DCE is derived in terms of the creation and annihilation operators showing the explicit time-variation which originates from the external configuration such as the nonstationary plasma mirror. Then, the photon creation is evaluated as squeezing from the Heisenberg equations. This confirms that a sufficiently large number of photons can be created via the DCE with a reasonable squeezing rate when the laser pulses are applied many times. The atomic excitation process to detect the photons is described with the atom-field interaction, which clarifies the conditions for the efficient detection. Based on these analyses, desired values of the physical parameters are considered for a feasible experiment on the DCE and its detection with a plasma mirror and Rydberg atoms. \begin{acknowledgments} The authors appreciate valuable discussions with S. Matsuki, Y. Kido, T. Nishimura, W. Naylor and the Ritsumeikan University group. \end{acknowledgments}
\documentstyle[epsfig]{article} \input epsf.tex \setlength{\textwidth}{6.0truein} \setlength{\textheight}{8.5truein} \setlength{\topmargin}{0truein} \setlength{\oddsidemargin}{0.3truein} \setlength{\evensidemargin}{0.3truein} \begin{document} \bibliographystyle{prsty} \title{Density Matrix Renormalization} \author{Karen Hallberg\\ \small\it Centro At\'omico Bariloche and Instituto Balseiro\\ \small\it 8400 Bariloche, Argentina \\} \date{September 1999} \maketitle \begin{abstract} The Density Matrix Renormalization Group (DMRG) has become a powerful numerical method that can be applied to low-dimensional strongly correlated fermionic and bosonic systems. It allows for a very precise calculation of static, dynamic and thermodynamic properties. Its field of applicability has now extended beyond Condensed Matter, and is successfully used in Statistical Mechanics and High Energy Physics as well. In this article, we briefly review the main aspects of the method. We also comment on some of the most relevant applications so as to give an overview on the scope and possibilities of DMRG and mention the most important extensions of the method such as the calculation of dynamical properties, the application to classical systems, inclusion of temperature, phonons and disorder and a recent modification for the {\it ab initio} calculation of electronic states in molecules. \end{abstract} \section{Introduction} The basics of the Density Matrix Renormalization Group were developed by S. White in 1992\cite{white1} and since then DMRG has proved to be a very powerful method for low dimensional interacting systems. Its remarkable accuracy can be seen for example in the spin-1 Heisenberg chain: for a system of hundreds of sites a precision of $10^{-10}$ for the ground state energy can be achieved.
Since then it has been applied to a great variety of systems and problems (principally in one dimension) including, among others, spin chains, fermionic and bosonic systems, disordered models, impurities, etc. It has also been improved substantially in several directions like two dimensional (2D) classical systems, phonons, molecules, the inclusion of temperature and the calculation of dynamical properties. Some calculations have also been performed in 2D quantum systems. All these topics are treated in detail and in a pedagogical way in a book published recently, where the reader can find an extensive review on DMRG\cite{book}. In this article we will attempt to cover the different areas where it has been applied. Regretfully, however, we won't be able to review the large number of papers that have been written using different aspects of this very efficient method. We have chosen what, in our opinion, are the most representative contributions, and we refer the interested reader to these references for further information. Our aim here is to give the reader a general overview on the subject. One of the most important limitations of numerical calculations in finite systems is the great number of states that have to be considered and its exponential growth with system size. Several methods have been introduced in order to reduce the size of the Hilbert space to be able to reach larger systems, such as Monte Carlo, renormalization group (RG) and DMRG. Each method considers a particular criterion to keep the relevant information. The DMRG was originally developed to overcome the problems that arise in interacting systems in 1D when standard RG procedures were applied. Consider a block B (a block is a collection of sites) where the Hamiltonian $H_B$ and end-operators are defined. These traditional methods consist in putting together two or more blocks (e.g.
B-B', which we will call the superblock), connected using end-operators, in a basis that is a direct product of the basis of each block, forming $H_{BB'}$. This Hamiltonian is then diagonalized, the superblock is replaced by a new effective block $B_{new}$ formed by a certain number $m$ of lowest-lying eigenstates of $H_{BB'}$ and the iteration is continued (see Ref.~\cite{white2}). Although it has been used successfully in certain cases, this procedure, or similar versions of it, has been applied to several interacting systems with poor performance. For example, it has been applied to the 1D Hubbard model keeping $m\simeq 1000$ states. For 16 sites, an error of 5-10\% was obtained \cite{braychui}. Other results\cite{panchen} were also discouraging. A better performance was obtained \cite{xiangghering} by adding a single site at a time rather than doubling the block size. However, there is one case where a similar version of this method applies very well: the Kondo model. Wilson\cite{wilson} mapped the one-impurity problem onto a one-dimensional lattice with exponentially decreasing hoppings. The difference with the method explained above is that in this case, one site (equivalent to an ``onion shell") is added at each step and, due to the exponential decrease of the hopping, very accurate results can be obtained. Returning to the problem of putting several blocks together, the main source of error comes from the selection of eigenstates of $H_{BB'}$ as representative states of a superblock. Since $H_{BB'}$ has no connection to the rest of the lattice, its eigenstates may have unwanted features (like nodes) at the ends of the block and this can't be improved by increasing the number of states kept. Based on this consideration, Noack and White\cite{noackwhite} tried including different boundary conditions and boundary strengths.
This turned out to work well for single-particle and Anderson localization problems; however, it did not significantly improve results for interacting systems. These considerations led to the idea of taking a larger superblock that includes the blocks $BB'$, diagonalizing the Hamiltonian in this large superblock and then somehow projecting the most favorable states onto $BB'$. Then $BB'$ is replaced by $B_{new}$. In this way, awkward features at the boundary would vanish and a better representation of the states in the infinite system would be achieved. White\cite{white1,white2} proposed the density matrix as the optimal way of projecting the best states onto part of the system and this will be discussed in the next section. The justification of using the density matrix is given in detail in Ref.\cite{book}. A very easy and pedagogical way of understanding the basic functioning of DMRG is applying it to the calculation of simple quantum problems like one particle in a tight binding chain \cite{whitebook,sierraparticle}. In the following Section we will briefly describe the standard method; in Sect. 3 we will mention some of the most important applications; in Sect. 4 we review the most relevant extensions to the method and finally in Sect. 5 we concentrate on the way dynamical calculations can be performed within DMRG. \section{The Method} The DMRG allows for a systematic truncation of the Hilbert space by keeping the most probable states describing a wave function ({\it e.g.~}the ground state) instead of the lowest energy states usually kept in previous real space renormalization techniques. The basic idea consists in starting from a small system ({\it e.g.} with $N$ sites) and then gradually increasing its size (to $N+2$, $N+4$,...) until the desired length is reached. Let us call the collection of $N$ sites the {\it universe} and divide it into two parts: the {\it system} and the {\it environment} (see Fig.\ \ref{figSuperblock}).
The Hamiltonian is constructed in the {\it universe} and its ground state $|\psi_0\rangle$ is obtained. This is considered as the state of the {\it universe} and called the {\it target state}. It has components on the {\it system} and the {\it environment}. We want to obtain the most relevant states of the {\it system}, i.e., the states of the {\it system} that have the largest weight in $|\psi_0\rangle$. To obtain this, the {\it environment} is considered as a statistical bath and the density matrix\cite{feynman} is used to obtain the desired information on the {\it system}. So instead of keeping eigenstates of the Hamiltonian in the block ({\it system}), we keep eigenstates of the density matrix. We will be more explicit below. \begin{figure} \epsfxsize=2.0in \epsfysize=1.0in \epsffile{fig1montreal.eps} \caption[]{A scheme of the superblock (universe) configuration for the DMRG algorithm\cite{white2}.} \label{figSuperblock} \end{figure} Let's define block [{\bf B}] as a finite chain with $l$ sites, having an associated Hilbert space with $m$ states where operators are defined (in particular the Hamiltonian in this finite chain, $H_B$, and the operators at the ends of the block, useful for linking it to other chains or added sites). Except for the first iteration, the basis in this block isn't explicitly known due to previous basis rotations and reductions. The operators in this basis are matrices and the basis states are characterized by quantum numbers (like $S^z$, charge or number of particles, etc). We also define an added block or site as [{\bf a}] having $n$ states.
A general iteration of the method consists of: i) Define the Hamiltonian $H_{BB'}$ for the superblock (the {\it universe}) formed by putting together two blocks [{\bf B}] and [{\bf B'}] and two added sites [{\bf a}] and [{\bf a'}] in this way: [{\bf B a a' B' }] (the primes are only to indicate additional blocks, but the primed blocks have the same structure as the non-primed ones; this can vary, see the finite size algorithm below). In general, blocks [{\bf B}] and [{\bf B'}] come from the previous iteration. The total Hilbert space of this superblock is the direct product of the individual spaces corresponding to each block and the added sites. In practice a quantum number of the superblock can be fixed (in a spin chain for example one can look at the total $S^z=0$ subspace), so the total number of states in the superblock is much smaller than $(mn)^2$. As, in some cases, the quantum number of the superblock consists of the sum of the quantum numbers of the individual blocks, each one must contain several subspaces (several values of $S^z$ for example). Here periodic boundary conditions can be attached to the ends and a different block layout should be considered (e.g. [{\bf B a B' a' }]) to avoid connecting blocks [{\bf B}] and [{\bf B'}] which takes longer to converge. The boundary conditions are between [{\bf a'}] and [{\bf B}]. For closed chains the performance is poorer than for open boundary conditions \cite{white2}. ii) Diagonalize the Hamiltonian $H_{BB'}$ to obtain the ground state $|\psi_0\rangle$ (target state) using Lanczos\cite{lanczos} or Davidson\cite{davidson} algorithms. Other states could also be kept, such as the first excited ones: they are all called target states. 
iii) Construct the density matrix: \begin{equation} \rho_{ii'}=\sum_j \psi_{0,ij}\psi_{0,i'j} \end{equation} on block [{\bf B a}], where $\psi_{0,ij}=\langle i\otimes j|\psi_0\rangle $, the states $|i\rangle $ belonging to the Hilbert space of the block [{\bf B a}] and the states $|j\rangle $ to the block [{\bf B' a'}]. The density matrix considers the part [{\bf B a}] as a system and [{\bf B' a'}], as a statistical bath. The eigenstates of $\rho$ with the highest eigenvalues correspond to the most probable states (or equivalently the states with highest weight) of block [{\bf B a}] in the ground state of the whole superblock. These states are kept up to a certain cutoff, keeping a total of $m$ states per block. The density matrix eigenvalues sum up to unity and the truncation error, defined as the sum of the density matrix eigenvalues corresponding to discarded eigenvectors, gives a qualitative indication of the accuracy of the calculation. iv) With these $m$ states a rectangular matrix $O$ is formed and it is used to change basis and reduce all operators defined in [{\bf B a}]. This block [{\bf B a}] is then renamed as block [{\bf B$_{new}$}] or simply [{\bf B}] (for example, the Hamiltonian in block [{\bf B a}], $H_{Ba}$, is transformed into $H_{B}$ as $H_{B}=O^\dagger H_{Ba} O$). v) A new block [{\bf a}] is added (one site in our case) and the new superblock [{\bf B a a' B'}] is formed as the direct product of the states of all the blocks. vi) This iteration continues until the desired length is achieved. At each step the length is $N=2l+2$ (if [{\bf a}] consists of one site). When more than one target state is used, {\it i.e.} when more than one state should be well described, the density matrix is defined as: \begin{equation} \label{eq:pl} \rho_{ii'}=\sum_l p_l \sum_j \phi_{l,ij} \phi_{l,i'j} \end{equation} where $p_l$ defines the probability of finding the system in the target state $|\phi_l\rangle $ (not necessarily an eigenstate of the Hamiltonian).
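As a concrete illustration of steps (i)-(iv), the following self-contained Python/NumPy sketch performs the first iteration for a $S=1/2$ Heisenberg superblock [{\bf B a a' B'}] in which each block is still a single site, so the superblock is just an exact 4-site open chain: it builds the superblock Hamiltonian as a direct product, obtains the target ground state, forms the reduced density matrix of [{\bf B a}], and truncates to $m$ states. The exact diagonalization (in place of Lanczos/Davidson), the variable names and the choice $m=2$ are ours, for illustration only.

```python
import numpy as np

# Spin-1/2 operators and a Heisenberg bond S_i . S_j on two neighbouring sites
Sz = np.diag([0.5, -0.5])
Sp = np.array([[0., 1.], [0., 0.]])
Sm = Sp.T
I2 = np.eye(2)
bond = np.kron(Sz, Sz) + 0.5 * (np.kron(Sp, Sm) + np.kron(Sm, Sp))

# Step (i): superblock [B a a' B'] at the first iteration = 4-site open chain;
# the block Hamiltonians are still trivial, only the three links contribute.
H = (np.kron(bond, np.kron(I2, I2))      # B  - a
     + np.kron(I2, np.kron(bond, I2))    # a  - a'
     + np.kron(np.kron(I2, I2), bond))   # a' - B'

# Step (ii): diagonalize the superblock (exactly here; Lanczos/Davidson in practice)
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                           # target state (ground state)

# Step (iii): reduced density matrix of [B a], rho_{ii'} = sum_j psi_{ij} psi_{i'j}
psi = psi0.reshape(4, 4)                 # i: states of [B a], j: states of [a' B']
rho = psi @ psi.T
w, U = np.linalg.eigh(rho)
w, U = w[::-1], U[:, ::-1]               # order by decreasing weight

# Step (iv): keep the m most probable states; build the rectangular projector O
m = 2
O = U[:, :m]
trunc_err = w[m:].sum()                  # discarded density-matrix weight

# Reduce the operators of [B a], e.g. its (single internal bond) Hamiltonian
H_B_new = O.T @ bond @ O

print(E[0], trunc_err)
```

For this tiny example the ground-state energy is the exact value $E_0 = -(3+2\sqrt{3})/4 \simeq -1.616$, and the discarded weight is already only a few percent, illustrating why the density-matrix truncation is so effective.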
The method described above is usually called the {\it infinite system algorithm} since the system size increases by two lattice sites (if the added block [{\bf a}] has one site) at each iteration. There is a way to increase precision at each length $N$ called the {\it finite system algorithm}. It consists of fixing the lattice size and sweeping back and forth a couple of times until convergence is reached. In this case and for the block configuration [{\bf B a a' B' }], $N=l+1+1+l'$ where $l$ and $l'$ are the number of sites in $B$ and $B'$ respectively. In this step the density matrix is used to project onto the left $l+1$ sites. In order to keep $N$ fixed, in the next block configuration, the right block $B'$ should be defined in $l-1$ sites such that $N=(l+1)+1+1+(l-1)'$. The operators in this smaller block should be kept from previous iterations (in some cases from the iterations for the system size with $N-2$)\cite{book}. The calculation of static properties like correlation functions is easily done by keeping the operators in question at each step and performing the corresponding basis change and reduction, in a similar manner as done with the Hamiltonian in each block\cite{white2}. The energy and measurements are calculated in the superblock. A faster convergence of the Lanczos or Davidson algorithms is achieved by choosing a good trial vector\cite{cavo,white4}. An interesting analysis of DMRG accuracy is given in Ref. \cite{ors}. Fixed points of the DMRG and their relation to matrix product wave functions were studied in \cite{ostlund} and an analytic formulation combining the block renormalization group with variational and Fokker-Planck methods in \cite{delgadorg}. The connection of the method with quantum groups and conformal field theory is treated in \cite{sierraqg}. There are also interesting connections between the density matrix spectra and integrable models\cite{peschelint} via corner transfer matrices.
These articles give a deep insight into the essence of the DMRG method. \section{Applications} Since its development, the number of papers using DMRG has grown enormously and other improvements to the method have been performed. We would like to mention some applications where this method has proved to be useful. Other applications related to further developments of the DMRG will be mentioned in Sect. 4. A very impressive result with unprecedented accuracy was obtained by White and Huse \cite{white3} when calculating the spin gap in a $S=1$ Heisenberg chain obtaining $\Delta=0.41050 J$. They also calculated very accurate spin correlation functions and excitation energies for one and several magnon states and performed a very detailed analysis of the excitations for different momenta. They obtained a spin correlation length of 6.03 lattice spacings. Simultaneously S{\o}rensen and Affleck\cite{sorensen} also calculated the structure factor and spin gap for this system up to length 100 with very high accuracy, comparing their results with the nonlinear $\sigma$ model. In a subsequent paper\cite{sorensen2} they applied the DMRG to the anisotropic $S=1$ chain, obtaining the values for the Haldane gap. They also performed a detailed study of the $S=1/2$ end excitations in an open chain. Thermodynamic properties in open $S=1$ chains such as specific heat, electron paramagnetic resonance (EPR) and magnetic susceptibility calculated using DMRG gave an excellent fit to experimental data, confirming the existence of free spins 1/2 at the boundaries\cite{batista}. A related problem, {\it i.e.} the effect of non-magnetic impurities in spin systems (dimerized, ladders and 2D) was studied in \cite{laukamp}. For larger integer spins there have also been some studies. Nishiyama and coworkers\cite{nishi} calculated the low energy spectrum and correlation functions of the $S=2$ antiferromagnetic Heisenberg open chain. 
They found $S=1$ end excitations (in agreement with the Valence Bond Theory). Edge excitations for other values of $S$ have been studied in Ref. \cite{qinedge}. Almost at the same time Schollw{\"o}ck and Jolicoeur\cite{uli1} calculated the spin gap in the same system, up to 350 sites ($\Delta=0.085 J$), correlation functions that showed topological order and a spin correlation length of 49 lattice spacings. More recent accurate studies of $S=2$ chains are found in \cite{wangqin,wada}. In Ref. \cite{qin} the dispersion of the single magnon band and other properties of the $S=2$ antiferromagnetic Heisenberg chains were calculated. Concerning $S=1/2$ systems, DMRG has been crucial for obtaining the logarithmic corrections to the $1/r$ dependence of the spin-spin correlation functions in the isotropic Heisenberg model \cite{karen12}. For this, very accurate values for the energy and correlation functions were needed. For $N=100$ sites an error of $10^{-5}$ was achieved keeping $m=150$ states per block, comparing with the exact finite-size Bethe Ansatz results. For this model it was found that the data for the correlation function show a very accurate scaling behaviour, and advantage was taken of this to obtain the logarithmic corrections in the thermodynamic limit. Other calculations of the spin correlations have been performed for the anisotropic case \cite{hiki}. Similar calculations have been performed for the $S=3/2$ Heisenberg chain \cite{karen32}. In this case a stronger logarithmic correction to the spin correlation function was found. For this model there was interest in obtaining the central charge $c$ to elucidate whether it corresponds to the same universality class as the $S=1/2$ case, where the central charge can be obtained from the finite-size scaling of the energy. Although there have been previous attempts\cite{adriana}, these calculations presented difficulties since they also involved a term $\sim 1/\ln^3N$.
With the DMRG the value $c=1$ was clearly obtained. In Ref. \cite{yamashita}, DMRG was applied to an effective spin Hamiltonian obtained from an SU(4) spin-orbit critical state in 1D. Another application to enlarged symmetry cases (SU(4)) was done to study coherence in arrays of quantum dots\cite{onu}. Dimerization and frustration have been considered in Refs. \cite{bursill,dim1,dim2,dim3,dim4,dim5,dim6,kaburagi} and alternating spin chains in \cite{patialt}. The case of several coupled spin chains (ladder models) \cite{noackchain}, magnetization properties and plateaus for quantum spin ladder systems\cite{tandon} and finite 2D systems like an application to $CaV_4O_9$ reaching 24x11 square lattices \cite{cavo} have also been studied. There has been a great amount of applications to fermionic systems such as 1D Hubbard and t-J models \cite{noackhubb}. Also several coupled chains at different dopings have been considered \cite{fermion}. Quite large systems can be reached, for example in \cite{liangpang}, a 4x20 lattice was considered to study ferromagnetism in the infinite U Hubbard model; the ground state of a 4-leg t-J ladder in \cite{w1}; the one and two hole ground state in a 10x7 t-J lattice\cite{w2}; a doped 3-leg t-J ladder\cite{w3} and the study of striped phases and domain walls in 19x8 t-J systems\cite{w4}. Impurity problems have been studied for example in one- \cite{teo} and two-impurity \cite{egger} Kondo systems, Kondo and Anderson lattices \cite{kondoins,kondolatt,kondoneck}, Kondo lattices with localized $f^2$ configurations\cite{wata}, a t-J chain coupled to localized Kondo spins\cite{chen} and ferromagnetic Kondo models for manganites \cite{riera}. \section{Other extensions to DMRG} There have been several extensions to DMRG like the inclusion of symmetries to the method such as spin and parity\cite{ramasesha,affleck}. 
Total spin conservation and continuous symmetries have been treated in \cite{culloch} and in interaction-round-a-face Hamiltonians\cite{sierrairf}, a formulation that can be applied to rotationally invariant systems like $S=1$ and 2 chains\cite{wada}. A momentum representation of this technique \cite{xiang2} that allows for a diagonalization in a fixed momentum subspace has been developed, as well as applications in dimensions higher than one\cite{cavo,ducroo} and Bethe lattices\cite{pastor}. The inclusion of symmetries is essential to the method since it allows one to consider a smaller number of states, enhance precision and obtain eigenstates with a definite quantum number. Other recent applications have been in nuclear shell model calculations where a two level pairing model has been considered\cite{dukelsky} and in the study of ultrasmall superconducting grains, in this case, using the particle (hole) states around the Fermi level as the system (environment) block\cite{duksierra}. A very interesting and successful application is a recent work in High Energy Physics\cite{delgadoqcd}. Here the DMRG is used in an asymptotically free model with bound states, a toy model for quantum chromodynamics, namely the two dimensional delta-function potential. For this case an algorithm similar to the momentum space DMRG\cite{xiang2} was used where the block and environment consist of low and high energy states respectively. The results obtained here are much more accurate than those of the similarity renormalization group\cite{wilson2} and a generalization to field-theoretical models is proposed based on the discrete light-cone quantization in momentum space\cite{dlcq}. Below we briefly mention other important extensions, leaving the calculation of dynamical properties for the next Section. \subsection{Classical systems} The DMRG has been very successfully extended to study classical systems. For a detailed description we refer the reader to Ref. \cite{nishino}.
Since 1D quantum systems are related to 2D classical systems\cite{clasico}, it is natural to adapt DMRG to the classical 2D case. This method is based on the renormalization group transformation for the transfer matrix $T$. It is a variational method that maximizes the partition function using a limited number of degrees of freedom, where the variational state is written as a product of local matrices\cite{ostlund}. For 2D classical systems, this algorithm is superior to the classical Monte Carlo method in accuracy, speed and in the possibility of treating much larger systems. A further improvement to this method is based on the corner transfer matrix\cite{baxter}: the CTMRG\cite{okunishi}, which can be generalized to any dimension\cite{okunishi2}. It was first applied to the Ising model\cite{nishino,drz} and also to the Potts model\cite{carlon}, where very accurate density profiles and critical indices were calculated. Further applications have included non-Hermitian problems in equilibrium and non-equilibrium physics. In the first case, transfer matrices may be non-Hermitian and several situations have been considered: a model for the Quantum Hall effect\cite{kondev} and the $q$-symmetric Heisenberg chain related to the conformal series of critical models\cite{peschel}. In the second case, the adaptation of the DMRG to non-equilibrium physics, like the asymmetric exclusion problem\cite{hieida} and reaction-diffusion problems \cite{peschel1,carlon1}, has proved very successful. \subsection{Finite temperature DMRG} The adaptation of the DMRG method to classical systems paved the way for the study of 1D quantum systems at non-zero temperature, using the Trotter-Suzuki method \cite{bursill2,trotter,xiangwang,shibata}. In this case the system is infinite and the finiteness enters at the level of the Trotter approximation. Standard DMRG usually produces its best results for the ground state energy and less accurate results for higher excitations.
A different situation occurs here: the lower the temperature, the less accurate the results. Very nice results have been obtained for the dimerized, $S=1/2$, XY model, where the specific heat was calculated involving an extremely small basis set\cite{bursill2} ($m=16$), the agreement with the exact solution being much better in the case where the system has a substantial gap. It has also been used to calculate thermodynamic properties of the a\-ni\-so\-tropic $S=1/2$ Heisenberg model, with relative errors for the spin susceptibility of less than $10^{-3}$ down to temperatures of the order of $0.01J$ keeping $m=80$ states\cite{xiangwang}. A complete study of thermodynamic properties like magnetization, susceptibility, specific heat and temperature dependent correlation functions for the $S=1/2$ and 3/2 Heisenberg models was done in \cite{xiangt}. Other applications have been the calculation of the temperature dependence of the charge and spin gap in the Kondo insulator\cite{ammon1}, the calculation of thermodynamic properties of ferrimagnetic chains\cite{scholl}, the study of impurity properties in spin chains\cite{rommer}, frustrated quantum spin chains\cite{maisinger}, t-J ladders\cite{ammon} and dimerized frustrated Heisenberg chains\cite{klumper}. An alternative way of incorporating temperature into the DMRG procedure was developed by Moukouri and Caron\cite{moukouri}. They considered the standard DMRG taking into account several low-lying target states (see Eq.~\ref{eq:pl}) to construct the density matrix, weighted with the Boltzmann factor ($\beta$ is the inverse temperature): \begin{equation} \label{eq:pl2} \rho_{ii'}=\sum_l e^{-\beta E_l} \sum_j \phi_{l,ij} \phi_{l,i'j} \end{equation} With this method they performed reliable calculations of the magnetic susceptibility of quantum spin chains with $S=1/2$ and $3/2$, showing excellent agreement with Bethe Ansatz exact results. 
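As a toy numerical illustration of Eq.~(\ref{eq:pl2}), the following sketch builds the Boltzmann-weighted density matrix from a few hypothetical target states; the random states, the energies and the unit-trace normalization are our own illustrative choices, not part of the original prescription:

```python
import numpy as np

# Toy illustration of the Boltzmann-weighted density matrix
#   rho_{ii'} = sum_l exp(-beta E_l) sum_j phi_{l,ij} phi_{l,i'j},
# where phi[l] is the l-th target state reshaped into (system i, environment j).
# The energy shift inside the exponential is only for numerical stability,
# and we normalize rho to unit trace for convenience.

def thermal_density_matrix(phis, energies, beta):
    weights = np.exp(-beta * (energies - energies.min()))
    rho = sum(w * phi @ phi.T for w, phi in zip(weights, phis))
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
L, n_sys, n_env = 3, 4, 5                       # made-up sizes
phis = rng.normal(size=(L, n_sys, n_env))
phis /= np.linalg.norm(phis.reshape(L, -1), axis=1)[:, None, None]  # unit norm
energies = np.array([0.0, 0.5, 1.2])            # made-up target-state energies

rho = thermal_density_matrix(phis, energies, beta=2.0)
```

By construction $\rho$ is symmetric and positive semidefinite, so its eigenvectors can be truncated exactly as in the zero-temperature algorithm.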
They also calculated low temperature thermodynamic properties of the 1D Kondo Lattice Model\cite{moukouri3}, and Zhang et al.\cite{zhang} applied the same method to the study of a magnetic impurity embedded in a quantum spin chain. \subsection{Phonons, bosons and disorder} A significant limitation of the DMRG method is that it requires a finite basis, so calculations in problems with infinite degrees of freedom per site require a large truncation of the basis states\cite{moukouri2}. However, Jeckelmann and White developed a way of including phonons in DMRG calculations by transforming each boson site into several artificial interacting two-state pseudo-sites and then applying DMRG to this interacting system\cite{jeck} (called the ``pseudo-site system"). The idea is based on the fact that DMRG is much better able to handle several few-state sites than a few many-state sites\cite{noackberlin}. The key idea is to substitute each boson site, with its $2^N$ states, by $N$ pseudo-sites with 2 states each\cite{jeckbook}. They applied this method to the Holstein model for several hundred sites (keeping more than a hundred states per phonon mode), obtaining negligible error. In addition, to date this method is the most accurate one for determining the ground state energy of the polaron problem (the Holstein model with a single electron). An alternative method (the ``Optimal phonon basis")\cite{jeck2} is a procedure for generating a controlled truncation of a large Hilbert space, which allows the use of a very small optimal basis without significant loss of accuracy. The system here consists of only one site and the environment has several sites, both having electronic and phononic degrees of freedom. The density matrix is used to trace out the degrees of freedom of the environment and extract the most relevant states of the site in question. In subsequent steps, more bare phonons are included in the optimal basis obtained in this way.
A variant of this scheme is the ``four block method", as described in \cite{bursillphonon}. They obtain the Luttinger liquid-CDW insulator transition in the 1D Holstein model for spinless fermions very accurately. The method has also been applied to pure bosonic systems such as the disordered bosonic Hubbard model\cite{krish}, where gaps, correlation functions and the superfluid density are obtained. The phase diagram of the non-disordered Bose-Hubbard model, showing a reentrance of the superfluid phase into the insulating phase, was calculated in Ref. \cite{monien}. The DMRG has also been generalized to 1D random systems, and applied to the random antiferromagnetic and ferromagnetic Heisenberg chains\cite{hida}, including quasiperiodic exchange modulation\cite{hida1} and a detailed study of the Haldane phase in these systems\cite{hida2}. It has also been used in disordered Fermi systems such as the spinless model\cite{schmitt}. In particular, the transition from the Fermi glass to the Mott insulator and the strong enhancement of persistent currents at the transition were studied in correlated one-dimensional disordered rings\cite{jala}. \subsection{Molecules} There have been several applications to molecules and polymers, such as the Pariser-Parr-Pople (PPP) Hamiltonian for a cyclic polyene\cite{ppp} (where long-range interactions are included). It has also been applied to conjugated organic systems (polymers), adapting the DMRG to take into account the most important symmetries in order to obtain the desired excited states\cite{ramasesha}. Conjugated one-dimensional semiconductors \cite{barford} have also been studied, for which the standard approach can be extended to complex 1D oligomers where the fundamental repeat unit is not just one or two atoms but a complex molecular building block. Recent attempts to apply DMRG to the {\it ab initio} calculation of electronic states in molecules have been successful\cite{whitemol,whiteorth}.
Here, DMRG is applied within the conventional quantum chemical framework of a finite basis set with non-orthogonal basis functions centered on each atom. After the standard Hartree-Fock (HF) calculation, in which a Hamiltonian is produced within the orthogonal HF basis, DMRG is used to include correlations beyond HF, where each orbital is treated as a ``site" in a 1D lattice. One important difference from standard DMRG is that, as the interactions are long ranged, several operators must be kept, making the calculation somewhat cumbersome. However, very accurate results have been obtained in a check performed on a water molecule (keeping up to 25 orbitals and $m\simeq 200$ states per block), with an offset of 0.00024 Hartrees with respect to the exact ground state energy\cite{bau}, a better performance than any other approximate method\cite{whitemol}. In order to avoid the non-locality introduced in the treatment explained above, White introduced the concept of {\it orthlets}: local, orthogonal and compact wave functions that allow prior knowledge about singularities to be incorporated into the basis, together with an adequate resolution for the cores\cite{whiteorth}. The most relevant functions in this basis are chosen via the density matrix. \section{Dynamical correlation functions} The DMRG was originally developed to calculate static ground state properties and low-lying energies. However, it can also be used to calculate dynamical response functions. These are of great interest in condensed matter physics in connection with experiments such as nuclear magnetic resonance (NMR), neutron scattering, optical absorption and photoemission. We will describe three different methods in this Section. \subsection{Lanczos and correction vector techniques} An effective way of extending the basic ideas of this method to the calculation of dynamical quantities is described in Ref.~\cite{karendin}.
It is important to note here that, due to the particular real-space construction, it is not possible to fix the momentum as a quantum number. However, we will show that by keeping the appropriate target states, a good value of the momentum can be obtained. We want to calculate the following dynamical correlation function at $T=0$: \begin{equation} \label{eq:ca} C_A(t-t')=\langle\psi_0|A^{\dagger}(t) A(t')|\psi_0 \rangle , \end{equation} where $A^{\dagger}$ is the Hermitian conjugate of the operator $A$, $A(t)$ is the Heisenberg representation of $A$, and $|\psi_0 \rangle $ is the ground state of the system. Its Fourier transform is: \begin{equation} C_A(\omega )=\sum_n |\langle \psi_n | A |\psi_0 \rangle |^2 \; \delta (\omega - (E_n-E_0)), \end{equation} where the summation is taken over all the eigenstates $|\psi_n \rangle$ of the Hamiltonian $H$ with energy $E_n$, and $E_0$ is the ground state energy. Defining the Green's function \begin{equation} \label{eq:din} G_A(z)=\langle \psi_0 | A^{\dagger}(z-H)^{-1} A |\psi_0 \rangle, \end{equation} the correlation function $C_A(\omega)$ can be obtained as \begin{equation} C_A(\omega)=-\frac{1}{\pi}\lim_{\eta\to 0^+}{\rm Im} \; G_A(\omega+i\eta +E_0).
\end{equation} The function $G_A$ can be written in the form of a continued fraction: \begin{equation} \label{eq:frac} G_A(z)=\frac{\langle \psi_0 | A^{\dagger} A|\psi_0\rangle}{z-a_0-\frac{b_1^2} {z-a_1-\frac{b_2^2}{z-...}}} \end{equation} The coefficients $a_n$ and $b_n$ can be obtained using the following recursion equations \cite{carlos,proj}: \begin{equation} |f_{n+1}\rangle =H|f_n\rangle -a_n|f_n\rangle -b_n^2|f_{n-1}\rangle \end{equation} where \begin{eqnarray} |f_0\rangle &=& A|\psi_0\rangle \nonumber \\ a_n&=&\langle f_n|H|f_n\rangle/\langle f_n|f_n\rangle, \nonumber \\ b_n^2&=&\langle f_n|f_n\rangle/\langle f_{n-1}|f_{n-1}\rangle; \;\; b_0=0 \end{eqnarray} For finite systems the Green's function $G_A(z)$ has a finite number of poles, so only a certain number of coefficients $a_n$ and $b_n$ have to be calculated. The DMRG technique provides a good framework for calculating such quantities. With it, the ground state, the Hamiltonian and the operator $A$ required for the evaluation of $C_A(\omega)$ are obtained. An important requirement is that the reduced Hilbert space should also describe with great precision the relevant excited states $|\psi_n \rangle $. This is achieved by choosing the appropriate target states. For most systems it is enough to consider as target states the ground state $|\psi_0\rangle$ and the first few $|f_n\rangle $ with $n=0,1...$ and $|f_0\rangle= A|\psi_0\rangle$ as described above. In doing so, states in the reduced Hilbert space relevant to the excited states connected to the ground state via the operator of interest $A$ are included. The fact that $|f_0\rangle$ is an excellent trial state, in particular for the lowest triplet excitations of the two-dimensional antiferromagnet, was shown in Ref.~\cite{linden}. Of course, if the number $m$ of states kept per block is fixed, the more target states considered, the less precisely each one of them is described. An optimal number of target states and $m$ have to be found for each case.
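The recursion above is easy to prototype. Below is a minimal sketch that generates the coefficients and evaluates the continued fraction of Eq.~(\ref{eq:frac}); a small random Hermitian matrix stands in for $H$ and a random vector for $A|\psi_0\rangle$ (these stand-ins are ours, purely to check the algebra against the exact resolvent):

```python
import numpy as np

def lanczos_coefficients(H, f0, n_iter):
    """a_n and b_n^2 from the recursion f_{n+1} = H f_n - a_n f_n - b_n^2 f_{n-1}."""
    a, b2 = [], []
    f_prev = np.zeros_like(f0)
    f = f0.copy()
    norm_prev = 1.0
    for n in range(n_iter):
        norm = f @ f
        b2.append(0.0 if n == 0 else norm / norm_prev)
        a.append(f @ (H @ f) / norm)
        f_next = H @ f - a[n] * f - b2[n] * f_prev
        f_prev, f, norm_prev = f, f_next, norm
    return np.array(a), np.array(b2)

def green_function(z, a, b2, norm0):
    """Evaluate the continued fraction of Eq. (frac) from the bottom up."""
    g = 0.0
    for n in range(len(a) - 1, 0, -1):
        g = b2[n] / (z - a[n] - g)
    return norm0 / (z - a[0] - g)

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
H = (M + M.T) / 2                 # toy Hermitian stand-in for the Hamiltonian
f0 = rng.normal(size=6)           # toy stand-in for A|psi0>
a, b2 = lanczos_coefficients(H, f0, 6)

z = 2.0 + 0.5j                    # Im(z) plays the role of the broadening eta
g_cf = green_function(z, a, b2, f0 @ f0)
g_exact = f0 @ np.linalg.solve(z * np.eye(6) - H, f0)
```

Evaluating at $z=\omega+i\eta+E_0$ directly yields the Lorentzian-broadened spectrum $-\frac{1}{\pi}\,{\rm Im}\,G_A$.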
Due to this reduction, the algorithm can be applied up to certain lengths, depending on the states involved. For longer chains, the higher energy excitations become inaccurate. Proper sum rules have to be calculated to determine the errors in each case. As an application of the method we calculate \begin{equation} \label{eq:szzn} S^{zz}(q,\omega)=\sum_n |\langle \psi_n | S^z_q |\psi_0 \rangle |^2 \; \delta (\omega - (E_n-E_0)), \end{equation} for the 1D isotropic Heisenberg model with spin $S=1/2$. The spin dynamics of this model has been extensively studied. The lowest excited states in the thermodynamic limit are the des~Cloizeaux-Pearson triplets \cite{descloi}, having total spin $S^T=1$. The dispersion of this spin-wave branch is $\omega^l_q=\frac{J\pi}{2}|\sin (q)|$. Above this lower boundary there exists a two-parameter continuum of excited triplet states that has been calculated using the Bethe Ansatz approach \cite{yam}, with an upper boundary given by $\omega^u_q=J\pi|\sin(q/2)|$. It has been shown \cite{1haas}, however, that there are excitations above this upper boundary due to higher order scattering processes, with a weight that is at least one order of magnitude lower than that of the spin-wave continuum. In Fig. 2 we show the spectrum for $q=\pi$ and $N=24$ for different values of $m$, where exact results are available for comparison. The delta peaks of Eq.~(\ref{eq:szzn}) are broadened by a Lorentzian for visualization purposes. As expected, increasing $m$ gives more precise results for the higher excitations. These spectra have been obtained using the infinite system method, and more precise results are expected using the finite system method, as described later. \begin{figure}[htbp] \begin{center} \vspace*{0.5cm} \epsfxsize=3.3in \epsfysize=2.75in \epsffile{fig2montreal.eps} \end{center} \vspace*{0.3cm} \caption[]{Spectral function for a Heisenberg chain with $N=24$ and $q=\pi$. Full line: exact result {\protect{\cite{haas}}}.
The rest are calculated using DMRG with $m=100$ (long-dashed line), $m=150$ (dashed line) and $m=200$ (dotted line). } \label{fig1karen} \end{figure} In Fig. 3 we show the spectrum for two system lengths at $q=\pi$ and $q=\pi/2$, keeping $m=200$ states and periodic boundary conditions. For this case it was enough to take 3 target states, {\it i. e.~} $|\psi_0\rangle$, $|f_0\rangle = S^z_{\pi}|\psi_0\rangle$ and $|f_1\rangle$. Here we have used $\sim 40$ pairs of coefficients $a_n$ and $b_n$, but we noticed that if we considered only the first ($\sim 10$) coefficients $a_n$ and $b_n$, the spectrum at low energies remains essentially unchanged. Minor differences arise at $\omega /J\simeq 2$. This is another indication that only the first $|f_n\rangle$ are relevant for the low energy dynamical properties of finite systems. In the inset of Fig. 3 the spectrum for $q=\pi/2$ and $N=28$ is shown. For this case we considered 5 target states, {\it i. e.~} $|\psi_0\rangle$, $|f_0\rangle = S^z_{\pi/2}|\psi_0\rangle$, $|f_n\rangle$ with $n=1,3$, and $m=200$. Here, and for all the cases considered, we have verified that the results are very weakly dependent on the weights $p_l$ of the target states (see Eq.(\ref{eq:pl})) as long as the appropriate target states are chosen. For lengths where this value of $q$ is not defined we took the nearest value. \begin{figure}[htbp] \begin{center} \epsfxsize=3.3in \epsfysize=2.75in \epsffile{fig3montreal.eps} \end{center} \caption[]{Spectral densities for $q=\pi$, $N=28$ (continuous line) and $N=40$ (dotted line). Inset: Spectral density for $q=\pi/2$ for $N=28$ ($\eta=0.05$). } \label{fig2karen} \end{figure} Even though we are including states with a given momentum as target states, due to the particular real-space construction of the reduced Hilbert space, this translational symmetry is not fulfilled and the momentum is not fixed.
To check how the reduction of the Hilbert space influences the momentum $q$ of the target state $|f_0\rangle =S^z_q|\psi_0\rangle$, we calculated the expectation values $\langle \psi_0 |S_{-q'}^z S_q^z|\psi_0\rangle$ for all $q'$. If the momenta of the states were well defined, this value would be proportional to $\delta_{q,q'}$ if $q\neq 0$. For $q=0$, $\sum_r S^z_r=0$. The momentum distribution for $q=\pi$ is shown in Fig. 4 on a semilogarithmic scale, where the $y$-axis has been shifted by 0.003 so as to have well-defined logarithms. We can see here that the momentum is well defined, even for much larger systems, but, as expected, more weight on other $q'$ values arises for larger $N$. \begin{figure}[htbp] \epsfig{file=fig4montreal.eps,width=6cm,angle=-90} \vspace*{0.5cm} \caption[]{Momentum weights of a target state with $q=\pi$ for $N=28$ (circles), $N=44$ (squares), $N=60$ (diamonds) and $N=72$ (triangles). The dotted lines are a guide to the eye. } \label{fig3karen} \end{figure} As a check of the approximation we calculated the sum rule \begin{equation} \frac{1}{4\pi^2}\int_0^{\infty}d\omega \int_{0}^{2\pi}dq\; S^{zz}(q,\omega)\equiv \langle \psi_0 |(S_{r=0}^z)^2|\psi_0\rangle =\frac{1}{4} \end{equation} for $N=28$, 5 target states and $m=200$. We obtain a relative error of 0.86\%. Recently, important improvements to this method have been published \cite{kuhner}: by considering the finite system method in open chains, K\"uhner and White obtained higher precision in the dynamical responses of spin chains. In order to define a momentum in an open chain and to avoid end effects, they introduce a filter function with weight centered in the middle of the chain and zero at the boundaries. In this section we presented a method of calculating dynamical responses with DMRG.
Although the basis truncation is large, this method keeps only the most relevant states and, for example, even keeping only $0.1\%$ of the total Hilbert space (for $N=28$ only $\sim$ 40000 states are kept) a reasonable description of the low energy excitations is obtained. We show that it is also possible to obtain states with well-defined momenta if the appropriate target states are used. \subsubsection{Correction vector technique} Introduced in Ref.~\cite{cvramasesha} in the DMRG context and improved in Ref.~\cite{kuhner}, this method focuses on a particular energy or energy window, allowing a more precise description in that range and the possibility of calculating spectra at higher energies. Instead of tridiagonalizing the Hamiltonian, but in a similar spirit regarding the choice of target states to be kept, the spectrum can be calculated for a given $z=w+i\eta$ by using a correction vector (related to the operator $A$, which can depend on the momentum $q$). Following (\ref{eq:din}), the (complex) correction vector $|x(z)\rangle$ can be defined as: \begin{equation} |x(z)\rangle = \frac{1}{z-H}A |\psi_0 \rangle \end{equation} so the Green's function can be calculated as \begin{equation} G(z)=\langle \psi_0 |A^{\dagger} |x(z)\rangle \end{equation} Separating the correction vector into real and imaginary parts, $|x(z)\rangle = |x^r(z)\rangle + i |x^i(z)\rangle$, we obtain \begin{equation} ((H-w)^2 + \eta^2)|x^i(z)\rangle = -\eta A |\psi_0 \rangle \end{equation} and \begin{equation} |x^r(z)\rangle= \frac{1}{\eta}(H-w)|x^i(z)\rangle \end{equation} The first equation is solved using the conjugate gradient method. In order to keep the information on the excitations at this particular energy, the following states are targeted in the DMRG iterations: the ground state $|\psi_0 \rangle$, the first Lanczos vector $A |\psi_0 \rangle$ and the correction vector $|x(z)\rangle$.
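The linear solve at the heart of the correction vector method can be sketched as follows; we use a minimal conjugate gradient routine on a small random Hermitian matrix standing in for $H$ (a toy setup of our own) and check $G(z)$ against the exact resolvent. Note that with $z=w+i\eta$ the real part is recovered as $|x^r\rangle=\frac{1}{\eta}(H-w)|x^i\rangle$, which the check below confirms:

```python
import numpy as np

def cg_solve(apply_A, b, tol=1e-24, max_iter=1000):
    """Minimal conjugate gradient for the symmetric positive definite operator."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.normal(size=(8, 8))
H = (M + M.T) / 2                # toy Hermitian stand-in for the Hamiltonian
f0 = rng.normal(size=8)          # toy stand-in for A|psi0>
w, eta = 0.7, 0.1

shifted = H - w * np.eye(8)
apply_A = lambda v: shifted @ (shifted @ v) + eta**2 * v  # (H-w)^2 + eta^2
x_i = cg_solve(apply_A, -eta * f0)   # imaginary part of the correction vector
x_r = shifted @ x_i / eta            # real part: (1/eta)(H - w) x_i

g_cv = f0 @ (x_r + 1j * x_i)         # G(z) = <psi0| A^dag |x(z)>
g_exact = f0 @ np.linalg.solve((w + 1j * eta) * np.eye(8) - H, f0)
```

Since $((H-w)^2+\eta^2)$ is symmetric positive definite, conjugate gradient is the natural iterative solver here, exactly as in the DMRG implementation described in the text.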
Even though only a certain energy is focused on, DMRG gives the correct excitations for an energy range surrounding this particular point, so that by running several times for nearby frequencies, an approximate spectrum can be obtained for a wider region \cite{kuhner}. \subsection{Moment expansion} This method\cite{pang} relies on a moment expansion of the dynamical correlations using sum rules that depend only on static correlation functions, which can be calculated with DMRG. With these moments, the Green's functions can be calculated using the maximum entropy method. The first step is the calculation of sum rules. As an example, and following \cite{pang}, the spin-spin correlation function $S^z(q,w)$ of the Heisenberg model is calculated, where the operator $A$ of Eq.~(\ref{eq:ca}) is $S^z(q)=N^{-1/2}\sum_l S^z(l) \exp(iql)$ and the sum rules are\cite{hohenberg}: \begin{eqnarray} m_1(q)&=&\int_0^\infty \frac{dw}{\pi}\frac{S^z(q,w)}{w}= \frac{1}{2}\chi(q,w=0) \nonumber \\ m_2(q)&=&\int_0^\infty \frac{dw}{\pi} w \frac{S^z(q,w)}{w}= \frac{1}{2}S^z(q,t=0) \nonumber \\ m_3(q)&=&\int_0^\infty \frac{dw}{\pi} w^2 \frac{S^z(q,w)}{w}= -\frac{1}{2}\langle [[H,S^z(q)],S^z(-q)] \rangle \nonumber \\ &=& 2[1-\cos(q)]\sum_i\langle S^+_{i}S^-_{i+1}+S^-_{i}S^+_{i+1}\rangle \end{eqnarray} where $\chi(q,w=0)$ is the static susceptibility. These sum rules can be easily generalized to higher moments: \begin{eqnarray} m_l(q)&=&\int_0^\infty \frac{dw}{\pi} w^{l-1} \frac{S^z(q,w)}{w} \nonumber \\ &=& -\frac{1}{2} \langle [[H,...,[H,S^z(q)]...],S^z(-q)]\rangle \end{eqnarray} for $l$ odd. A similar expression is obtained for $l$ even, where the outer square bracket is replaced by an anticommutator and the total sign is changed. Here $H$ appears in the commutator $l-2$ times. Apart from the first moment, which is given by the static susceptibility, all the other moments can be expressed as equal-time correlations (using a symbolic manipulator).
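The principle behind the moment expansion can be illustrated with a toy example: for a spectrum consisting of a small number of delta peaks, a handful of power moments $\mu_k=\sum_n W_n w_n^k$ fixes the poles and weights exactly. The two-pole setup below is our own illustration (the maximum entropy step used in the actual method generalizes this idea to many, noisier moments):

```python
import numpy as np

# Toy moment problem: recover two delta peaks (poles w_n, weights W_n) from
# their first four power moments mu_k = sum_n W_n * w_n**k.

def recover_two_poles(mu):
    # Poles are roots of w^2 - s*w + p = 0; s and p follow from the
    # three-term recurrence mu_{k+2} = s*mu_{k+1} - p*mu_k (k = 0, 1).
    A = np.array([[mu[1], -mu[0]], [mu[2], -mu[1]]])
    s, p = np.linalg.solve(A, mu[2:4])
    w = np.sort(np.roots([1.0, -s, p]).real)   # real roots for this toy case
    # Weights from mu_0 = W1 + W2 and mu_1 = w1*W1 + w2*W2.
    W = np.linalg.solve(np.array([[1.0, 1.0], w]), mu[:2])
    return w, W

w_true = np.array([0.4, 1.5])                  # made-up pole positions
W_true = np.array([0.7, 0.3])                  # made-up spectral weights
mu = np.array([np.sum(W_true * w_true**k) for k in range(4)])
w_rec, W_rec = recover_two_poles(mu)
```

With exact moments the reconstruction is exact; in practice the moments carry extrapolation errors, which is why the maximum entropy method is used instead of a direct inversion.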
The static susceptibility $\chi$ is calculated by applying a small field $h_q\sum_i n_i \cos(qi)$ and calculating the density response $\langle n_q \rangle = (1/N) \sum_i \langle n_i \rangle \cos(qi)$ with DMRG. Then $\chi= \langle n_q \rangle / h_q$ for $h_q\to 0$. These moments are calculated for several chain lengths and extrapolated to the infinite system. Once the moments are calculated, the final spectrum is constructed via the Maximum Entropy method (ME), which has become a standard way to extract maximum information from incomplete data (for details see Ref. \cite{pang} and references therein). Reasonable spectra are obtained for the XY and isotropic models, although information about the exact position of the gaps has to be included. Otherwise, the spectra are only qualitatively correct. This method requires the calculation of a large number of moments in order to get good results: the more information given to the ME equations, the better the result. \subsection{Finite temperature dynamics} In order to include temperature in the calculation of dynamical quantities, the Transfer Matrix RG described above (TMRG\cite{bursill2,xiangwang,shibata}) was extended to obtain imaginary time correlation functions\cite{wangbook,mutou,naef}. After Fourier transformation along the imaginary time axis, analytic continuation from imaginary to real frequencies is done using maximum entropy (ME). The combination of the TMRG and ME is free from statistical errors and the negative sign problem of Monte Carlo methods. Since we are dealing with the transfer matrix, the thermodynamic limit can be discussed directly without extrapolations. However, in the present scheme, only local quantities can be calculated. A systematic investigation of local spectral functions is done in Ref. \cite{naef} for the anisotropic Heisenberg antiferromagnetic chain.
They obtain good qualitative results, especially for high temperatures, but a quantitative description of peaks and gaps is beyond the method, due to the severe intrinsic limitations of the analytic continuation. This method was also applied with great success to the 1D Kondo insulator\cite{mutou}. The temperature dependence of the local density of states and local dynamic spin and charge correlation functions was calculated. \section{Conclusions} We have presented here a very brief description of the Density Matrix Renormalization Group technique, its applications and extensions. The aim of this article is to give the inexperienced reader an idea of the possibilities and scope of this powerful, though relatively simple, method. The experienced reader can find here an extensive (though incomplete) list of references covering most applications to date using DMRG, in a great variety of fields such as Condensed Matter, Statistical Mechanics and High Energy Physics. \section*{Acknowledgments} The author acknowledges hospitality at the Centre de Recherches Mathematiques, University of Montreal and at the Physics Department of the University of Buenos Aires, Argentina, where this work was performed. We thank S. White for a critical reading of the manuscript and all those authors who have updated references and sent instructive comments. K. H. is a fellow of CONICET, Argentina. Grants: PICT 03-00121-02153 and PICT 03-00000-00651.
\section{Introduction} Recognizing and analyzing facial affective states from human behavior is a long-standing problem at the intersection of the computer science and psychology communities. An ideal human-computer interaction system is expected to capture vivid human emotions, mostly conveyed by facial performances, and to react accordingly. Because of the diverse environments and varying contexts in which emotions occur, the perception of facial affect is natural to human beings but far from straightforward for artificially intelligent machines. Thanks to continuous research in psychology and the rapid development of deep learning methods, and especially to recently published large-scale in-the-wild annotated datasets, e.g., \textit{Aff-Wild}~\cite{zafeiriou2017aff} and \textit{Aff-Wild2}~\cite{kollias2019expression}, automatic affective recognition approaches are now being pushed to meet real-world requirements. Different from most existing facial emotion datasets~\cite{jaffe, zhang2014bp4d,zhang2016multimodal,jiang2020dfew} that contain only one of the three commonly used emotional representations: Categorical Emotions (CE), Action Units (AU), and Valence Arousal (VA), the \textit{Aff-Wild2}~\cite{kollias2019expression} dataset is annotated with all three kinds of emotional labels, containing extended facial behaviors in unconstrained conditions and more subjects/frames than the former \textit{Aff-Wild}~\cite{zafeiriou2017aff} dataset. Consequently, multi-task affective recognition can benefit from it, as in the works~\cite{deng2020multitask,gera2020affect,zhang2020m,saito2020action} that participated in the first Affective Behavior Analysis in-the-wild (ABAW) Competition~\cite{kollias2020analysing}. In this work, we propose a novel multi-task affect recognition framework for the ABAW2 Competition~\cite{2106.15318}.
In contrast to previous methods, which treat the multiple emotion recognition problems as parallel tasks, we design our algorithm pipeline in a streaming structure to fully exploit the hierarchical relationships among the three representation spaces. Specifically, our single-flow network first estimates the action units from input images, then the emotion labels, and finally the VA distribution. This arrangement follows the heuristic that the regression order AU$\to$CE$\to$VA should match the underlying semantic level of the three target emotion representations. For instance, the facial action coding system (FACS) defines AUs based on local facial patches, and therefore AU-related features can provide low-level information for the global categorical emotion (CE) classification task. Moreover, the seven-dimensional emotion distributions (spanned by the categorical classes) can be compressed into 2D along the two principal dimensions: Valence and Arousal (VA). Another contribution of our framework is that we utilize an advanced facial expression embedding model to provide helpful prior knowledge for the downstream tasks, i.e., AU detection, CE classification, and VA regression. Although traditional facial expression recognition (FER) models regress continuous expression distributions for discrete classification, they can hardly encode fine-grained expression features. In this work, we adopt the triplet-based expression embedding~\cite{zhang2021learning} model as the backbone of the entire framework. Since the expression embedding is trained to distinguish subtle expression differences across different subjects, it can provide powerful expression-related priors for the high-level emotion recognition tasks. For the second ABAW2 Competition, we conduct extensive experiments on the \textit{Aff-Wild2}\cite{kollias2019expression} dataset.
In order to improve the generalization ability of our multi-task model, we augment the training dataset with BP4D~\cite{zhang2014bp4d}, BP4D+~\cite{zhang2016multimodal}, DFEW~\cite{jiang2020dfew} and AffectNet~\cite{mollahosseini2017affectnet}. Because of the multi-task framework and streaming design, each module of our network can be fine-tuned on images without requiring all three emotion representation labels to be present. In sum, the contributions of this work are twofold: \begin{itemize} \item We propose a streaming network to handle the multi-task affect recognition problem. By heuristically designing the regression order, the streaming structure allows us to exploit the inner relationships across different emotional spaces. \item We employ an identity-invariant expression prior model as the backbone. With fine-grained expression-related features, our network can better capture the high-level information needed for the emotion recognition tasks. \end{itemize} \section{Related Works} In this section, we briefly review some concepts, works and datasets related to the affective recognition problem. \subsection{Emotional Representation} Human affective behavior analysis has attracted great interest in Human-Computer Interaction. With the help of an effective emotional representation, the computer can gain a better understanding of how the human brain behaves, leading to a user-friendly experience between humans and machines. There are three commonly used emotional representations: the 7 basic emotion categories~\cite{ekman2003darwin}, Facial Action Units (AUs)~\cite{FACS} and the 2-D Valence and Arousal (VA) space~\cite{russell1980circumplex}. The 7 basic emotions include Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral. AUs~\cite{FACS} comprise 32 atomic facial action descriptors based on facial muscle groups, which facilitate a physical and fine-grained understanding of human facial expressions.
The detection of facial AU occurrence offers crucial information for emotion recognition~\cite{AU4EmoctionRecog}, micro-expression detection~\cite{AU4MicroExpression}, and mental health diagnosis~\cite{AUD4diagnose}. Valence in the VA space represents the degree of emotional positivity or negativity, and Arousal indicates whether the emotion is passive or active. \subsection{Affect Annotation Dataset} The 2nd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW)~\cite{zafeiriou2017aff,kollias2019expression, kollias2020analysing, kollias2019face, kollias2019deep, kollias2021affect, kollias2021distribution} provides a benchmark dataset, Aff-Wild2, for three recognition challenges: 7 basic emotion classification, 12 AU detection and VA regression. Extended from Aff-Wild~\cite{zafeiriou2017aff}, Aff-Wild2 increases the number of annotated videos, with 545 videos annotated for valence-arousal, 539 videos annotated for the 7 basic emotion categories and 534 videos annotated for the 12 AUs. Aff-Wild2 is currently the largest in-the-wild dataset with respect to VA, AU and basic emotion annotations. \subsection{Automatic Affective Behavior Analysis} The challenge of affective behavior analysis has attracted a great deal of research effort. We briefly introduce some related works. Kuhnke~\etal~\cite{kuhnke2020two} use multi-modal information from vision and audio, proposing a two-stream aural-visual network for multi-task training. Considering the problems of unbalanced data and missing labels, Deng~\etal~\cite{deng2020multitask} propose a teacher-student structure to learn from unlabelled data by way of soft labels. Besides the multi-task frameworks, Gera~\etal~\cite{gera2020affect} focus on the task of discrete emotion classification and propose a network based on an attention mechanism.
Zhang~\etal~\cite{zhang2020m} propose a multi-modal approach, $M^{3}T$, for valence-arousal estimation, using visual features extracted by a 3D convolutional network and a bidirectional recurrent neural network, and audio features extracted by an acoustic sub-network. Saito~\etal~\cite{saito2020action} tackle the problem of AU label inconsistency, proposing a Pseudo-intensity Model to learn the degree of facial appearance change and a mapping model to predict the AUs. \section{Method} In this section, we introduce our method for affective behavior analysis in the 2nd ABAW2 Competition. The overall pipeline is illustrated in Fig.~\ref{fig:pipeline}. The entire framework consists of two components: a prior model for extracting expression embedding knowledge and a streaming model for exploiting the hierarchical relationships among the three emotional representations. \begin{figure*} \centering \includegraphics[width=1\linewidth]{Figure/pipeline.pdf} \caption{Overview of our framework: a prior expression embedding model followed by a streaming model that regresses AU, CE and VA in sequence.} \label{fig:pipeline} \end{figure*} \subsection{Overview} As described in the official white paper~\cite{2106.15318}, the ABAW2 Competition contains three challenges, corresponding to the three commonly used emotion representations: seven categorical emotions, twelve action units, and two-dimensional valence-arousal. We propose a general framework to jointly handle the three individual tasks. Despite the different psychological philosophies behind the three emotional representations, it is widely agreed that they are intrinsically associated with each other~\cite{tian2001recognizing}. One piece of evidence is that similar facial muscle movements (action units) mostly indicate similar inner states, and hence similar perceived facial emotions. However, most previous research on multi-task emotion recognition overlooks this fact and simply models the different tasks in parallel branches.
Inspired by the observation above, we design the recognition process in a serial manner, AU$\to$CE$\to$VA, from local action units to global emotion states. The streaming structure helps to adjust the hierarchical distributions at different feature levels. For example, the optimization signal from the highest-level VA space is back-propagated to the low-level features and thus helps the other two tasks during training. Due to the limited subjects and unbalanced annotations of existing affective datasets, it is challenging to prevent an emotion recognition model from overfitting to confounding factors such as background or random noise. To tackle this problem, we adopt a prior facial expression embedding model~\cite{zhang2021learning}, which can capture detailed expression similarities across different people, into our framework. The expression embedding brings at least two advantages. First, by training on even larger facial image datasets with an identity-invariance constraint, the embedding itself is independent of identity attributes and therefore improves the network's generalizability to unseen subjects. Second, the expression embedding model~\cite{zhang2021learning} is trained to discriminate subtle expression similarities within triplet training data. It provides a good initialization for the subsequent emotion recognition tasks. Combining the prior and the streaming model, we train our multi-task affective recognition model in an end-to-end manner. Given an image $\mathbf{\mathcal{I}}_k$ with at least one of the three emotional annotations $\{\mathbf{V}_{AU}^k, \mathbf{V}_{CE}^k, \mathbf{V}_{VA}^k\}$, we send it through the full network for training and compute the corresponding losses on its existing labels. In the following, we introduce the network structure and loss functions in detail.
\subsection{Prior Model} We adopt the Deviation Learning Network (DLN) from~\cite{zhang2021learning} as the expression prior for our framework. In order to generate a compact and continuous expression embedding space disentangled from the identity factor, the DLN model is trained on more than 500 thousand annotated triplets from the FECNet~\cite{vemulapalli2019compact} dataset. Following the idea of~\cite{vemulapalli2019compact,zhang2020facial}, the DLN aims to map similar expression image pairs (\textit{anchor} and \textit{positive}) close to each other in the low-dimensional space, while keeping dissimilar expression image pairs (\textit{anchor} and \textit{negative}) away from each other. To efficiently exclude the identity attributes from the extracted image features, the DLN model introduces a deviation module that subtracts the identity vectors (produced by a pre-trained face recognition model) from the facial ones. Since the original DLN model maps the facial expression images into a 16-dimensional space, which leaves little room for optimization in our problem, we only take the pre-trained deviation module from~\cite{zhang2021learning}, which produces 512-dimensional features. Specifically, given a facial image $\mathbf{\mathcal{I}}_k$ from the training dataset, the prior model generates a 512-dimensional embedding vector $\mathbf{Emb}^k$ that contains identity-invariant expression information. When training the entire framework, we also keep the expression embedding model trainable so that the embedding vectors can be adaptively adjusted. \subsection{Streaming Model} With the prior-generated expression embedding vector, we first construct three individual feature extractors to downsample $\mathbf{Emb}^k$ from 512 dimensions to $12\times 16$, $64$, and $64$, respectively. We start from the AU branch and introduce our streaming regression process for each of the three tasks.
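The dimensionalities above can be illustrated with a minimal sketch in which the three extractors are stand-in linear projections of the 512-d embedding; the actual extractor architectures in our network are not constrained to this form, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear projections standing in for the three feature
# extractors; the real extractors are learned network modules.
W_au = rng.standard_normal((512, 12 * 16))
W_ce = rng.standard_normal((512, 64))
W_va = rng.standard_normal((512, 64))

def split_features(emb):
    """Map a 512-d prior embedding to the three task-specific features."""
    f_au = (emb @ W_au).reshape(12, 16)  # one 16-d feature per AU class
    f_ce = emb @ W_ce
    f_va = emb @ W_va
    return f_au, f_ce, f_va

emb = rng.standard_normal(512)             # stand-in for Emb^k
f_au, f_ce, f_va = split_features(emb)
print(f_au.shape, f_ce.shape, f_va.shape)  # (12, 16) (64,) (64,)
```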
For the AU features in $\mathbb{R}^{12\times 16}$, we directly send them into a multilayer perceptron (MLP) to regress the AU score for each of the twelve classes. Denoting the final output of AU predictions as $\mathbf{\tilde{V}}_{AU}^k = \{\tilde{v}_1, \tilde{v}_2,..., \tilde{v}_{12}\} \in \mathbb{R}^{12}$ and the ground-truth AU label as $\mathbf{V}_{AU}^k=\{v_1, v_2,...,v_{12}\}\in \{0, 1\}^{12}$, we apply the multi-label cross entropy loss~\cite{he2018joint} as follows: \begin{equation} \begin{split} \mathcal{L}_{AU}=&\log(1+\sum_{i\in \Omega_{0}}e^{\tilde{v}_i})+\log(1+\sum_{j\in \Omega_{1}}e^{-\tilde{v}_j}),\\ \text{where}&\\ \Omega_0 =& \{\, i \mid v_i=0 \,\},\\ \Omega_1 =& \{\, j \mid v_j=1 \,\}. \end{split} \end{equation} Meanwhile, the AU features are sent into the CE branch after being translated by the AU$\to$CE model. We concatenate the translated AU features ($\mathbb{R}^{64}$) with the CE ones ($\mathbb{R}^{64}$) into a joint vector. The 128-dimensional features are then sent into the CE layers for emotion classification. The output CE probability vector $\mathbf{\tilde{V}}_{CE}^k$ and the annotated emotion label $\mathbf{V}_{CE}^k$ are evaluated by the softmax classifier loss: \begin{equation} \mathcal{L}_{CE}=\text{Softmax}(\mathbf{\tilde{V}}_{CE}^k, \mathbf{V}_{CE}^k). \end{equation} The \{AU,CE\}$\to$VA model takes the CE joint features as input and generates another 64-dimensional feature to aid the VA regression. Similar to the operation on the CE task, we concatenate the VA features with the translated ones and send them into the VA layers. Defining the two-dimensional output vector as $\mathbf{\tilde{V}}_{VA}^k=\{\tilde{v},\tilde{a}\}$ and the ground truth as $\mathbf{V}_{VA}^k=\{v,a\}$, the VA loss is computed with the Concordance Correlation Coefficient (CCC) metric: \begin{equation} \mathcal{L}_{VA}=CCC_v + CCC_a.
\end{equation} The total loss of the streaming network is formulated as: \begin{equation} \mathcal{L}_{total} = \alpha_{AU} \cdot \mathcal{L}_{AU} + \alpha_{CE} \cdot \mathcal{L}_{CE} + \alpha_{VA} \cdot \mathcal{L}_{VA}, \end{equation} where each $\alpha_{\cdot}$ is a boolean variable indicating the existence of a ground-truth label on the corresponding track. \subsection{Algorithm Details} \noindent \textbf{Data Augmentation}. In addition to the original training set of \textit{Aff-Wild2}~\cite{kollias2019expression}, our model is further trained on BP4D~\cite{zhang2014bp4d}, BP4D+~\cite{zhang2016multimodal}, DFEW~\cite{jiang2020dfew}, and AffectNet~\cite{mollahosseini2017affectnet}. When processing the external datasets, we only keep the annotated classes that are consistent with \textit{Aff-Wild2}~\cite{kollias2019expression}. \noindent \textbf{Pseudo Label}. Another approach we propose for alleviating the overfitting and data-imbalance issues is to generate reliable pseudo labels for training. We exploit the underlying relationships between AU and CE. In particular, some AU combinations are always mapped to the same CE. In this way, we can quickly infer missing CE labels from explicit AU annotations. \section{Experiments} \begin{table}[t] \begin{center} \begin{tabular}{c|c|c|c} \hline \textbf{Method} & \textbf{AU} & \textbf{CE} & \textbf{VA}\\ \hline Baseline~\cite{2106.15318} & 0.310 & 0.366 & 0.220 \\ Ours w/o prior & 0.464 & 0.718 & 0.422 \\ Ours w/o streaming & 0.677 & 0.677 & 0.447 \\ Ours & \textbf{0.742} & \textbf{0.790} & \textbf{0.495} \\ \hline\\ \end{tabular} \caption{Ablation comparison of our method without the prior model or the streaming structure against the baseline~\cite{2106.15318}. The best result for each track is indicated in bold.
} \label{tab:ablation} \end{center} \end{table} \begin{table*}[t] \begin{center} \begin{tabular}{c|ccc|ccc|ccc} \hline \multirow{2}*{\diagbox{\textbf{Validation set}}{\textbf{Track}}} &\multicolumn{3}{c|}{\textbf{AU}} & \multicolumn{3}{c|}{\textbf{CE}} & \multicolumn{3}{c}{\textbf{VA}}\\ ~ & F1 & \textit{TAcc} & Score & F1 & \textit{TAcc} & Score & $CCC_V$ & $CCC_A$ & Score \\ \hline Original & 0.588 & 0.896 & 0.742 & 0.757 & 0.856 & 0.790 & 0.488 & 0.502 & 0.495 \\ Fold-1 & - & - & 0.753 & - & - & \textbf{0.783} & - & - & 0.578 \\ Fold-2 & - & - & \textbf{0.772} & - & - & 0.725 & - & - & 0.591 \\ Fold-3 & - & - & 0.755 & - & - & 0.762 & - & - & 0.532 \\ Fold-4 & - & - & 0.753 & - & - & 0.770 & - & - & \textbf{0.621} \\ Fold-5 & - & - & 0.758 & - & - & 0.765 & - & - & 0.606 \\ \hline\\ \end{tabular} \caption{Quantitative results of our prior-aided streaming model on different validation sets. The best result for each track is indicated in bold.} \label{tab:validation} \end{center} \end{table*} In this section, we present experimental results on the validation set of \textit{Aff-Wild2}~\cite{kollias2019expression}, as well as 5-fold cross-validation results. As part of our submission to the ABAW2 Competition, we also upload our code for open release. \subsection{Training} We process all videos in the \textit{Aff-Wild2} dataset into frames with OpenCV and employ the OpenFace~\cite{baltrusaitis2018openface} detector to extract and resize all facial images to a $224\times 224$ scale. We trained the entire framework on an NVIDIA RTX 3090 graphics card for around 20 hours. \subsection{Results} Two kinds of validation sets are evaluated in our experiments: the officially provided validation set and the 5-fold cross-validation sets. We report both sets of quantitative results in Tab.~\ref{tab:validation}.
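For reference, the multi-label cross-entropy loss used on the AU track and the CCC metric used on the VA track can be written in a few lines; this is a minimal NumPy sketch, and the function names are ours:

```python
import numpy as np

def multilabel_ce(pred, label):
    """Multi-label cross entropy of the AU loss: pred holds raw AU
    scores, label is a {0,1} vector over the twelve AU classes."""
    neg = pred[label == 0]   # Omega_0: absent AUs
    pos = pred[label == 1]   # Omega_1: present AUs
    return np.log(1 + np.exp(neg).sum()) + np.log(1 + np.exp(-pos).sum())

def ccc(x, y):
    """Concordance Correlation Coefficient between two 1-d series."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A perfect VA prediction gives `ccc(x, x) == 1`, and confident correct AU scores drive `multilabel_ce` toward zero.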
In order to evaluate the effectiveness of our proposed algorithm designs, i.e., the prior model and the streaming network, we conduct ablation studies by comparing against models trained without these components. The quantitative results in Tab.~\ref{tab:ablation} indicate that both modules help to improve the recognition/classification performance on each emotion representation track. \section{Conclusion} In this paper, we introduce our deep-learning-based framework for multi-task affective recognition in the second ABAW2 Competition. We propose a streaming network that exploits the hierarchical relationships between different emotion representations. Besides, we employ an expression prior model to improve the generalization ability of our model to the test set. The quantitative comparisons show that each component contributes to the affective recognition tasks. We have also presented the experimental results on the official validation dataset.
\section{Introduction} Supernovae (SNe) are among the most violent events in the Universe, ejecting gas and returning material from dense molecular clouds into the more diffuse interstellar medium and the galactic halo. When the expanding blast wave encounters a dense molecular cloud, the SN shocks drive excitation, chemical reactions, and dynamic motion of the gas, and destroy dust grains by collisions or thermal sputtering (Jones {et al.\/~}\ 1994; Andersen {et al.\/~}\ 2011). This interaction may determine the fate of the molecular cloud, either dispersing it or triggering collapse of dense cores, leading to a subsequent generation of star formation. A few signs of interaction between molecular clouds and SNe (MC-SN or MC-SNR interactions) have been found, such as {\it Mixed-morphology} SNRs, or detections of OH masers and molecular hydrogen (H$_2$). A signpost of the interaction is center-filled, thermal X-ray emission \citep[e.g.][and references therein]{rho98, pannuti14} that can be produced through thermal conduction \citep{tilley06a,tilley06b, orlando09}. However, it is still not clear whether the interaction with clouds is a unique mechanism for producing Mixed-morphology SNRs. A better indicator of SN-MC interactions is the presence of {\it 1720 MHz OH masers}, which are detected from $\sim$20 SNRs \citep{frail96, yusef-zadeh99, hewitt08}. These masers are thought to be collisionally excited and they suggest SN-MC interactions, but the maser emission is not well understood \citep{yusef-zadeh99}. In 3C391, a maser position in the southwestern shell is correlated with broad CO lines, but a maser position in the northeastern shell shows only narrow CO lines \citep{reach99}. Other evidence of shock interaction with clouds is {\it H$_2$ emission} from collisionally excited shocked gas, but the H$_2$ emission could also originate from UV pumping \citep{burton92}.
Examples of collisionally excited H$_2$ emission from shock interactions with molecular clouds are IC 443 \citep{burton90, richter95}, W44 and W28 \citep{reach05,neufeld07}. A one-to-one correspondence between H$_2$ and broad CO emission has been found in IC 443 \citep{rho01}. Eighteen interacting remnants were found using infrared (IR) colors from the \textit{Spitzer}\ GLIMPSE data (Reach {et al.\/~}\ 2006). Follow-up \textit{Spitzer}\ spectroscopy confirms the detection of H$_2$ lines as well as ionic fine-structure lines and shock-processed dust (Hewitt {et al.\/~}\ 2009; Andersen {et al.\/~}\ 2010). A strong correlation between molecular interacting SNRs and $\gamma$-ray emission has been known since the EGRET era \citep{esposito96}. Recently, Fermi and HESS observations revealed extended $\gamma$-ray emission associated with molecular interacting SNRs like IC 443 \citep{abdo10a}, W44 \citep{uchiyama10, abdo10b} and W28 \citep{hanabata14, aharonian08, abdo10c}, highlighting the astrochemical processing of molecules and the acceleration of hadronic particles by SNe. The $\gamma$-ray emission is associated with shocked molecular material previously identified with millimeter (mm) CO and infrared H$_2$ emission (Burton et al. 1990; Reach \& Rho 2006; Reach, Rho, \& Jarrett 2005). With {\it Fermi} observations, there is a growing number of $\gamma$-ray emitting SNRs, many of which show the indicators of MC-SNR interactions described above \citep[][and references therein]{abdo09, hewitt12, wu11, daniel10, hewitt15}. The study of MC-SNR interactions is advancing rapidly because of multi-wavelength observations; strong correlations among them provide opportunities to discover true samples of SNRs interacting with clouds. The clearest evidence for interaction between SNRs and molecular clouds is the detection of emission from the shocked molecules themselves.
Millimeter observations provide direct, unambiguous evidence of interaction when a broad ($>$10 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) line caused by the dynamic motion of shocked gas is detected. There has been a long-term effort to search for interactions with clouds using millimeter observations \citep[]{huang86, jeong13, zhou14,liszt09}. However, detections of broad CO lines are still limited to a half dozen SNRs. SNRs with millimeter evidence of shocks in the form of broad lines are IC 443 \citep[e.g.][references therein]{vandishoeck93}, W44 \citep{wootten77, reach05, seta04, anderl14}, W28 \citep{arikawa99, reach05}, 3C391 \citep{reach99}, W51C \citep{koo97}, and HB21 \citep{koo01, shinn10}. In some cases, changes in CO velocity profiles or a small amount of broadening in the line profiles may indicate interactions with clouds \citep{dubner04, zhou11}. Recently, \citet{kilpatrick16, kilpatrick14} observed SNRs with $\gamma$-ray emission detected by HESS and found millimeter evidence of interaction in a few of them. Another example of broad molecular lines is a water line at 557 GHz from the SNR G349.7+0.2 (with a FWHM of 144 km s$^{-1}$) detected with Herschel HIFI observations \citep{rho15}. IC 443 is another case showing a broad infrared water line \citep{snell05}. The SNR G357.7+0.3 is relatively unknown and under-studied. Radio observations identified this source as a SNR \citep{reich84}. It is named the ``Square Nebula'' because of its square-like radio morphology, as shown in Figure \ref{g357radio}. Soft, faint blobs of X-ray emission have been detected with the {\it Einstein} IPC, with an inferred temperature of 5.4$\times$10$^{6}$ K and an age of 10,000 yr \citep{leahy89}. OH masers have been detected around -35 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ \citep{yusef-zadeh99} and are shown to extend from the western edge \citep{hewitt08}.
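The broad/narrow distinction used above can be quantified directly from a spectrum via its intensity-weighted velocity dispersion; the following is a minimal sketch (assuming a baseline-subtracted, emission-only spectrum), not part of our reduction pipeline:

```python
import math

def fwhm_from_moments(velocities, temps):
    """Intensity-weighted velocity dispersion of a line profile,
    converted to an equivalent Gaussian FWHM (km/s)."""
    w = sum(temps)
    mean = sum(t * v for v, t in zip(velocities, temps)) / w
    var = sum(t * (v - mean) ** 2 for v, t in zip(velocities, temps)) / w
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(var)
```

Applied to a Gaussian line, this recovers its FWHM; shocked clouds yield values well above the $\sim$10 km s$^{-1}$ threshold, while quiescent clouds stay near a few km s$^{-1}$.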
A study of the material surrounding the SNR using \textit{Spitzer}\ IRAC images by \citet{phillips09,phillips10} provides circumstantial evidence of interaction with molecular clouds, but direct evidence has been lacking. There is no previous detection of this SNR in the optical or infrared. The distance to G357.7+0.3 is 6.4 kpc, which is consistent with the -35 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ OH line velocity in the Galactic rotation curve \citep[see][]{yusef-zadeh99}. In this paper, we report direct evidence that the SNR G357.7+0.3 interacts with molecular clouds, in the form of broad millimeter lines (such as CO and HCO$^+$) that reveal the dynamic motion of the shocked clouds. We also report the detection of shocked H$_2$ emission in the mid-infrared from G357.7+0.3 using \textit{Spitzer}\ IRS spectroscopy. Section 2 describes the observations, which include ten independent observing runs (see Table \ref{Tobs}). In Section 3.1, we present the broad millimeter lines (such as CO and HCO$^+$) obtained with ground-based telescopes. Spectral mapping of the CO lines is described in Sections 3.2 and 3.3. The detection of the atomic [C~II] line at 158$\mu$m and the upper limit on a high-J CO line are presented in Section 3.4. Large-scale molecular cloud maps surrounding the SNR are presented in Section 3.5. The detection of molecular hydrogen lines with {\it Spitzer} IRS is presented in Section 3.6. In Section 4, we discuss the physical conditions of the shocked gas, including the excitation of molecular hydrogen and CO gas. Our paper presents the first direct evidence that the SNR G357.7+0.3 interacts with molecular clouds. Because SNR-MC interactions are valuable astrophysical laboratories, our paper provides a new and exciting laboratory where molecular astro-chemistry and shocks can be studied.
\begin{figure*} \includegraphics[scale=0.6,width=16.5truecm]{f1.eps} \caption {The positions of OH masers from \citet[][crosses in green]{yusef-zadeh99}, CO broad molecular lines (BML; crosses in black), and the center (the same as BML-B) and fields of view (marked as rectangular boxes) of {\it Spitzer} IRS (in green for SL1, magenta for SL2, red for LL2, and yellow for LL1) are marked on the northwestern part of the radio image of G357.7+0.3. The field of view is shown on the radio image of the entire SNR (in the inset image, marked as a box in green). The contours are from the CO broad molecular line maps (blue wing in blue and red wing in red; see Section 3.2 for details). The 20 cm radio continuum image shows a 32$'$$\times$35$'$ field of view (FOV) centered on R.A. $17^{\rm h} 38^{\rm m} 36.5^{\rm s}$ and Decl.\ $-30^\circ$37$^{\prime} 26.2^{\prime \prime}$ (J2000), and the color bar is in units of Jy/beam (synthesized beam of 15$''$). } \label{g357radio} \end{figure*} \begin{table*} \caption[]{Summary of Millimeter and Infrared Observations }\label{Tobs} \begin{center} \begin{tabular}{lll} \\ \hline \hline Date & Telescope & Lines \\ \hline 2003 May 18, 19 & HHSMT & CO(2-1) \\ 2003 Jun 8 & HHSMT & CO(2-1) \\ 2005 August 14 & {\it Spitzer} IRS & H$_2$ \\ 2006 March 3, 4 & HHSMT & CO(3-2) \\ 2006 March 6-9 & ARO 12-Meter & $^{13}$CO(1-0), HCO$^+$ \\ 2006 April 14, 15 & ARO 12-Meter & $^{13}$CO(1-0) maps \\ 2007 September 19 &MOPRA & HCO$^+$(1-0), HCN(1-0) etc. \\ 2013 July 18 & SOFIA GREAT & [C~II], CO(11-10) \\ 2015 April 16 & APEX (Obs. 1) SHeFI & CO(2-1), $^{13}$CO(2-1) \\ 2015 August 16, 17 & APEX (Obs.
2) FLASH, SHeFI & CO(4-3), CO(3-2), $^{13}$CO(3-2) \\ \hline \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{0.8} \end{table*} \section{Observations} We performed ten independent observing runs using ground-based telescopes, the {\it Spitzer} Space Telescope, and the Stratospheric Observatory for Infrared Astronomy (SOFIA) airborne telescope over a time span of more than 10 years. The ground-based observing runs used the Heinrich Hertz Submillimeter Telescope (HHSMT), the 12-Meter (12-m, hereafter) Kitt Peak telescope, and the Atacama Pathfinder Experiment (APEX) telescope. The observation dates are summarized in Table \ref{Tobs}. The positions of G357.7+0.3 we observed are listed in Table \ref{Tg357pos} and marked on Figure \ref{g357radio}. All maps in this paper use equatorial coordinates in J2000. \begin{table} \caption[]{Observed positions of G357.7+0.3 }\label{Tg357pos} \begin{center} \begin{tabular}{lll} \\ \hline \hline Position & Offset ($''$) & R.A., Decl.\\ \hline OH-A1/BML-A &(0,0) &17:38:30.39,-30:33:17.7\\ APEX Obs. 2 & (0,0) &17:38:30.39,-30:33:17.7\\ OH-A2&(+9.8,-23) &17:38:30.39,-30:33:17.7\\ \hline BML-B& (-60,-60) &17:38:26.9,-30:34:17.7\\ {\it Spitzer} IRS & (-60,-60) & 17:38:26.9,-30:34:17.7\\ MOPRA & (-50,-50) &17:38:27,-30:34:08 \\ SOFIA (H$_2$ peak) & (-48,-48) & 17:38:27.25,-30:34:06.4 \\ APEX Obs. 1 & (-60,-60) &17:38:26.9,-30:34:08\\ \hline BML-C &(-30,-90) &17:38:28.7,-30:34:47.7\\ BML-D & (+60,+30) &17:38:38.8,-30:32:47.7\\ \hline \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{0.8} \end{table} \subsection{HHSMT observations} We performed mm and submillimeter (submm) observations of G357.7+0.3 using the HHSMT (or SMT){\footnote[1]{http://www.as.arizona.edu/aro/}} located on Mt. Graham, Arizona, centered on one of the OH maser positions, OH-A1 of G357.7+0.3 (R.A.\ $17^{\rm h} 38^{\rm m} 30.39^{\rm s}$ and Dec.\ $-30^\circ$ 33$^{\prime} 17^{\prime \prime}$, J2000), on 2003 May 18-19 and June 8 and 2006 March 3-4.
We observed $^{12}$CO(3-2) and $^{12}$CO(2-1) using an acousto-optic spectrometer (AOS) and filterbanks. The AOS spectra (see Figure \ref{smt12mspec}) were measured with a 2048-channel, 1 GHz total bandwidth AOS with an effective resolution of 1 MHz. The observations were made with three facility SIS mixer receivers placed at the Nasmyth focus. The beam efficiencies of the single-polarization receivers in the frequency bands 210--275 GHz and 430--480 GHz are 0.77 (used, e.g., for CO(2-1)) and 0.45, respectively, and that of the dual-polarization receiver covering the frequency band 320--375 GHz is 0.48 (used for CO(3-2)). The telescope beam size is 34$''$ at 217 GHz, 22$''$ at 347 GHz, and 18$''$ at 434 GHz. One of the challenges of these observations was finding a clean reference position, since the SNR is located in the Galactic plane; we tried 2-10 positions and verified that the reference positions were free of emission. The final reference position used is R.A.\ $17^{\rm h} 35^{\rm m} 15.5^{\rm s}$ and Dec.\ $-30^\circ$10$^{\prime} 48^{\prime \prime}$ (J2000). We performed spectral GRID mapping (10$\times$10 spectra) in CO(2-1) covering a 4.5$'$$\times$5$'$ area with a spacing of 30$''$ centered on the BML-B position using the HHSMT. Details are described in Section 3.2. \subsection{12-Meter Telescope observations} The 12-Meter Telescope is located on Kitt Peak and is one of the Arizona Radio Observatory telescopes. We observed the SNR G357.7+0.3 on 2006 March 6-16, and the lines observed were $^{13}$CO(1-0) and HCO$^+$(1-0) (see Table \ref{TCOlines}). We observed a few positions in the HCO$^+$ line and made a large $^{13}$CO(1-0) map of a 30$'$$\times$35$'$ area covering the SNR G357.7+0.3 and its surroundings.
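The beam efficiencies quoted above enter the reduction through the standard main-beam correction $T_{mb} = T_A^*/\eta_{mb}$; a one-line sketch (using the HHSMT values given in the text):

```python
# Standard main-beam correction: T_mb = T_A* / eta_mb.
# Efficiencies below are the HHSMT values quoted in the text.
ETA_MB = {"CO(2-1)": 0.77, "CO(3-2)": 0.48}

def to_main_beam(t_a_star, line):
    """Convert an antenna temperature (K) to main-beam temperature (K)."""
    return t_a_star / ETA_MB[line]

# e.g. a 1.0 K antenna-temperature CO(2-1) signal corresponds to
print(round(to_main_beam(1.0, "CO(2-1)"), 2))  # 1.3 (K in T_mb)
```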
\subsection{{\it Spitzer} IRS observations} We performed IRS spectral mapping centered on the northwestern shell of G357.7+0.3 (R.A.\ 17$^{\rm h} 38^{\rm m} 26.9^{\rm s}$ and Dec.\ $-30^\circ$34$^{\prime} 17.7^{\prime \prime}$, J2000; position ``BML-B'', which is a peak of the CO broad molecular line (BML) emission) as part of {\it Spitzer} IRAC GTO time (PI: Giovanni Fazio). The short-low (SL: 8-15 $\mu$m) module covered 75$''$$\times$60$''$ (supplementary data covered the same area to the south for SL2, and to the north for SL1), and the long-low (LL) module covered 170$''$$\times$55$''$ (supplementary data covered the same area to the west for LL2, and to the east for LL1). The Long Low (LL: 15-40 $\mu$m) IRS data were taken on 2005 August 14 with 6 cycles of 30 sec exposure time; this yields a total exposure time of 360 sec for the first and second staring positions. The SL IRS observations were made with 3 cycles of 60 sec exposure time, and one cycle covers 2 dither positions; this yields a total exposure time of 360 sec per sky position. The spatial resolution (=1.2$\lambda$/D, where D is 85 cm for {\it Spitzer}) of the H$_2$ images generated from the IRS data is approximately 2$''$, 3$''$, 3.6$''$, 5$''$ and 8.3$''$ at 6.9, 9.6, 12.2, 17, and 28$\mu$m, respectively. The IRS spectra (AORkey of 21819136) were processed using the S18.8 pipeline products and reduced using CUBISM. The spectra were extracted for the bright part of the H$_2$ emission within a rectangular region of 40$''$$\times$50$''$ centered on the IRS position of R.A.\ $17^{\rm h} 38^{\rm m} 26.9^{\rm s}$ and Dec.\ $-30^\circ$34$^{\prime}17.7^{\prime \prime}$ (J2000), as shown in Figure \ref{g357radio}. This region is the overlap region among SL1, SL2, LL2 and LL1. \subsection{MOPRA observations} Observations of selected positions were obtained using the 22m Australia Telescope MOPRA{\footnote[2]{http://www.narrabri.atnf.csiro.au/mopra/obsinfo.html}} antenna during September 2007.
G357.7+0.3 was observed using the MOPS spectrometer backend configured in zoom mode to simultaneously observe 16 windows within an 8.3 GHz bandwidth in both linear polarizations. Each window has 137.5 MHz sampled over 4096 channels. Final spectra (see Figure \ref{mopraspec}) were smoothed with a four-channel Gaussian function, giving a velocity resolution of 0.44 km s$^{-1}$ at 90 GHz. For all sources we observed C$^{34}$S (2-1) at 96.4 GHz, CH$_{3}$OH (2-1) at 95.9 GHz, CH$_{3}$OH (8-7) at 95.1 GHz, N$_2$H$^+$ (1-0) at 93.2 GHz, $^{13}$CS (2-1) at 92.5 GHz, HNC (1-0) at 90.7 GHz, HCO$^+$ (1-0) at 89.2 GHz, and HCN (1-0) at 88.7 GHz. HCN J=1--0 is a triplet with F=2--1, 1--1, 0--1 at 88.631847, 88.630416 and 88.633936 GHz, respectively. When fitting this line we fit the F=2--1 component and assume the two other lines of the triplet are found at fixed velocity separations of +4.84 and -7.08 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ with fixed line strengths relative to F=2--1 of 0.5 and 0.25 for the F=1--1 and F=0--1 transitions, respectively. The main beam efficiency, $\eta_{mb}$, is 0.4--0.49 in the band \citep{ladd05}. HCO$^+$ and HCN lines are detected, as shown in Figure \ref{mopraspec} (see Section 3.2 for details), and the upper limits on C$^{34}$S (2-1), CH$_{3}$OH (2-1), CH$_{3}$OH (8-7), N$_2$H$^+$ (1-0), $^{13}$CS (2-1), and HNC (1-0) are 0.05 K each. \subsection{APEX observations} APEX observations were conducted on 2015 April 16 and on 2015 August 16 and 17. The 2015 April 16 observing time was granted through the ESO open proposal call (PI: Andersen), and the 2015 August time through APEX instrument PI allocated time. We used SHeFI, FLASH345, and FLASH460 (Heyminck et al. 2006). The beam efficiencies for CO(3-2) and CO(4-3) are 0.73 and 0.60, respectively, and the beam sizes are listed in Table \ref{TCOlines}. The data were reduced in CLASS{\footnote[3]{See http://www.iram.fr/IRAMFR/GILDAS}} and the final results were exported into IDL.
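The constrained HCN J=1--0 triplet model described above can be sketched as a sum of three Gaussians with the fixed velocity offsets and relative strengths given in the text; the amplitude, centroid and width remain free parameters in the actual fit, and the values passed in below are illustrative only:

```python
import math

# Fixed hyperfine structure relative to the F=2-1 component, as used
# in the MOPRA fits: (velocity offset in km/s, relative strength).
HCN_HYPERFINE = [(0.0, 1.0),     # F=2-1
                 (+4.84, 0.5),   # F=1-1
                 (-7.08, 0.25)]  # F=0-1

def hcn_triplet(v, v0, amp, fwhm):
    """Model brightness of the HCN J=1-0 triplet at velocity v (km/s),
    given the F=2-1 centroid v0, peak amplitude amp, and FWHM."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return sum(amp * s * math.exp(-0.5 * ((v - v0 - dv) / sigma) ** 2)
               for dv, s in HCN_HYPERFINE)
```

With a narrow width the three hyperfine components are resolved, while at the $\sim$25 km s$^{-1}$ widths of the shocked gas they blend into a single broad profile.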
\begin{table*} \caption[]{Summary of Molecular Line Properties \label{TCOlines}} \begin{center} \begin{tabular}{llrllrrll} \\ \hline \hline Position &Line & Frequency & Telescope & V$_{lsr}$ & FWHM & $\int Tdv$ &RMS & t$_{int}$ \\ & &(GHz) & [beam size ($''$)] & (km s$^{-1}$) & (km s$^{-1}$) & (K km s$^{-1}$) &(K) & (min) \\ \hline OH-A1 & CO (2-1) & 230.5379 & HHSMT [30] & -35.46$\pm$0.17 & 17.23$\pm$0.58 &89.94$\pm$1.83 & 0.125 &22\\ OH-A1 & CO (2-1) & 230.5379 &APEX SHeFI [30] & -35.85$\pm$0.16& 18.37$\pm$0.53 & 90.08$\pm$1.99 &0.106 & 11 \\ OH-A1 & CO (3-2) &345.7959 &HHSMT [22] & -35.86$\pm$0.07 &17.33$\pm$0.20 & 163.64$\pm$1.35 & 0.189 & 18\\ OH-A1 &CO (3-2) &345.7959 &APEX FLASH [22] &-35.18$\pm$0.06 & 17.26$\pm$0.18&90.09$\pm$0.64&0.068 & 21\\ OH-A1 & CO (4-3) &461.0407 &APEX FLASH [13] & -34.65$\pm$0.04 & 14.71$\pm$0.10 & 64.29$\pm$0.33 &0.134 & 43\\ OH-A1 &$^{13}$CO(1-0) & 110.2013 & ARO 12-m [47] &-35.07$\pm$0.10 & 3.95$\pm$0.31 & 4.57$\pm$0.26& 0.100 &20\\ OH-A1 &$^{13}$CO(2-1) &220.3986 &APEX SHeFI [30] &-34.89$\pm$0.07 & 7.52$\pm$0.23 & 7.09$\pm$0.16 & 0.056 &27\\ OH-A1 & $^{13}$CO(3-2) &330.5879 & APEX FLASH [22] &-34.73$\pm$0.14& 10.76$\pm$0.39 &4.22$\pm$0.12 & 0.040 &21 \\ OH-A1 &HCO$^+$ (1-0) &89.1885 & ARO 12-m [58] & -33.47$\pm$0.34 & 25.03$\pm$0.94 & 4.04$\pm$0.12 & 0.017 &123 \\ \hline \hline BML-B &CO (2-1) & 230.5379 &APEX SHeFI [28]& -34.04$\pm$0.09& 26.79$\pm$0.19 & 111.52$\pm$0.71 &0.077& 6\\ BML-B & CO (3-2) &345.7959 & HHSMT [22] & -35.21$\pm$0.08 & 22.01$\pm$0.19 & 200.50$\pm$1.42&0.145 & 14 \\ BML-B & $^{13}$CO(2-1) &220.3986 & APEX SHeFI [28] & -34.93$\pm$0.23 & 16.67$\pm$0.60 & 6.28$\pm$0.18 &0.073 & 6\\ BML-B &HCO$^+$ (1-0)& 89.1885& ARO 12-m [58] & -33.56$\pm$0.01 & 27.96$\pm$0.02 & 4.88$\pm$1.30 &0.012 & 98 \\ BML-B &HCO$^+$ (1-0)& 89.1885&MOPRA [38] &-34.75$\pm$0.01&25.16$\pm$0.01 &3.16$\pm$0.82 &0.023 &68\\ BML-B &HCN (1-0) &88.6316&MOPRA [38] &-32.16$\pm$0.01&27.25$\pm$0.01 &2.67$\pm$0.74 &0.024 & 68\\ BML-B & [C~II] &
1900.5369 & SOFIA GREAT [14.1] & -30.32$\pm$1.54 & 15.69$\pm$3.34 & 5.61$\pm$1.08 & 0.110 &4.9\\ \hline \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{0.8} \end{table*} \subsection{SOFIA Observations} We observed the SNR G357.7+0.3 with the German Receiver for Astronomy at Terahertz Frequencies (GREAT) \citep{heyminck12} on board the SOFIA airborne observatory \citep{young12}. GREAT is a far-infrared high-resolution spectrometer with a resolving power of $\sim$10$^6$. Only the low frequency detectors covering 1.25 - 1.5 THz (wavelengths of 240 - 200$\mu$m; L1) and 1.82 - 1.92 THz (165 - 156$\mu$m; L2) were available during cycle 1. The observation was a program (proposal ID of O1\_0059; PI: Hewitt) of the Guest Observer cycle 1 campaign. Five hours of observing time were awarded, but only a total of $\sim$2 hr ($\sim$1 hr on July 18 and $\sim$1 hr on July 28) was included in the flight series, and the flights for the remaining 3 hr of observing time were cancelled. The observation of G357.7+0.3 took place on 2013 July 18 toward the peak of the H$_2$ emission, as listed in Table \ref{TCOlines}. The integration times for the [C~II] and CO (11-10) lines were 5 min each, because the first block of observations toward the first position of G357.7+0.3 was lost due to tracking and wobbler issues. Successful observing time was about 30 min on July 18. Note that the efficiencies of SOFIA flights and observations have been significantly improved since 2014. The observations made a map of 1$'$$\times$1$'$, but since the signal is very weak, we have averaged the spectrum over the area (see Figures \ref{greatciiline} and \ref{greatcoline}). A main beam efficiency of 0.67 was used for both of the GREAT channels L1 \& L2. \begin{figure} \psfig{figure=f2.ps,height=10truecm,width=9truecm,angle=90} \caption{CO and HCO$^+$ spectra of G357.7+0.3 from the HHSMT and 12-m telescopes. The velocity range is from -100 to 25 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.
The $^{13}$CO(1-0) line is shown inverted (from positive to negative) to demonstrate that this emission causes the absorption dip in the broad lines of $^{12}$CO(3-2), $^{12}$CO(2-1), and HCO$^{+}$.} \label{smt12mspec} \end{figure} \begin{figure} \includegraphics[scale=0.4,angle=0]{f3.eps} \caption{HHSMT CO(2-1) spectrum superposed on a spectral model fit. The fit is a combination of a broad emission line between -55 and -20 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ and a narrow absorption line between -32 and -38 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.} \label{specfit} \end{figure} \begin{figure} \psfig{figure=f4.ps,height=9.7truecm,width=12truecm,angle=90} \caption{APEX spectra of G357.7+0.3 in CO(2-1), CO(3-2), CO(4-3), $^{13}$CO (2-1), and $^{13}$CO (3-2) lines.} \label{apexspec} \end{figure} \section{Results} \subsection{Broad millimeter lines from shocked gas} \begin{figure*} \includegraphics[scale=1.,angle=90,width=15truecm,height=21truecm]{f5.eps} \caption{GRID spectra of CO (2-1) of G357.7+0.3 with a spacing of 30$''$ (units of the x- and y-axes are offsets ($''$) from the OH-A1 position). The BML-A, BML-B, BML-C and BML-D positions (see Table \ref{Tg357pos}) with representative CO spectra are marked as A, B, C, and D, respectively. Broad lines appear in extended regions, and the structure is elongated from northeast to southwest. The individual spectra have the x-axis in velocity from -100 to 25 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ and the y-axis in temperature from -0.5 to 5.8 K in T$_{mb}$. } \label{cogridspectra} \end{figure*} \begin{figure*} \epsscale{0.85} \includegraphics[scale=0.64,angle=0]{f6.eps} \caption{CO(2-1) broad line maps in three velocity ranges (R.A. and Decl. are labeled in Figure \ref{cowing2}).
Image of the blue wing (left; in blue contours) is from the velocity range between -58 and -38 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, the red wing image (middle; in red contours) from between -31 and -27 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, and the middle velocity image, where the spectrum shows a broad line with self-absorption (right; in green contours), from between -38 and -31 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. Contours on the blue wing image (left) are 13, 20.8, 28.7, 36.6, 44.4, 52.3, 60.2, and 68 K \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, on the red wing image (middle) are 8, 10.5, 13.1, 15.7, 18.3, 20.8, 23.4 and 26 K \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, and on the middle velocity map are 14, 17.8, 21.7, 25.5, 29.3, 33.3 and 37.0 K \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.} \label{cowing1} \end{figure*} \begin{figure*} \plottwo{f7a.eps}{f7b.eps} \caption{Comparison of the blue and red wing images from CO(2-1) shown in Figure \ref{cowing1} (left). Three-color image of the three velocity range maps from the broad line shown in Figure \ref{cowing1}; the blue wing, red wing and middle velocity maps are represented in blue, red and green, respectively (right).} \label{cowing2} \end{figure*} Figure \ref{smt12mspec} (the second panel) shows the $^{12}$CO (3-2) line toward the OH-A1 position of G357.7+0.3 observed with HHSMT. The spectrum shows a broad line with a FWHM of 17.33 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ at a velocity of -35.86 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$} (see Table \ref{TCOlines}) and a narrow ($\sim$4 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) absorption dip at -35.30 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. Such a broad line is caused by strong SN shocks passing through dense molecular clouds, and the detection of such broad lines is direct, unambiguous evidence that G357.7+0.3 is a SNR interacting with molecular clouds.
Figure \ref{g357radio} shows the area (over 4.5$'$$\times$5$'$) where broad lines are detected (see Section 3.2 for details). Figure \ref{smt12mspec} includes the other millimeter molecular lines of $^{13}$CO(1-0), $^{12}$CO(2-1), and HCO$^+$ observed with the HHSMT and 12-m telescopes. The HCO$^+$ spectrum has a lower signal-to-noise ratio than those of CO(3-2) and CO(2-1), but the line profile is the same as those of the other two lines, indicating that the HCO$^+$ line also shows a broad line with self-absorption. Both the $^{12}$CO(2-1) and HCO$^+$ spectra show profiles similar to that of $^{12}$CO(3-2), with widths of $\sim$20-30 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. We fit each spectrum either using a single Gaussian profile with the velocity range of the self-absorption line masked, or using two Gaussian components, a broad emission line and a narrow absorption line. The two methods yielded similar results, and the line properties (e.g. for the broad lines) are summarized in Table \ref{TCOlines}. The RMS noise is obtained using a long baseline, typically between -200 and 100 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. An example of a spectral fit is shown in Figure \ref{specfit}. We observed additional CO lines with APEX, which offers simultaneous observations of three lines. The spectra of $^{12}$CO(2-1), $^{12}$CO(3-2), $^{12}$CO(4-3), $^{13}$CO(2-1) and $^{13}$CO(3-2) are shown in Figure \ref{apexspec}. The CO lines of $^{12}$CO(4-3), $^{13}$CO(2-1) and $^{13}$CO(3-2) were observed only with APEX. The $^{13}$CO(3-2) line shows a broad line, and $^{13}$CO(2-1) shows a combination of a broad line similar to the one in $^{13}$CO(3-2) and a narrow emission line similar to the one in $^{13}$CO(1-0). \subsection{Spectral mapping of CO(2-1)} Spectral GRID mapping was made in CO(2-1) using the HHSMT for a 4.5$'$$\times$5$'$ area.
The GRID spectra are shown in Figure \ref{cogridspectra}, and a typical RMS of the CO(2-1) line is given in Table \ref{TCOlines}. The broad line structures are extended over a 4.5$'$$\times$5$'$ region (the entire GRID map we observed) and elongated from northeast to southwest. Three maps of the blue wing (with a velocity range between -58 and -38 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}), the red wing (between -31 and -27 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}), and the middle velocities, where the broad line shows self-absorption (between -38 and -31 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}), are shown in Figures \ref{cowing1} and \ref{cowing2}. We chose these velocity ranges based on the CO(2-1) spectra in order to avoid material that is not related to the shocked gas in the SNR G357.7+0.3. The CO spectra show two weak lines, at a velocity of -56 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ and at -23 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ (between -27 and -20 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}), on top of the broad lines (see Figures \ref{smt12mspec} and \ref{apexspec}), and the material at these velocities is likely unrelated to the SNR. These features are less noticeable in the higher transition lines of CO(4-3) and CO(3-2). The blue wing, the material moving toward us, is located in the south relative to the pre-shock gas (in green; this emission may include pre-shock gas), and the red wing, the material moving away from us, is located in the north, as shown in Figure \ref{cowing2}. \begin{figure*} \plotone{f8.eps} \caption{CO(2--1) position-velocity maps (a: top) cut at constant R.A. = $17^{\rm h} 38^{\rm m} 26.9^{\rm s}$ and (b: bottom) cut at constant decl.=$-30^\circ$ 33$^{\prime} 57.07^{\prime \prime}$. Broad lines extend from -50 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ to -10 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ with an absorption dip at -34 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.
The broad line feature continues outside the map in the R.A. direction. Contours are 0.36, 1.07, 1.78, 2.49, 3.20, 3.91, 4.63, 5.45, 6.05 and 6.76 K. The positions are labeled in decimal degrees, for both (a) declination and (b) right ascension.} \label{positionvelmap} \end{figure*} Figure \ref{positionvelmap} shows position-velocity maps that slice through the spectral data centered on BML-B (the center of the IRS map) in the R.A. and Decl. directions, respectively. The exceptionally wide velocity dispersion of the broad molecular regions compared to the ambient gas (which has a typical width of $<$7 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ FWHM; for example at $\sim$0 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) is evident. The maps show broad lines from -50 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ to -10 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ with an absorption dip at -34 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}; the broad line feature continues outside the map in the R.A. direction. We named three representative broad molecular line (BML) positions from the CO(2-1) grid spectra as BML-B, BML-C and BML-D, in addition to the OH-A1 position (which we call BML-A). The positions are marked in Figure \ref{g357radio} and listed in Table \ref{Tg357pos}. BML-B is the peak of the broad lines and has broader CO lines than OH-A1 (BML-A) and the other positions. We note that the peak of the CO broad lines does not coincide with any of the OH positions, as shown in Figure \ref{g357radio}. We made ARO 12-m, {\it Spitzer} IRS, MOPRA, SOFIA, and additional APEX observations toward the BML-B position or its vicinity. The spectra of $^{13}$CO(1-0), CO(3-2) and HCO$^+$ toward BML-B, BML-C and BML-D are shown in Figure \ref{g357bcdspec}. The CO(3-2) and HCO$^+$ spectra show broad lines with widths of up to 27 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ from shocked gas (see Table \ref{TCOlines}) and $^{13}$CO(1-0) shows narrow components from cold gas.
The HCO$^+$ line taken with the 12-m telescope, with a FWHM of 27.96$\pm$0.02 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, is the broadest line we detected, as shown in Figure \ref{g357bcdspec}. The MOPRA spectrum also detects a broad line of HCO$^+$, as shown in Figure \ref{mopraspec}, although the signal-to-noise ratio is not as good as that from the 12-m telescope. As we have seen at the position of OH-A1, the broad lines of the CO or HCO$^+$ spectra show an anti-correlation with $^{13}$CO(1-0). APEX spectra toward BML-B in Figure \ref{apexspec2} show detections of CO(2-1) and $^{13}$CO(2-1). The lines are broad, with widths of 26 and 16 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, respectively (see Table \ref{TCOlines}). The line profiles also show self-absorption, but the $^{13}$CO(2-1) line shows only a very small amount of self-absorption. MOPRA additionally detects a broad line of HCN with a FWHM of $\sim$27 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. \begin{figure*} \includegraphics[scale=1.2,angle=90,width=18truecm,height=11truecm]{f9.ps} \caption{Comparison of $^{13}$CO(1-0), CO(3-2) and HCO$^+$ spectra for the positions BML-B, BML-C and BML-D. CO (3-2) data are taken with HHSMT, and $^{13}$CO(1-0) and HCO$^+$ data are taken with the 12-m telescope.} \label{g357bcdspec} \end{figure*} \begin{figure} \includegraphics[scale=0.25,angle=90,height=5.5truecm,width=7truecm]{f10.ps} \caption{MOPRA spectra of HCO$^+$(1-0) and HCN(1-0) toward the position BML-B.} \label{mopraspec} \end{figure} \begin{figure} \includegraphics[scale=0.25,angle=90,height=6.5truecm,width=8.5truecm]{f11.ps} \caption{APEX spectra of $^{12}$CO(2-1) and $^{13}$CO(2-1) toward the position BML-B. } \label{apexspec2} \end{figure} \subsection{Self-absorption line in broad molecular line} The CO and HCO$^+$ spectra toward BML-B, BML-C, and BML-D show a similar anti-correlation of the broad CO(3-2) and HCO$^+$ lines with $^{13}$CO(1-0), as shown in Figure \ref{g357bcdspec}.
The positions B, C, and D in Table \ref{Tg357pos} are where the CO(2-1) spectra reveal representative BMLs. The narrow line components are unshocked clouds, which are likely part of the parent molecular clouds of the shocked CO gas because they are at approximately the same velocity as the shocked gas. The line-of-sight absorption is stronger for the CO(2-1) transition, which can be absorbed by cold gas with substantial populations in the level J=1, than for the CO(3-2) or CO(4-3) transitions. In other words, the $^{13}$CO(1-0) emission, which traces the total column density and is dominated by the cold gas along the line of sight, matches very well the center and width of the apparent ``notch'' cut out of the $^{12}$CO spectra. The optical depth of CO emission from low-lying energy levels of gas in quiescent molecular clouds is generally greater than unity. For HCO$^+$(1--0) the transition is out of the ground state, so even cold gas is readily detected in absorption. The precise correspondence between the $^{13}$CO emission and the apparent $^{12}$CO and HCO$^+$ absorption notches indicates they are due to cold molecular gas in the parent molecular cloud. Conversely, the {\it lack} of $^{13}$CO emission from the broader component that is so bright in $^{12}$CO indicates it is optically {\it thin} (see Section 4.3 for detailed discussion) and due to a smaller column density of more highly excited gas. A similar combination of broad emission with narrow superposed absorption lines was observed in the molecular-cloud-interacting SNRs W44 and W28 \citep{reach05}. When there is bright emission from hot, shocked gas behind cold, unshocked gas, an absorption component appears.
The $^{12}$CO (2-1) line also has narrow components at -57.9, -13, -1.8 and 13 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}; these narrow components (line widths $<$4 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) are from cold gas in other molecular clouds along the line of sight, unrelated to G357.7+0.3. The narrow absorption line in the spectra of $^{12}$CO(4-3), $^{12}$CO(3-2), $^{12}$CO(2-1) and HCO$^+$ is anti-correlated with $^{13}$CO(1-0), where the narrow line appears in emission (see Figures \ref{smt12mspec} and \ref{apexspec}). The $^{13}$CO(3-2) line in Figure \ref{apexspec} shows a broad wing, which is from shocked gas, and $^{13}$CO(1-0) shows a narrow line with a line width of 3.9 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, which is from pre-shock gas. The $^{13}$CO(2-1) line shows a combination of the two, a broad and a narrow component. A fit with one Gaussian component to $^{13}$CO(2-1) yielded a line width of 7.5 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, as listed in Table \ref{TCOlines}. When we fit with two Gaussian components, the fit yielded a broad component with a width of 8.57$\pm$0.43 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$} and a narrow component with a width of 2.22$\pm$0.14 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. This is consistent with the idea that $^{13}$CO(2-1) shows a combination of pre-shock and post-shock CO emission. \subsection{SOFIA GREAT spectra} In Figure \ref{greatciiline} the SOFIA GREAT spectrum of [C~II] at 158$\mu$m shows a broad line with a FWHM of 15.7 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ (see Table \ref{TCOlines}). Although the detection is only 3$\sigma$, the line profile and the FWHM of [C~II] are similar to those of the broad CO lines. The line strength is equivalent to 2.84$\times$10$^{-13}$ erg~s$^{-1}$~cm$^{-2}$. Using a beam size of 14.1$''$, the surface brightness is 7.85$\times$10$^{-5}$ erg~s$^{-1}$~cm$^{-2}$ sr$^{-1}$.
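The flux-to-surface-brightness conversion above is a division by the beam solid angle; a minimal sketch, assuming a circular top-hat beam of 14.1$''$ diameter (the beam-shape assumption is ours; only the flux and beam size are from the text):

```python
import math

# [C II] line flux quoted in the text (erg s^-1 cm^-2)
line_flux = 2.84e-13

# GREAT beam: 14.1 arcsec diameter, converted to radians
theta = 14.1 * (math.pi / 180.0) / 3600.0

# Solid angle of a circular top-hat beam (sr); a Gaussian beam,
# pi*theta^2 / (4 ln 2), would give a ~13% larger solid angle
omega = math.pi * theta**2 / 4.0

surface_brightness = line_flux / omega  # erg s^-1 cm^-2 sr^-1
print(f"{surface_brightness:.2e}")
```

Under this beam assumption the result is within a few percent of the quoted 7.85$\times$10$^{-5}$ erg~s$^{-1}$~cm$^{-2}$~sr$^{-1}$.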
Ionic lines commonly originate from high velocity J-shocks \citep{hollenbach89, rho01, reach00, hewitt09}. A resolved [O~I] spectrum at 63$\mu$m of another SNR, 3C391, obtained with ISO LWS shows a line width of 100 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. This indicates that atomic fine-structure lines such as [O~I] and [C~II] can originate from J-shocks. However, the line width of [C~II] in G357.7+0.3 is small, 15.7 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, similar to the broad component of the CO lines. This suggests that [C~II] comes from the same shock responsible for the CO lines; in other words, [C~II] is from a low velocity ($\sim$15 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) shock. \citet{draine83} show that ionic lines can be more important coolants than molecular lines even for low velocity ($<$20 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) shocks \citep[see Figure 4 of][]{draine83}. Unfortunately the models do not include [C~II] at 158$\mu$m itself, so we cannot directly compare the line brightness of [C~II] with the model. Non-quasi-steady models of CJ-shocks (introducing a J-type discontinuity in a C-type flow at a point in the steady-state profile located downstream of the shock) show enhanced cooling by ions (e.g. oxygen) before molecular line cooling such as by H$_2$ and CO, as described by \citet{lesaffre04a}. These spectra may be the first direct evidence of such CJ shocks. The [C~II] line is known to be an important cooling line in J-shocks \citep{hollenbach89}, and the detection of [C~II] with a width of 16 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ indicates that cooling by [C~II] may be moderate in C-shocks or CJ-shocks, in addition to cooling by various molecular lines \citep{kaufman96}. The SOFIA spectrum of the CO (11-10) line shows no detection, as shown in Figure \ref{greatcoline}; its upper limit is 0.14 K, the RMS noise estimated between -200 and 100 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.
\begin{figure} \includegraphics[scale=0.4,angle=0]{f12.eps} \caption{SOFIA GREAT spectrum of the [C~II] line superposed on a Gaussian fit. The width of the line is $\Delta$V=16 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ and the line profile of [C~II] is similar to those of the CO lines. } \label{greatciiline} \end{figure} \begin{figure} \includegraphics[scale=0.37,angle=270]{f13.eps} \caption{SOFIA GREAT spectrum of the CO(11-10) line.} \label{greatcoline} \end{figure} \subsection{Molecular Cloud Maps Surrounding the SNR} \begin{figure} \includegraphics[scale=1,angle=0,width=10.5truecm]{f14.eps} \caption{$^{13}$CO(1-0) image of G357.7+0.3 integrated over velocities -41 to -31 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$} (greyscale) obtained with the ARO 12-m telescope. The greyscale bar indicates the summed brightness in K; to convert to K~km~s$^{-1}$, multiply by 2.72 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. Overlaid on the $^{13}$CO image are cyan-colored contours of the non-thermal radio brightness from the MOST Galactic Centre Survey \citep{gray94}. The red circle indicates the location of the OH-A1 1720 MHz maser, where broad molecular lines are seen. The image is centered on R.A.\ $17^{\rm h} 38^{\rm m} 31.9^{\rm s}$ and Decl.\ $-30^\circ$39$^{\prime} 19^{\prime \prime}$ (J2000) with a FOV of $29.4'\times 30.1'$. } \label{12m13comap} \end{figure} \normalsize \begin{figure*} \plotone{f15.eps} \caption{Maps of $^{13}$CO (1-0) emission toward the SNR G357.7+0.3 integrated over 4 velocity ranges. Each panel is labeled in the top right with its velocity in red, with the location of the highest intensity in each panel marked as a red square. The blue contours repeated on each panel are the non-thermal radio brightness. The area showing the broad $^{12}$CO lines toward the SNR is marked as a circle on the $^{13}$CO (1-0) map at -31 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.
The scale bar below the greyscale $^{13}$CO images is in units of antenna temperature summed over channels; to convert to brightness temperature integrated over velocity (K~km~s$^{-1}$), multiply by 2.3.} \label{12m13cochannelmap} \end{figure*} We have mapped the large-scale molecular cloud structure surrounding the SNR G357.7+0.3 using the $^{13}$CO(1-0) line, which typically traces cold clouds. The spatial resolution of the map is 47$''$ and the map covers 32$'$$\times$32$'$, larger than the 24$'$$\times$24$'$ extent of the SNR. The $^{13}$CO(1-0) image of G357.7+0.3 integrated over velocities -41 to -31 km~s$^{-1}$, obtained with the ARO 12-m telescope, is shown in Figure \ref{12m13comap}. The supernova remnant appears to be bounded by molecular gas, in particular in its southern, western, and northwestern portions. The sharpness of the eastern radio continuum boundary in these contours is exaggerated because of the taper of the MOST field of view. Maps of $^{13}$CO emission toward the SNR G357.7+0.3 integrated over 4 velocity ranges (+121$\pm$10, +102$\pm$10, +7$\pm$10 and -31$\pm$10 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) are shown in Figure \ref{12m13cochannelmap}. Each panel is labeled in the top right with its velocity in red, with the location of the peak marked as a red square. The blue contours repeated on each panel are the non-thermal radio brightness from the MOST Galactic Centre Survey \citep{gray94}. Representative spectra from the locations of the bright peaks of the 4 velocity maps (marked as squares in Figure \ref{12m13cochannelmap}) are shown in Figure \ref{12m13cochannelspec}. The 4 velocity components are likely at significantly different distances along the line of sight, but their kinematic distances cannot be inferred from the velocity and the Galactic rotation curve, because the line of sight is so close to the Galactic center.
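The difficulty can be illustrated with the standard kinematic-distance relation for a flat rotation curve; the Galactic constants below (R$_0$ = 8.5 kpc, V$_0$ = 220 km s$^{-1}$) are conventional values, not from the text, and the snippet only sketches why the velocity-distance mapping is degenerate near $l \approx 357.7^\circ$:

```python
import math

R0, V0 = 8.5, 220.0      # assumed IAU-style Galactic constants (kpc, km/s)
l = math.radians(357.7)  # Galactic longitude of G357.7+0.3

def v_lsr(d):
    """LSR radial velocity (km/s) at distance d (kpc), flat rotation curve."""
    R = math.sqrt(R0**2 + d**2 - 2.0 * R0 * d * math.cos(l))
    return V0 * (R0 / R - 1.0) * math.sin(l)

# Because sin(l) is tiny (~ -0.04), |v_lsr| is small at most distances and
# then varies steeply and double-valuedly near the Galactic center, so one
# observed velocity corresponds to a wide, ambiguous range of distances.
for d in (1.0, 4.0, 7.0, 10.0, 15.0):
    print(f"d = {d:5.1f} kpc -> v_lsr = {v_lsr(d):7.1f} km/s")
```

Distances of roughly 7 and 10 kpc give nearly the same velocity in this toy model, which is why no kinematic distance is quoted.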
The +7 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ component may be associated with the eastern boundary of the SNR, but this association has not been demonstrated. The supernova remnant appears to be bounded by molecular gas in the -31 km~s$^{-1}$ component, in particular in its southern, western, and northwestern portions. The velocity of this component corresponds to that of the shocked gas, and in particular to the narrow self-absorption toward the broad CO. The red circle indicates the location of the OH 1720 MHz masers, which are at -35 to -37 km~s$^{-1}$. We have already shown broad molecular and carbon line detections in its northwestern portion, and possible extended interaction sites may be found in the western and southern portions. \begin{table*} \caption[]{Observed Line Brightness in the {\it Spitzer} IRS Spectra \label{tableirsline}} \begin{center} \begin{tabular}{lllll} \hline \hline Wavelength & Line & FWHM & Line Brightness & De-reddened Brightness\\ ($\mu$m) & & ($\mu$m) & (erg~s$^{-1}$~cm$^{-2}$~sr$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~sr$^{-1}$) \\ \hline 5.5004$\pm$ 0.0016& H$_2$ S(7) & 0.056$\pm$ 0.004& 2.29E-05$\pm$ 2.52E-06& 3.39E-05$\pm$ 3.74E-06\\ 6.9104$\pm$ 0.0006& H$_2$ S(5) & 0.072$\pm$ 0.002& 8.92E-05$\pm$ 2.44E-06& 1.21E-04$\pm$ 3.30E-06\\ 8.0037$\pm$ 0.0007& H$_2$ S(4) & 0.061$\pm$ 0.002& 3.52E-05$\pm$ 2.65E-06& 6.11E-05$\pm$ 4.59E-06\\ 9.6674$\pm$ 0.0003& H$_2$ S(3) & 0.127$\pm$ 0.001& 1.20E-04$\pm$ 9.82E-07& 5.71E-04$\pm$ 4.65E-06\\ 12.2821$\pm$ 0.0002& H$_2$ S(2) & 0.082$\pm$ 0.001& 6.13E-05$\pm$ 1.60E-06& 1.14E-04$\pm$ 2.97E-06\\ 17.0310$\pm$ 0.0002& H$_2$ S(1) & 0.164$\pm$ 0.001& 2.04E-04$\pm$ 8.03E-07& 3.46E-04$\pm$ 1.36E-06\\ 28.1932$\pm$ 0.0012& H$_2$ S(0) & 0.299$\pm$ 0.003& 3.08E-05$\pm$ 3.14E-07& 4.19E-05$\pm$ 4.28E-07\\ 34.8466$\pm$ 0.0037& [Si~II] & 0.196$\pm$ 0.009& 3.19E-05$\pm$ 2.66E-06& 4.04E-05$\pm$ 3.37E-06\\ \hline \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{0.8} \end{table*} \subsection{Molecular Hydrogen with {\it
Spitzer}} The {\it Spitzer} IRS spectrum at the position where the broad CO emission from the SNR G357.7+0.3 is detected is shown in Figure \ref{g357h2}. All rotational H$_2$ lines within the IRS wavelength range are detected except the S(6) line at 6.109$\mu$m (the feature that appears around 6.1$\mu$m is part of a PAH emission feature, and the H$_2$ S(6) line is not detected; for comparison with those in other SNRs, see Hewitt et al.\ 2011). The S(7) line is a weak detection. The detected H$_2$ lines are S(0), S(1), S(2), S(3), S(4), S(5), and S(7), as listed in Table \ref{tableirsline}. Interestingly, G357.7+0.3 shows a significant lack of bright ionic lines compared with other SNRs which show H$_2$ emission \citep{andersen11, hewitt09, neufeld06}. The only ionic line detected is a weak [Si~II] line at 34.8$\mu$m. The H$_2$ maps at different wavelengths are shown in Figure \ref{g357h2maps}. The H$_2$ map at 5.5$\mu$m is too weak to show any structure, and the map at 8$\mu$m is blended with polycyclic aromatic hydrocarbon (PAH) features. The H$_2$ maps show somewhat different structures from each other. This is also seen in other regions with shocked H$_2$ emission, such as HH objects or other molecular-cloud-interacting SNRs \citep[][]{neufeld06, neufeld07}. The H$_2$ maps at 6.9 and 9.6$\mu$m (see Figures \ref{g357h2maps}a and \ref{g357h2maps}b) both show a northwestern rim along the CO blue wing emission, and the other H$_2$ maps at 12.2, 17 and 28$\mu$m (see Figures \ref{g357h2maps}c, \ref{g357h2maps}d and \ref{g357h2maps}e) show elongated emission in the east-west direction, around the CO peaks of the red wing emission. The H$_2$ maps show a concentration of emission at the peak of the CO broad emission. However, because of the different spatial resolutions of the H$_2$ (3-6$''$) and CO (beam size of 22$''$) maps, detailed structures could not be compared. A one-to-one correspondence between H$_2$ emission and broad CO emission is observed in IC 443 \citep[see][]{rho01}.
H$_2$ emission from G357.7+0.3 is probably collisionally excited by a shock, but we cannot rule out a contribution from UV pumping. Near-IR observations of vibrational H$_2$ lines are required to determine if there is a contribution from UV pumping. High resolution CO imaging, for example with ALMA, could produce CO maps with a spatial resolution comparable to or better than that of the H$_2$ images. \begin{figure} \includegraphics[width=8cm]{f16.eps} \caption{Representative $^{13}$CO(1-0) spectra of the 4 velocity channel maps in Figure \ref{12m13cochannelmap}. The extracted regions are marked as boxes located in the northeast on the +102 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ velocity map, in the north on the +121 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ map, in the east on the +7 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ map, and in the southern region on the -31 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ map. } \label{12m13cochannelspec} \end{figure} \begin{figure} \includegraphics[width=9cm]{f17.ps} \caption{{\it Spitzer} IRS spectrum of G357.7+0.3 showing strong H$_2$ lines but lacking ionic lines (only [Si~II] is shown).} \label{g357h2} \end{figure} We have examined \textit{Spitzer}\ IRAC and MIPS images (AORKEYs 14296832 and 14658560), and we did not find any emission associated with the SNR. IRAC band 2 includes bright H$_2$ lines and shows shocked H$_2$ emission for many SNRs, as shown in the GLIMPSE survey \citep{reach06}. When we carefully examined the IRAC band 2 emission, we noted that the background emission (6-7 MJy sr$^{-1}$) is a factor of 2-3 higher than that toward other SNRs, likely because the background emission near the Galactic Center is high. PAH emission appears in the spectrum of G357.7+0.3 shown in Figure \ref{g357h2}. However, it is unclear whether the PAH emission belongs to the SNR, because the area covered by the IRS observations seems to be mostly inside the SNR, except possibly for the SL1.
We examined the H$_2$ map at 9.6$\mu$m, where the northern part of the IRS image may extend outside the SNR (see Figure \ref{g357h2maps}b; note that the boundary of the shock front is unclear because of the lack of high resolution images). When we assumed the emission in the northern part to be background, the PAH emission disappeared. When we examined the mosaicked post-calibrated (pbcd) image, we do not see much variation of the PAH emission. Many middle-aged SNRs show PAH emission, as noted by \citet{andersen11}. However, because the SNR is large compared to the area we covered, future observations covering a larger area, including the area outside the SNR, are necessary to confirm or disprove that the PAH emission belongs to the SNR G357.7+0.3. We estimated the extinction by using the silicate absorption dip around 10$\mu$m with PAHFIT (Smith et al. 2007). The fit yielded an optical depth at 9.7$\mu$m, $\tau$(9.7$\mu$m), of 0.47, which is equivalent to an extinction of A$_v$ = 8.3 mag using an average value of A$_v$/$\tau$(9.7)=18.5$\pm$2.0 from observed values \citep{draine03}. This is equivalent to a line of sight column of N$_H$ = 1.5$\times$10$^{22}$ cm$^{-2}$. This estimate is comparable to the value by \citet{leahy89} assuming an average of 3$\times$10$^{21}$ cm$^{-2}$ per kpc. The column density of G357.7+0.3 in the line of sight is comparable to that of W44 \citep{rho94}. \begin{figure*} \epsscale{0.85} \includegraphics[scale=0.5,angle=0]{f18.eps} \caption{H$_2$ maps of G357.7+0.3 at (a) S(5) at 6.9$\mu$m, (b) S(3) at 9.6$\mu$m, (c) S(2) at 12.2$\mu$m, (d) S(1) at 17$\mu$m, (e) S(0) at 28$\mu$m, and (f) the CO(2-1) red wing broad line image (in red) superposed on blue wing contours. The region covered and the contours are the same as in Figure \ref{cowing2}. The colorbar applies to panels (a)-(e) and the labeled numbers apply only to panel (d), in units of MJy sr$^{-1}$.
The FOV of the images is 4.3$'$$\times$3$'$ centered on R.A.\ $17^{\rm h} 38^{\rm m} 28^{\rm s}$ and Dec.\ $-30^\circ$ 34$^{\prime} 00^{\prime \prime}$.} \label{g357h2maps} \end{figure*} \begin{figure} \plotone{f19.eps} \caption{H$_2$ excitation diagram of G357.7+0.3. The IRS data are marked as squares. The results of the two-temperature LTE fit are shown.} \label{g357h2ex} \end{figure} \begin{figure} \includegraphics[scale=1.2,angle=90,width=9truecm,height=10truecm]{f20.ps} \caption{Comparison of fitted shock models to the excitation of H$_2$. The data are shown with errors. From top to bottom: a single C-shock, two C-shocks, and a combination of one C-shock and one J-shock. The best fit results are listed in Table \ref{Tshockmodel}. In cases of multiple shocks, the slower shock is plotted with a dashed line and the faster shock with a dotted line. The total contribution of the two shock components is plotted as a solid line.} \label{g357shockmodel} \end{figure} \section{Discussion} \subsection{Excitation of Molecular Hydrogen} The set of detected rotational H$_{2}$ lines in Table \ref{tableirsline} is an excellent diagnostic of the physical conditions in the shocked gas of G357.7+0.3. An excitation diagram of the rotational H$_{2}$ lines was presented by \citet{hewitt09}. An excitation diagram from the de-reddened brightnesses of the H$_2$ lines is presented in Figure \ref{g357h2ex}. We have fit the data using a two-temperature local thermodynamic equilibrium (LTE) model. The two-temperature fit yields a warm temperature (T$_{\rm warm}$) of 197 K with a column density (N$_{\rm warm}$) of 2.3$\times$10$^{21}$ cm$^{-2}$ and an ortho-to-para ratio (OPR) of 2.1, and a hot temperature (T$_{\rm hot}$) of 663 K with a column density (N$_{\rm hot}$) of 2.7$\times$10$^{19}$ cm$^{-2}$ and an OPR of 3. The warm (197 K) component shows an OPR of $\sim$2, indicating the emission has not reached an equilibrium OPR.
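The two-temperature LTE decomposition quoted above can be evaluated directly; a sketch assuming a rigid-rotor energy ladder (E$_J$/k $\approx$ 85.3 J(J+1) K, a standard approximation, not a value from the text) and the fitted parameters given in the text:

```python
import math

B = 85.3  # H2 rotational constant in K (rigid-rotor approximation, assumed)

def level_columns(N_tot, T, opr, J_max=9):
    """Column density in each rotational level J for one LTE component.

    Ortho (odd-J) levels carry a nuclear-spin weight of 3, rescaled so the
    total ortho/para column ratio equals the fitted OPR.
    """
    cols = {}
    for J in range(J_max + 1):
        g_spin = 3.0 if J % 2 else 1.0
        w = (opr / 3.0) if J % 2 else 1.0
        cols[J] = w * g_spin * (2 * J + 1) * math.exp(-B * J * (J + 1) / T)
    Z = sum(cols.values())
    return {J: N_tot * c / Z for J, c in cols.items()}

# Two components quoted in the text: warm (197 K, 2.3e21 cm^-2, OPR 2.1)
# and hot (663 K, 2.7e19 cm^-2, OPR 3)
warm = level_columns(2.3e21, 197.0, 2.1)
hot = level_columns(2.7e19, 663.0, 3.0)
total = {J: warm[J] + hot[J] for J in warm}

# The hot component dominates only the high-J levels probed by S(5)-S(7)
print(f"warm/hot at J=3: {warm[3] / hot[3]:.1f}; at J=7: {warm[7] / hot[7]:.2g}")
```

The steep drop of the warm component toward high J is why a second, hotter component is needed to match the S(5)-S(7) lines.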
Conversion of para- to ortho-H$_2$ behind the shock depends on collisions with atomic hydrogen and is inefficient at low temperature due to the energy barrier (E/k $\sim$4000 K) in that conversion. \citet{neufeld06} suggested that the non-equilibrium H$_2$ OPRs were consistent with shock models in which the gas is warm for a time period shorter than that required for reactive collisions between H and para-H$_2$ to establish an equilibrium OPR. \citet{hewitt09} show detections of H$_2$ from six SNRs; the H$_2$ emission has a warm component with T$_w$ $\sim$250 - 550 K and a column density of $\sim$10$^{20}$ cm$^{-2}$, and a hot component with T$_h$ $\sim$ 1000 - 2000 K and a column N$_h$ $\sim$ 10$^{19}$ cm$^{-2}$. IC 443 shows a higher warm-component temperature of 627 K \citep{neufeld07}, while G357.7+0.3 shows a lower value of 197 K. There are differences between SNRs, and a non-LTE model may help to distinguish their differences in physical conditions, since the H$_2$ fitting here is done with a simplified LTE model. The H$_2$ S(7) line at 5.5$\mu$m in G357.7+0.3 lies above the two-temperature model, indicating there may be a third (hotter, $>$1000 K) component present, as other molecular SNRs show such hot temperature components \citep{hewitt09,richter95,neufeld07}. However, the S(7) line in G357.7+0.3 is relatively weak compared with those in other molecular SNRs. Near-infrared follow-up observations of H$_2$ lines, combined with our {\it Spitzer} rotational lines and a non-LTE model, may be required to identify differences in the physical conditions of H$_2$ in G357.7+0.3. Nevertheless, we find that the best-fit shock model for G357.7+0.3 differs from those of other molecular SNRs, as described below. \subsection{Implication of Shock Models from Molecular Hydrogen} We compare several shock models with the observed H$_2$ emission. We use published models; the C-shock models are from Le Bourlot et al. (2002), which were applied to Orion, and from Wilgenbus et al.
(2000), and J-shock models are from \citet{hollenbach89}. We use a grid of shock models, and the H$_2$ excitation was fitted using least-squares fitting; detailed methods are described in \citet{hewitt09}. The grid of computed C-shock models spans (log$_{10}$n$_{0}$) = 3, 4, 5, 6 cm$^{-3}$, v$_s$ = 10, 15, 20, 25, 30, 40 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, and OPR = 0.01, 1, 2, 3. The grid of J-shock models has densities of 10$^3$, 10$^4$, 10$^5$, 10$^6$ cm$^{-3}$ and shock velocities of 30-150 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ in 10 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ increments. The fitted results are shown in Figure \ref{g357shockmodel} and are summarized in Table \ref{Tshockmodel}. A two-C-shock model yielded a better fit than a one-component shock model or a combination of C-shock and J-shock models. The best fit is a combination of two slow C-shocks: a C-shock with a density of 10$^4$ cm$^{-3}$ and a velocity of 10\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, and a second C-shock with a density of 10$^5$ cm$^{-3}$ and the same velocity of 10\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. The uncertainties of the estimated density, shock velocity and OPR are limited by the available grid of shock models. For other molecular SNRs, a model of two C-shocks and a combination of C- and J-shock models gave fits of equivalent quality \citep[e.g.][]{hewitt09}. In contrast, G357.7+0.3 strongly favors the two-C-shock model over a combination of C- and J-shocks based on the H$_2$ model fitting. G357.7+0.3 also lacks ionic lines in the IRS spectra, again in contrast to other molecular SNRs shown in \citet{andersen11} and \citet{neufeld07}. Most importantly, the detection of [C~II] using SOFIA GREAT shows a FWHM of only 16 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, comparable to those of the millimeter CO lines. All these facts provide evidence for C-shocks and against the presence of J-shocks.
This is in contrast to the results for the SNR G349.7+0.2, where the presence of J-shocks is indicated by the large line width ($\sim$150 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) of a molecular water line \citep{rho15}. \begin{table} \caption[]{Summary of shock model fitting based on H$_2$ data \label{Tshockmodel}} \begin{center} \begin{tabular}{lllll} \hline \hline Model & $\Delta\chi^2$ & density & velocity & OPR \\ & & (cm$^{-3}$) & (km s$^{-1}$) & \\ \hline One C-shock & 800 & n=10$^3$ & 30 & 3 \\ \hline {\bf Two C-shock} & {\bf 17} & {\bf n=10$^4$} & {\bf 30 (or 10)} & {\bf 2}\\ & & {\bf n=10$^5$} & {\bf 10} & -- \\ \hline C-shock & 148 & n=10$^3$ & 10 & --\\ + J-shock & & n=10$^5$ & 5 & 3\\ \hline C-shock & 300 & n=10$^3$ & 20 & --\\ + J-shock & & n=10$^6$ & 150 & --\\ \hline \hline \end{tabular} \end{center} \end{table} \subsection{Line Opacity of the CO Molecular Gas} We use two pairs of $^{12}$CO and $^{13}$CO lines to estimate the line opacities of CO. The first pair is the $^{12}$CO(3-2) and $^{13}$CO(3-2) lines, and the second pair is the $^{12}$CO(2-1) and $^{13}$CO(2-1) lines. The ratio of the $^{12}$CO(3-2) and $^{13}$CO(3-2) line intensities of the broad line is 17-40, varying with velocity. The broad line is defined as the emission at velocities between -58 and -38 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ (blue CO wing) and between -31 and -27 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$} (red CO wing), and excludes the velocity range with self-absorption. The ratio between the $^{12}$CO(2-1) and $^{13}$CO(2-1) lines is almost the same as that between the $^{12}$CO(3-2) and $^{13}$CO(3-2) lines.
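The conversion from such brightness-temperature ratios to optical depths (described next) amounts to numerically inverting the relation $T_{12}/T_{13}=(1-e^{-\tau_{12}})/(1-e^{-\tau_{12}/X})$ for $\tau_{12}$. A minimal bisection sketch, assuming the isotopic abundance ratio $X=60$ adopted in the text:

```python
import math

# Invert T12/T13 = (1 - exp(-tau)) / (1 - exp(-tau/X)) for tau by
# bisection.  The ratio decreases monotonically from X (optically thin
# limit) to 1 (very optically thick), so a valid target lies in (1, X).

def tau_from_ratio(ratio, X=60.0, lo=1e-6, hi=1e3, tol=1e-8):
    def model(tau):
        return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / X))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model(mid) > ratio:   # still too thin; need larger tau
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the observed broad-line ratios of 40 and 17, this gives $\tau_{12}\approx0.9$ and $\approx3.5$, consistent with the 0.9-3.6 range quoted below; a ratio of $\sim$5 gives $\tau_{12}\approx13$.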
The ratio of the $^{12}$CO/$^{13}$CO lines is related to the optical depth of the $^{12}$CO line, $\tau_{12}$, and the abundance ratio, $X$, of $^{12}$CO to $^{13}$CO: \begin{equation} \frac{T_{12}}{T_{13}} = \frac{1-e^{-\tau_{12}}}{1-e^{-\tau_{13}}} = \frac{1-e^{-\tau_{12}}}{1-e^{-{\tau_{12}/X}}} \end{equation} We solved for $\tau_{12}$ iteratively from the observed ratio $T_{12}/T_{13}$, adopting $X=60$ \citep{lucas98}. From the observed line ratio (17-40) of the broad lines, the optical depth is in the range 0.9-3.6, indicating that the gas we observed is optically thin or only slightly optically thick. We also estimate the optical depth of the apparent absorption in the $^{12}$CO lines (at velocities between -38 and -31 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}). Estimating the depression at the center of the absorption dip as a factor of 3-17 in intensity, the optical depth is 3.6-25. The cloud thus includes optically thick gas, the portion of the cold gas located in front of the warm, shocked gas. The optical depth of CO is also estimated for the molecular clouds at velocities of 13 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ in Figures \ref{smt12mspec} and \ref{positionvelmap}. The ratio of the line brightnesses $^{12}$CO/$^{13}$CO at 13 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ is $\sim$5, which corresponds to an optical depth of 13. The clouds at 13 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ are optically thick, unshocked gas and are not related to the SNR G357.7+0.3. \begin{figure} \includegraphics[scale=0.8,angle=90]{f21.eps} \caption{Comparison of the CO line brightness to RADEX models at different H$_2$ density, $n({\rm H}_2)$, CO column density, $N({\rm CO})$, and gas temperature, $T$.
The contours are the goodness-of-fit, $\chi^2$, for models compared to all three line brightnesses; contours are 99\%, 90\% and 67\% confidence intervals.} \label{g357radexgrid} \end{figure} \begin{figure*} \includegraphics[scale=0.84,angle=0]{f22.ps} \caption{CO surface brightness diagram as a function of J$_{up}$ with HHSMT and APEX observations. The upper limit of SOFIA GREAT (green arrow) is shown. The best-fit RADEX model is shown as a dotted line (dark green).} \label{g357cosbmodel} \end{figure*} \vskip 1truecm \subsection{Large Velocity Gradient Analysis of the CO Molecular Gas} Using the three observed line brightnesses of $^{12}$CO(2-1), $^{12}$CO(3-2) and $^{12}$CO(4-3) in Table \ref{TCOlines}, we constrain the physical conditions in the shocked CO gas. For each line, the average of the APEX and HHSMT line intensities is used. We have performed a non-LTE analysis using RADEX \citep{vandertak07}, a radiative transfer code at the ``intermediate'' level; it is simpler than the most advanced methods, which drop the local approximation and solve for the intensities (or the radiative rates) as functions of depth into the cloud, as well as of velocity. We use an average velocity width of 17.3 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, and the line brightnesses calculated from the averages of the measured line integrals divided by the line widths from Table~\ref{TCOlines}. We assume that the emission is uniform on the scale of the largest beam of 30$''$. Note that we do not have spatial information on scales smaller than 30$''$, because we have only one spectrum for each line except CO(2-1), and the beam size of CO(2-1) is 30$''$. Using the RADEX model, confidence contours of the H$_2$ volume density [n(H$_2$)], CO column density [N(CO)], and gas kinetic temperature [T] are obtained as shown in Figure \ref{g357radexgrid}.
The best fit yields n(H$_2$)=1.7$^{+0.8}_{-0.5}$$\times$10$^{4}$ cm$^{-3}$, N(CO) = 5.6$^{+0.1}_{-0.1}$$\times$10$^{16}$ cm$^{-2}$, and T=75$^{+30}_{-15}$ K. The temperature inferred from the CO lines is roughly comparable to that of the `warm' H$_2$ emission traced by the low-J H$_2$ rotational lines; if these gases are from the same material, then the abundance [CO/H$_2$] is $2.4\times 10^{-5}$. That abundance is only 5\% of the maximum CO abundance that would occur if all C and most O were locked into CO molecules ($5\times 10^{-4}$), which means much of the C is likely in atomic gas (as C$^+$) or solids (contributing to the infrared continuum). Combining the volume density inferred from the CO excitation with the H$_2$ column density from the H$_2$ line brightness, the emitting region is 0.04$\pm 0.02$ pc along the line of sight. This short path length corresponds to 1$''$ on the sky, which is smaller than the resolutions of the telescopes, despite the source appearing extended. When we examine the H$_2$ image at 6.9$\mu$m (which has the highest spatial resolution of all the H$_2$ images), as shown in Figure \ref{g357h2maps}, the size of the smallest knot is about 3$''$. This is the limit of the spatial resolution of the IRAC image. Thus, the 1$''$ structures were not resolved by our H$_2$ images. The likely geometry of the emitting region is thin sheets: post-shock regions with shock fronts spanning regions larger than the beam. Figure \ref{g357cosbmodel} shows the CO surface brightness as a function of the upper rotation level (J$_{upper}$) of CO. The upper limit of the SOFIA CO(11-10) line is above the brightness from the model, which indicates that a longer observation with SOFIA GREAT would have detected the line (note that the integration time for CO(11-10) was only 5 minutes).
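The abundance and path-length estimates above follow from simple arithmetic on the quoted best-fit values; a quick consistency check (all numbers taken from the text):

```python
# Values quoted in the text.
N_CO = 5.6e16         # cm^-2, best-fit CO column density (RADEX)
N_H2 = 2.3e21         # cm^-2, warm H2 column density (excitation fit)
n_H2 = 1.7e4          # cm^-3, best-fit H2 volume density (RADEX)
CM_PER_PC = 3.086e18  # cm per parsec

abundance = N_CO / N_H2            # [CO/H2] ~ 2.4e-5
fraction = abundance / 5e-4        # ~5% of the maximum CO abundance
path_pc = N_H2 / n_H2 / CM_PER_PC  # ~0.04 pc along the line of sight
```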
The volume density, n(H$_2$), of the emitting CO gas in G357.7+0.3 is lower than those of a few SNRs (which include a density of 10$^{6}$ cm$^{-3}$) \citep[for example][]{hewitt09}, but comparable to that of W28 \citep{neufeld14}. The critical density of CO(4-3) is 3.2$\times$10$^{4}$ (3.7$\times$10$^{4}$, 2.4$\times$10$^{4}$) cm$^{-3}$ for 200 K (30, 3000 K), that of CO(3-2) is 1.03$\times$10$^{4}$ (1.1$\times$10$^{4}$, 8.6$\times$10$^{3}$) cm$^{-3}$ for 200 K (30, 3000 K), and that of CO(2-1) is 2.04$\times$10$^{3}$ (2.2$\times$10$^{3}$, 1.9$\times$10$^{3}$) cm$^{-3}$ for 200 K (30, 3000 K). Because the critical density of CO is low, CO is often used as a thermometer of the ISM. The volume density derived by the RADEX model is slightly lower than the critical densities of CO(4-3) and CO(3-2) and higher than that of CO(2-1). The shocked gas emitting CO(2-1) is collisionally dominated, while the gas emitting CO(4-3) and CO(3-2) is partially subthermal. \section{Conclusion} 1. From the relatively unknown SNR G357.7+0.3, we discover broad molecular lines of CO(2-1), CO(3-2), CO(4-3), $^{13}$CO(2-1) and $^{13}$CO(3-2), HCO$^+$ and HCN using the HHSMT, 12-Meter, APEX and MOPRA telescopes. The widths of the broad lines are 15-30 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, which are caused by strong supernova (SN) shocks passing through dense molecular clouds. The detection of such broad lines is unambiguous, direct evidence of shocked gas. This is the first evidence showing that G357.7+0.3 is a SNR interacting with molecular clouds. 2. We present the detection of shocked molecular hydrogen (H$_2$) in the mid-infrared using the {\it Spitzer} IRS observations. The observations covered an area of about 1$'$ with short-low and $\sim$3$'$$\times$1$'$ with long-low. The rotational H$_2$ lines S(0)-S(5) and S(7) are detected with the IRS. The detection of H$_2$ lines is also evidence that G357.7+0.3 is interacting with molecular clouds.
3. The two-temperature LTE fit yields a warm temperature (T$_{\rm warm}$) of 197 K with a column density (N$_{\rm warm}$) of 2.3$\times$10$^{21}$ cm$^{-2}$ and an ortho-to-para ratio (OPR) of 2.1, and a hot temperature (T$_{\rm hot}$) of 663 K with a column density (N$_{\rm hot}$) of 2.7$\times$10$^{19}$ cm$^{-2}$ and an OPR of 3. The ortho-to-para ratio of the low temperature component is less than 3, indicating that the SNR G357.7+0.3 is propagating into cold quiescent clouds. 4. We carried out [C~II] 158$\mu$m and high-J CO(11-10) observations with GREAT on board SOFIA. CO(11-10) is not detected, but the GREAT spectrum of [C~II] shows a 3$\sigma$ detection of a broad line with a width of 15.7 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}\ and a line profile similar to those of the millimeter CO lines. The line width of [C~II] implies that ionic lines can come from a low-velocity C-shock. 5. We have mapped the large-scale molecular cloud structure surrounding the SNR G357.7+0.3 using the $^{13}$CO(1-0) line, which typically traces cold clouds. The supernova component at -31 km~s$^{-1}$ (integrated over -21 to -41 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}) appears to be bounded by molecular gas located at its southern, western, and northwestern portions. We show broad molecular and carbon line detections in its northwestern portion, and possible extended interaction sites may be found toward the western and southern portions with future observations. 6. We compare shock models with the observed H$_2$ emission. A two-C-shock model yielded a better fit than a one-component shock model or a combination of C-shock and J-shock models. The best fit is a combination of two slow C-shocks: a C-shock with a density of 10$^4$ cm$^{-3}$ and a velocity of 10\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, and a second C-shock with a density of 10$^5$ cm$^{-3}$ and the same velocity of 10\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}.
G357.7+0.3 also lacks ionic lines in the IRS spectra. Most importantly, the detection of [C~II] using SOFIA GREAT shows a FWHM of $\sim$16 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}, comparable to those of the millimeter CO lines. All these facts provide evidence for C-shocks and against the presence of J-shocks. 7. We estimate the CO density, column density, and temperature by running RADEX models, using an average velocity width of 17.3 \hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}. The best fit yields n(H$_2$)=1.7$^{+0.8}_{-0.5}$$\times$10$^{4}$ cm$^{-3}$, N(CO) = 5.6$^{+0.1}_{-0.1}$$\times$10$^{16}$ cm$^{-2}$, and T=75$^{+30}_{-15}$ K. This model is consistent with the upper limit of the CO(11-10) brightness. G357.7+0.3 shows broad CO lines over a 4.5$'$$\times$5$'$ area, and the broad lines may extend to the NE and SW, beyond the area covered by our observations. The interaction area showing broad CO lines and H$_2$ emission is large, so the pattern of the molecular cloud interaction with the SNR may be similar to those of the well-known molecular SNRs IC 443 and W44. It would be worthwhile to extend the millimeter (CO) and infrared (H$_2$) maps, which would reveal the entire region of interaction between the SNR and molecular clouds. The newly discovered molecular cloud interaction with the SNR G357.7+0.3 offers many exciting opportunities as an astrophysical laboratory to study the dynamics of shocks, molecular astrochemistry, and high-energy phenomena in shocks and dense environments. \acknowledgements We thank Sebastien Bardeau, a staff scientist at IRAM, for helping with various issues with the CLASS software, and Miguel Angel Requena Torres and Friedrich Wyrowski for helping with the SOFIA and APEX observations and data processing, respectively. We thank the anonymous referee for helpful comments. The Arizona Radio Observatory is part of the Steward Observatory at the University of Arizona and receives partial support from the National Science Foundation.
Based [in part] on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy. SOFIA Science Mission Operations are conducted jointly by the Universities Space Research Association, Inc., under NASA contract NAS2-97001, and the Deutsches SOFIA Institut under DLR contract 50 OK 0901. APEX is a collaboration between the Max-Planck-Institut f\"{u}r Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
\section{Introduction}\label{section:intro} A \textit{hemisystem of lines} of a generalized quadrangle{} of order $(s^2,s)$ is a set $\mathcal{H}$ of lines such that every point $P$ is incident with $(s+1)/2$ elements of $\mathcal{H}$; that is, exactly half of the lines incident with each point lie in $\mathcal{H}$. The complementary set of lines to a hemisystem is also a hemisystem that may or may not be equivalent under the automorphism group of the generalized quadrangle{} --- if it is equivalent to its complement then we call it {\em self-complementary}. Hemisystems give rise to various other combinatorial objects, including partial quadrangles (Cameron \cite{Cameron75}), strongly regular graphs with certain parameters, and $4$-class imprimitive cometric $Q$-antipodal association schemes\footnote{In fact, these cometric association schemes have Krein array $\{(q^2+1)(q+1),(q^2-q+1)^2/q,(q^2-q+1)(q-1)/q,1;1,(q^2-q+1)(q-1)/q,(q^2-q+1)^2/q,(q^2+1)(q-1)\}$.} that are not metric (see van Dam, Martin and Muzychuk \cite{MartinMuzychukvanDam}), all of which were thought to be somewhat rare. The notion of a hemisystem was introduced in 1965 by Segre \cite{Segre65} in his work on \textit{regular systems} of the Hermitian surface, and he proved that there is a unique hemisystem of lines (up to equivalence) of the classical generalized quadrangle{} $\linear{3}$. It was long thought that this was the only hemisystem in $\linear{q}$ and indeed Thas \cite{Thas95} conjectured this as late as 1995. However, forty years after Segre's seminal paper, Cossidente and Penttila \cite{CossidentePenttila05} constructed an infinite family of hemisystems of the classical quadrangles $\linear{q}$ and other authors subsequently constructed sporadic examples in $\linear{q}$ \cite{BKLP07,CossidentePenttila09} and a single example in the non-classical generalized quadrangle{} $\ftwkb{5}$ (see \cite{BambergDeClerckDurante09}). 
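As a computational aside, the defining property of a hemisystem is easy to check by counting, for each point, how many selected lines pass through it. The sketch below is generic over an abstract incidence list; the toy structure used to exercise it is not a generalized quadrangle{} and only tests the counting logic.

```python
from collections import defaultdict

# Hemisystem check: in a GQ of order (s^2, s), every point lies on
# s + 1 lines, and a hemisystem must contain exactly (s + 1)/2 of the
# lines through each point.

def is_hemisystem(lines, H, s):
    """lines: dict mapping line label -> set of incident points;
    H: set of line labels; s: odd line parameter of the GQ."""
    on_H = defaultdict(int)
    for name in H:
        for p in lines[name]:
            on_H[p] += 1
    points = set().union(*lines.values())
    return all(on_H[p] == (s + 1) // 2 for p in points)
```

The check is linear in the number of point-line flags of $\mathcal{H}$, so it is cheap even for the larger quadrangles considered later.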
The first main result of this paper extends the complete classification of hemisystems to the (known) generalized quadrangles{} of order $(5^2,5)$. \begin{theorem} \label{thm:q=5} A hemisystem of the classical generalized quadrangle{} $\linear{5}$ is equivalent to one of the two self-complementary hemisystems described in Table~\ref{tab:H35} and a hemisystem of the Fisher-Thas-Walker-Kantor-Betten generalized quadrangle{} $\ftwkb{5}$ is equivalent to one of the three complementary pairs described in Table~\ref{tab:ftwkb5}. \end{theorem} All known generalized quadrangles{} of order $(s^2,s)$, $s$ odd, arise from flocks of the quadratic cone and hence are called \textit{flock generalized quadrangles{}}. In \cite{BambergGiudiciRoyle1} we gave a general construction for hemisystems that produces a hemisystem in every flock generalized quadrangle{}, known or unknown. In fact (as pointed out to us by Tim Penttila), our construction shows that the number of hemisystems in any infinite family of flock generalized quadrangles{} grows exponentially with the size of the generalized quadrangle{}. Therefore, far from being rare, hemisystems and their associated partial quadrangles, strongly regular graphs etc. actually exist in great profusion. Of course this is an asymptotic result only, and so in this companion paper to \cite{BambergGiudiciRoyle1}, we consider hemisystems in the small (known) flock generalized quadrangles{}, namely those of order $(s^2,s)$ for all (odd) $s \le 11$. Using a mixture of computation and analysis driven by the computational data, we discover large numbers of new hemisystems that do not arise from our general construction. Apart from the smallest generalized quadrangles{}, our searches all assume the existence of some group of symmetries stabilising the hemisystem and so are necessarily incomplete. 
Table~\ref{knownhemis} summarises the results of our investigations, dividing the hemisystems into those of Type I arising from construction of \cite{BambergGiudiciRoyle1} which we review in Section~\ref{section:construction}, and those that do not arise from this construction. In this table, notation of the form $6 \times 2 + 2$ is used to indicate that, up to equivalence under the automorphism group of the generalized quadrangle{}, there are 6 complementary pairs of hemisystems and 2 self-complementary hemisystems, for a total of 14 hemisystems. In Theorem~\ref{thm:stab} we show that a hemisystem of Type I in a generalized quadrangle{} of order $(q^2,q)$ is invariant under an elementary abelian group of order $q^2$, so one way to verify that a hemisystem is not of Type I is to show that it is not invariant under such a group. By analysing the computational data for the classical generalized quadrangles{} $\linear{q}$, we identify patterns that suggest the existence of three possible new infinite families of hemisystems. For these candidate families, we extend the computations to higher values of $q$ and, based on these computations, conjecture that just two of the three candidate families continue indefinitely. These families are discussed in Section~\ref{sec:inffamilies}. We end the paper in Section \ref{sec:probs} by discussing a number of questions and directions for future research suggested by our results. 
\renewcommand{\arraystretch}{1.2} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $q$&GQ&Type I hemisystems&Other hemisystems&Total\\ \hline\hline $3$&$\linear3$&$1$&$0$&$1$\\ \hline $5$&$\linear5$&$2$&$0$&$2$\\ &$\ftwkb5$&$1\times 2$&$2 \times 2$&$6$\\ \hline $7$&$\linear7$&$2$&$4$&$6$\\ &$\kmonom7$&$6 \times 2 + 2$&$6 \times 2+2$&$28$\\ \hline $9$&$\linear9$&$3$&$4$&$7$\\ &$\kknuth9$&$3$&$2$&$5$\\ &$\fish9$&$6 \times 2 + 9$&$4 \times 2 + 5$&$34$\\ \hline $11$&$\linear{11}$&$6$&$1 \times 2 + 5$&$13$\\ &$\ftwkb{11}$&$10 \times 2$&&$20$\\ &$\fish{11}$&$42 \times 2 + 6$&$6 \times 2$&$102$\\ &$\pentmon{11}$&$ 74 \times 2 + 8$&$18 \times 2$&$192$\\ \hline \end{tabular} \end{center} \caption{Known hemisystems in the flock generalized quadrangles{} of order $(s^2,s)$ for $s \le 11$} \label{knownhemis} \end{table} \section{Some basic background theory}\label{section:background} A {\em generalized quadrangle{}} is an incidence structure of points and lines such that if $P$ is a point and $\ell$ is a line not incident with $P$, then there is a unique line through $P$ which meets $\ell$ in a point. From this property, in the finite case, if there is a line containing at least three points or if there is a point on at least three lines, then there are constants $s$ and $t$ such that each line is incident with $s+1$ points, and each point is incident with $t+1$ lines. Such a generalized quadrangle{} is said to have \emph{order} $(s,t)$, and its point-line dual is a generalized quadrangle{} of order $(t,s)$. In this paper we are concerned with generalized quadrangles{} of order $(s^2,s)$, for $s$ odd. The classical example is the incidence structure of all points and lines of a non-singular Hermitian variety in $\mathsf{PG}(3,q^2)$, which forms the \textit{classical} {generalized quadrangle} $\linear{q}$ of order $(q^2,q)$ (see \cite[3.2.3]{FGQ}). Further examples can be constructed from BLT-sets using the Knarr model. 
We briefly outline this construction below. \subsection{Flocks of quadratic cones and BLT-sets}\label{flocks} A \textit{flock} of the quadratic cone $\mathcal{C}$ with vertex $v$ in $\mathsf{PG}(3,q)$ is a partition of the points of $\mathcal{C}\backslash\{v\}$ into conics. J. A. Thas \cite{Thas87} showed that a flock gives rise to an elation generalized quadrangle{} of order $(q^2,q)$, which we call a \textit{flock quadrangle}. A \textit{BLT-set of lines} of $\mathsf{W}(3,q)$ is a set $\mathcal{O}$ of $q+1$ lines of $\mathsf{W}(3,q)$ such that no line of $\mathsf{W}(3,q)$ is concurrent with more than two lines of $\mathcal{O}$. In \cite{BLT}, it was shown that, for $q$ odd, a flock of a quadratic cone in $\mathsf{PG}(3,q)$ gives rise to a BLT-set of lines of $\mathsf{W}(3,q)$. Conversely, a BLT-set gives rise to possibly many flocks; however, we obtain only one flock quadrangle up to isomorphism (see \cite{PayneRogers90}). For $q$ odd, Knarr \cite{Knarr92} gave a direct geometric construction of a flock quadrangle from a BLT-set of lines of $\mathsf{W}(3,q)$. Applying this construction to a \textit{linear} BLT-set of lines (i.e., a \textit{regulus} obtained from field reduction of a Baer subline) of $\mathsf{W}(3,q)$ yields a generalized quadrangle{} isomorphic to the classical object $\linear{q}$. The BLT-sets of lines of $\mathsf{W}(3,q)$ have been classified by Law and Penttila \cite{LawPenttila03} for prime powers $q$ at most $29$, and this has recently been extended by Betten \cite{Betten} to $q\le 67$. \subsection{The Knarr model} The symplectic polar space $\mathsf{W}(5,q)$ of rank $3$ is the geometry arising from taking the one-, two- and three-dimensional vector subspaces of $\mathsf{GF}(q)^6$ for which a given alternating bilinear form restricts to the zero form (i.e., the \textit{totally isotropic} subspaces).
For example, one can take this alternating bilinear form to be defined by $$\beta(\bvec{x} , \bvec{y} ) = x_1y_6-x_6y_1+x_2y_5-x_5y_2+x_3y_4-x_4y_3.$$ In particular $\beta(\bvec{x},\bvec{y})=\bvec{x}J\bvec{y}^T$ where {\small $$J=\left(\begin{array}{rrrrrr} 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&0&0&1&0&0\\ 0&0&-1&0&0&0\\ 0&-1&0&0&0&0\\ -1&0&0&0&0&0 \end{array}\right)$$} This bilinear form also determines a null polarity $\perp$ of the ambient projective space $\mathsf{PG}(5,q)$, defined by $U\mapsto U^\perp := \{\bvec{v}\in \mathsf{GF}(q)^6: \beta( \bvec{u},\bvec{v} ) =0\text{ for all } \bvec{u} \in U\}$. The ingredients of the Knarr construction are as follows: \begin{itemize} \item a null polarity $\perp$ of $\mathsf{PG}(5,q)$; \item a point $P$ of $\mathsf{PG}(5,q)$; \item a BLT-set of lines $\mathcal{O}$ of $\mathsf{W}(3, q)$. \end{itemize} Note that the totally isotropic lines and planes incident with $P$ yield the quotient polar space $P^\perp/P$ isomorphic to $\mathsf{W}(3, q)$. So we will abuse notation and identify $\mathcal{O}$ with a set of totally isotropic planes on $P$. Then we construct a generalized quadrangle{} $\mathcal{K}(\mathcal{O})$ as in Table \ref{tab:flockgq}. \begin{table}[ht] \begin{tabular}{lp{6.5cm}|lp{6.5cm}} &Points && Lines \\ \hline (i) &points of $\mathsf{PG}(5,q)$ not in $P^\perp$&(a)& totally isotropic planes not contained in $P^\perp$ and meeting some element of $\mathcal{O}$ in a line \\ (ii) &lines not incident with $P$ but contained in some element of $\mathcal{O}$&(b)& elements of $\mathcal{O}$\\ (iii)& the point $P$&&\\ \hline \\ \end{tabular} \medskip Incidence is inherited from that of $\mathsf{PG}(5,q)$. \caption{Knarr model for a flock generalized quadrangle{}} \label{tab:flockgq} \end{table} We now describe how the Knarr model leads to some obvious automorphisms of the resulting generalized quadrangle{} $\mathcal{K}(\mathcal{O})$. 
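As a quick sanity check of the Gram matrix above (performed over the integers rather than $\mathsf{GF}(q)$, which is enough to verify the entries of $J$), one can confirm that $\bvec{x}J\bvec{y}^T$ reproduces the explicit expression for $\beta$ and that the form is alternating:

```python
import random

# Gram matrix of the alternating form beta from the text.
J = [[0, 0, 0, 0, 0, 1],
     [0, 0, 0, 0, 1, 0],
     [0, 0, 0, 1, 0, 0],
     [0, 0, -1, 0, 0, 0],
     [0, -1, 0, 0, 0, 0],
     [-1, 0, 0, 0, 0, 0]]

def beta(x, y):
    # beta(x, y) = x J y^T
    return sum(x[i] * J[i][j] * y[j] for i in range(6) for j in range(6))

def beta_explicit(x, y):
    # The formula written out in the text.
    return (x[0] * y[5] - x[5] * y[0] + x[1] * y[4] - x[4] * y[1]
            + x[2] * y[3] - x[3] * y[2])
```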
Let $G$ be the semisimilarity group of the form $\beta$, that is, the group of all semilinear transformations $g$ of $\mathsf{GF}(q)^6$ for which there exists $\lambda\in\mathsf{GF}(q)$ and $\sigma\in\mathsf{Aut}(\mathsf{GF}(q))$ such that $\beta(\bvec{u}^g,\bvec{v}^g)=\lambda\beta(\bvec{u}, \bvec{v})^{\sigma}$ for all $\bvec{u}, \bvec{v}\in\mathsf{GF}(q)^6$. Let $H$ be the group of similarities of $\beta$, that is, the group of all linear transformations that preserve $\beta$ up to a scalar. Then $$H=\{A\in\mathsf{GL}(6,q)\mid AJA^T=\lambda J \text{ for some }\lambda\in\mathsf{GF}(q)\}\cong \mathsf{GSp}(6,q).$$ Let $$J'=\begin{pmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ -1&0&0&0\\ \end{pmatrix}$$ and take $P$ to be the span of $[1,0,0,0,0,0]$. Then $H_P=E\rtimes (Q\times R)$ where $$\begin{array}{rl} E &=\left\{\begin{pmatrix} 1&0&0\\ (J')^T\bvec{a}^T&I&0\\ z&\bvec{a}&1 \end{pmatrix}\Big\vert \,\, \bvec{a}\in\mathsf{GF}(q)^4,z\in\mathsf{GF}(q)\right\}\\ Q &=\left\{\begin{pmatrix} \lambda&0&0\\ 0&I&0\\ 0&0&\lambda^{-1} \end{pmatrix}\Big\vert \,\, \lambda\in\mathsf{GF}(q)\backslash\{0\}\right\}\cong C_{q-1}\\ R &=\left\{\begin{pmatrix} \lambda&0&0\\ 0&A&0\\ 0&0&1 \end{pmatrix}\Big\vert \,\, A\in\mathsf{GL}(4,q), AJ'A^T=\lambda J'\right\}\cong\mathsf{GSp}(4,q) \end{array}$$ and $(H_P)_{\mathcal{O}}=E\rtimes (Q\times R_{\mathcal{O}})\cong E\rtimes (Q\times \mathsf{GSp}(4,q)_{\mathcal{O}})$. Moreover, $G_P=\langle H_P,\sigma\rangle$, where $\sigma$ is the standard Frobenius map. Note that $\langle R,\sigma\rangle \cong\Gamma\mathsf{Sp}(4,q)$ and acts on $E/Z(E)$ as in its natural action on a 4-dimensional vector-space over $\mathsf{GF}(q)$. Moreover, $(G_P)_{\mathcal{O}}=E\rtimes (Q\rtimes \langle R,\sigma\rangle_{\mathcal{O}})\cong E\rtimes (Q\rtimes \Gamma\mathsf{Sp}(4,q)_{\mathcal{O}})$. The group $(G_P)_{\mathcal{O}}$ preserves the flock generalized quadrangle{} $\mathcal{K}(\mathcal{O})$ and contains the subgroup $Z$ of all scalar matrices. 
Hence $E\rtimes \Gamma\mathsf{Sp}(4,q)_{\mathcal{O}}\cong (G_P)_{\mathcal{O}}/Z\leqslant\mathsf{Aut}(\mathcal{K}(\mathcal{O}))$. In fact, if the flock quadrangle $\mathcal{K}(\mathcal{O})$ is not classical, then these are the only automorphisms that you get, that is, $\mathsf{Aut}(\mathcal{K}(\mathcal{O}))=E\rtimes \Gamma\mathsf{Sp}(4,q)_{\mathcal{O}}$ \cite[IV.1 and IV.2]{paynethasflockauto}. (Note: In the paper \cite{BambergGiudiciRoyle1} we incorrectly claimed that additional automorphisms could arise for the Kantor-Knuth generalized quadrangles{}.) \section{Hemisystems of Type I and their automorphisms}\label{section:construction} In this section we revise the construction given in \cite{BambergGiudiciRoyle1} and discuss the stabiliser of the resulting hemisystems. \begin{lemma}[{Bamberg, Giudici and Royle \cite{BambergGiudiciRoyle1}}]\label{eqrelation} Consider a set $\mathcal{O}$ of totally isotropic planes of $\mathsf{W}(5,q)$ each incident with a point $P$ such that $\{\pi/P:\pi\in\mathcal{O}\}$ is a BLT-set of lines of the quotient symplectic space $P^\perp/P\cong\mathsf{W}(3,q)$. Let $\ell$ be a line of $\mathsf{W}(3,q)$ not meeting any element of $\mathcal{O}$. Define a binary relation $\equiv_\ell$ on $\mathcal{O}$ by setting $\pi\equiv_\ell \pi'$ if and only if $$\pi=\pi'\quad\text{ or }\quad\{\langle Y,Y^\perp\cap\pi\rangle\mid Y\in\ell\} \cap\{\langle Y,Y^\perp\cap\pi'\rangle\mid Y\in\ell\}=\varnothing.$$ Then $\equiv_\ell$ is an equivalence relation yielding a partition of $\mathcal{O}$ into two parts of equal size. \end{lemma} \begin{theorem}[Bamberg, Giudici and Royle \cite{BambergGiudiciRoyle1}]\label{construction} Consider a set $\mathcal{O}$ of totally isotropic planes of $\mathsf{W}(5,q)$ each incident with a point $P$ such that $\{\pi/P:\pi\in\mathcal{O}\}$ is a BLT-set of lines of the quotient symplectic space $P^\perp/P\cong\mathsf{W}(3,q)$. 
Suppose that we have a line $\ell$ of $\mathsf{W}(5,q)$ not meeting any element of $\mathcal{O}$, and let $\equiv_\ell$ be the binary relation on $\mathcal{O}$ defined in Lemma \ref{eqrelation} with equivalence classes $\mathcal{O}^+$ and $\mathcal{O}^-$. Let $\mathcal{S}$ be a subset of the totally isotropic planes on $\ell$ of size $(q-1)/2$, not containing $\langle P,\ell\rangle$, and let $\mathcal{S}^c$ be the complementary set of planes on $\ell$. Let \begin{enumerate} \item[(i)] $\mathcal{L}^{+}_{\mathcal{S}}$ be the totally isotropic planes that meet some element of $\mathcal{O}^{+}$ in a line, and which meet some element of $\mathcal{S}$ in a point; and \item[(ii)] $\mathcal{L}^{-}_{\mathcal{S}^c}$ be the totally isotropic planes that meet some element of $\mathcal{O}^{-}$ in a line, and which meet some element of $\mathcal{S}^c$ in a point. \end{enumerate} Then $\mathcal{O}^+\cup \mathcal{L}^+_{\mathcal{S}}\cup \mathcal{L}^-_{\mathcal{S}^c}$ is a hemisystem of lines of $\mathcal{K}(\mathcal{O})$. \end{theorem} Recall that Cossidente and Penttila showed that for each odd $q$, there exists a hemisystem $\mathcal{H}_q$ of $\mathsf{H}(3,q^2)$ admitting $\mathsf{P\Omega}^-(4,q)$. It was shown in \cite{BambergGiudiciRoyle1} that these hemisystems could be constructed using Theorem \ref{construction}. Moreover, the number of hemisystems produced by this construction grows exponentially with $q$. To see this, note that the number of choices of $(q-1)/2$ objects from $q$ objects is the binomial coefficient ${q \choose (q-1)/2}$; asymptotically this has value $\frac{2^{q}\sqrt {2 / \pi}}{\sqrt{q+1}}$, that is, $\Theta(2^q / \sqrt{q})$. The automorphism group of the generalized quadrangle{}, on the other hand, is polynomial in size, and hence there are exponentially many inequivalent choices. \begin{theorem} \label{thm:stab} Let $\mathcal{H}$ be the hemisystem exhibited in Theorem \ref{construction} and let $G$ be the automorphism group of the generalized quadrangle{} $\mathcal{K}(\mathcal{O})$.
Then $G_{\mathcal{H}}$ contains $T\rtimes \mathsf{Sp}(4,q)_{\mathcal{O}^+,\mathcal{O}^-,\ell'}$, where $T$ is an elementary abelian group of order $q^2$ and $\ell'$ is the line of $\mathsf{W}(3,q)$ obtained by projecting $\ell$ onto $P^\perp/P$. The group $T$ acts semiregularly on the set of lines of type (a) of $\mathcal{K}(\mathcal{O})$ and fixes each line of type (b). \end{theorem} \begin{proof} Consider the group $$ E=\left\{\begin{pmatrix} 1&0&0\\ J^T\bvec{a}^T&I&0\\ z&\bvec{a}&1 \end{pmatrix} \Big\vert \,\, \bvec{a}\in\mathsf{GF}(q)^4,z\in\mathsf{GF}(q) \right\}$$ which acts on the generalized quadrangle{} $\mathcal{K}(\mathcal{O})$. Let $\mathcal{O}$ be our BLT-set, considered as a set of lines in $\mathsf{W}(3,q)$. Each $\langle \bvec{u}_1,\bvec{u}_2\rangle\in\mathcal{O}$ is identified with the 3-space $\langle P, [0,\bvec{u}_1,0],[0,\bvec{u}_2,0]\rangle$ in $V$. Note that $$[0,\bvec{u},0]\begin{pmatrix} 1&0&0\\ J^T\bvec{a}^T&I&0\\ z&\bvec{a}&1 \end{pmatrix} =[\bvec{u}J^T\bvec{a}^T,\bvec{u},0].$$ Thus $E$ fixes each plane on $P$, and hence each element of $\mathcal{O}$. Moreover, given a line $\ell$ in $P^{\perp}$ that is disjoint from every element of $\mathcal{O}$, we have that $E$ fixes $\langle P,\ell\rangle$. Now $\langle P,\ell\rangle$ contains $q^2$ lines not on $P$. If we take $m=\langle [0,\bvec{w}_1,0],[0,\bvec{w}_2,0]\rangle$ to be such a line on $\langle \ell,P\rangle$, we see that $$E_{m}=\left\{\begin{pmatrix} 1&0&0\\ J^T\bvec{a}^T&I&0\\ z&\bvec{a}&1 \end{pmatrix} \Big\vert \,\, z\in\mathsf{GF}(q),\bvec{w}_1J^T\bvec{a}^T=\bvec{w}_2J^T\bvec{a}^T=0\right\}$$ which has order $q^3$. Thus $E$ acts transitively on the set of lines of $\langle P,\ell\rangle$ not on $P$, and so we may assume that $\ell=\langle [0,\bvec{w}_1,0],[0,\bvec{w}_2,0]\rangle$. We let $\ell'=\langle \mathbf{w}_1,\mathbf{w}_2\rangle$, a totally isotropic line in $\mathsf{W}(3,q)$. Let $\mathcal{R}$ be the set of totally isotropic planes on $\ell$ other than $\langle P,\ell\rangle$.
Note that $E_{\ell}$ fixes $\mathcal{R}$ setwise. These planes are of the form $\langle \ell,[x_1,0,0,0,0,1]\rangle$ with $x_1\in\mathsf{GF}(q)$. Let $T$ be the elementary abelian subgroup of $E_{\ell}$ of order $q^2$ consisting of all elements with $z=0$. Then $$[x_1,0,0,0,0,1]\begin{pmatrix} 1&0&0\\ J^T\bvec{a}^T&I&0\\ 0&\bvec{a}&1 \end{pmatrix} =[x_1,\bvec{a},1]$$ Since $\bvec{a}\in \langle \bvec{w}_1,\bvec{w}_2\rangle^{\perp}=\langle \bvec{w}_1,\bvec{w}_2\rangle$, it follows that $[x_1,\bvec{a},1]\in\langle \ell,[x_1,0,0,0,0,1]\rangle $ and so $T$ fixes each element of $\mathcal{R}$. Let $\mathcal{S}$ be a subset of size $(q-1)/2$ of $\mathcal{R}$ and $\mathcal{S}^c$ be the complementary set of totally isotropic planes of size $(q+1)/2$. Then $T$ fixes $\mathcal{S}$ and $\mathcal{S}^c$ elementwise. Hence $T$ fixes the hemisystem $\mathcal{H}=\mathcal{O}^+\cup\mathcal{L}_{\mathcal{S}}^+\cup \mathcal{L}_{\mathcal{S}^c}^-$. Let $B\in\mathsf{Sp}(4,q)_{\mathcal{O}}$ and consider the element $$X=\begin{pmatrix} 1 & 0_{1\times 4} &0\\ 0_{4\times 1}&B&0_{4\times 1}\\ 0 &0_{1\times 4} &1 \end{pmatrix}$$ which acts on the flock generalized quadrangle{} $\mathcal{K}(\mathcal{O})$. If $B\in \mathsf{Sp}(4,q)_{\mathcal{O}^+,\mathcal{O}^-,\ell'}$ then $X$ fixes each element of $\mathcal{R}$ setwise and hence stabilises the hemisystem $\mathcal{O}^+\cup\mathcal{L}_{\mathcal{S}}^+\cup \mathcal{L}_{\mathcal{S}^c}^-$. Thus $T\rtimes \mathsf{Sp}(4,q)_{\mathcal{O}^+,\mathcal{O}^-,\ell'}\leqslant G_{\mathcal{H}}$. The lines of $\mathcal{K}(\mathcal{O})$ are the elements of $\mathcal{O}$ and the totally isotropic planes not on $P$ and meeting some element of $\mathcal{O}$ in a line. We have seen already that $T$ fixes each of the elements of $\mathcal{O}$. Now let $U=\langle P,[0,\mathbf{u}_1,0],[0,\mathbf{u}_2,0]\rangle\in\mathcal{O}$ and recall that $\langle \mathbf{u}_1,\mathbf{u}_2\rangle\cap \langle \bvec{w}_1,\bvec{w}_2\rangle=\{0\}$. 
Then $$T_{\langle [0,\mathbf{u}_1,0],[0,\mathbf{u}_2,0]\rangle}=\left\{\begin{pmatrix} 1&0&0\\ J^T\bvec{a}^T&I&0\\ 0&\bvec{a}&1 \end{pmatrix}\in T\Big\vert \,\, \bvec{u}_1J^T\bvec{a}^T=\bvec{u}_2J^T\bvec{a}^T=0\right\}.$$ Since such elements lie in $T$, they also satisfy $\bvec{w}_1J^T\bvec{a}^T=\bvec{w}_2J^T\bvec{a}^T=0$. If $\bvec{a}\neq \bvec{0}$, then the subspace $\{\bvec{x}\mid \bvec{x}J^T\bvec{a}^T=0\}$ has dimension 3, yet it contains the complementary 2-spaces $\langle \mathbf{u}_1,\mathbf{u}_2\rangle$ and $\langle \bvec{w}_1,\bvec{w}_2\rangle$. This is a contradiction, and so $T_{\langle [0,\mathbf{u}_1,0],[0,\mathbf{u}_2,0]\rangle}=1$. Thus $T$ acts regularly on the $q^2$ lines in $U$ not containing $P$, and hence acts semiregularly on the totally isotropic planes not on $P$ and meeting some element of $\mathcal{O}$ in a line. \qed\end{proof} \begin{remark} The stabiliser $G_{\mathcal{H}}$ can be larger than the group given by Theorem \ref{thm:stab}. Sometimes extra automorphisms can arise from the structure of the Knarr model. For example, if $\mathcal{S}$ were chosen to be $\{\langle \ell,[x^2,0,0,0,0,1]\rangle \mid x\in\mathsf{GF}(q)\}$ then the elements $$\begin{pmatrix} \lambda & 0_{1\times 4} &0\\ 0_{4\times 1}&I_{4\times 4}&0_{4\times 1}\\ 0 &0_{1\times 4} &\lambda^{-1} \end{pmatrix}$$ would fix $\mathcal{H}$. Similarly, suitable choices of $\mathcal{S}$ may give rise to semisimilarities of $\beta$ that stabilise $\mathcal{H}$. Moreover, $\linear{q}$ has more automorphisms than those arising from the Knarr model; the Cossidente-Penttila hemisystems in these generalized quadrangles{} admit at least $\mathsf{P}\Sigma\mathsf{L}(2,q^2)$. \end{remark} \section{Potential new infinite families of hemisystems of $\linear{q}$} \label{sec:inffamilies} Examination of our computational data uncovered three promising candidates for new infinite families of hemisystems of $\mathsf{H}(3,q^2)$, and in this section we investigate these possible families in more detail.
\subsection{Hemisystems that are invariant under a Singer type element} \label{sec:metacyclic} In this section, we present a way of viewing hemisystems of $\mathsf{H}(3,q^2)$ that are invariant under a \textit{Singer type element}, and we give some computational data which shows the existence of such hemisystems for all $q\le 29$, except (curiously) $q\in\{13,25\}$. For $q=3$ we obtain the Segre hemisystem, for $q=5$ we obtain the hemisystem invariant under $(3\cdot A_7).2$ discovered by Cossidente and Penttila \cite{CossidentePenttila05} and for $q=7,9$ the examples are given in \cite{BKLP07}. In what follows, we will work in the dual generalized quadrangle{}; the points and lines of the elliptic quadric $\mathsf{Q}^-(5,q)$. A hemisystem of lines of $\mathsf{H}(3,q^2)$ transfers to a \textit{hemisystem of points} of $\mathsf{Q}^-(5,q)$. We begin with $\mathsf{GF}(q^6)$ and equip it with the following bilinear form over $\mathsf{GF}(q)$: $$B(x,y) := \mathrm{Tr}_{q^6\to q}(xy^{q^3}).$$ (Note that $ \mathrm{Tr}_{q^6\to q}$ is the relative trace map $x\mapsto x+x^q+x^{q^2}+x^{q^3}+x^{q^4}+x^{q^5}$). This form is symmetric and defines an elliptic orthogonal space isomorphic to $\mathsf{Q}^-(5,q)$. Now let $\omega=\xi^{(q^3-1)(q+1)}$ where $\xi$ is a primitive element of $\mathsf{GF}(q^6)$. Let $K=\langle \omega\rangle$ and note that $K$ is independent of the choice of $\xi$ (it is just the set of elements $x$ such that $x^{q^2-q+1}=1$). Then $K$ is irreducible and acts semiregularly on the totally isotropic points of $\mathsf{Q}^-(5,q)$, and is occasionally known as a \textit{Singer type} isometry of $\mathsf{Q}^-(5,q)$. So the number of orbits of $K$ on totally isotropic points is $(q+1)^2$. It is not difficult to see that each point orbit is of the form $$\{\langle u\rangle \mid u^{(q^2-q+1)(q-1)}=r\},$$ where $r$ is a singular element of $\mathsf{GF}(q^6)^*$ such that $r^{(q+1)}\in\mathsf{GF}(q^3)$. 
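This orbit count can be checked directly: the elliptic quadric $\mathsf{Q}^-(5,q)$, being the point set of a generalized quadrangle of order $(q,q^2)$, has $(q+1)(q^3+1)$ points, and $|K|=(q^6-1)/\big((q^3-1)(q+1)\big)=q^2-q+1$, so semiregularity gives
\[
\frac{(q+1)(q^3+1)}{q^2-q+1}=\frac{(q+1)^2(q^2-q+1)}{q^2-q+1}=(q+1)^2
\]
orbits, as claimed.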
In what follows, we will use the underlying vectors instead of the projective points, since the equations are simpler. Note that the $K$-orbits on singular nonzero vectors are each of the form $$ \{ u\in\mathsf{GF}(q^6)^* \mid u^{q^2-q+1}=r\},\quad r\in R$$ where $$R:=\{r\in\mathsf{GF}(q^6)\mid r^{q+1}\in\mathsf{GF}(q^3), \mathrm{Tr}_{q^6\to q}(r^{q+1})=0\}.$$ The elements of $R$ lie on the mutually disjoint lines $$\ell_a:X^{q^2}-aX=0$$ where $a$ is an element of $\mathsf{GF}(q^3)$ such that $a^{q+1}+a+1=0$. So to construct a hemisystem, we need to construct a set of $\half(q+1)^2$ elements of $R$. All of the hemisystems we found were invariant under the field automorphism $\tau:a\mapsto a^{q^2}$, which fixes $\mathsf{GF}(q^2)$ elementwise and acts on the set of lines $\{\ell_a\}$. The orbits of $\langle \tau\rangle$ on $\{\ell_a\}$ are the zero sets of the $\mathsf{GF}(q^2)$-irreducible factors of the polynomial $X^{q+1}+X+1$. Now every element $r\in R$ can be uniquely represented by the pair $(r^{q^2-1},r^{q^3-1})$. The possible values of $r^{q^2-1}$ are the $q+1$ zeros of $X^{q+1}+X+1$, and the possible values of $r^{q^3-1}$ are the $q+1$ solutions to $X^{q+1}=1$; let this latter set be denoted by $N$. So a $\langle \tau\rangle$-orbit on $R$ is uniquely determined by a $\mathsf{GF}(q^2)$-irreducible factor $i(X)$ of $X^{q+1}+X+1$ and an element $n\in N$: $$\{ r\in R: i(r^{q^2-1})=0, r^{q^3-1}=n\}.$$ The hemisystems we construct arise from unions of these orbits. Below we list the hemisystems that we found for $3\le q \le 9$. In each table we describe each solution by unions of $\langle \tau\rangle$-orbits on $R$. The constituents of these unions are described by which values of $N$ appear as right-hand values for each $i(X)$. \begin{example}\hrule\medskip For $q=3$, $N=\{1,-1,z^2,z^6\}$, where $z$ is a primitive element of $\mathsf{GF}(q^2)$.
The $\mathsf{GF}(q^2)$-irreducible factors of $X^{q+1}+X+1$ are $$i_1(X):X-1\quad\text{ and }\quad i_2(X):X^3+X^2+X-1.$$ Let $\Pi$ be the subset of the ordered pairs $\{i_1,i_2\}\times N$ described by specifying the right-hand coordinates per possible left-hand coordinate: \begin{center} \begin{tabular}{|c|c|} \hline $X-1$& $1, z^6$ \\ $X^3+X^2+X-1$&$-1, z^2$ \\ \hline \end{tabular} \end{center} Now let $$\mathcal{H}^R_\Pi:=\{ r\in R \mid i(r^{q^2-1})=0, (i(X), r^{q^3-1})\in\Pi\}.$$ Then our hemisystem of points of $\mathsf{Q}^-(5,q)$ is simply $$\mathcal{H}_\Pi:=\left\{\langle u\rangle\mid u^{(q^2-q+1)(q-1)}\in \mathcal{H}^R_\Pi \right\}.$$ Moreover, we know that $\mathcal{H}_\Pi$ is projectively equivalent to the Segre hemisystem.\medskip\hrule \end{example} In each case below, $z$ denotes the primitive element of $\mathsf{GF}(q^2)$. For each $q$ below, we list one solution, and all the solutions can be obtained by taking the given solution and its orbit under the action of $\langle z\rangle$. \begin{table}[H] \begin{center}\small{ \begin{tabular}{c|c|c} $q$& $i(X)$ & $N$ \\ \hline $3$ &$X-1$& $1, z^6$ \\ & $X^3+X^2+X-1$&$-1, z^2$ \\ \hline $5$ & $X^3+2X^2-X-1$ & $1 , z^8 , z^{16}$\\ & $X^3+3X^2-1$ & $1 , z^4 , z^{20}$\\ \hline $7$ & $X+3$ & $z^{6} , z^{12} , z^{30} , z^{36} $ \\ & $X+5$ & $ 1 , -1 , z^{18} , z^{42}$ \\ & $ X^3+4X-1$ & $1,-1,z^{18} , z^{42}$ \\ & $X^3-X^2+3X-1$& $ z^{6} , z^{12} , z^{30} , z^{36}$ \\ \hline $9$ & $X-1$& $ 1, z^{8}, z^{24}, z^{56}, z^{72}$\\ & $X^3-X^2-X-1$ & $ 1, z^{16}, z^{32}, z^{48}, z^{64}$ \\ & $X^3+z^{50}X^2+z^{50}X-1 $ & $ 1, z^{8}, z^{16}, z^{64}, z^{72}$\\ & $X^3+z^{70}X^2+z^{70}X-1$ & $ 1, z^{24}, z^{32}, z^{48}, z^{56}$\\ \hline \end{tabular}} \caption{Sets $\Pi$ of ordered pairs $(i(X), r^{q^3-1})$.} \end{center} \end{table} We have found hemisystems for larger $q$ and we summarise them below. \begin{center} \begin{tabular}{c|c|c} $q$&$q^2-q+1$&Stabiliser\\ \hline 3&7&$\mathrm{PSL}(3,4). 
2$\\ 5&21&$3\cdot A_7\cdot 2$\\ 7&43&$43: 6$\\ 9&73&$73:6$\\ 11&111&$111:6$, $333:3$\\ 17&273&$273: 3$\\ 19&343&$1715: 6$\\ 23&507&$507: 6$\\ 27&703&at least $703: 3$\\ \hline \end{tabular} \end{center} \begin{problem} \label{prob:metacyclic} Does there exist a hemisystem invariant under a Singer type element for all odd prime powers $q\not\equiv 1\pmod{12}$? \end{problem} \subsection{Hemisystems invariant under the stabiliser of a triangle: tyranny of the small?} \label{sec:triangular} Another interesting sequence of hemisystems apparent in our data is that for $q=7,9$ and 11, the generalized quadrangle{} $\linear{q}$ contains a hemisystem invariant under a group $K=C_{q+1}^2:S_3$. In fact, for $q=9$ and $11$ there are several such hemisystems. Moreover, the stabiliser of the Segre hemisystem for $q=3$ contains such a subgroup, as does the group $(3\cdot A_7).2$ for $q=5$. The group $K$ can be realised as follows. The stabiliser of a nondegenerate hyperplane of $\linear{q}$ contains a group $H\cong C_{q+1}^3:(S_3\times C_{2f})$, where $q=p^f$, that fixes a set $T$ of mutually orthogonal nondegenerate points $\{\langle v_1\rangle,\langle v_2\rangle,\langle v_3\rangle\}$ of the underlying projective space. In particular, taking $v_1,v_2,v_3$ as the first three elements of a basis of the underlying vector space, the pointwise stabiliser in $\mathsf{PGU}(4,q)$ of $T$ is the group $D$ of all diagonal matrices $\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3,1)$ such that $\lambda_1^{q+1}=\lambda_2^{q+1}=\lambda_3^{q+1}=1$. Letting $\sigma$ and $\tau$ be the permutation matrices such that $\sigma:v_1\mapsto v_2\mapsto v_3\mapsto v_1$ and $\tau:v_1\mapsto v_1, v_2\mapsto v_3\mapsto v_2$, we have $\langle\sigma,\tau\rangle\cong S_3$. Moreover, $H=D: (\langle \sigma,\tau\rangle \times \langle\phi \rangle)$ where $\phi$ is the field automorphism such that $\phi:\sum\lambda_iv_i \mapsto \sum\lambda_i^pv_i$.
The group $H$ contains a normal subgroup $R$ isomorphic to $C_{q+1}^2$ given by $$R:=\{\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3,1)\mid \lambda_i^{q+1}=1,\lambda_1\lambda_2\lambda_3=1\}.$$ The group $K$ that leaves invariant a hemisystem for the values of $q$ examined is $R\rtimes \langle \sigma,\mathrm{diag}(\lambda,\lambda,\lambda,1)\tau\phi^f\rangle$, where $\lambda$ is an element of order $q+1$. So naturally we may ask: does there exist a hemisystem of $\linear{q}$ invariant under $K$ for all $q$? For $q=13$ and $q=17$, we constructed the group $K$ and, as anticipated, found hemisystems stabilised by $K$, but to our surprise the sequence appears to stop there: for $q = 19$, $23$, $25$ and $27$ there are no hemisystems stabilised by $K$. (We were sufficiently surprised by this that we ran the linear program with a second integer programming package --- GLPK --- in addition to Gurobi.) \subsection{Hemisystems invariant under $2^4.A_5$} \label{sec:fixedgp} A further interesting sequence is that for $\mathsf{H}(3,7^2)$ and $\mathsf{H}(3,11^2)$ there is a hemisystem with stabiliser of shape $2^4.A_5$. The stabiliser of the Segre hemisystem for $q=3$ also contains such a subgroup, and further calculations have verified the existence of a hemisystem invariant under $2^4.A_5$ when $q=19$. The group $\mathsf{PGU}(4,q)$ contains a subgroup $H$ isomorphic to $2^4.A_6$ for all $q\equiv 3\pmod 4$ (such a subgroup is usually referred to as a $\mathcal{C}_6$-group, or the normaliser of a symplectic type $r$-group, see for example \cite[\S 4.6]{KL}). The group $H$ contains two groups of shape $2^4.A_5$, corresponding to the two classes of $A_5$ in $A_6$. The group which arises as a stabiliser of a hemisystem for $q=3,7,11$ and $19$ is the one for which the $A_5$ acts transitively on the nontrivial elements of the $2^4$. \begin{problem} Is there a hemisystem of $\linear{q}$ invariant under $2^4.A_5$ for all $q\equiv 3\pmod 4$?
\end{problem} These hemisystems are especially intriguing (and also potentially harder to search for) as the order of their stabiliser is constant. \section{BLT-sets}\label{qclans} In this section, we list some of the known families of BLT-sets. Suppose we are in the $3$-dimensional symplectic space $\mathsf{W}(3,q)$ defined by the form $\beta(\bvec{x},\bvec{y}) = x_1y_4-x_4y_1+x_2y_3-x_3y_2$. Then from Payne's \textit{$q$-clans} (see \cite{Payne85}) we can construct BLT-sets of $\mathsf{W}(3,q)$. For the following, we note that a quadratic form $Q$ over $\mathsf{GF}(q)$ is \textit{anisotropic} if $Q(\bvec{x})=0$ holds only when $\bvec{x}=\bvec{0}$. The following lemma is straightforward to prove (see \cite[p. 296]{BakerEbertPenttila}). \begin{lemma} \label{lem:qclanrep} Consider the following lines $\mathcal{L}$ of $\mathsf{W}(3,q)$: $$\ell_\infty:= \begin{pmatrix} 0&0&1&0\\ 0&0&0&1 \end{pmatrix}, \quad \ell_t:= \begin{pmatrix} 1&0&f_t&t\\ 0&1&g_t&f_t \end{pmatrix} \text{ for all }t\in\mathsf{GF}(q).$$ Then $\mathcal{L}$ is a BLT-set of lines of $\mathsf{W}(3,q)$ if and only if for all $t,u\in\mathsf{GF}(q)$, $t\ne u$, the following quadratic form on $\mathsf{GF}(q)\oplus \mathsf{GF}(q)$ is anisotropic: $$(x,y)\mapsto (t-u)x^2+2(f_t-f_u)xy+(g_t-g_u)y^2.$$ \end{lemma} We now summarise the maps $f$ and $g$ that generate the flock quadrangles used in this paper. Our information has been taken from \cite{MaskaThesis}. Four of the families are outlined in Table \ref{tab:qclans}.
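For prime $q$, the anisotropy criterion of Lemma \ref{lem:qclanrep} can be verified by brute force. The following Python sketch (illustrative only: the parameter choices, $n=3$ over $\mathsf{GF}(7)$ for the linear family and the FTWKB clan over $\mathsf{GF}(5)$, are ours, and the field is taken to be the integers modulo $q$, so it does not cover proper prime powers) checks the condition for two of the families in Table \ref{tab:qclans}:

```python
def is_blt(q, f, g):
    # Brute-force the criterion of Lemma lem:qclanrep over GF(q), q prime:
    # for all t != u in GF(q), the binary quadratic form
    #   (t-u)x^2 + 2(f_t - f_u)xy + (g_t - g_u)y^2
    # must have no nontrivial zero (i.e. be anisotropic).
    for t in range(q):
        for u in range(t):
            a = (t - u) % q
            b = 2 * (f(t) - f(u)) % q
            c = (g(t) - g(u)) % q
            for x in range(q):
                for y in range(q):
                    if (x, y) != (0, 0) and (a*x*x + b*x*y + c*y*y) % q == 0:
                        return False
    return True

# Linear q-clan over GF(7): f_t = 0, g_t = -n t with n = 3 a nonsquare mod 7.
print(is_blt(7, lambda t: 0, lambda t: -3 * t))            # True
# FTWKB over GF(5) (5 = 2 mod 3): f_t = (3/2)t^2 = 4t^2, g_t = 3t^3.
print(is_blt(5, lambda t: 4 * t * t, lambda t: 3 * t**3))  # True
# Sanity check: n = 2 is a square mod 7, so the "linear" data fails.
print(is_blt(7, lambda t: 0, lambda t: -2 * t))            # False
```

For genuine prime powers one would of course use proper finite-field arithmetic (for instance via GAP or its \textsf{FinInG} package) instead of integers modulo $q$.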
\begin{table}[H] \begin{center} \begin{tabular}{p{3.2cm}|c|c|c|p{7cm}} Flock quadrangle& Abbreviation & $f_t$&$g_t$ & Conditions\\ \hline Linear & $\mathsf{H}(3,q^2)$ & $0$&$-nt$ &$n$ is a nonsquare in $\mathsf{GF}(q)$\\ Fisher-Thas-Walker-Kantor-Betten&$\ftwkb{q}$& $\tfrac{3}{2}t^2$&$3t^3$ &$q\equiv 2\pmod{3}$\\ Kantor Monomial&$\mathsf{K}_2(q)$& $\tfrac{5}{2}t^3$&$5t^5$ & $q\equiv \pm 2\pmod{5}$, $5$ is a nonsquare in $\mathsf{GF}(q)$\\ Kantor-Knuth&$\mathsf{K}_1(q)$& $0$&$-nt^\sigma$& $n\in\mathsf{GF}(q)$ nonsquare, $q$ not prime, $1\ne \sigma\in\mathsf{Aut}(\mathsf{GF}(q))$\\ \hline \end{tabular} \end{center} \caption{Functions $f$ and $g$ for some flock quadrangles. For each map, the variable $t$ runs over $\mathsf{GF}(q)$.} \label{tab:qclans} \end{table} For the remaining flock quadrangles considered in this paper, the representation of the BLT-set as in Lemma \ref{lem:qclanrep} is more difficult to write down, so we resort to a different model due to Penttila. Consider the dual generalized quadrangle{} of $\mathsf{W}(3,q)$, the points and lines of the parabolic quadric $\mathsf{Q}(4,q)$. So a BLT-set of lines of $\mathsf{W}(3,q)$ corresponds to a \textit{BLT-set of points} of $\mathsf{Q}(4,q)$. Consider $V:=\mathsf{GF}(q^2)\oplus \mathsf{GF}(q^2)\oplus\mathsf{GF}(q)$ as a vector space over $\mathsf{GF}(q)$, and define the following quadratic form on $V$: $$(x,y,a)\mapsto x^{q+1}+y^{q+1}+a^2.$$ This quadratic form defines a parabolic quadric $\mathcal{Q}$ isomorphic to $\mathsf{Q}(4,q)$. The following models were taken from \cite{MaskaThesis}. \subsubsection*{The Fisher BLT-sets:} Fix an element $\beta\in\mathsf{GF}(q^2)$ with $\beta^{q+1}=-1$. Let $$\mathcal{P}=\{ (\beta x^2,0,1) \mid x^{q+1}=1\} \cup \{(0,\beta y^2,1)\mid y^{q+1}=1\}.$$ Then $\mathcal{P}$ defines a BLT-set of points of $\mathcal{Q}$. 
\subsubsection*{The Penttila-Mondello BLT-sets:} Suppose $q\equiv \pm 1\pmod{10}$ and fix $\beta,\gamma\in\mathsf{GF}(q^2)$ satisfying $\beta^{q+1}=-\tfrac{4}{5}$ and $\gamma^{q+1}=-\tfrac{1}{5}$. Let $$\mathcal{P}=\{(\beta x^2,\gamma x^3,1)\mid x^{q+1}=1\}.$$ Then $\mathcal{P}$ is a BLT-set of points of $\mathcal{Q}$. For $\pentmon{11}$, we may use a representation as in Lemma \ref{lem:qclanrep} given by the functions $f_t$ and $g_t$ in Table \ref{tab:PM11}. \begin{table}[ht] \begin{center} \begin{tabular}{c|lllllllllll} $t$ & 0&1&2&3&4&5&6&7&8&9&10\\ \hline $f_t$& 8 &0&7&4&8&0&1&5&0&0&0 \\ $g_t$ &1&8&3&2&5&6&10&9&4&7&0 \end{tabular} \end{center} \caption{The functions $f_t$ and $g_t$ for $\pentmon{11}$} \label{tab:PM11} \end{table} \section{Computational methods}\label{section:methods} The {\em point-line incidence matrix} of a generalized quadrangle{} is the matrix $A$ with rows indexed by points and columns by lines such that $$ A_{P,\ell} = \begin{cases} 1, & P \text{ is on } \ell;\\ 0, & \text{otherwise}. \end{cases} $$ In order to construct the point-line incidence matrix of a flock generalized quadrangle, we used the GAP package \textsf{FinInG}\footnote{This can be found at \texttt{http://cage.ugent.be/geometry/fining.php}. This package is currently in development.}. This software can construct flock generalized quadrangles{} from the information given in Section \ref{qclans}. A hemisystem is a subset of the columns of $A$ that sum to $(s+1)/2\; \boldsymbol{j}^T$ where $\boldsymbol{j}$ is the all-ones (row) vector or, equivalently, a $\{0,1\}$-vector $\bvec{h}$ such that \begin{equation} A\bvec{h}^T = (s+1)/2\; \boldsymbol{j}^T. \end{equation} For all but the smallest generalized quadrangles, the matrix $A$ is so large that we cannot hope to solve the equations completely. To reduce the problem, we assume the existence of some group $G$ stabilizing the hemisystem. 
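The orbit-collapse that follows can be illustrated on a toy incidence structure. In this sketch (a hypothetical 4-cycle with the automorphism $p\mapsto p+2$, chosen purely for illustration; it is not a generalized quadrangle) the collapsed matrix plays the role of the matrix $B$ defined below:

```python
# Toy incidence structure: points 0..3, lines = edges of a 4-cycle.
points = [0, 1, 2, 3]
lines = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = [[1 if p in l else 0 for l in lines] for p in points]

# Orbits of the automorphism p -> p + 2 (mod 4) on points and on line indices.
point_orbits = [[0, 2], [1, 3]]
line_orbits = [[0, 2], [1, 3]]

B = []
for P in point_orbits:
    row = []
    for L in line_orbits:
        # Each point of the orbit P lies on the same number of lines of L.
        counts = {sum(A[p][j] for j in L) for p in P}
        assert len(counts) == 1
        row.append(counts.pop())
    B.append(row)
print(B)  # [[1, 1], [1, 1]]
```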
Suppose that $G$ has orbits $ \{ \mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_m \} $ on points and $ \{ \mathcal{L}_1, \mathcal{L}_2, \ldots, \mathcal{L}_n \} $ on lines. Then every point in a point-orbit $\mathcal{P}_i$ is incident with the same number of lines in the line-orbit $\mathcal{L}_j$. If we denote this number by $b_{ij}$ and define the $m \times n$ matrix $B = (b_{ij})$, then a $\{0,1\}$-vector $\bvec{h}$ such that \begin{equation}\label{tactical} B\bvec{h}^T = (s+1)/2\; \boldsymbol{j}^T \end{equation} determines a hemisystem that is stabilised by the group $G$. There are a variety of approaches to solving equations such as \eqref{tactical}. In particular, the system of equations can be viewed either as an {\em integer linear program} or as a {\em constraint satisfaction problem}. After experimenting with software for each type of problem, we determined that the commercial integer programming package Gurobi \cite{gurobi} (available with a free academic license) was the most effective for our purposes. A linear program attempts to find values for variables $x_1, x_2, \ldots, x_n$ that maximise (or minimise) a linear objective function subject to linear constraints. An {\em integer} linear program, or just integer program, is a linear program with the additional restriction that the variables must take integral values. Solving \eqref{tactical} does not involve any maximizing or minimizing and so the objective function can be taken to be a constant, say 0, and then any feasible solution $\bvec{x} = (x_1, x_2, \ldots, x_n)$ to the following integer program yields a hemisystem: \[ \begin{array}{lrcl} \textrm{Maximise:} & 0 & & \\ \textrm{subject to:} & B\bvec{x}^T & = & (s+1)/2\; \boldsymbol{j}^T\\ & x_i & \in & \{0, 1\}. 
\end{array} \] In order to find {\em all} the solutions to a given system of equations, the system is augmented as each solution is found with an additional constraint excluding that particular solution, and the system is then re-solved. When all the solutions have been found and excluded, the resulting system has no integer feasible solutions. In order to exclude a particular solution $\bvec{h} = (h_1, h_2, \ldots, h_n)$ it suffices to add a constraint of the form \[ \sum_{\{i \mid h_i = 1\}} x_i < \sum_i {h_i} \] which merely says that $\bvec{x}$ cannot agree with $\bvec{h}$ in every coordinate position, and so must differ in at least one place. In principle, a constraint of this form only eliminates vectors {\em identical} to $\bvec{h}$ and still permits the solver to investigate vectors that have almost all of their entries identical to $\bvec{h}$. However, if we know an upper bound, say $\alpha$, on the size of the intersection of two hemisystems, then we can strengthen this constraint to \begin{equation}\label{constraint} \sum_{\{i \mid h_i = 1\}} x_i \le \alpha \end{equation} without missing any hemisystems. The exhaustive search for hemisystems in $\linear{5}$ was made feasible by using two basic techniques to shorten the search time: \begin{itemize} \item Use the automorphism group of $\linear{5}$ to determine the largest possible set of lines that can freely be assumed to be in a hemisystem. \item Use knowledge of the possible intersection sizes of a hemisystem with the two known hemisystems to add strong constraints of the same type as \eqref{constraint} during the search. \end{itemize} A more detailed description of the computation for $\linear{5}$ follows: \smallskip \textsc{Proof of Theorem \ref{thm:q=5} for $\linear{5}$.} \smallskip Let $G$ be the full automorphism group of $\linear{5}$ and let $\mathcal{H}$ be a hemisystem. As $G$ is transitive on the set of lines of $\linear{5}$ we can assume without loss of generality that $\ell_1\in\mathcal{H}$. 
Then the stabiliser $G_{\ell_1}$ has two orbits on the remaining lines, those disjoint from $\ell_1$ and those that meet $\ell_1$. It is easy to see that any hemisystem containing $\ell_1$ must contain a line disjoint from $\ell_1$, and so we can arbitrarily pick a second line, say $\ell_2$, and assume without loss of generality that $\ell_1, \ell_2 \in {\mathcal H}$. This process can be continued in a semi-automated fashion as follows: suppose that we have a set $\ell_1,\ldots,\ell_i$ of lines that we can already assume are contained in $\mathcal{H}$, and consider the orbits of the setwise stabiliser $G_{\{\ell_1,\ldots,\ell_i\}}$ on lines. An orbit ${\mathcal O}$ is called {\em essential} if a search for a hemisystem that contains $\ell_1,\ldots,\ell_i$ but does not contain {\em any} line from ${\mathcal O}$ is infeasible. If ${\mathcal O}$ is essential, then ${\mathcal H}$ contains at least one line from ${\mathcal O}$, and we can select $\ell_{i+1}$ arbitrarily from ${\mathcal O}$. This process can be continued until the set of lines is sufficiently large that its stabiliser is so small that it has no essential orbits. In this fashion, we found a particular set of 8 lines $\ell_1, \ldots, \ell_8$ that can be assumed to lie in $\mathcal{H}$. The next important step was to determine that no hemisystem has a ``large'' intersection with either of the two known hemisystems. Let ${\mathcal H}_1$ and ${\mathcal H}_2$ be representatives of the two known hemisystems. First we found the maximum possible number of lines in which {\em any} hemisystem (known or unknown) can meet ${\mathcal H}_1$, by running the integer linear program in which the objective function to be maximised is the sum of the variables corresponding to the lines in ${\mathcal H}_1$. This revealed that a hemisystem different from ${\mathcal H}_1$ can intersect ${\mathcal H}_1$ in at most 306 lines.
By running the linear program again with the additional constraint that the intersection with ${\mathcal H}_1$ has size {\em exactly} 306, we determined all the hemisystems that intersect ${\mathcal H}_1$ in 306 lines and confirmed that no new hemisystems arose. We repeated this process with the ``next largest'' intersection, which proved to be size 300, then 282, then 270 and then 258, eventually confirming that any hemisystem that meets ${\mathcal H}_1$ in 258 or more lines is isomorphic to either ${\mathcal H}_1$ or ${\mathcal H}_2$. Similar results were obtained for ${\mathcal H}_2$ and similarly we determined that any hemisystem meeting ${\mathcal H}_2$ in 258 or more lines is isomorphic to ${\mathcal H}_1$ or ${\mathcal H}_2$. Finally, the exhaustive search is run where the variables corresponding to $\ell_1, \ldots, \ell_8$ are initially set to 1 and every time a hemisystem is found, it is excluded by adding a constraint similar to \eqref{constraint} with $\alpha = 257$. Notice that this constraint is much stronger than simply excluding the hemisystem that has just been found and will exclude other hemisystems. However if the just-found hemisystem is one of the two known ones, then the ``extra'' hemisystems that are excluded by the constraint are necessarily isomorphic to the known ones, and hence not of interest. Therefore if unknown hemisystems do exist, then at least one of them will be discovered by the search. As this does not occur, we conclude that there are no other hemisystems of $\linear{5}$. \qed In this computation, there is a trade-off involved in choosing the value 258 used in the constraints to exclude solutions as they are found. Using a lower value would make the final exhaustive part of the search run faster, but it would take longer to establish that only known hemisystems intersect ${\mathcal H}_1$ or ${\mathcal H}_2$ in that many lines. The computation for $\ftwkb{5}$ was done in an exactly analogous fashion. 
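The solve-and-exclude loop described in this section can be sketched as follows. Here a brute-force feasibility oracle stands in for Gurobi, and the system (one orbit equation in four 0--1 variables, with target $2$) is a toy chosen for illustration, not an actual quadrangle:

```python
from itertools import product

def solve(B, target, exclusions):
    # Return a 0-1 vector x with B x = target that violates none of the
    # exclusion constraints  sum_{i : h_i = 1} x_i < sum_i h_i,
    # one per previously found solution h; None if infeasible.
    n = len(B[0])
    for x in product((0, 1), repeat=n):
        if any(sum(row[j] * x[j] for j in range(n)) != t
               for row, t in zip(B, target)):
            continue
        if any(sum(x[j] for j in range(n) if h[j]) >= sum(h)
               for h in exclusions):
            continue
        return x
    return None

B, target = [[1, 1, 1, 1]], [2]
solutions = []
while (x := solve(B, target, solutions)) is not None:
    solutions.append(x)
print(len(solutions))  # 6, the weight-2 vectors of length 4
```

In the actual searches the oracle is an integer programming solver and the exclusion constraints are strengthened with the intersection bound $\alpha$ of \eqref{constraint}, which prunes far more of the search space.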
\section{A summary of the known hemisystems of flock quadrangles}\label{known} In this section we catalogue all the known hemisystems of lines of flock quadrangles of order $(s^2,s)$ for $s\le 11$. These include those which arise in the pre-existing literature, those obtained via Theorem \ref{construction}, and numerous further examples constructed by computer. Each row of the table describes a complementary pair of hemisystems; the column SC (for ``self-complementary'') indicates whether the hemisystem is equivalent to its complement in which case it contributes just 1 to the total count of hemisystems. The tables contain an exhaustive listing of all the hemisystems that arise by Theorem \ref{construction} and are complete for the known generalized quadrangles{} of order up to $(5^2,5)$. However there may be many more hemisystems, though necessarily with small automorphism groups, that remain to be found. \begin{proposition} Let $\mathcal{H}$ be a hemisystem of a flock quadrangle of order $(s^2,s)$ with $s\le 9$ such that $\mathcal{H}$ arises from Theorem \ref{construction}. Then $\mathcal{H}$ appears in one of the tables in this section. \end{proposition} We also list all hemisystems arising from Theorem \ref{construction} for $\linear{11}$ in Table \ref{tab:H311}. Due to the large number of hemisystems of Type I for the remaining flock quadrangles of order $(11^2,11)$, they are listed in the Appendix, which is only included in the version of this paper on the \textsc{arxiv}. \begin{quote} \begin{framed} \begin{center} \emph{Reconstruction of the hemisystems from the data} \end{center} The data given for the Type I hemisystems in our tables is sufficient to reconstruct the actual hemisystem given some additional knowledge about the particular choices that have been made for the variables in the construction. 
First of all, the finite fields in \textsf{GAP} have a determined primitive element and the ordering of the elements of the field is first graded by towers of subfields, and then by exponents of the primitive element. Matrices in \textsf{GAP} are ordered lexicographically, row by row. The point $P$ is $(1,0,0,0,0,0)$ and the BLT-sets are the ones given in Section \ref{qclans}. Each totally isotropic plane can be represented uniquely by a $3\times 6$ matrix written in Hermite normal form, whose row space gives us the corresponding 3-dimensional vector subspace. The totally isotropic planes on $\ell$ are sorted by sorting the corresponding $3\times 6$ matrices into lexicographic order, and indexed by $\{1,\ldots,q+1\}$. The chosen subset ${\mathcal S}$ is given by a $(q-1)/2$ subset of this index set. \end{framed} \end{quote} \subsection{Linear, $\linear{3}$} Segre \cite{Segre65} established that there is just one example of a hemisystem (up to projectivity) in $\linear{3}$. The strongly regular graph (and partial quadrangle) arising is the Gewirtz graph on $56$ vertices. \begin{table}[H] \begin{center} \begin{tabular}{l|l|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $\mathsf{PSL}(3,4).2$ & 40320 & true & Theorem \ref{construction}, Segre \cite{Segre65},& $\left[\begin{smallmatrix} 0&1&0&1&0&0\\ 0&0& 1&1 &1& 0\\ \end{smallmatrix}\right]$ & any \\ &&&Sections \ref{sec:metacyclic}, \ref{sec:triangular} and \ref{sec:fixedgp}&& \\ \hline \end{tabular} \end{center} \caption{The hemisystem of $\linear{3}$.} \label{tab:H39} \end{table} \subsection{Linear, $\linear{5}$} The full automorphism group of this generalized quadrangle{} is $\mathsf{P}\Gamma\mathsf{U}(4,5)$ which has order $2^9 \times 3^4 \times 5^6 \times 7 \times 13$. There were two previously known hemisystems in this generalized quadrangle{} and our computer searches have confirmed that there are no more. 
\begin{table}[H] \begin{center} \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $\mathsf{P\Sigma L}(2,25)$&15600& true & Theorem \ref{construction}, Cossidente--Penttila \cite{CossidentePenttila05}& $\left[\begin{smallmatrix} 0&1&0&1&0&0\\ 0&0& 1&0 &1& 0\\ \end{smallmatrix}\right]$ & any \\ \hline $(3\cdot A_7).2$&15120& true & Cossidente--Penttila \cite{CossidentePenttila05}, Sections \ref{sec:metacyclic} and \ref{sec:triangular}\\ \cline{1-4} \end{tabular} \end{center} \caption{The hemisystems of $\linear{5}$.} \label{tab:H35} \end{table} \subsection{Fisher-Thas/Walker/Kantor/Betten, $\ftwkb{5}$}\label{FTWKB5} The full automorphism group of this generalized quadrangle{} is $5^{1+4}:(\mathsf{SL}(2,9):C_4)$, which has order $2^6\times 3^2\times 5^6$. There was one previously known hemisystem of this generalized quadrangle{} in the literature and Theorem \ref{construction} yields a second example. Our computer searches uncovered a third example with group $S_3$, and confirmed that there are no more. \begin{table}[H] \begin{center} \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $C_5^2:(C_4\times S_3)$& 600 &false&Theorem \ref{construction}& $\left[\begin{smallmatrix} 0&1&0&0&1&0\\ 0&0& 1&1 &0 & 0\\ \end{smallmatrix}\right]$ & any\\ \hline $\mathsf{AGL}(1,5)\times S_3$&120& false& Bamberg--De Clerck--Durante \cite{BambergDeClerckDurante09}\\ $S_3$& 6& false& \textbf{New}\\ \cline{1-4} \end{tabular} \end{center} \caption{The hemisystems of $\ftwkb{5}$.} \label{tab:ftwkb5} \end{table} \subsection{Linear, $\linear{7}$} The full automorphism group of this generalized quadrangle{} is $\mathsf{P}\Gamma\mathsf{U}(4,7)$, which has order $2^{13}\times 3^2\times 5^2\times 7^6\times 43$. There were five previously known hemisystems in this quadrangle and our computer searches have uncovered a sixth. 
\begin{table}[H] \begin{center} \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $\mathsf{P\Sigma L}(2,49)$&117600 &true&Theorem \ref{construction}, Cossidente--Penttila \cite{CossidentePenttila05}& $\left[\begin{smallmatrix} 0&1&0&1&0&0\\ 0&0& 1&1 &1& 0\\ \end{smallmatrix}\right]$& $\{1,3,4\} $ \\ $C_2 \times (C_7^2 : Q_{16})$&1568&true&Penttila (personal communication), & & $\{1,3,5\} $\\ &&&Theorem \ref{construction}&&\\ \hline $2^4.A_5$&960&true&Bamberg--Kelly--Law--Penttila \cite{BKLP07}\\ &&&Section \ref{sec:fixedgp}\\ $C_2\times (C_{43}:C_{6})$&516&true&Bamberg--Kelly--Law--Penttila \cite{BKLP07}\\ $C_8^2:S_3$ & 384&true& \textbf{New}, Section \ref{sec:triangular}\\ $C_2\times\mathsf{PSL}(2,7)$&336&true&Cossidente--Penttila \cite{CossidentePenttila09}, Section \ref{sec:metacyclic} \\ \cline{1-4} \end{tabular} \end{center} \caption{Known hemisystems of $\linear{7}$.} \end{table} \subsection{Kantor Monomial, $\kmonom{7}$} The full automorphism group of this generalized quadrangle{} is $7^{1+4}:(C_3 \times (Q_8:(\mathsf{SL}(2,3).2):2))$, which has order $2^8\times 3^2\times 7^5$. In addition to the 14 examples obtained by Theorem \ref{construction}, we have found a further 15 hemisystems; all are listed in Table~\ref{tab:km7}. 
\begin{table}[H] \begin{center} \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $C_7^2:(C_3\times \mathsf{SL}(2,3))$&3528&false&Theorem \ref{construction}& $\left[\begin{smallmatrix} 0&1&0&1&0&0\\ 0& 0&1 &0& 1&0\\ \end{smallmatrix}\right]$& $\{1,3,4\} $\\ $C_7^2:(\mathsf{SL}(2,3).2)$&2352&false&Theorem \ref{construction}& & $\{1,3,5\} $\\ \hline $C_7^2:(Q_{16}\times C_3)$&2352&false&Theorem \ref{construction}& $\left[\begin{smallmatrix} 0&1&0&0&1&0\\ 0&0& 1&4 &0& 0\\ \end{smallmatrix}\right]$ & $\{1,3,4\} $\\ $(C_7^2:Q_{16})\times C_2$&1568&false&Theorem \ref{construction}& & $\{1,3,5\} $ \\ \hline $C_7^2:(C_6\times C_3)$&882&true&Theorem \ref{construction}& $\left[\begin{smallmatrix} 0&1&0&0&1&0\\ 0&0&1&3&0&0\\ \end{smallmatrix}\right]$ & $\{1, 3, 4\} $\\ $C_7^2:(C_3:C_4)$&588&true&Theorem \ref{construction}& & $\{1, 3, 5\} $ \\ \hline $C_7^2:C_{12}$&588&false&Theorem \ref{construction}& $\left[\begin{smallmatrix} 0&1&0&0&1&0\\ 0&0&1&1&0&0\\ \end{smallmatrix}\right]$ & $\{1,3,4\} $\\ $C_7^2:Q_8$&392&false&Theorem \ref{construction}& & $\{1, 3, 5\} $\\ \hline $C_3\times F_{42}$&126&false&\textbf{New}\\ $C_3\times F_{42}$&126&true&\textbf{New}\\ $C_2\times (C_7:C_3)$&42&false&\textbf{New}\\ $\mathsf{AGL}(1,7)$&42&false&\textbf{New}\\ $(C_2\times Q_8):C_2$&32&false&\textbf{New}\\ $(C_2\times Q_8):C_2$&32&false&\textbf{New}\\ $C_7:C_3$&21&false&\textbf{New}\\ $C_7:C_3$&21&true&\textbf{New}\\ $C_3$ &3 &true&\textbf{New}\\ \cline{1-4} \end{tabular} \end{center} \caption{Known hemisystems of $\kmonom{7}$.} \label{tab:km7} \end{table} \subsection{Linear, $\mathsf{H}(3,9^2)$} The full automorphism group of this generalized quadrangle{} is $\mathsf{P}\Gamma\mathsf{U}(4,9)$, which has order $2^{12}\times3^{12}\times 5^3\times41\times73$. 
In addition to the two previously known hemisystems, we found two more arising from Theorem~\ref{construction} and three others; all are listed in Table~\ref{tab:lin9}. \begin{table}[H] \begin{center} \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $\mathsf{P\Sigma L}(2,81)$& 1062720 &true&Theorem \ref{construction}, Cossidente--Penttila \cite{CossidentePenttila05}& $\left[\begin{smallmatrix} 0&1&0&1&0&0\\ 0&0& 1&0 &1& 0\\ \end{smallmatrix}\right]$ & $\{1,3,4,5\}$\\ $C_3^4:(C_{20}:C_4)$ & 6480 & true &Theorem \ref{construction}& & $\{1,3,5,6\}$\\ $C_3^4:(C_{5}:C_8)$ & 3240& true & Theorem \ref{construction}& & $\{1,3,5,9\}$ \\ \hline $C_{73}:C_{12}$ & 876 & true &Bamberg--Kelly--Law--Penttila \cite{BKLP07},\\ &&& Section \ref{sec:metacyclic} \\ $(C_{10}^2 : C_4) : C_3$ &1200& true &\textbf{New}, Section \ref{sec:triangular}\\ $C_{10}^2:S_3$ &600& true &\textbf{New}, Section \ref{sec:triangular}\\ $(C_5 \times (C_5 : C_4)) : C_4$ &400&true &\textbf{New}\\ \cline{1-4} \end{tabular} \end{center} \caption{Known hemisystems of $\linear{9}$.} \label{tab:lin9} \end{table} \subsection{Kantor-Knuth, $\mathsf{K}_1(9)$} The full automorphism group of this generalized quadrangle{} is $E_9 : (((\mathsf{SL}(2,9).C_4) : C_8) : C_2)$ where $E_9$ is the Heisenberg group of order $9^5$ with centre of order $9$. The order of the automorphism group is $2^{10}\times 3^{12}\times5$. 
\begin{table}[H] \begin{center} \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline $C_3^4:(C_4\times C_{8})$&2592&true&Theorem \ref{construction}& $\left[\begin{smallmatrix} 0&1&0&1&0&0\\ 0&0&1&0&1&0\\ \end{smallmatrix}\right]$ & $\{1,3,5,9\}$ \\ $C_3^4:(C_2\times C_{8})$&1296&true&Theorem \ref{construction}& & $\{1,3,5,6\}$ \\ $C_3^4:C_{8}$&648&true&Theorem \ref{construction}& & $\{1,3,4,5\}$ \\ \hline $C_4 \times \mathsf{AGL}(1,9)$ &288&true &\textbf{New}\\ $\mathsf{AGL}(1,9)$ & 72&true &\textbf{New}\\ \cline{1-4} \end{tabular} \end{center} \caption{Known hemisystems of $\kknuth{9}$.} \end{table} \subsection{Fisher, $\fish{9}$} The full automorphism group of this generalized quadrangle{} is $E_9 :(C_5^2: (D_{16}.Q_8))$ which has order $2^{7} \times 3^{10} \times 5^{2}$. (Here $E_9$ is the Heisenberg group of order $9^5$ with centre of order $9$.) \begin{center} \begin{table}[H] \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline \input Fi9Buggers3 \hline $ C_2 \times \mathsf{AGL}(1,9)$ &144& true & \textbf{New}\\ $\mathsf{AGL}(1,9)$ & 72 & $4 \times 2 + 4$ & \textbf{New}\\ \cline{1-4} \end{tabular} \caption{Known hemisystems of $\fish{9}$.} \end{table} \end{center} \subsection{Linear, $\linear{11}$} The full automorphism group of this generalized quadrangle{} is $\mathsf{P}\Gamma\mathsf{U}(4,11)$, which has order $2^{10} \times 3^{4} \times 5^{2} \times 11^{6} \times 37 \times 61$. 
\begin{center} \begin{table}[H] \begin{tabular}{l|r|c|c|c|l} Group & \multicolumn{1}{|c|}{Size}& SC & Construction/Author(s) & $\ell$ & Subset ${\mathcal S}$\\ \hline \input Linear11Buggers2 \hline $3.A_6.2$ &2160 & true &\textbf{New}\\ $C_{333}: C_6$ & 1998& true &\textbf{New}, Section \ref{sec:metacyclic}\\ $2^4.A_5$ & 960 & true&\textbf{New}, Section \ref{sec:fixedgp}\\ $C_{12}^2:S_3$& 864 & true &\textbf{New}, Section \ref{sec:triangular}\\ $C_{12}^2:S_3$& 864 & true &\textbf{New}, Section \ref{sec:triangular}\\ $C_{111}: C_6$ & 666& false & \textbf{New}, Section \ref{sec:metacyclic}\\ \cline{1-4} \end{tabular} \caption{Known hemisystems of $\linear{11}$.} \label{tab:H311} \end{table} \end{center} \subsection{Fisher-Thas-Walker-Kantor-Betten, $\ftwkb{11}$} The full automorphism group of this generalized quadrangle{} is $11^{1+4}\rtimes \mathrm{GL}(2,11)$ which has order $2^{4} \times 3 \times 5^{2} \times 11^{6}$. There are 20 hemisystems of Type I, listed in the Appendix\footnote{Due to its size, this Appendix is only included in the \textsc{arxiv} version of this paper}, and we do not know any other hemisystems in this generalized quadrangle{}. \subsection{Fisher, $\fish{11}$} The full automorphism group of this generalized quadrangle{} is $11^{1+4}:(C_5 \times (((C_3 \times (C_3 : C_4)) : Q_8) : C_2))$ which has order $2^{6} \times 3^{2} \times 5 \times 11^{5}$. There are 90 hemisystems of Type I, listed in the Appendix\footnotemark[3], and we know 12 further hemisystems listed in Table~\ref{tab:nonbgrfi11}. 
{\small \begin{center} \begin{longtable}{|l|r|c|} \hline \multicolumn{1}{|c|}{Group} & \multicolumn{1}{c|}{Size}& \multicolumn{1}{c|}{Number}\\ \hline $\mathsf{AGL}(1,11)$ &110& $6 \times 2$ \\ \hline \caption{Non Type I hemisystems of $\fish{11}$} \label{tab:nonbgrfi11} \end{longtable} \end{center} } \subsection{Penttila-Mondello, $\pentmon{11}$} The full automorphism group of this generalized quadrangle{} is $11^{1+4}\rtimes (C_5\times (C_3 \times \mathrm{SL}(2,3).2):2 )$ which has order $2^{5} \times 3^{2} \times 5 \times 11^{5}$. There are 164 hemisystems of Type I, listed in the Appendix\footnotemark[3], and we know 36 further hemisystems listed in Table~\ref{tab:nonbgrmon11}. {\small \begin{center} \begin{longtable}{|l|r|c|} \hline \multicolumn{1}{|c|}{Group} & \multicolumn{1}{c|}{Size}& \multicolumn{1}{c|}{Number}\\ \hline $\mathsf{AGL}(1,11)$ &110& $18 \times 2$ \\ \hline \caption{Non Type I hemisystems of $\pentmon{11}$} \label{tab:nonbgrmon11} \end{longtable} \end{center} } \section{Open Problems} \label{sec:probs} We saw in Section \ref{section:construction} that in any infinite family of generalized quadrangles{} of order $(q^2,q)$ the number of hemisystems arising from Theorem~\ref{construction} grows exponentially in $q$. Hemisystems that do not arise from Theorem \ref{construction} are then of particular interest. \begin{problem} Does every flock generalized quadrangle{} of order $(s^2,s)$ with $s \ge 7$ contain a hemisystem that does not arise from Theorem~\ref{construction}? \end{problem} We have found such hemisystems in all of the generalized quadrangles{} that we have examined, with the exception of the small cases ($\linear{3}$ and $\linear{5}$) and $\ftwkb{11}$. Although we have outlined two possibilities for infinite families of hemisystems in $\linear{q}$ in Section \ref{sec:inffamilies}, we do not have any proven general constructions for hemisystems other than Theorem~\ref{construction}.
\begin{problem} Find a natural construction for an infinite family of hemisystems (not of Type I) in $\linear{q}$ or in one of the known families of non-classical generalized quadrangles{}. \end{problem} By Theorem \ref{thm:stab}, a hemisystem coming from Theorem \ref{construction} is invariant under a particular elementary abelian group of order $q^2$ denoted by $T$. However, we do not know if the converse is true. \begin{problem} Are there hemisystems invariant under the elementary abelian group $T$ of order $q^2$ described in Theorem~\ref{thm:stab} that do not arise from Theorem \ref{construction}? \end{problem} At the other end of the symmetry spectrum, we currently do not know of any hemisystems with a trivial group. However, this is not surprising, as almost all of our searches have assumed the existence of symmetries. \begin{problem} Is there a hemisystem with a trivial group? \end{problem} We expect a positive answer, although it may be challenging to find such a hemisystem. Each hemisystem gives a strongly regular graph, and the stabiliser of the hemisystem in the automorphism group of the generalized quadrangle{} gives a group of automorphisms of the strongly regular graph. In all cases investigated so far, the automorphism group of the strongly regular graph is induced by the stabiliser of the hemisystem in the automorphism group of the generalized quadrangle. It is not apparent why this should always be the case. \begin{problem} Is the full automorphism group of the strongly regular graph obtained from a hemisystem always induced by the stabiliser of the hemisystem in the automorphism group of the generalized quadrangle? \end{problem} \begin{problem} Are there hemisystems in different generalized quadrangles{} whose associated strongly regular graphs are isomorphic? \end{problem} \section*{Acknowledgements} The authors are extremely grateful to Simon Guest for his computational assistance.
\bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} \label{sec:introduction} \let\thefootnote\relax\footnotetext{ $^*$ A. Ignatov and R. Timofte (\{andrey,radu.timofte\}@vision.ee.ethz.ch, ETH Zurich) are the challenge organizers, while the other authors participated in the challenge.\\ The Appendix~\ref{sec:affiliations} contains the authors' teams and affiliations.\\ AIM 2020 webpage: \url{https://data.vision.ee.ethz.ch/cvl/aim20/}} The advances in image manipulation tasks are impressive. In particular, image manipulation on portable devices such as smartphone cameras has recently attracted a surge of interest from the research community, driven by the users' demands. Multiple novel solutions were proposed in the literature for various tasks, such as image quality enhancement~\cite{ignatov2017dslr,ignatov2018pirm,Timofte_2018_CVPR_Workshops,NTIRE_Dehazing_2019}, style transfer~\cite{gatys2016image,johnson2016perceptual,luan2017deep}, learning of an image signal processor (ISP)~\cite{ignatov2020replacing}, photo segmentation and blurring~\cite{wadhwa2018synthetic,shen2016automatic,chen2017deeplab,badrinarayanan2017segnet}, etc. Moreover, modern mobile devices are equipped with powerful GPUs and NPUs that are well suited to running the proposed deep learning models~\cite{ignatov2018ai,ignatov2019ai}. Rendering an automatic bokeh effect has been one of the most popular topics over the past few years, and many solutions are now included within the majority of smartphone camera applications. In 2014, a seminal work on portrait segmentation~\cite{GoogleBlur2014} was published, and substantial improvements in segmentation accuracy were reported in many subsequent papers~\cite{shen2016automatic,xu2017deep}. Wadhwa~{\emph{et al.\ }}~\cite{wadhwa2018synthetic} provided a detailed description of the synthetic depth-of-field rendering method found in the Google Pixel phones and inspired further development in this field.
The AIM 2020 challenge on rendering realistic bokeh builds upon the success of the previous AIM 2019 challenge~\cite{ignatov2019aim}, and advances the benchmarking of example-based single image bokeh effect rendering by introducing two tracks with evaluation on several recent-generation desktop CPUs and smartphone GPUs. The AIM 2020 challenge uses the large-scale EBB!~\cite{ignatov2020rendering} dataset consisting of photo pairs with shallow and wide depth-of-field captured using the Canon 70D DSLR camera. Quantitative and qualitative visual results as well as the inference time and efficiency are used for ranking the proposed solutions. The challenge, the corresponding dataset, the results and the proposed methods are described and discussed in the next sections. This challenge is one of the AIM 2020 associated challenges on: scene relighting and illumination estimation~\cite{elhelou2020aim_relighting}, image extreme inpainting~\cite{ntavelis2020aim_inpainting}, learned image signal processing pipeline~\cite{ignatov2020aim_ISP}, rendering realistic bokeh~\cite{ignatov2020aim_bokeh}, real image super-resolution~\cite{wei2020aim_realSR}, efficient super-resolution~\cite{zhang2020aim_efficientSR}, video temporal super-resolution~\cite{son2020aim_VTSR} and video extreme super-resolution~\cite{fuoli2020aim_VXSR}. \section{AIM 2020 Challenge on Realistic Bokeh} The objectives of the AIM 2020 challenge on rendering a realistic bokeh effect are to promote realistic settings as defined by the \textit{EBB!} Bokeh dataset, to push the state-of-the-art in synthetic shallow depth-of-field rendering, and to ensure that the final solutions are efficient enough to run both on desktop and mobile hardware. \subsection{\textit{Everything is Better with Bokeh!} Dataset} \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{figsBokeh/dataset.png} \caption{Sample wide and shallow depth-of-field image pairs from the EBB!
dataset.} \label{fig:sample_images} \end{figure*} One of the biggest challenges in the bokeh rendering task is to get high-quality real data that can be used for training deep models. To tackle this problem, we used the large-scale \textit{Everything is Better with Bokeh!} (EBB!) dataset presented in~\cite{ignatov2020rendering}, containing more than 10 thousand images collected in the wild over several months. By controlling the aperture size of the lens, images with shallow and wide depth-of-field were taken. In each photo pair, the first image was captured with a narrow aperture (f/16) that results in a normal sharp photo, whereas the second one was shot using the widest aperture (f/1.8), leading to a strong bokeh effect. The photos were taken during the daytime in a wide variety of places and in various illumination and weather conditions. The photos were captured in automatic mode, and the default settings were used throughout the entire collection procedure. An example set of collected images is presented in Figure~\ref{fig:sample_images}. The captured image pairs are not aligned exactly; therefore, they were first matched using SIFT keypoints and the RANSAC method, as in~\cite{ignatov2017dslr}. The resulting images were then cropped to their intersection part and downscaled so that their final height is equal to 1024 pixels. From the resulting 10 thousand images, 200 image pairs were reserved for testing, while the other 4.8 thousand photo pairs can be used for training and validation. \subsection{Tracks and Competitions} The challenge consists of the following phases: \vspace{-0.8mm} \begin{enumerate} \setlength\itemsep{-0.2mm} \item[i] \textit{development:} the participants get access to the data; \item[ii] \textit{validation:} the participants have the opportunity to validate their solutions on the server and compare the results on the validation leaderboard; \item[iii] \textit{test:} the participants submit their final results, models, and factsheets.
\end{enumerate} \vspace{-0.8mm} All submitted solutions were evaluated based on the following measures: \vspace{-0.8mm} \begin{itemize} \setlength\itemsep{-0.2mm} \item PSNR measuring fidelity score, \item SSIM, a proxy for perceptual score, \item The runtime of the submitted models on desktop CPUs and mobile GPUs, \item MOS scores measured in the user study for explicit image quality assessment. \end{itemize} \vspace{-0.8mm} The AIM 2020 challenge on realistic bokeh consists of two tracks. In the first, ``CPU'' track, the target was to produce a model whose runtime is optimized for standard desktop CPUs. In the second, ``Smartphone GPU'' track, the goal was to develop a TensorFlow Lite~\cite{TensorFlowLite2018} compatible solution that was tested on several mobile GPUs using a publicly available$^1$\footnotetext{$^1$ \url{http://ai-benchmark.com}} \textit{AI Benchmark} application~\cite{ignatov2019ai} and an OpenCL-based TFLite GPU delegate~\cite{TFLite2019GPU}. During the development and validation phases, the quantitative performance of the solutions was measured by the PSNR and SSIM metrics. Since SSIM and PSNR scores do not reflect many aspects of the real quality of the resulting images, during the final test phase we evaluated the solutions based on their Mean Opinion Scores (MOS). For this, we conducted a user study evaluating the visual results of all proposed methods. The users were asked to rate the quality of each submitted solution by selecting one of the five quality levels (5 - comparable perceptual quality, 4 - slightly worse, 3 - notably worse, 2 - poor perceptual quality, 1 - completely corrupted image) for each method result in comparison with the original Canon images exhibiting the bokeh effect. The expressed preferences were averaged per test image and then per method to obtain the final MOS. \section{Challenge Results} \begin{table*}[tbh!]
\centering \resizebox{\linewidth}{!} { \begin{tabular}{l|c|cc|cccc} \multicolumn{2}{c|}{}&\multicolumn{2}{c|}{Factsheet Info}&\multicolumn{4}{c}{Track 1: Desktop CPU}\\ \hline Team \, & \, Author \, & \, Framework \, & \, Training Hardware, GPU \, & \, Avg. Runtime, s \, & \, PSNR$\uparrow$ \, & \, SSIM$\uparrow$ \, & \, MOS$\uparrow$ \\ \hline \hline Airia-bokeh & \, MingQian \, & TensorFlow & Nvidia TITAN RTX & 5.52 & 23.58 & 0.8770 & \textBF{4.2} \\ AIA-Smart & \, JuewenPeng \, & PyTorch & GeForce GTX 1080 & 1.71 & 23.56 & 0.8829 & 3.8 \\ CET\_SP & \, memelvin99 \, & TensorFlow & Nvidia Tesla P100 & 1.17 & 21.91 & 0.8201 & 3.3 \\ CET\_CVLab & \, Densen \, & TensorFlow & Nvidia Tesla P100 & 1.17 & 23.05 & 0.8591 & 3.2 \\ Team Horizon & \, tensorcat \, & PyTorch & GeForce GTX 1080 Ti & 19.27 & 23.27 & 0.8818 & 3.2 \\ IPCV\_IITM & \, ms\_ipcv \, & PyTorch & NVIDIA Titan X & 27.24 & \textBF{23.77} & \textBF{0.8866} & 2.5 \\ CET21\_CV & \, SaagaraMB \, & TensorFlow & Nvidia Tesla P100 & \textBF{0.74} & 22.80 & 0.8628 & 1.3 \\ CET\_ECE & \, Sanjana.A.R \, & TensorFlow & Nvidia Tesla P100 & \textBF{0.74} & 22.85 & 0.8629 & 1.2 \\ xuehuapiaopiao-team \, & \, xuehuapiaopiao \, & TensorFlow & GeForce GTX 1080 Ti & - & 22.98 & 0.8758 & - * \\ Terminator & \, Max\_zheng \, & TensorFlow & GeForce GTX 1080 Ti & - & 23.04 & 0.8756 & - * \\ \end{tabular} } \vspace{2.6mm} \caption{\small{AIM 2020 realistic bokeh rendering challenge, CPU Track: results and final rankings. The results are sorted based on the MOS scores. $^*$ - These teams submitted solutions that are using pre-computed depth maps and therefore were excluded from the final evaluation phase.}} \label{tab:results} \end{table*} \begin{table*}[tbh!] \centering \resizebox{\linewidth}{!} { \begin{tabular}{l|c|cc|cccc} \multicolumn{2}{c|}{}&\multicolumn{2}{c|}{Factsheet Info}&\multicolumn{4}{c}{Track 2: Smartphone GPU}\\ \hline Team \, & \, Author \, & \, Framework \, & \, Training Hardware, GPU \, & \, Avg. 
Runtime, s \, & \, PSNR$\uparrow$ \, & \, SSIM$\uparrow$ \, & \, MOS$\uparrow$ \\ \hline \hline Airia-bokeh & \, MingQian \, & TensorFlow & Nvidia TITAN RTX & \textBF{1.52} & \textBF{23.58} & 0.8770 & \textBF{4.2} \\ AIA-Smart & \, JuewenPeng \, & PyTorch & GeForce GTX 1080 & 15.2 & 22.94 & \textBF{0.8842} & 4.0 \\ CET\_CVLab & \, Densen \, & TensorFlow & Nvidia Tesla P100 & 2.75 & 23.05 & 0.8591 & 3.2 \\ Team Horizon \, & \, tensorcat \, & PyTorch & GeForce GTX 1080 Ti & - * & 23.27 & 0.8818 & 3.2 \\ \end{tabular} } \vspace{2.6mm} \caption{\small{AIM 2020 realistic bokeh rendering challenge, GPU Track: results and final rankings. The results are sorted based on the MOS scores. The model submitted by Team Horizon was unable to run on mobile GPUs due to the NCHW channel order, which is currently not supported by the TensorFlow Lite GPU delegate.}} \label{tab:results_gpu} \end{table*} Track 1 of the challenge attracted more than 110 registered participants and Track 2 more than 80. However, only 9 teams provided results in the final phase together with factsheets and codes for reproducibility. Tables~\ref{tab:results} and~\ref{tab:results_gpu} summarize the final challenge results in terms of PSNR, SSIM and MOS scores for each submitted solution, in addition to self-reported hardware / software configurations and runtimes. Short descriptions of the proposed solutions are provided in Section~\ref{sec:solutions}, and the team details (contact email, members and affiliations) are listed in Appendix~\ref{sec:affiliations}. \subsection{Architectures and Main Ideas} All the proposed methods rely on end-to-end deep learning-based solutions. Almost all submitted models have a multi-scale encoder-decoder architecture and process the images at several scales. This allows a significantly faster runtime, since all heavy image processing is done on low-resolution images, and also makes heavy global image manipulations possible.
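As a toy illustration of this multi-scale idea (not any team's actual code; the 2$\times$ average-pool downscaling and nearest-neighbour upsampling below are simplifying assumptions), the expensive operator is applied only at reduced resolution:

```python
import numpy as np

def downscale2x(img):
    """2x downscaling by average pooling over 2x2 blocks."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4

def upscale2x(img):
    """Nearest-neighbour 2x upscaling."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def process_multiscale(img, heavy_op, levels=2):
    """Run `heavy_op` on a downscaled image, then upsample back.
    The expensive work touches only 1/4**levels of the pixels."""
    small = img
    for _ in range(levels):
        small = downscale2x(small)
    out = heavy_op(small)
    for _ in range(levels):
        out = upscale2x(out)
    return out
```

With \texttt{levels=2} the heavy operator sees only $1/16$ of the pixels, which is the main source of the runtime savings mentioned above; the real networks additionally pass skip connections between scales to recover fine detail.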
The majority of teams used the $L_1$, SSIM / MS-SSIM, VGG-based, Sobel and Charbonnier loss functions, while team Airia-bokeh demonstrated that a proper adversarial loss can significantly boost the quality of the resulting bokeh effect. Almost all teams used the Adam optimizer~\cite{kingma2014adam} to train their deep learning models and the TensorFlow or PyTorch frameworks to implement and train the networks. \subsection{Performance} \paragraph{Quality.} Airia-bokeh is the winner of the AIM 2020 challenge on rendering realistic bokeh. Airia-bokeh ranks the best in perceptual quality in both Track 1 and Track 2 with the same solution (deep model). Only one team -- AIA-Smart -- submitted different models / solutions for the two tracks of the challenge. Surprisingly, the solution submitted for evaluation on smartphone GPU (Track 2) obtained better SSIM and MOS results than the one for CPU (Track 1). It comes second to Airia-bokeh in the MOS score while reporting the best SSIM (0.8842) in Track 2. As expected, the perceptual ranking according to the MOS does not strongly correlate with fidelity measures such as PSNR and SSIM. In particular, the IPCV\_IITM team ranks first in terms of SSIM and PSNR but only sixth in terms of perceptual quality (Track 1). Interestingly, the CET\_SP team has the lowest fidelity (PSNR) and SSIM results, though it comes third in perceptual quality (MOS). \paragraph{Runtime.} The measured average runtimes of the proposed solutions on standard Nvidia GPU cards (CPU Track 1) vary from $\sim$0.7s to more than 27s per single image. The fastest solutions ($\sim$0.7s) are also among the worst performing in perceptual ranking, while the top fidelity method proposed by IPCV\_IITM requires 27s, and the top perceptual methods of Airia-bokeh and of AIA-Smart require 5.52s and 1.71s, respectively.
When it comes to the solutions proposed in Track 2 evaluated on smartphone GPUs, the best perceptual quality solution, by Airia-bokeh, also has the lowest inference time, 1.52s. We conclude that the proposed solutions do not meet the requirements for real-time applications on the current generation of smartphones, so all processing should be done in the background after the image is obtained / captured. \subsection{Discussion} With the AIM 2020 challenge, we went further than the previously held challenges and aimed at solutions meant to run efficiently on desktop and smartphone hardware. The challenge employed EBB!~\cite{ignatov2020rendering}, a novel large dataset containing paired and aligned low- and high-aperture photos captured with a high-end Canon 70D DSLR camera. Several of the proposed approaches produced results with good perceptual quality and runtime suitable for on-device image processing. These methods gauge the state of the art for the practical bokeh synthesis task learned from pairs of real exemplars. \section{Challenge Methods and Teams} \label{sec:solutions} This section describes solutions submitted by all teams participating in the final stage of the AIM 2020 realistic bokeh rendering challenge. \smallskip \subsection{Airia-bokeh} \begin{figure}[h!] \centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/1-Airia.png} } \caption{\small{The Bokeh-Glass network (top) and PatchGAN-based discriminators (bottom) proposed by the Airia-bokeh team.}} \label{fig:Airia} \end{figure} Team Airia-bokeh proposed a Bokeh-Glass Network (BG-Net)~\cite{qian2020bggan} model for rendering realistic bokeh, illustrated in Fig.~\ref{fig:Airia}. The model consists of two stacked U-Net based networks that were first trained separately using a combination of the $L_1$ and SSIM losses (with weights 0.5 and 1, respectively).
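A minimal sketch of such a weighted $L_1$ + SSIM objective (an illustration only: the single-window SSIM below is a simplifying assumption, since practical implementations use a sliding Gaussian window, and the authors' training code is not reproduced here):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image as a single window
    (real implementations average SSIM over local 11x11 Gaussian windows)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def bokeh_loss(pred, target, w_l1=0.5, w_ssim=1.0):
    """Weighted sum of L1 and (1 - SSIM); the default weights mirror
    the stage-one weights 0.5 and 1 reported for BG-Net."""
    l1 = np.abs(pred - target).mean()
    return w_l1 * l1 + w_ssim * (1.0 - ssim_global(pred, target))
```

For identical prediction and target the loss is exactly zero, and it grows as the images diverge in intensity or structure.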
During the second stage, two PatchGAN~\cite{zhu2017unpaired} discriminators with different receptive fields were added to improve the quality of the produced images. The generator and the discriminator were trained together using the WGAN-GP algorithm with a batch size of 1. The authors additionally cleaned the EBB! dataset by removing some image pairs that did not correspond in color or were out of focus. \subsection{AIA-Smart} \begin{figure}[h!] \centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/2-AIA.png} } \caption{\small{AIA-Smart network consisting of the defocus estimation, radiance, rendering and upsampling modules.}} \label{fig:AIA} \end{figure} The solution of the AIA-Smart team is based on defocus map estimation~\cite{luo2020aim_defocus}. The proposed architecture consists of 4 modules (Fig.~\ref{fig:AIA}): defocus estimation, radiance, rendering and upsampling modules. The defocus estimation module predicts a defocus map, which serves as guidance for defocus rendering. The radiance module calculates the weight map used to estimate the contribution of each pixel in the rendering process. The rendering module obtains the low-resolution bokeh result from the radiance map, weight map and defocus map using the refocusing pipeline proposed in~\cite{busam2019sterefo}. In the upsampling module, the low-resolution bokeh result and the high-resolution original image are combined to generate the final full-resolution bokeh image.
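The defocus-guided rendering step can be caricatured as a per-pixel blend of progressively blurred layers; the sketch below is a deliberately simplified stand-in for the refocusing pipeline of~\cite{busam2019sterefo} (the box kernel and the three blur levels are assumptions made purely for illustration):

```python
import numpy as np

def box_blur(img, radius):
    """Separable-style box blur with edge padding (stand-in for a lens kernel)."""
    if radius == 0:
        return img.copy()
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def layered_render(img, defocus, radii=(0, 2, 4)):
    """Blend a stack of progressively blurred layers, weighted per pixel
    by how close the normalized defocus value (in [0, 1]) is to each layer."""
    layers = np.stack([box_blur(img, r) for r in radii])
    levels = np.linspace(0.0, 1.0, len(radii))
    w = np.maximum(0.0, 1.0 - np.abs(defocus[None] - levels[:, None, None])
                   * (len(radii) - 1))
    w /= w.sum(axis=0, keepdims=True)
    return (w * layers).sum(axis=0)
```

Pixels with zero defocus reproduce the sharp input, while strongly defocused pixels receive the widest blur, mimicking the role of the predicted defocus map as rendering guidance.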
The training of the network can be divided into 4 stages: 1) predicting the layered defocus maps and rendering the bokeh result at 1/4 of the original resolution, 2) rendering the bokeh effect at 1/2 resolution while using the pretrained network from the first stage to refine the details around foreground boundaries, 3) replacing the multi-channel classification layer with a single-channel regression layer in the defocus estimation module to generate a pleasing ``circle of confusion'', and 4) rendering the image at 1/2 resolution, upsampling the result by bilinear interpolation and calculating a soft foreground mask from the predicted single-channel defocus map. Finally, the foreground objects of the original image are composited onto the rendering result to make the foreground clearer. During the first stage, the model is trained with a combination of the $L_1$, perceptual, SSIM and gradient loss functions using images of resolution 256$\times$256 pixels. The initial learning rate is set to $10^{-4}$ with a decay-cycle of 30 epochs. At the second and the third stages, the model is fine-tuned on 512$\times$512 pixel images using the same set of loss functions. \subsection{CET\_CVLab and CET\_SP} \begin{figure}[h!] \centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/3-CETCVLab.png} } \caption{\small{Dilated Wavelet CNN model used by CET\_CVLab and CET\_SP teams.}} \label{fig:CETCVLab} \end{figure} Both CET\_CVLab and CET\_SP teams used the same U-Net based Dilated Wavelet CNN model (Fig.~\ref{fig:CETCVLab}) for generating bokeh images. In this network, the standard downsampling and upsampling operations are replaced with a discrete wavelet transform (DWT) based decomposition to minimize the information loss in these layers. The proposed methodology is computationally efficient and is based on the multi-level wavelet-CNN (MWCNN) proposed in~\cite{liu2018multi}.
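The key property of DWT downsampling is that, unlike pooling, it is invertible, so the down/upsampling layers discard no information. A minimal single-level Haar decomposition illustrates this (an illustrative sketch, not the teams' implementation):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar transform: maps an (H, W) image to four
    (H/2, W/2) subbands (LL, LH, HL, HH); no information is discarded."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse: reconstructs the original image from the subbands."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x
```

Stacking the four subbands along the channel dimension halves the spatial resolution, exactly like a stride-2 layer, while keeping an exact inverse for the upsampling path.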
CET\_SP trained the model with a combination of the Charbonnier and perceptual VGG loss functions, while CET\_CVLab additionally used the Sobel and grayscale ($L_1$ distance between the grayscale images) losses. Both models were optimized using the Adam algorithm with a batch size of 10 for 600 and 500 epochs, respectively. \subsection{Team Horizon} \begin{figure}[h!] \centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/5-Horizon.png} } \caption{\small{A multiscale encoder-decoder based model proposed by Team Horizon.}} \label{fig:Horizon} \end{figure} The authors proposed an encoder-decoder based model shown in Fig.~\ref{fig:Horizon} that is trained at several scales. At each level, the encoder-decoder module produces weight maps that are used, together with the input image, by the bokeh generation module to render the bokeh image. The generated weight maps and bokeh images are then upscaled and concatenated with the input image at the next level, while the upscaled encoded features are added to the corresponding encoded features of the next level. The model is trained with a combination of the MS-SSIM and SSIM loss functions. \subsection{IPCV\_IITM} \begin{figure}[h!] \centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/6-IPCV.png} } \caption{\small{Depth-guided Dynamic Filtering Dense Network proposed by IPCV\_IITM.}} \label{fig:IPCV} \end{figure} The authors proposed a depth-guided dynamic filtering dense network for rendering shallow depth-of-field (Fig.~\ref{fig:IPCV}). At the onset, the network uses a space-to-depth module that divides each input channel into a number of blocks concatenated along the channel dimension. The output of this layer is concatenated with the outputs of the pre-trained depth estimation~\cite{li2018megadepth} and salient object segmentation~\cite{hou2017deeply} networks to achieve more accurate rendering results.
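The space-to-depth rearrangement used at the onset of the network can be sketched as follows (a generic NumPy illustration, not the authors' implementation):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange a (C, H, W) array into (C*block**2, H/block, W/block).

    Each channel is split into block x block spatial sub-grids that are
    concatenated along the channel dimension; the operation is a pure
    (lossless) reshuffle, trading resolution for depth."""
    c, h, w = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(c, h // block, block, w // block, block)
    x = x.transpose(0, 2, 4, 1, 3)
    return x.reshape(c * block * block, h // block, w // block)
```

With `block=2`, channel 0 of the output collects the top-left pixel of every $2\times 2$ block of the input channel.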
The resulting feature maps are passed to a U-net~\cite{ronneberger2015u} based encoder consisting of densely connected modules. The first dense block contains 12 densely-connected layers, the second block~-- 16, and the third one~-- 24 densely-connected layers. The weights of each block are initialized using the DenseNet-121 network~\cite{huang2017densely} trained on the ImageNet dataset. The decoder has two different branches whose outputs are summed to produce the final result. The first branch has a U-net architecture with skip-connections and also consists of densely connected blocks. Its output is enhanced by multi-scale context aggregation via pooling and upsampling at four scales. The second branch uses the idea of dynamic filtering~\cite{jia2016dynamic} and generates dynamic blurring filters conditioned on the encoded feature map. These filters are produced locally and on-the-fly depending on the input; the parameters of the filter-generating network are updated during training. We refer to~\cite{purohit2019depth} for more details. \subsection{CET21\_CV} \begin{figure}[h!] \centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/7-CET21.png} } \caption{\small{A modified U-Net model used by CET21\_CV team.}} \label{fig:CET21} \end{figure} CET21\_CV proposed a modified U-Net model depicted in Fig.~\ref{fig:CET21}. Compared to the original U-Net implementation, the authors replaced the max-pooling downsampling operation with a strided convolution layer, and the feature maps from the shortcut connections are concatenated before applying the activation functions in the decoder module. \textit{Leaky ReLU} activations are used in the convolutional layers, and the entire model is trained to minimize the mean absolute error loss using the Adam algorithm. \subsection{CET\_ECE} \begin{figure}[h!]
\centering \resizebox{1.0\linewidth}{!} { \includegraphics[width=1.0\linewidth]{figsBokeh/8-ECE.png} } \caption{\small{CET\_ECE's network architecture.}} \label{fig:ECE} \end{figure} The model architecture proposed by the CET\_ECE team was inspired by the wide activation~\cite{yu2018wide} and channel attention~\cite{zhang2018image} based networks. The proposed network (Fig.~\ref{fig:ECE}) consists of two block types: a feature extraction block and a series of wide activation residual blocks. To reduce the model complexity and information loss, a space-to-depth layer with a scale factor of 4 is used before the initial feature extraction block, and a depth-to-space operation is used as the last layer of the network. The Charbonnier loss function is used for training the network as it captures edge information better than the MSE loss. \subsection{Xuehuapiaopiao-team and Terminator} Both teams used a slightly modified PyNET~\cite{ignatov2020rendering} model for generating bokeh images. While the visual results of the proposed solutions looked fine, the solutions relied on depth estimation modules that were not included in the submissions, and therefore were not ranked in the final phase of the challenge. \section*{Acknowledgments} We thank the AIM 2020 sponsors: Huawei, MediaTek, Qualcomm, NVIDIA, Google and Computer Vision Lab / ETH Z\"urich. \newpage
\section{Introduction} During the past few years a considerable number of papers has been devoted to the analysis of electromagnetic (EM) wave dynamics in relativistic plasmas, primarily in connection with their possible role in a variety of astrophysical phenomena. Highly relativistic electron-positron (e-p) plasmas exist in the pulsar magnetosphere \cite{bib:Sturok} and in the corona of magnetars \cite{bib:Beloborodov}, and are also likely to be found in the bipolar outflows (jets) of Active Galactic Nuclei (AGN) \cite{bib:Begelman}. The presence of e-p plasma is also argued for the MeV epoch of the early Universe \cite{bib:Tajima}. A plasma can be termed relativistic when either the bulk velocities of the plasma fluid cells are close to the velocity of light or the averaged kinetic energy of the particles in the cells is greater than the electron rest energy (e.g. the thermal energy of a plasma with temperature $T\geq mc^{2}$). In different astrophysical conditions the density of e-p plasmas can take values varying by many orders of magnitude. It is believed that the rest-frame density of the e-p plasma near the pulsar surface is $n\geq 10^{11}\,cm^{-3}$ \cite{bib:Misha}, while in the MeV epoch of the early Universe the density of the optically thick e-p plasma can be as high as $n=10^{32}\,cm^{-3}$ \cite{bib:Weinberg}. Intense e-p pair creation takes place during the gravitational collapse of massive stars \cite{bib:Stenflo}. It is argued that the gravitational collapse of massive stars may lead to a charge separation with the field strength exceeding the Schwinger limit, resulting in e-p pair plasma creation with an estimated density of $n=10^{34}\,cm^{-3}$ \ \cite{bib:ruffini}. Superdense e-p plasma may also exist in GRB sources, where the e-p plasma density can be in the range $n=(10^{30}-10^{37})\,cm^{-3}$ \cite{bib:aksenov}. Dense electron-positron plasma may soon be produced in laboratory conditions as well.
Indeed, modern petawatt laser systems are already capable of producing ultrashort pulses with focal intensities of \ $I=2\times 10^{22}\,W/cm^{2}$ \ \cite{bib:Yanovski}. Pulses of even higher intensities, exceeding \ $I=10^{26}\,W/cm^{2}$ , \ are likely to be available soon in the lab or in Lorentz-boosted frames \cite{bib:Dunne}. Interaction of such pulses with gaseous or solid targets could lead to the generation of optically thin e-p plasma with above-solid-state densities in the range \ $(10^{23}-10^{28})\,cm^{-3}$ \ \cite{bib:Shukla-Eliasson}. In a highly compressed state the plasma behaves as a degenerate Fermi gas provided that the averaged inter-particle distance is smaller than the thermal de Broglie wavelength. Mutual interaction of the plasma particles becomes unimportant and the plasma becomes more ideal as the density increases \cite{bib:Landau}. If the thermal energy of the particles (electrons and positrons) is much lower than their Fermi energy, the plasma may be treated as cold, i.e. having zero temperature, even if its temperature is of the order of \ $10^{9}\,K$ \ \cite{bib:Russo}. The Fermi energy of degenerate electrons (positrons) is \ \ $\epsilon _{F}=m_{e}c^{2}\,\left[ \left( 1+R^{2}\right) ^{1/2}-1\right] $ , where \ $R=p_{F}/m_{e}c$ \ and \ $p_{F}$ \ is the Fermi momentum, which is related to the rest-frame particle density by \ $p_{F}=m_{e}c\,\left( n/n_{c}\right) ^{1/3}$ ; \ here \ $n_{c}=5.9\times 10^{29}\,cm^{-3}$ \ is the normalizing critical number density \cite{bib:Akbari}. Thus, if \ $n\geq n_{c}$ , \ the particles inside fluid cells can move with relativistic velocities and the plasma can rightfully be termed relativistic. Here we would like to emphasize that pair plasmas with such densities cannot be in complete thermodynamic equilibrium with the photon gas into which they annihilate \cite{bib:Katz}. Equilibrium is reached within a time-period governed mainly by the electron-positron annihilations.
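The quoted relations between density, Fermi momentum and Fermi energy are easy to evaluate numerically; the following short sketch (densities in $cm^{-3}$, energies in MeV) illustrates them:

```python
import math

M_E_C2_MEV = 0.511        # electron rest energy, MeV
N_C = 5.9e29              # normalizing critical density, cm^-3

def fermi_momentum_ratio(n):
    """R = p_F / (m_e c) = (n / n_c)^{1/3} for rest-frame density n."""
    return (n / N_C) ** (1.0 / 3.0)

def fermi_energy_mev(n):
    """eps_F = m_e c^2 [ (1 + R^2)^{1/2} - 1 ], in MeV."""
    R = fermi_momentum_ratio(n)
    return M_E_C2_MEV * (math.sqrt(1.0 + R * R) - 1.0)
```

At $n=n_{c}$ one has $R=1$ and $\epsilon_{F}=m_{e}c^{2}(\sqrt{2}-1)\approx 0.21$ MeV, i.e. the gas is already mildly relativistic.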
The characteristic time for e-p pair annihilation is \ $\tau_{ann}\approx 1/(\sigma v\,n)$ \cite{bib:zeldovich}, where $\sigma $ is the annihilation cross section and $v$ is the relative velocity of the pairs. In the relativistic case, for the plasma density \ $n=10^{30}\,cm^{-3}$ \ (i.e. $p_F=m_ec$) \ the particles move at almost the velocity of light, \ $v\approx c$ , \ and for an annihilation cross section roughly estimated as \ $\sigma \approx 10^{-24}cm^{2}$ \ we get \ $\tau_{ann}\approx 0.3\times 10^{-16}\,sec$. Subsequently, thermodynamic equilibrium between pairs and photons (with zero chemical potential) will be achieved. The plasma becomes optically thick with a steady-state pair density defined by the plasma temperature. For a low-temperature degenerate plasma the steady-state pair density will be considerably smaller than the initial one. Therefore, in order to study the electromagnetic properties of relativistic degenerate e-p plasma, the characteristic time of the plasma processes should be smaller than the pair annihilation time. Although the pair annihilation time turns out to be extremely small, the characteristic plasma frequency (Langmuir frequency) for high densities is also very high \ [$\sim 10^{18}\,sec^{-1}$] \ and collective plasma oscillations have enough time to develop. \section{Basic Equations} In the present paper we study the possibility of the existence of localized high-frequency EM solitary pulses in relativistic degenerate optically thin e-p plasma. The existence and stability of EM solitary pulses in classical relativistic e-p plasma have been intensively investigated in the past \cite{bib:physreport,bib:levan,bib:kartal,bib:lee}. In contrast, the nonlinear dynamics of EM pulses in quantum degenerate relativistic plasma has been investigated only for low-frequency modes (see \cite{bib:Khan} and references therein).
The fluid equations, valid for each species (electrons and positrons), can be written in a manifestly covariant form \begin{equation} \frac{\partial T^{\alpha \beta }}{\partial x^{\beta }} = qF^{\alpha \beta }nU_{\beta }\ , \label{B1} \end{equation} where the Greek indices go from \ $0$ \ to \ $3$ ; here \ $\partial _{\alpha }\equiv \partial /\partial x^{\alpha }=\left( c^{-1}\partial /\partial t, \mathbf{\nabla }\right) $; \ $T^{\alpha \beta }$ \ is the energy-momentum tensor of the plasma species with charge \ $q$ \ and mass \ $m$ , and \ $U^{\alpha }=\left( \gamma ,\gamma \mathbf{V/}c\right) $ \ is the local four-velocity with \ $\gamma =\left( 1-V^{2}/c^{2}\right) ^{-1/2}$ $\left( U^{\alpha }U_{\alpha }=1\right) $ ; the metric tensor is \ $g^{\alpha \beta }=diag\left( 1,-1,-1,-1\right) $ , \ and \ $q=-e,\ e$ \ for electrons and positrons respectively. Note that we do not label the fluid species by an additional index for brevity. In case when the number of charged particles is conserved for each species, the rest-frame particle density \ $n$ \ satisfies the continuity equation \begin{equation} \frac{\partial nU^{\alpha}}{\partial x^{\alpha}}=0 \ . \label{B2} \end{equation} \bigskip The electromagnetic (EM) field tensor can be formally written as \ $F^{\alpha\beta}=[{\bf E},{\bf B}]$ \ and it satisfies the Maxwell equations \ $\partial_{\beta}F^{\alpha\beta}=-(4\pi/c)J^{\alpha}$ , \qquad $\epsilon^{\alpha\beta\gamma\delta}\partial_{\beta}F_{\gamma\delta}=0$, \ where \ $J^{\alpha}=(c\rho,{\bf J}$ $)$ , and \ $\rho$ \ and \ $\mathbf{J} $ \ are the total charge and the current density of the plasma, respectively. Equation (\ref{B1}) represents the conservation of momentum and energy, where the momentum change due to collisions is ignored. 
The energy momentum tensor \ $T^{\alpha\beta}$ \ is assumed to be that of an ideal isotropic fluid: \ $T^{\alpha\beta}=wU^{\alpha}U^{\beta}-g^{\alpha \beta}{\mathcal{P}}$ , where \ $w={\mathcal{E}}+{\mathcal{P}}$ \ is the enthalpy per unit volume, \ ${\mathcal{E}}$ \ is the proper internal energy density, and \ ${\mathcal{P}}$ \ is the pressure of the fluid species. If the thermal energy of the particles (electrons and positrons) is much lower than their Fermi energy, the plasma can be treated as completely degenerate. For a degenerate Fermi gas with \ $nT/{\mathcal{P}}\ll 1 $ , we have the following relations \cite{bib:Chandra,bib:gurovich}: \begin{equation} {\mathcal{P}} = \frac{m_{e}^{4}c^{5}}{3\pi^{2}\hbar^{3}}f(R) \ , \label{B3} \end{equation} \begin{equation} {\mathcal{E}} =\frac{m_{e}^{4}c^{5}}{3\pi^{2}\hbar^{3}}\left[ R^{3}\left( 1+R^{2}\right) ^{1/2}-f(R) \right] \ , \label{B4} \end{equation} where \begin{equation} 8f\left( R\right) =3\sinh^{-1}R+R\left( 1+R^{2}\right) ^{1/2}\left( 2R^{2}-3\right) \ . \label{B5} \end{equation} In equations (\ref{B3})-(\ref{B5}), \ $R=p_{F}/m_{e}c$ \ with \ $p_{F}$ \ being the Fermi momentum defined above. The degenerate equation of state is given by \ ${\mathcal{P}}\propto$ $n^{5/3}$ \ or \ ${\mathcal{P}}\propto n^{4/3}$ \ for the nonrelativistic \ $\left( R\ll 1\right)$ \ and ultrarelativistic \ $\left( R\gg 1\right)$ \ cases, respectively. \bigskip The fluid model presented above implies that the distribution function of electrons and positrons remains locally a J\"uttner-Fermi distribution, which in the zero-temperature case leads to purely density-dependent thermodynamical quantities \ ${\mathcal{E}}(n), \ {\mathcal{P}}(n)$ \ and \ $w(n)$ . All these quantities implicitly depend on \ $x_{\alpha}$ \ via \ $n=N/\gamma$ , where \ $N$ \ is the density of the fluid species in the laboratory frame.
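The two limiting forms of the equation of state follow directly from Eq. (\ref{B5}): expanding for $R\ll 1$ gives $f\approx R^{5}/5$ (hence ${\mathcal{P}}\propto n^{5/3}$), while for $R\gg 1$ one finds $f\approx R^{4}/4$ (hence ${\mathcal{P}}\propto n^{4/3}$). A short numerical verification of these limits:

```python
import math

def f_R(R):
    """Pressure function of Eq. (B5):
    8 f(R) = 3 asinh(R) + R (1 + R^2)^{1/2} (2 R^2 - 3)."""
    return (3.0 * math.asinh(R)
            + R * math.sqrt(1.0 + R * R) * (2.0 * R * R - 3.0)) / 8.0
```

Note that in the nonrelativistic limit the leading $O(R)$ and $O(R^{3})$ terms cancel between the two contributions, leaving the $R^{5}/5$ behavior.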
The plasma dynamics is isentropic (moreover, at \ $T\rightarrow 0$ \ the entropy turns out to be zero) and, consequently, we have the thermodynamical relation \ $d( w/n ) =d{\mathcal{P}}/n$. Applying this relation and, after straightforward algebra (see for instance \cite{bib:Ohashi,bib:BMY-cooling}), introducing \ $G=w/nm_{e}c^{2}$ , Eq. (\ref{B1}) can be reduced to the following system of equations: \begin{equation} \frac{\partial}{\partial t}\left(G{\bf p}\right)+ m_{e}c^{2}\, {\bf \nabla}\left(G\gamma\right) = q\,{\bf E} \ + \ {\bf V\times \Omega} \label{B6} \end{equation} and \begin{equation} \frac{\partial}{\partial t}\,{\bf \Omega}={\bf \nabla\times}\left( {\bf V\times\Omega}\right) \ , \label{B7} \end{equation} where \ ${\bf \Omega=}( q/c ) {\bf B+\nabla\times}\left( G{\bf p}\right)$ \ is the generalized vorticity. Here \ ${\bf p} = \gamma m_{e}{\bf V}$ \ is the hydrodynamic momentum and \ $G=G(n)$ \ can be called the density-dependent ``effective mass'' factor of the fluid cell. \ Equations (\ref{B6})-(\ref{B7}), along with the Maxwell and continuity equations, form the complete set of equations for studying the dynamics of degenerate plasma. It is interesting to remark that a similar set of equations was derived in Ref.\cite{bib:Ohashi} for a classical relativistic plasma obeying Maxwell-J\"uttner statistics. In that case the effective mass factor of the fluid elements depends on the temperature, \ $G=G(T)$ , \ with \ $T\sim n^{\Gamma-1}$. \ Here the adiabatic index is \ $\Gamma=5/3$ \ for nonrelativistic \ ($T\ll m_{e}c^{2}$) \ and \ $\Gamma=4/3$ \ for relativistic \ ($T\gg m_{e}c^{2}$) \ temperatures, respectively.
In the degenerate plasma case \ $w/nm_{e}c^{2}=\left( 1+R^{2}\right) ^{1/2}$ \ and consequently the mass factor depends only on the plasma rest-frame density through the simple relation \ $G=[ 1+( n/n_{c})^{2/3}]^{1/2}$ , \ which is valid for arbitrary strength of relativity, defined by the ratio \ $n/n_c$ . \vspace{1cm} Expressing the EM fields by the vector \ (${\bf A}$) \ and scalar \ ($\varphi $) \ potentials, i.e., \ ${\bf E} = -(1/c){\bf \partial A/\partial}t-{\bf \nabla}\varphi$ , \ and \ ${\bf B}={\bf \nabla\times A}$ , the Maxwell equations (with the Coulomb gauge \ ${\bf \nabla\cdot A}=0$ ) \ can be written as: \begin{equation} \frac{\partial^{2}{\bf A}}{\partial t^{2}}-c^{2}\Delta{\bf A} + c\, \frac{\partial}{\partial t}\left( {\bf \nabla}\varphi\right) - 4\pi c\,{\bf J}=0 \label{B8} \end{equation} and \begin{equation} \Delta\varphi = - 4\pi\,\rho \ . \label{B9} \end{equation} \noindent Here, the charge and the current densities are given by \ $\rho=\sum q\gamma n$ \ and \ ${\bf J}=\sum q\gamma n{\bf V}$, respectively; the summation runs over the charge species. For the current effort, we apply equations (\ref{B6})-(\ref{B7}) to wave processes in an unmagnetized plasma. From Eq.(\ref{B7}) it follows that if the generalized vorticity is initially zero \ (${\bf \Omega} = 0$) \ everywhere in space, it remains zero for all subsequent times. We assume that before the EM radiation is ``switched on'' the generalized vorticity of the system is zero. Accordingly, Eq.(\ref{B6}) now takes the form \begin{equation} \frac{\partial}{\partial t}\left( G{\bf p} + \frac{q}{c}{\bf A}\right) +{\bf \nabla}\left( m_{e}c^{2}\,G\,\gamma + q\,\varphi\right) = 0 \ . \label{B10} \end{equation} \bigskip We are looking for the localized solution of equations (\ref{B8})-(\ref{B10}) in the one-dimensional case.
Assuming that all quantities vary only with one spatial coordinate \ $z$ \ and in time \ $t$ \ the transverse component of the equation of motion (\ref{B10}) is immediately integrated to give: \ ${\bf p}_{\perp} = - q\,{\bf A}_{\perp}/(c\,G)$ . The constant of integration is set equal to zero, since the particle hydrodynamic momenta are assumed to be zero at infinity where the field vanishes. \ Due to the gauge condition \ $A_{z}=0$ \ the longitudinal motion of the plasma is coupled with the EM field via \ $\gamma=\left[1+( p_{\perp}^{2}+p_{z}^{2})/m_{e}^{2}c^{2}\right]^{1/2}$ \ which does not depend on the particle charge sign. The EM pressure gives equal longitudinal momenta to both the electrons and positrons \ ($p_{ez} = p_{pz}=p_{z}, \ \gamma_{e}=\gamma_{p}=\gamma$) \ and modifies plasma density without producing the charge separation, i.e., \ $n_{e}=n_{p}=n$ \ and \ $\varphi =0$ . \ Thus, the longitudinal motion of the plasma is entirely determined by the following equation of motion \begin{equation} \frac{\partial}{\partial t}G\,p_{z}+m_{e}c^{2}\,\frac{\partial}{\partial z}\,G\,\gamma=0 \ \label{B11} \end{equation} and the continuity equation (\ref{B2}), which in our case reads as \begin{equation} \frac{\partial}{\partial t}\gamma n+\frac{\partial}{\partial z}\left( n\gamma V_{z}\right) = 0 \ . \label{B12} \end{equation} \noindent The longitudinal component of the current density is zero \ $J_{z}=0$ \ while for the transverse one we have: \ ${\bf J}_{_{\perp}}{\bf =}( 2ne^2/c\,G ) {\bf A}_{\perp}$ . 
Substituting this expression into Eq.(\ref{B8}) we obtain the following equation \begin{equation} \frac{\partial^{2}{\bf A}_{\perp}}{\partial t^{2}}-c^{2}\frac{\partial ^{2}{\bf A}_{\perp}}{\partial z^{2}} + \Omega_{e}^{2}\left( \frac{n}{n_{0}}\frac{G_{0}}{G}\right) {\bf A}_{\perp}=0 \ , \label{B13} \end{equation} where \ $\Omega_{e}=( 8\pi e^{2}n_0/(m_{e}G_{0})\ )^{1/2}\ $ \ is the Langmuir frequency of the e-p plasma and \ $n_{0}$ \ is the equilibrium density of electrons (positrons). Notice that in the Langmuir frequency we introduced the effective electron mass \ $m_{e}G_{0}=m_{e}(1+R_{0}^{2})^{1/2}$ , where \ $R_{0}=(n_{0}/n_{c})^{1/3}$. \bigskip We are looking for the stationary localized solution described by equations (\ref{B11})-(\ref{B13}) for a circularly polarized EM wave. The vector potential can be expressed as \begin{equation} e{\bf A}_{\perp}/m_{e}c^{2} =(1/2)({\bf x} + i{\bf y})\,A(z)\exp(-i\omega t)+c.c. \ , \end{equation} where \ $A(z)$ \ is the real-valued dimensionless amplitude depending only on the spatial coordinate \ $z$; \ $\omega$ \ is the frequency and \ ${\bf x}$ \ and \ ${\bf y}$ \ are the unit vectors. In this stationary case \ $p_{z}=0$ \ and integrating Eq.(\ref{B11}) we obtain the relation \ $G\,\gamma = G_{0}$ , where \ $G=[ 1+R_{0}^{2}( n/n_{0})^{2/3}]^{1/2}$ \ and \ $\gamma =[ 1 + A^{2}/G^{2}]^{1/2}$ . Straightforward algebra gives the following relations: \begin{equation} n=n_{0}\left( 1-A^{2}/R_{0}^{2}\right )^{3/2} \ \label{B14} \end{equation} and \begin{equation} G=G_{0}\left[ 1-A^{2}/(1+R_{0}^{2})\right]^{1/2} \ . \end{equation} It follows from Eq.(\ref{B14}) that our considerations remain valid provided \ $A\leq R_{0}$ ; the plasma density decreases in the area of EM field localization and if at a certain point of this area \ $A\rightarrow R_{0} $ \ then the plasma density becomes zero \ ($n\rightarrow 0$), hence, at that point the cavitation takes place.
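The density relation (\ref{B14}) can be cross-checked numerically against the first integral $G\,\gamma = G_{0}$: given $n/n_{0}=(1-a^{2})^{3/2}$ with $a=A/R_{0}$, the resulting $G(n)$ and $\gamma$ must reproduce $G_{0}$ exactly. A small sketch of this consistency check:

```python
import math

def check_first_integral(R0, a):
    """Given a = A / R0, compute n/n0 from Eq. (B14), then G(n) and
    gamma, and return (G * gamma, G0); the two must coincide."""
    G0 = math.sqrt(1.0 + R0 * R0)
    A = a * R0
    n_ratio = (1.0 - a * a) ** 1.5                      # Eq. (B14)
    G = math.sqrt(1.0 + R0 * R0 * n_ratio ** (2.0 / 3.0))
    gamma = math.sqrt(1.0 + A * A / (G * G))
    return G * gamma, G0
```

The identity holds for any admissible $R_{0}>0$ and $0\leq a\leq 1$, since $G^{2}\gamma^{2}=G^{2}+A^{2}=1+R_{0}^{2}=G_{0}^{2}$.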
Eq.(\ref{B13}) reduces to the following ordinary differential equation: \begin{equation} \frac{d^{2}a}{d\eta^{2}}-\lambda a + f(a^{2})a=0 \ , \label{B15} \end{equation} where the nonlinearity function \ $f(a^{2})$ \ is given by \begin{equation} f(a^{2})=1 - \frac{\left( 1-a^{2}\right)^{3/2}}{\left( 1-\epsilon^{2}a^{2}\right)^{1/2}} \ . \label{B16} \end{equation} Here \ $\eta=z(\Omega_{e}/c)$ \ is the dimensionless coordinate, \ $\lambda=1-\omega^{2}/\Omega_{e}^{2}$ , \ $a=A/R_{0}$ \ and \ $\epsilon^{2}=R_{0}^{2}/(1+R_{0}^{2})$. One can see that the nonlinearity function \ $f$ \ is a positive and monotonically increasing function of \ $a$ . For small intensities of the EM field \ ($a\ll 1$) \ the nonlinearity function is \ $f= (3-\epsilon ^{2})a^{2}/2$ \ while \ $f\rightarrow 1$ \ for \ $a\rightarrow 1$ . Note that the saturating character of the nonlinearity is related to the plasma cavitation (see Eq.(\ref{B14})). Since \ $0\leq f\leq 1$ , \ Eq.(\ref{B15}) admits soliton solutions for all allowed intensities of the EM field \ ($0\leq a^{2}\leq 1$) \ provided that \ $0\leq\lambda\leq Max[f]=1$ \ \cite{bib:Vakhitov}. The parameter \ $\lambda $ \ is the nonlinear ``frequency shift'' and it has the meaning of the reciprocal of the square of the characteristic width of the soliton. \bigskip Integration of Eq.(\ref{B15}) gives \begin{equation} \left( \frac{da}{d\eta}\right)^{2}-\lambda a^{2} + F\left( a^{2},\epsilon\right) = 0 \ . \label{B17} \end{equation} Here we assumed that the integration constant is zero since for the soliton solutions \ $a,a_{\eta}\rightarrow 0$ \ for \ $\left\vert \eta\right \vert \rightarrow \infty $ .
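The quoted limits of the nonlinearity function, $f\approx(3-\epsilon^{2})a^{2}/2$ for $a\ll 1$ and $f\rightarrow 1$ at the cavitation amplitude $a\rightarrow 1$, can be verified directly from Eq. (\ref{B16}):

```python
def f_nl(a2, eps2):
    """Nonlinearity function of Eq. (B16), as a function of a^2 and
    eps^2: f = 1 - (1 - a^2)^{3/2} / (1 - eps^2 a^2)^{1/2}."""
    return 1.0 - (1.0 - a2) ** 1.5 / (1.0 - eps2 * a2) ** 0.5
```

Since $0\le\epsilon^{2}\le 1$, the function stays between $0$ and $1$ on the whole admissible interval $0\le a^{2}\le 1$.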
The nonlinear function \ $F(a^{2},\epsilon)=\int_{0}^{a^{2}}\,f(\xi ) d\xi=F_{1}(a^{2},\epsilon)-F_{1}(0,\epsilon)$ , where \ $F_{1}(a^{2},\epsilon)=(1/4\epsilon^{4})\sqrt{( 1-\epsilon^{2}\,a^{2}) ( 1-a^{2})}\ [3 + \epsilon^{2}(2a^{2}-5)] - (3/4\epsilon^{5})\,(\epsilon^{2}-1)^{2} \ln (2\epsilon^{2}\,\sqrt{1-a^{2}} + 2 \, \sqrt{1-\epsilon^{2}a^{2}})$ . From Eq.(\ref{B17}) one can see that the relation between \ $\lambda$ \ and the soliton amplitude \ $a_{m}$ \ is given by \ $\lambda=F(a_{m}^{2},\epsilon ) /a_{m}^{2}$ . \section{Results and Summary} The general solution of Eq.(\ref{B15}) cannot be expressed in terms of elementary functions except in the ultra-relativistic degenerate plasma case, \ i.e. for \ $R_{0} \gg 1$ \ \ ($\epsilon\rightarrow 1$) . Indeed, in this case \ $f=a^{2}$ \ and for \ $\lambda_{(\epsilon=1)}=a_{m}^{2}/2$ \ the soliton solution of Eq.(\ref{B15}) takes the simple form: \begin{equation} a=a_{m}sech\left( \frac{a_{m}}{\sqrt{2}}\,\eta\right) \ . \label{B18} \end{equation} We would like to emphasize that the soliton solution (\ref{B18}) exists for \ $a_{m}=\left( A_{m}/R_{0}\right)\leq 1$ \ ($\lambda_{(\epsilon=1)\text{ }}\leq 0.5$) . Consequently we can state that in the relativistic degenerate plasma the amplitude of the EM soliton can become relativistically strong -- \ $A_{m}\gg 1$ . In the region of the soliton localization the e-p plasma density decreases considerably, while for \ $A_{m}\rightarrow R_{0}$ \ the plasma cavitation takes place. For the nonrelativistic degenerate plasma \ ($\epsilon \ll 1$)\ the nonlinearity function can be approximated by the following expression: \ $f=1-(1-a^{2} )^{3/2}$ . Eq.(\ref{B15}) then has a soliton solution if the nonlinear frequency shift satisfies the relation \ $\lambda_{(\epsilon=0)}=F(a_{m}^{2},0)/a_{m}^{2} = 1-(2/5)\left[ 1-(1-a_{m}^{2})^{5/2}\right] /a_{m}^{2}$ . Thus, the soliton solution exists for \quad $\lambda _{(\epsilon=0)}\leq0.6$ .
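The relation $\lambda=F(a_{m}^{2},\epsilon)/a_{m}^{2}$ can be cross-checked by integrating $f$ numerically instead of using the closed-form $F_{1}$; the two limiting values quoted above, $\lambda_{(\epsilon=1)}=a_{m}^{2}/2$ and $\lambda_{(\epsilon=0)}=0.6$ at cavitation ($a_{m}=1$), are then recovered:

```python
def f_nl(a2, eps2):
    """Nonlinearity function of Eq. (B16)."""
    return 1.0 - (1.0 - a2) ** 1.5 / (1.0 - eps2 * a2) ** 0.5

def lam(am2, eps2, n=20000):
    """Nonlinear frequency shift lambda = F(a_m^2, eps) / a_m^2,
    with F(a^2, eps) = int_0^{a^2} f(xi) dxi by midpoint quadrature."""
    h = am2 / n
    F = sum(f_nl((j + 0.5) * h, eps2) for j in range(n)) * h
    return F / am2
```

For $\epsilon=1$ the integrand reduces to $f(\xi)=\xi$, so the quadrature result $\lambda=a_{m}^{2}/2$ can also be checked analytically.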
Note that \ $A_{m}\leq R_{0}$ \ ($a_{m}\leq 1$) \ and since \ $R_{0}\ll 1$ \ we conclude that in the nonrelativistic degenerate e-p plasma the EM field intensity is nonrelativistic, \ $A_{m}\ll 1$ . However, even though the field intensity is weak, it can give rise to plasma cavitation as \ $A_{m}\rightarrow R_{0}$. For the intermediate case \ $0<\epsilon < 1 $ \ the soliton solution exists provided the nonlinear frequency shift \ $\lambda$ \ satisfies the inequality \ $\lambda_{(\epsilon=1)}<\lambda<\lambda_{(\epsilon=0)}$ \ and attains its maximal value \ $\lambda_{\max}$ \ ($0.5<\lambda_{\max}<0.6$) , \ which corresponds to cavitation \ ($a_{m}=1$). \bigskip As we emphasized in the introduction, the results obtained in this paper are valid provided that the characteristic time of the EM field oscillations is smaller than the pair annihilation time, i.e. $\kappa\equiv \Omega_e \,\tau_{ann}/2\pi \gg 1 $ ; for the annihilation time we can use the relation \cite{bib:rohrlich} \begin{equation} \tau_{ann}^{-1}=(\pi r_0^2c)\,Q(G_0)\,n \ , \label{B19} \end{equation} where $r_0$ is the classical electron radius and \begin{equation} Q(x)=\frac{1}{x(1+x)}\ \left[ \frac{x^2+4x+1}{\sqrt{x^2-1}}\,ln\left( x+\sqrt{x^2-1}\right)-(3+x) \right] \ . \label{B20} \end{equation} Estimates of the parameter \ $\kappa$ \ for different densities, using Eqs. (\ref{B19}) and (\ref{B20}) as well as the expressions for $G_0$ and $\Omega_e$ given above, yield the following: \ (1) $\kappa \sim 1.4 \cdot 10^3$ \ for \ $n\sim n_c = 5.9\cdot 10^{29}\,cm^{-3}$; \ (2) $\kappa \sim 60$ \ for \ $n=5\cdot 10^{32}\,cm^{-3}$; \ (3) $\kappa \sim 7$ \ for \ $n=10^{35}\,cm^{-3}$; \ (4) $\kappa = 0.9$ \ for \ $n=10^{37}\,cm^{-3}$. \ Hence, for degenerate plasma densities up to \ $\sim 10^{35}\,cm^{-3}$ \ the conditions for the existence of localized high-frequency pulse solutions are easily met.
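A rough numerical cross-check of these order-of-magnitude estimates can be coded directly from Eqs. (\ref{B19})-(\ref{B20}) with approximate CGS constants; since the quoted values of $\kappa$ are themselves order-of-magnitude figures, only the trend (decreasing $\kappa$ with density, $\kappa\gg 1$ up to $\sim 10^{35}\,cm^{-3}$) should be expected to reproduce:

```python
import math

# approximate CGS constants
R_CL = 2.818e-13          # classical electron radius, cm
C_LIGHT = 2.998e10        # speed of light, cm/s
M_E = 9.109e-28           # electron mass, g
E_ESU = 4.803e-10         # elementary charge, esu
N_C = 5.9e29              # critical density, cm^-3

def Q(x):
    """Annihilation-rate function of Eq. (B20); requires x > 1."""
    s = math.sqrt(x * x - 1.0)
    return (((x * x + 4.0 * x + 1.0) / s) * math.log(x + s)
            - (3.0 + x)) / (x * (1.0 + x))

def kappa(n):
    """kappa = Omega_e * tau_ann / (2 pi) for rest-frame density n."""
    G0 = math.sqrt(1.0 + (n / N_C) ** (2.0 / 3.0))
    tau_ann = 1.0 / (math.pi * R_CL ** 2 * C_LIGHT * Q(G0) * n)  # (B19)
    omega_e = math.sqrt(8.0 * math.pi * E_ESU ** 2 * n / (M_E * G0))
    return omega_e * tau_ann / (2.0 * math.pi)
```

The function reproduces the stated qualitative picture: $\kappa$ of order $10^{3}$ near $n_{c}$, dropping toward unity around $10^{37}\,cm^{-3}$.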
\bigskip In conclusion, we have considered the possibility of high-frequency EM soliton formation in a fully degenerate electron-positron plasma. Applying a fully relativistic hydrodynamic approach, we have shown that such a plasma supports the existence of stationary soliton solutions in the over-dense regime \ ($\omega<\Omega_{e}$). In relativistic degenerate e-p plasma the intensity of the EM field can be relativistically strong, while in the nonrelativistic degeneracy case the soliton intensity is always nonrelativistic. It is also shown that plasma cavitation can occur in both the relativistic and nonrelativistic degenerate plasmas. The generalization to the case of a moving soliton is straightforward and is beyond the intended scope of the present paper. We believe that the one-dimensional model of the present study can be generalized to 2D and 3D problems, either in the so-called ``pancake'' regime of propagation with $L_{||}\ll L_{\perp}$ \cite{bib:BM-PRL,bib:fedele,bib:Mora} or in the so-called beam regime of propagation, $L_{\perp}\ll L_{||}$ (where $L_{||}$ and $L_{\perp}$ are the characteristic longitudinal and transverse spatial dimensions of the field, respectively) \cite{bib:Ohashi}. Preliminary analysis shows that in such cases a nonlinear Schr\"odinger equation with a saturating nonlinearity similar to (\ref{B16}) can be derived, implying that the generation of stable multi-dimensional localized solutions -- ``light bullets'' or solitary filaments -- is possible. The results found in the present manuscript can be useful for understanding the dynamics of x-ray pulses emanating from compact astrophysical objects, as well as for studying the nonlinear interactions of intense laser pulses with dense degenerate plasmas that are relevant for next-generation intense laser -- solid density plasma experiments. \bigskip N.L. Tsintsadze would like to acknowledge the partial support for his work from GNSF grant project No. FR/101/6-140/13. \vspace{1cm}
\section{Introduction} In a very recent survey paper \cite{Met14} devoted to systems which exhibit anomalous diffusion, about three hundred references to the relevant works are given. Many of the cited publications deal with modeling of anomalous diffusion with continuous time random walks on the micro-level and with fractional diffusion equations on the macro-level. The permanently growing number of publications devoted to anomalous diffusion and its modeling with the Fractional Calculus (FC) operators poses some challenges to the mathematical theory of FC and, in particular, to the theory of partial differential equations of fractional order. This paper is devoted to one of these challenges, namely, to suggest a suitable definition of the Caputo fractional derivative in the fractional Sobolev spaces, to consider its properties in these spaces, and to apply them to the analysis of the fractional diffusion equations in the fractional Sobolev spaces. For the theory of the FC operators, we refer the reader to the encyclopedia \cite{SKM}. The basic theory of the ordinary and partial fractional differential equations can be found e.g. in the monographs \cite{Diet}, \cite{Kil}, and \cite{P}. We also mention the papers \cite{Beck}, \cite{LuY}, \cite{Lu1}-\cite{Lu3}, \cite{SY}, where some recent developments regarding the partial fractional differential equations are presented. In this paper, we deal with the fractional diffusion equation $$ \ppp_t^{\alpha}u(x,t) = -Lu(x,t) + F(x,t), \ x\in \Omega \subset \R^n,\ 0<t\le T, \eqno{(1.1)} $$ where $-L$ is a differential operator of the elliptic type and $\ppp_t^{\alpha}$ denotes the Caputo derivative that is usually defined by the formula $$ \ppp_t^{\alpha}u(x,t) = \frac{1}{\Gamma(1-\alpha)}\int^t_0 (t-s)^{-\alpha}\frac{\ppp u}{\ppp s}(x,s) ds, \quad 0 < t \le T, \quad 0 < \alpha < 1.
\eqno{(1.2)} $$ To avoid switching between notations, we consistently write the operator symbol for fractional differentiation with a round $\partial$, regardless of the number of independent variables. In the formula (1.2), the Caputo derivative $\ppp_t^{\alpha}u$ is a derivative of order $\alpha, \ 0 < \alpha < 1$. Still, its definition involves the first derivative $\frac{\ppp u}{\ppp s}$, which requires extra regularity of the function $u$ and is meaningful only if $\frac{\ppp u}{\ppp s}$ exists in a suitable sense. On the other hand, in many applications one has to deal with non-differentiable functions, and it is important to introduce a weak solution to the fractional diffusion equation (1.1) in the case where $\frac{\ppp u}{\ppp t}$ does not exist in the usual sense (see e.g. \cite{LM} for the theory of the weak solutions of partial differential equations). For partial differential equations, the weak solutions are often constructed in the Sobolev spaces (\cite{LM}). In this paper, we try to extend the theory of weak solutions of partial differential equations in the Sobolev spaces to the fractional diffusion equation (1.1). The first problem which we have to overcome is to interpret the fractional Caputo derivative $\ppp_t^{\alpha}$ in the fractional Sobolev spaces rather than by the pointwise definition (1.2). To the best of the authors' knowledge, a solution to this problem has not yet been suggested in the literature. It is worth mentioning that there are some publications (see e.g. \cite{Er}, \cite{Bangti} and the references there) devoted to the Riemann-Liouville fractional derivative in the fractional Sobolev spaces. However, their approach via the Fourier transform is essentially different from the approach which we suggest in this paper for defining the Caputo derivative in the fractional Sobolev spaces.
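For orientation, the pointwise definition (1.2) can be evaluated numerically with the standard L1 discretization (a generic illustration, unrelated to the Sobolev-space construction of this paper); for $u(t)=t^{2}$ the exact value is $\ppp_t^{\alpha}t^{2}=2\,t^{2-\alpha}/\Gamma(3-\alpha)$:

```python
import math

def caputo_l1(u, t, alpha, n=2000):
    """Caputo derivative (1.2) via the L1 scheme: u' is taken piecewise
    constant on a uniform grid and the weakly singular kernel
    (t - s)^{-alpha} is integrated exactly on each subinterval."""
    dt = t / n
    acc = 0.0
    for j in range(n):
        du = (u((j + 1) * dt) - u(j * dt)) / dt
        w = ((t - j * dt) ** (1.0 - alpha)
             - (t - (j + 1) * dt) ** (1.0 - alpha)) / (1.0 - alpha)
        acc += du * w
    return acc / math.gamma(1.0 - alpha)
```

Note that the scheme requires evaluations of $u$ itself, in line with the remark above: the pointwise definition is only meaningful when $u'$ exists in a suitable sense.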
One of the main applications of the fractional derivatives in the fractional Sobolev spaces is for introducing weak or generalized solutions to the fractional differential equations. Of course, like in the theory of partial differential equations, different approaches can be used to attack this problem. In particular, in \cite{Lu2010}, a generalized solution to the initial-boundary-value problems for the fractional diffusion equation in the sense of Vladimirov was introduced and analyzed. This generalized solution is a continuous function that is not necessarily differentiable. To construct the generalized solution, a formal solution in terms of the Fourier series with respect to the eigenfunctions of the operator $L$ from (1.1) was employed. The same idea of the formal solution was used in \cite{SY} for constructing a weak solution to some initial-boundary-value problems for an equation of the type (1.1) and for proving its unique existence for the functions $F\in L^{\infty}(0,T;L^2(\Omega))$ and with an initial condition of the type $\lim_{t\downarrow 0} \Vert u(\cdot,t)\Vert_* = 0$, where $\Vert \cdot\Vert_*$ is a certain norm that is weaker than the $L^2$-norm. However, a norm estimate for the weak solution was given via the norm of $F$ in $L^2(0,T;L^2(\Omega))$. Thus the results presented in \cite{SY} show a certain inconsistency between the inclusion $F\in L^{\infty}(0,T;L^2(\Omega))$ and the solution norm estimate via the norm of $F$ in $L^2(0,T;L^2(\Omega))$. In this paper, this inconsistency is resolved by a new definition of the weak solution that is based on the suggested definition of the Caputo derivative in the fractional Sobolev spaces. In this way, the maximal regularity of the fractional diffusion equation with the Caputo fractional derivative is established in this paper.
Let us mention that in \cite{Ba} the $W^{\alpha,p}(0,T)$-regularity with $p > 1$ was proved for the fractional differential equations with the Riemann-Liouville time-fractional derivative. The rest of this paper is organized in three sections. In Section 2, the Riemann-Liouville fractional integral and the related Abel integral equations in the fractional Sobolev spaces are revisited. The result (Theorem \ref{t21}) of Section 2 forms a basis for the investigation of the Caputo fractional derivative in the fractional Sobolev spaces in Section 3, where we suggest a new interpretation of the Caputo derivative in the fractional Sobolev spaces and establish the norm equivalence between the $L^2$-norm of $\ppp_t^{\alpha}u$ and the fractional Sobolev norm of $u$ (Theorem \ref{t31}). Finally, Section 4 is devoted to the investigation of the maximal regularity of the solutions to some initial-boundary-value problems for the fractional diffusion equations with the Caputo time-derivative in the fractional Sobolev spaces. We introduce a notion of a weak solution to the problem under consideration, prove its uniqueness and existence, and derive the corresponding norm estimates. \section{The Riemann-Liouville integral in the fractional Sobolev spaces} In this section, we first remind the reader of some known properties of the Riemann-Liouville fractional integral operator and then formulate and prove one of our main results. We start this section with some definitions of the operators and the functional spaces which we need in our further discussions. Throughout this paper, we always assume that $0 < \alpha < 1$ if we do not specify another condition. The Riemann-Liouville fractional integral operator $J^{\alpha}: L^2(0,T) \to L^2(0,T)$ is defined by the formula (see e.g. \cite{GV}) $$ (J^{\alpha}y)(t) = \frac{1}{\Gamma(\alpha)}\int^t_0 (t-s)^{\alpha-1}y(s) ds, \quad 0\le t \le T, \quad 0 < \alpha \le 1, \quad J^0 = I.
$$ By $L^2 := L^2(0,T)$ and $H^{\alpha}(0,T)$ we mean the usual $L^2$-space and the fractional Sobolev space on the interval $(0,T)$ (see e.g. \cite{Ad}, Chapter VII), respectively. The $L^2$-norm and the scalar product in $L^2$ are denoted by $\Vert\cdot\Vert_{L^2}$ and $(\cdot,\cdot)_{L^2}$, respectively. By $\sim$ we denote the equivalence of norms. We set $$ _0 H^{\alpha}(0,T) = \{ u \in H^{\alpha}(0,T): \thinspace u(0) = 0\} $$ if $\frac{1}{2} < \alpha \le 1$ and we identify $_0H^{\alpha}(0,T)$ with $H^{\alpha}(0,T)$ for $0 \le \alpha < \frac{1}{2}$. For Hilbert spaces $X$ and $Y$ and an operator $K: X \to Y$ defined in $X$, by $\mathcal{D}(K)$ and $\mathcal{R}(K)$ we denote the domain and the range of $K$, respectively. It can be easily verified that the Riemann-Liouville operator $J^{\alpha}: L^2 \to L^2$ is injective (Theorem 5.1.1 in \cite{GV}). Therefore there exists an operator inverse to the Riemann-Liouville operator $J^{\alpha}$ and we denote it by $J^{-\alpha}$. By the definition, $\mathcal{D}(J^{-\alpha}) = \mathcal{R}(J^{\alpha})$. To deal with the operator $J^{-\alpha}$ in $\mathcal{D}(J^{-\alpha})$, we thus have to describe the range of the Riemann-Liouville operator $J^{\alpha}$ together with a suitable norm; this description is given in the following theorem (for $0 \le \alpha < \frac{1}{2}$, a part of our results is already formulated and proved in Theorem 18.3 from \cite{SKM}). \begin{thm}\label{t21} $\mbox{ }$ \\ (i) \begin{align*} &\Vert J^{\alpha}u\Vert_{H^{\alpha}(0,T)} \sim \Vert u\Vert_{L^2}, \quad u \in L^2(0,T),\\ &\Vert J^{-\alpha}v\Vert_{L^2} \sim \Vert v\Vert_{H^{\alpha}(0,T)}, \quad v \in \mathcal{R}(J^{\alpha}). \end{align*} \\ (ii) $$ \mathcal{R}(J^{\alpha}) = \left\{ \begin{array}{rl} &H^{\alpha}(0,T), \quad 0 \le \alpha < \frac{1}{2}, \\ &_{0}H^{\alpha}(0,T), \quad \frac{1}{2} < \alpha \le 1,\\ &\left\{ u \in H^{\frac{1}{2}}(0,T):\thinspace \int^T_0 t^{-1}\vert u(t)\vert^2 dt < \infty\right\}, \quad \alpha = \frac{1}{2}.\\ \end{array}\right.
$$ \end{thm} For the proof of Theorem \ref{t21}, some auxiliary statements that were derived in \cite{GY} are needed. For the sake of completeness, we give here both the formulations and the proofs of these results. By $J = J^1$ we denote the integral $(Jy)(t) = \int^t_0 y(s) ds$ for $0\le t \le T$ and by $I: L^2 \to L^2$ the identity mapping. In this section, we consider the space $L^2(0,T)$ over $\C$ with the scalar product $(u,v)_{L^2} = \int^T_0 u(t)\overline{v(t)} dt$, and $\Re \eta$ and $\Im \eta$ denote the real and the imaginary parts of a complex number $\eta$, respectively. \begin{lem} \label{l21} For any $u \in L^2$, the inequality $\Re (Ju,u)_{L^2} \ge 0$ holds true and $\mathcal{R}(I+J) = L^2$. \end{lem} \begin{proof} First the notations $\Re Ju(t) = \va(t)$ and $\Im Ju(t) = \psi(t)$ are introduced. With these notations, the following chain of equalities and inequalities can be easily obtained: \begin{align*} & \Re \thinspace (Ju,u)_{L^2} = \Re \int^T_0 \left(\int^t_0 u(s)ds\right) \overline{u}(t) dt = \Re \int^T_0 Ju(t)\frac{d}{dt}\overline{Ju(t)} dt\\ =& \int^T_0 \left(\va(t)\frac{d\va}{dt} + \psi(t)\frac{d\psi}{dt} \right) dt = \frac{1}{2}(\va(t)^2 + \psi(t)^2)\vert^{t=T}_{t=0} = \frac{1}{2}\vert Ju(T)\vert^2 \ge 0. \end{align*} Therefore $\Re (Ju,u)_{L^2} \ge 0$ for $u \in L^2$. Next we have $$ (\lambda I+J)^{-1}u(t) = \lambda^{-1}u(t) - \lambda^{-2}\int^t_0 e^{-(t-s)/\lambda}u(s) ds, \quad 0\le t \le T. \eqno{(2.1)} $$ Setting $\lambda = 1$ in (2.1), we see that $(I+J)^{-1}u$ is defined for all $u \in L^2$, which implies that $\mathcal{R}(I+J) = L^2$. The proof is completed.
\end{proof} It follows from Lemma \ref{l21} that $J$ is a maximal accretive operator (Chapter 2, \S1 in \cite{Ta}) and thus Assumption 6.1 in Chapter 2, \S6 of \cite{Pa} is satisfied, so that for $0 < \alpha < 1$ one can define the fractional power $J(\alpha)$ of the integral operator $J$ by the formula $$ J(\alpha)u = \frac{\sin\pi\alpha}{\pi}\int^{\infty}_0 \lambda^{\alpha-1}(\lambda I + J)^{-1}Ju\, d\lambda, \quad u \in \mathcal{D}(J) = L^2 $$ (see also Chapter 2, \S3 in \cite{Ta}). It turns out that the fractional power $J(\alpha)$ of the integral operator $J$ coincides with the Riemann-Liouville fractional integral operator on $L^2$ as stated and proved below. \begin{lem} \label{l22} $$ (J(\alpha)u)(t) = (J^{\alpha}u)(t), \qquad 0\le t \le T, \quad u \in L^2, \quad 0 < \alpha < 1. $$ \end{lem} \begin{proof} By (2.1), we have $$ (\lambda I + J)^{-1}Ju(t) = \lambda^{-1}\int^t_0 e^{-(t-s)/\lambda}u(s) ds, $$ and by the change of variables $\eta = \frac{t-s}{\lambda}$, we obtain \begin{align*} & \frac{\sin\pi\alpha}{\pi}\int^{\infty}_0 \lambda^{\alpha-1} (\lambda I + J)^{-1}Ju(t)\, d\lambda = \frac{\sin\pi\alpha}{\pi}\int^{\infty}_0 \lambda^{\alpha-2} \left( \int^t_0 e^{-(t-s)/\lambda} u(s)ds \right)d\lambda\\ =& \frac{\sin\pi\alpha}{\pi}\int^t_0 u(s) \left( \int^{\infty}_0 \lambda^{\alpha-2} e^{-(t-s)/\lambda} d\lambda \right)ds = \frac{\sin\pi\alpha}{\pi}\int^t_0 u(s) \left( \int^{\infty}_0 \eta^{-\alpha} e^{-\eta} d\eta\right) (t-s)^{\alpha-1} ds\\ = & \frac{\Gamma(1-\alpha)\sin\pi\alpha}{\pi} \int^t_0 u(s)(t-s)^{\alpha-1} ds. \end{align*} Now the known formula $\Gamma(1-\alpha)\Gamma(\alpha) = \frac{\pi}{\sin\pi\alpha}$ implies the statement of the lemma. \end{proof} Next we consider the differential operator $$ \left\{ \begin{array}{rl} &(Au)(t) = -\frac{d^2u(t)}{dt^2}, \quad 0 < t < T,\\ &\mathcal{D}(A) = \{u \in H^2(0,T):\thinspace u(0) = \frac{du}{dt}(T) = 0\}. \end{array}\right.
\eqno{(2.2)} $$ Note that the boundary conditions $u(0) = \frac{du}{dt}(T) = 0$ should be interpreted as the traces of $u$ in the Sobolev space $H^2(0,T)$ (see e.g., \cite{Ad} or \cite{LM}). It is possible to define the fractional power $A^{\frac{\alpha}{2}}$ of the differential operator $A$ for $0 \le \alpha \le 1$ in terms of the eigenvalues and the eigenfunctions of the eigenvalue problem for the operator $A$. More precisely, let $0 < \lambda_1 < \lambda_2 < \cdots $ be the eigenvalues and $\psi_k$, $k \in \N$ the corresponding normed eigenfunctions of $A$. It is easy to derive the explicit formulas for $\lambda_k,\ \psi_k,\ k \in \N$, namely, $\lambda_k = \frac{(2k-1)^2\pi^2}{4T^2}$ and $\psi_k(t) = \frac{\sqrt{2}}{\sqrt{T}} \sin \sqrt{\lambda_k}t$. In particular, we note that $\psi_k(0) = 0$ and $\psi_k \in H^2(0,T)$. It is known that $\{\psi_k\}_{k\in\N}$ is an orthonormal basis of $L^2$. Then the fractional power $A^{\frac{\alpha}{2}},\ 0\le \alpha \le 1$ of the differential operator $A$ is defined by the relations $$ \left\{ \begin{array}{rl} & A^{\frac{\alpha}{2}}u = \sum_{k=1}^{\infty} \lambda_k^{\frac{\alpha}{2}} (u,\psi_k)_{L^2}\psi_k, \quad u \in \mathcal{D}(A^{\frac{\alpha}{2}}),\\ & \mathcal{D}(A^{\frac{\alpha}{2}}) = \{u \in L^2:\thinspace \sum_{k=1}^{\infty} \lambda_k^{\alpha}\vert (u,\psi_k)_{L^2}\vert^2 < \infty\}, \\ & \Vert u\Vert_{\mathcal{D}(A^{\frac{\alpha}{2}})} = \left( \sum_{k=1}^{\infty} \lambda_k^{\alpha}\vert (u,\psi_k)_{L^2}\vert^2 \right)^{\frac{1}{2}}. \end{array}\right. \eqno{(2.3)} $$ According to \cite{F} (see also Lemma 8 in \cite{GY}), the domain $\mathcal{D}(A^{\frac{\alpha}{2}})$ can be described as follows: $$ \mathcal{D}(A^{\frac{\alpha}{2}})= \left\{ \begin{array}{rl} &H^{\alpha}(0,T), \quad 0\le \alpha < \frac{1}{2}, \\ &_{0}H^{\alpha}(0,T), \quad \frac{1}{2} < \alpha \le 1,\\ &\left\{ u \in H^{\frac{1}{2}}(0,T):\thinspace \int^T_0 t^{-1}\vert u(t)\vert^2 dt < \infty\right\}, \quad \alpha=\frac{1}{2}. 
\\ \end{array}\right. \eqno{(2.4)} $$ The relation (2.4) holds not only algebraically but also topologically, that is, $$ \Vert A^{\frac{\alpha}{2}}u\Vert_{L^2} \sim \Vert u\Vert_{H^{\alpha}(0,T)}, \quad 0\le \alpha \le 1, \thinspace u \in \mathcal{D}(A^{\frac{\alpha}{2}}). \eqno{(2.5)} $$ In particular, the inclusion $\mathcal{D}(A^{\frac{\alpha}{2}}) \subset H^{\alpha}(0,T)$ holds true. We note that in \cite{GY} the case of the operator $A=-\frac{d^2}{dt^2}$ with the domain $\left\{u \in H^2(0,1): \thinspace \frac{du}{dt}(0) = u(1) = 0\right\}$ was considered, which is reduced to our case by a simple change of the variables. Now we are ready to prove Theorem \ref{t21}. \begin{proof} First of all, it can be directly verified that $\mathcal{D}(J^{-1}) = J(L^2) = {_{0}H^1(0,T)}$, $(J^{-1}w)(t) = \frac{dw(t)}{dt}$, and $$ \Vert J^{-1}v\Vert_{L^2} \sim \Vert v\Vert_{H^1(0,T)}, \quad v \in {_{0}H^1(0,T)}. $$ Therefore by (2.5) we obtain the norm equivalence $$ \Vert J^{-1}v\Vert_{L^2} \sim \Vert A^{\frac{1}{2}}v\Vert_{L^2}, \quad v \in {_{0}H^1(0,T)} =\mathcal{D}(J^{-1}) = \mathcal{D}(A^{\frac{1}{2}}). $$ Direct calculations show that both $J^{-1}$ and $A^{\frac{1}{2}}$ are maximal accretive in $L^2$. Hence the Heinz-Kato inequality (see e.g. Theorem 2.3.4 in \cite{Ta}) yields $$ \Vert J^{-\alpha}v\Vert_{L^2} \sim \Vert A^{\frac{\alpha}{2}}v\Vert_{L^2}, \quad v \in \mathcal{D}(A^{\frac{\alpha}{2}}), \quad \mathcal{D}(J^{-\alpha}) = \mathcal{D}(A^{\frac{\alpha}{2}}). $$ By (2.5) the norm equivalence $\Vert J^{-\alpha}v\Vert_{L^2} \sim \Vert v\Vert _{H^{\alpha}(0,T)}$ holds true for $v \in \mathcal{D}(J^{-\alpha}) = \mathcal{R}(J^{\alpha})$. Next, setting $v = J^{\alpha}u \in \mathcal{D}(J^{-\alpha})$ with any $u \in L^2$, by (2.5) we obtain the following norm equivalences: $$ \Vert u\Vert_{L^2} \sim \Vert A^{\frac{\alpha}{2}}(J^{\alpha}u) \Vert_{L^2} \sim \Vert J^{\alpha}u\Vert_{H^{\alpha}(0,T)}. $$ Thus the proof of Theorem \ref{t21} (i) is completed.
Theorem \ref{t21} (ii) follows from the relation (2.4) and the equality $\mathcal{D}(J^{-\alpha}) = \mathcal{R}(J^{\alpha})$. \end{proof} \section{The Caputo derivative in the fractional Sobolev spaces} \noindent The main aim of this section is to define the Caputo derivative in the fractional Sobolev spaces and to prove the norm equivalence (Theorem \ref{t31}). The original definition of the Caputo derivative is given by the formula $$ \ppp_t^{\alpha}u(x,t) = \frac{1}{\Gamma(1-\alpha)}\int^t_0 (t-s)^{-\alpha}\frac{\ppp u}{\ppp s}(x,s) ds, \quad 0\le t \le T, \quad 0 < \alpha < 1. $$ Thus $\ppp_t^{\alpha}u$ is defined pointwise for $x \in \Omega$ for $u \in H^1(0,T)$ and the definition requires some suitable conditions on the first order derivative $\frac{\ppp u}{\ppp s}$. On the other hand, since $\ppp_t^{\alpha}u$ is the $\alpha$-th derivative with $0<\alpha <1$, one can expect a natural interpretation of $\ppp_t^{\alpha}u$ for the functions from the fractional Sobolev space $H^{\alpha}(0,T)$. Suggesting this interpretation is the main purpose of this section. We start with introducing the linear span $W$ of the eigenfunctions $\psi_k,\ k\in \N$ of the differential operator $A$ that is defined by the formula (2.2): $$ W = \left\{\sum_{k=1}^N a_k\psi_k:\thinspace N \in \N, \thinspace a_k \in \R\right\}. $$ For the functions from $W$, the following result holds true: \begin{lem} \label{l31} $$ \ppp_t^{\alpha}\va(t) = J^{-\alpha}\va(t), \quad \va \in W, \ \ 0 < t < T. $$ \end{lem} \begin{proof} For any $\va \in W \subset H^2(0,T)$, the Riemann-Liouville fractional derivative is defined by the relation $$ D_t^{\alpha}\va(t) = \frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int^t_0 (t-s)^{-\alpha}\va(s) ds, \quad 0\le t \le T. $$ Then the relation $$ D_t^{\alpha}\va(t) = \frac{\va(0)t^{-\alpha}}{\Gamma(1-\alpha)} + \ppp_t^{\alpha}\va(t), \quad 0<t<T, \quad \va \in W \eqno{(3.1)} $$ between the Caputo and the Riemann-Liouville fractional derivatives holds true.
Indeed, integration by parts implies $$ \int^t_0 (t-s)^{-\alpha}\va(s)ds = \left[ \va(s)\frac{(t-s)^{1-\alpha}}{1-\alpha}\right]^{s=0}_{s=t} + \frac{1}{1-\alpha}\int^t_0 (t-s)^{1-\alpha}\frac{d\va}{ds}(s)ds. $$ Applying the differentiation operator $\frac{d}{dt}$ to both sides of the last formula and noting the inclusion $\va \in H^2(0,T)$, we arrive at the formula (3.1). Since $\va \in W \subset \mathcal{D}(J^{-\alpha}) = \mathcal{R}(J^{\alpha})$, by Theorem \ref{t21} (ii) there exists $\www\va \in L^2$ such that $\va = J^{\alpha}\www\va$. On the other hand, the Riemann-Liouville fractional derivative is the left inverse operator to the Riemann-Liouville fractional integral and we have $$ D_t^{\alpha}J^{\alpha}\www\va = \www\va \eqno{(3.2)} $$ for $\www\va \in L^1(0,T)$ (see e.g. Theorem 6.1.2 in \cite{GV}). Hence $D_t^{\alpha}\va = \www\va$. For $\va \in W$, the condition $\va(0) = 0$ is fulfilled. Thus (3.1) yields $$ D_t^{\alpha}\va(t) = \frac{\va(0)t^{-\alpha}}{\Gamma(1-\alpha)} + \ppp_t^{\alpha}\va(t) = \ppp_t^{\alpha}\va, \quad \va\in W. $$ Therefore $\ppp_t^{\alpha}\va = \www\va = J^{-\alpha}\va$. Thus the proof of the lemma is completed. \end{proof} Lemma \ref{l31} still provides just a pointwise definition of the Caputo fractional derivative $\ppp_t^{\alpha}\va$ for a function $\va \in W$. Now let us consider the closure of the operator $\ppp_t^{\alpha}$ in $W$ to define it over the whole space $\mathcal{R}(J^{\alpha})$. For $\va \in W$, the inequality $$ \Vert \ppp_t^{\alpha}\va\Vert_{L^2} = \Vert J^{-\alpha}\va\Vert_{L^2} \le C\Vert \va\Vert_{H^{\alpha}(0,T)} $$ is guaranteed by Theorem \ref{t21}. Therefore the linear operator $\va \mapsto \ppp_t^{\alpha}\va$ is bounded from $W \subset \mathcal{D}(A^{\frac{\alpha}{2}})$ to $L^2$. Since $W$ is dense in $\mathcal{R}(J^{\alpha}) = \mathcal{D}(A^{\frac{\alpha}{2}})$, the operator $\ppp_t^{\alpha}$ can be uniquely extended from $W$ to the domain $\mathcal{R}(J^{\alpha})$. 
This extended operator is defined on the whole space $\mathcal{R}(J^{\alpha})$ and is bounded from $\mathcal{R}(J^{\alpha})$ to $L^2$. For the extension of $\ppp_t^{\alpha}$, the same notation as for the pointwise Caputo fractional derivative will be used in the rest of the paper. In other words, let $u \in \mathcal{R}(J^{\alpha}) \subset H^{\alpha}(0,T)$ and a sequence $\va_n \in W$ converge to $u$ in $\mathcal{R}(J^{\alpha})$. Then $\ppp_t^{\alpha}u$ will be interpreted as follows: $$ \ppp_t^{\alpha}u(t) = \lim_{n\to\infty} \left( \frac{1}{\Gamma(1-\alpha)}\int^t_0 (t-s)^{-\alpha}\frac{d \va_n}{ds}(s) ds\right) \quad \mbox{in $L^2$}. \eqno{(3.3)} $$ Let us note that this definition is correct, i.e., it is independent of the choice of the sequence $\va_n$. Indeed, for a function $u \in \mathcal{R}(J^{\alpha})$ let $\{\va_n\}_{n\in \N}$ and $\{\www \va_n\}_{n\in \N}$ be two sequences that approximate the function $u$: $\va_n \to u$ and $\www\va_n \to u$ in $\mathcal{R}(J^{\alpha})$ as $n \to \infty$. Then $\lim_{n\to\infty} \Vert \va_n - \www\va_n\Vert_{H^{\alpha}(0,T)} = 0$ and the boundedness of the operator $\ppp_t^{\alpha}: W \subset H^{\alpha}(0,T) \to L^2$ yields $\lim_{n\to\infty} \Vert\ppp_t^{\alpha} (\va_n - \www\va_n)\Vert_{L^2} = 0$, that is, $\lim_{n\to\infty} \ppp_t^{\alpha}\va_n = \lim_{n\to\infty} \ppp_t^{\alpha}\www\va_n$ in $L^2$, which is what we wanted to show. In what follows, the Caputo fractional derivative $\ppp_t^{\alpha}u$ for $u \in \mathcal{R}(J^{\alpha})$ will be defined by (3.3) and not pointwise any longer. Let us now derive some properties of the Caputo fractional derivative $\ppp_t^{\alpha}u$ defined by (3.3). \begin{thm} \label{t31} $\mbox{ }$ \\ (i) $\ppp_t^{\alpha}u = J^{-\alpha}u$ in $L^2$ for $u \in \mathcal{R}(J^{\alpha})$. \\ (ii) $\Vert \ppp_t^{\alpha}u\Vert_{L^2} \sim \Vert u\Vert_{H^{\alpha}(0,T)}$ for $u \in \mathcal{R}(J^{\alpha})$. \end{thm} \begin{proof} The part (i) of the theorem follows directly from Lemma \ref{l31} and the definition (3.3).
As to the part (ii), by Theorems \ref{t31} (i) and \ref{t21}, for $u \in \mathcal{R}(J^{\alpha})$ we have the norm equivalence $$ \Vert \ppp_t^{\alpha}u\Vert_{L^2} = \Vert J^{-\alpha}u\Vert_{L^2} \sim \Vert u\Vert_{H^{\alpha}(0,T)}, $$ which completes the proof. \end{proof} Before we start with the analysis of the fractional diffusion equation in the fractional Sobolev spaces, let us mention that by Theorem \ref{t31} (i), the solution $u$ to the equation $$ \ppp_t^{\alpha}u = f, \quad f\in L^2 $$ is given by $u = J^{\alpha}f$ and $$ u \in \left\{ \begin{array}{rl} &H^{\alpha}(0,T), \quad 0 \le \alpha < \frac{1}{2}, \\ &_{0}H^{\alpha}(0,T), \quad \frac{1}{2} < \alpha \le 1,\\ &\left\{ u \in H^{\frac{1}{2}}(0,T):\thinspace \int^T_0 t^{-1}\vert u(t)\vert^2 dt < \infty\right\}, \quad \alpha = \frac{1}{2}.\\ \end{array}\right. $$ This result is well-known (see e.g. \cite{LuG}). Our Theorem \ref{t31} asserts not only this formula but also a characterization of the range $\mathcal{R}(J^{\alpha})$ in the framework of the extended definition (3.3) of the fractional Caputo derivative. Since $u \in \mathcal{R}(J^{\alpha})$ implies $u(0) = 0$ for $\frac{1}{2} < \alpha < 1$, the Caputo and the Riemann-Liouville fractional derivatives coincide in $L^2$, that is, the formula $\ppp_t^{\alpha}u = D_t^{\alpha}u$ holds true, which corresponds to the relation (3.1) for a wider class of the functions compared to $W$. For $0 < \alpha < \frac{1}{2}$, we have $\mathcal{R}(J^{\alpha}) = H^{\alpha}(0,T)$, and with our extended definition of $\ppp_t^{\alpha}$ given by (3.3) and a suitably extended definition of $D_t^{\alpha}$ in $H^{\alpha}(0,T)$, the relation $\ppp_t^{\alpha}u = D_t^{\alpha}u$ is true for $u \in \mathcal{R}(J^{\alpha}) = H^{\alpha}(0,T)$, too.
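The formula $u = J^{\alpha}f$ can also be checked numerically. The sketch below (an illustration under our own discretization choices, not part of the paper's argument) implements the Riemann-Liouville integral $J^{\alpha}$ by a product-trapezoidal rule: the integrand is replaced by its piecewise-linear interpolant and the kernel $(t-s)^{\alpha-1}$ is integrated exactly, which leads to the weights of the fractional Adams-Moulton method of Diethelm, Ford and Freed. The result is compared with the classical identity $J^{\alpha}t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\,t^{\alpha+\beta}$.

```python
import math

def rl_integral(f, alpha, t, n=400):
    # Product-trapezoidal discretization of the Riemann-Liouville
    # integral J^alpha: f is replaced by its piecewise-linear
    # interpolant and the kernel (t - s)^(alpha - 1) is integrated
    # exactly (Adams-Moulton weights of Diethelm, Ford and Freed).
    h = t / n
    c = h ** alpha / math.gamma(alpha + 2.0)
    total = ((n - 1) ** (alpha + 1) - (n - alpha - 1) * n ** alpha) * f(0.0)
    for j in range(1, n):
        total += ((n - j + 1) ** (alpha + 1) - 2 * (n - j) ** (alpha + 1)
                  + (n - j - 1) ** (alpha + 1)) * f(j * h)
    total += f(t)                    # the weight of the right endpoint is 1
    return c * total

alpha, t = 0.5, 1.0
# J^alpha t^beta = Gamma(beta+1)/Gamma(alpha+beta+1) * t^(alpha+beta)
u1 = rl_integral(lambda s: s, alpha, t)      # exact value: 1/Gamma(2+alpha)
u2 = rl_integral(lambda s: s * s, alpha, t)  # exact value: 2/Gamma(3+alpha)
```

The rule is exact for piecewise-linear data, so the first comparison holds to nearly machine precision, while for $f(s)=s^2$ the error behaves like $O(h^2)$; the computed $u = J^{\alpha}f$ is then the solution of $\ppp_t^{\alpha}u = f$ in the sense of Theorem \ref{t31} (i).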
We note that for $0<\alpha < \frac{1}{2}$, the pointwise definition of the Caputo fractional derivative $\ppp_t^{\alpha}u$ involves the first order derivative $\frac{du}{dt}$ and thus it does not make any sense for the functions $u \in H^{\alpha}(0,T)$. \section{Maximal regularity of solutions to the fractional diffusion equation with the Caputo derivative} Let $\Omega \subset \R^n$ be a bounded domain with the smooth boundary $\ppp\Omega$, and let $(u,v)_{L^2(\Omega)}$ be the scalar product in $L^2(\Omega)$, that is, $(u,v)_{L^2(\Omega)} = \int_{\Omega} u(x)v(x) dx$. In the first part of this section, we deal with the following initial-boundary-value problem for the fractional diffusion equation with the Caputo time-fractional derivative: $$ \left\{ \begin{array}{rl} &\ppp_t^{\alpha}u(x,t) = -Lu(x,t) + F(x,t), \quad x\in \Omega, \quad 0 < t < T,\\ &u(x,0) = 0, \quad x \in \Omega,\\ &u\vert_{\ppp\Omega\times (0,T)} = 0, \end{array}\right. \eqno{(4.1)} $$ where $L$ is a symmetric uniformly elliptic operator in the form $$ Lu(x,t) = -\sum_{j,k=1}^n \frac{\partial }{\partial x_j}\left( a_{jk}(x)\frac{\partial }{\partial x_k} u(x,t)\right) - c(x)u(x,t) $$ and the conditions $$ \left\{ \begin{array}{rl} &a_{jk} = a_{kj} \in C^1(\overline{\Omega}), \quad j,k=1,\dots, n,\\ &\mbox{there exists a constant } \nu_0 > 0 \mbox{ such that}\\ &\sum^n_{j,k=1} a_{jk}(x)\xi_j\xi_k \ge \nu_0\sum_{j=1}^n \xi_j^2, \quad x\in \overline{\Omega}, \thinspace \xi_1, \dots, \xi_n \in \R \end{array}\right. \eqno{(4.2)} $$ are fulfilled. Moreover, we assume that $c \in C(\overline{\Omega})$ and $c(x) \le 0$ for $x \in \overline{\Omega}$ (in the second part of the section this condition will be removed). As to the operator $L$, our method will be applied to a more general elliptic operator in the second part of this section, but first we restrict ourselves to a self-adjoint and positive-definite operator for the sake of simplicity.
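The proofs in this section rest on the spectral decomposition of $L$. In the simplest one-dimensional model case $Lu = -u''$ on $\Omega = (0,1)$ with homogeneous Dirichlet conditions, one has $\mu_k = k^2\pi^2$ and $\va_k(x) = \sqrt{2}\sin(k\pi x)$, and this structure survives discretization: the grid restrictions of $\sin(k\pi x)$ are exact eigenvectors of the standard three-point finite-difference approximation of $L$, with eigenvalues $\frac{4}{h^2}\sin^2\frac{k\pi h}{2} \to k^2\pi^2$ as $h \to 0$. The following sketch (a numerical illustration only; the grid size and tolerances are our choices) verifies this in pure Python.

```python
import math

def apply_L(v, h):
    # Three-point finite-difference approximation of L = -d^2/dx^2 on (0, 1)
    # with homogeneous Dirichlet boundary values (ghost values are zero).
    n = len(v)
    return [(2.0 * v[i]
             - (v[i - 1] if i > 0 else 0.0)
             - (v[i + 1] if i < n - 1 else 0.0)) / (h * h)
            for i in range(n)]

n = 200                      # number of interior grid points
h = 1.0 / (n + 1)
results = []
for k in (1, 2, 3):
    # grid restriction of the continuous eigenfunction sin(k*pi*x)
    v = [math.sin(k * math.pi * (i + 1) * h) for i in range(n)]
    lam_h = 4.0 / (h * h) * math.sin(0.5 * k * math.pi * h) ** 2
    Lv = apply_L(v, h)
    # v is an exact discrete eigenvector: L_h v = lam_h v
    dev = max(abs(Lv[i] - lam_h * v[i]) for i in range(n))
    results.append((lam_h, dev))
```

The discrete eigenvalues satisfy $\frac{4}{h^2}\sin^2\frac{k\pi h}{2} = k^2\pi^2\bigl(1 + O(h^2)\bigr)$, mirroring the continuous eigenvalues $\mu_k = k^2\pi^2$ that drive the eigenfunction expansions used below.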
In this paper, we are interested in a weak solution to the problem (4.1) that is defined on the basis of Theorem \ref{t31}. \begin{defn}[Definition of a weak solution] Let $F \in L^2(0,T; L^2(\Omega))$. We call $u$ a weak solution to the problem (4.1) if $u \in L^2(0,T;H^2(\Omega) \cap H^1_0(\Omega))$ and the following conditions are satisfied \\ (i) $$ J^{-\alpha}u \in L^2(0,T; L^2(\Omega)), $$ (ii) $$ \ppp_t^{\alpha}u(x,t) = -Lu(x,t) + F(x,t) \quad \mbox{in } L^2(0,T;L^2(\Omega)). \eqno{(4.3)} $$ \end{defn} We note that the inclusion $J^{-\alpha}u \in L^2(0,T;L^2(\Omega))$ implies $u(x,\cdot) \in \mathcal{R}(J^{\alpha})$ for almost all $x \in \Omega$. Hence, in the case $\frac{1}{2} < \alpha < 1$ it follows from Theorem \ref{t21} (ii) that $u(x,\cdot) \in {_{0}H^{\alpha}(0,T)}$ and so $u(x,0) = 0$ for almost all $x \in \Omega$. In the case $\alpha = \frac{1}{2}$, the condition $\int^T_0 \frac{\vert u(x,t)\vert^2}{t} dt < \infty$ holds true for $u(x,\cdot) \in \mathcal{R}(J^{\alpha})$. This condition implicitly describes the behavior of the function $u$ in a small neighborhood of the point $t=0$, but one cannot conclude from this condition that $u(x,0) = 0$. In the case $0 < \alpha < \frac{1}{2}$, the initial condition $u(x,0) = 0$ of the problem (4.1) is not meaningful at all, because a function $\eta \in H^{\alpha}(0,T)$ has no trace at $t=0$ if $0 < \alpha < \frac{1}{2}$ in general. Thus an initial condition of the problem (4.1) has to be posed depending on the trace of the functions from $H^{\alpha}(0,T)$ at the point $t=0$. To illustrate the remarks given above, let us consider a simple example. \begin{exa} In the problem (4.1), we set $n=1$, $\Omega = (0,1)$, $0 < \alpha < \frac{1}{2}$, and $$ F(x,t) = \frac{-2\Gamma\left(\delta + \frac{1}{2}\right)} {\Gamma\left(\alpha+\delta+\frac{1}{2}\right)}t^{\alpha+\delta-\frac{1}{2}} + x(x-1)t^{\delta-\frac{1}{2}}, $$ where $\delta > 0$ and $\alpha + \delta - \frac{1}{2} < 0$. 
Then the inclusion $F \in L^2(0,T;L^2(0,1))$ holds true and we can directly check that the function $$ u(x,t) = \frac{\Gamma\left(\delta + \frac{1}{2}\right)} {\Gamma\left(\alpha+\delta+\frac{1}{2}\right)}x(x-1) t^{\alpha+\delta-\frac{1}{2}} $$ satisfies the fractional diffusion equation $$ \ppp_t^{\alpha}u(x,t) = \ppp_x^2 u(x,t) + F(x,t),\ 0 < x < 1,\ t > 0. $$ Moreover, $u \in L^2(0,T;H^2(0,1) \cap H^1_0(0,1))$ and $\ppp_t^{\alpha}u = x(x-1)t^{\delta-\frac{1}{2}} \in L^2(0,T; L^2(0,1))$ and thus $u$ is a weak solution to the problem (4.1). However, since $\alpha + \delta-\frac{1}{2} < 0$, the value of $u(x,0)$ in the sense of $\lim_{t\to 0} \Vert u(\cdot,t)\Vert_{L^2(0,1)}$ does not exist. Thus we see that there exists a solution to the problem (4.1) that does not admit the initial condition. \end{exa} Now we state and prove our main result regarding the weak solution to the problem (4.1). \begin{thm} \label{t41} Let $F \in L^2(0,T;L^2(\Omega))$ and $0 < \alpha < 1$. Then there exists a unique weak solution to the problem (4.1). Moreover, there exists a constant $C>0$ such that the norm estimate $$ \Vert u\Vert_{H^{\alpha}(0,T;L^2(\Omega))} + \Vert u\Vert_{L^2(0,T;H^2(\Omega))} \le C\Vert F\Vert_{L^2(0,T;L^2(\Omega))} \eqno{(4.4)} $$ holds true for all $F \in L^2(0,T;L^2(\Omega))$. \end{thm} \begin{proof} The operator $L$ defined by $$ (L v)(x) = -\sum_{j,k=1}^n \frac{\partial }{\partial x_j}\left( a_{jk}(x)\frac{\partial }{\partial x_k} v(x)\right) - c(x)v(x), \quad v \in \mathcal{D}(L) := H^2(\Omega) \cap H^1_0(\Omega) \eqno{(4.5)} $$ is a positive-definite and self-adjoint operator in $L^2(\Omega)$. Let $0 < \mu_1 \le \mu_2 \le \cdots $ be all the eigenvalues of $L$, where $\mu_k$ appears in the sequence as often as its multiplicity requires. Let $\va_k$, $k\in \N$, be the eigenfunctions of $L$ corresponding to the eigenvalues $\mu_k$.
It is known that $\mu_k \to \infty$ as $k \to \infty$ and the eigenfunctions $\va_k$ can be chosen to be orthonormal, i.e., $(\va_j,\va_k)_{L^2(\Omega)} = 1$ if $j=k$ and $(\va_j,\va_k)_{L^2(\Omega)} = 0$ if $j \ne k$. These eigenfunctions $\{\va_k\}_{k\in \N}$ form an orthonormal basis of $L^2(\Omega)$. \\ {\bf (i) Uniqueness of the weak solution.} Let $w$ be a weak solution to (4.1) with $F=0$. Since $\ppp_t^{\alpha}w$, $Lw \in L^2(0,T;L^2(\Omega))$ and $L\va_k = \mu_k\va_k$, the functions $w_k(t) := (w(\cdot,t), \va_k)_{L^2(\Omega)},\ k \in \N$ belong to the space $\mathcal{R}(J^{\alpha})$ and satisfy the relation $\ppp_t^{\alpha}w_k(t) = -\mu_kw_k(t)$. Therefore by Theorem \ref{t31} (i), we have $J^{-\alpha}w_k = -\mu_kw_k$ in $L^2$, that is, $w_k = -\mu_kJ^{\alpha}w_k$ in $L^2$: $$ w_k(t) = \frac{-\mu_k}{\Gamma(\alpha)}\int^t_0 (t-s)^{\alpha-1} w_k(s) ds, \quad 0 < t < T. $$ Hence $$ \vert w_k(t)\vert \le C\int^t_0 (t-s)^{\alpha-1}\vert w_k(s)\vert ds, \quad 0 < t < T. \eqno{(4.6)} $$ The generalized Gronwall inequality (see e.g., Theorem 7.1.2 in \cite{H}) then yields the relations $w_k(t) = 0,\ k \in \N$ for $0 < t < T$. Since $\{\va_k\}_{k\in \N}$ is an orthonormal basis of $L^2(\Omega)$, we obtain the relation $w(\cdot,t) =0$, $0 < t < T$ and thus the uniqueness of the weak solution is proved. \\ {\bf (ii) Existence of the weak solution.} Our construction of the weak solution follows the lines of the one presented in Theorem 2.2 of \cite{SY}. Let us construct a candidate for the weak solution in the form $$ \widetilde{u}(x,t) = \sum_{k=1}^{\infty} p_k(t)\va_k(x), \quad x\in \Omega, \thinspace 0 < t < T \eqno{(4.7)} $$ with the functions $p_k$ given by the formula $$ p_k(t) = \int^t_0 (F(\cdot,s),\va_k)_{L^2(\Omega)} (t-s)^{\alpha-1}E_{\alpha,\alpha}(-\mu_k(t-s)^{\alpha}) ds.
$$ Here and henceforth $E_{\alpha,\beta}$ denotes the two-parametric Mittag-Leffler function defined by the series $$ E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha\,k+\beta)}, \quad z \in \C,\ \alpha >0. $$ The function $E_{\alpha,\alpha}(z)$ is known to be an entire function. Applying the same technique as the one used in the proof of Theorem 2.2 in \cite{SY}, we can show that the series in (4.7) is convergent in $L^2(0,T;H^2(\Omega) \cap H^1_0(\Omega))$ and the norm estimate $$ \Vert \widetilde{u}\Vert_{L^2(0,T;H^2(\Omega))} \le C\Vert F\Vert_{L^2(0,T;L^2(\Omega))} \eqno{(4.8)} $$ as well as the relation $$ \ppp_t^{\alpha}p_k(t) = -\mu_kp_k(t) + (F(\cdot,t),\va_k)_{L^2(\Omega)}, \quad 0<t < T, \thinspace k\in \N $$ hold true. Setting $\widetilde{u}_N(x,t) = \sum_{k=1}^N p_k(t)\va_k(x)$, we have the formula \begin{align*} &\ppp_t^{\alpha}\widetilde{u}_N = \sum^N_{k=1} p_k(t)(-\mu_k\va_k) + \sum^N_{k=1} (F(\cdot,t),\va_k)_{L^2(\Omega)}\va_k\\ =& -L\widetilde{u}_N + \sum_{k=1}^N (F(\cdot,t),\va_k)_{L^2(\Omega)}\va_k. \end{align*} Since $L\widetilde{u}_N \to L\widetilde{u}$ in $L^2(\Omega)$ as $N \to \infty$ with a function $\widetilde{u} \in L^2(0,T;H^2(\Omega))$, we can derive the relation $\lim_{N\to\infty} \ppp_t^{\alpha}\widetilde{u}_N = -L\widetilde{u} + F$ in $L^2(\Omega)$. Therefore, $\ppp_t^{\alpha}\widetilde{u} = -L\widetilde{u} + F$ and $\widetilde{u}(x,\cdot) \in \mathcal{R} (J^{\alpha})$ for almost all $x \in \Omega$. Thus $\widetilde{u}$ is a weak solution and we set $u = \widetilde{u}$. Since $-L$ is an elliptic operator of the second order, the norm estimate $\Vert v\Vert_{H^2(\Omega)} \le C\Vert Lv\Vert_{\LLL}$ is valid for a function $v \in H^2(\Omega) \cap H^1_0(\Omega)$ (see e.g. \cite{LM}). Hence (4.8) yields the inequality $$ \Vert u\Vert_{L^2(0,T;H^2(\Omega))} \le C\Vert F\Vert_{L^2(0,T;L^2(\Omega))}.
\eqno{(4.9)} $$ Moreover, Theorem \ref{t31} and the inequality (4.9) imply the following norm estimate: \begin{align*} & \Vert u\Vert_{H^{\alpha}(0,T;L^2(\Omega))} \le C\Vert \ppp_t^{\alpha}u\Vert_{L^2(0,T;L^2(\Omega))} = C\Vert -Lu+F\Vert_{L^2(0,T;L^2(\Omega))}\\ \le & C\Vert F\Vert_{L^2(0,T;L^2(\Omega))}. \end{align*} Thus the proof of Theorem \ref{t41} is completed. \end{proof} In the second part of this section, we consider the problem (4.1) with a more general elliptic operator $L$ and without the restriction $c(x)\le 0,\ x\in \overline{\Omega}$. More precisely, in place of (4.1) we consider the problem $$ \left\{ \begin{array}{rl} &\ppp_t^{\alpha}u(x,t) = -Lu(x,t) + F(x,t), \quad x\in \Omega, \quad 0 < t < T,\\ &u(x,0) = 0, \quad x \in \Omega,\\ &u\vert_{\ppp\Omega\times (0,T)} = 0, \end{array} \right. \eqno{(4.10)} $$ where we set $$ Lu(x,t) = -\sum_{j,k=1}^n \frac{\partial }{\partial x_j}\left( a_{jk}(x)\frac{\partial }{\partial x_k} u(x,t)\right) - \sum_{j=1}^n b_{j}(x)\frac{\partial }{\partial x_j} u(x,t) - c(x)u(x,t), $$ and we assume that the condition (4.2) is fulfilled and that $b_j \in L^{\infty} (\Omega)$, $1\le j\le n$, and $c \in L^{\infty}(\Omega)$. Note that the condition $c(x)\le 0,\ x \in \overline{\Omega}$ is not assumed any more. The weak solution to the problem (4.10) is defined in exactly the same way as the one to the equation (4.1). Our main result regarding the problem (4.10) is as follows: \begin{thm} \label{t42} Let $F \in L^2(0,T;L^2(\Omega))$ and $0 < \alpha < 1$. Then there exists a unique weak solution to the problem (4.10). Moreover, there exists a constant $C>0$ such that the norm estimate (4.4) holds true for all $F \in L^2(0,T;L^2(\Omega))$. \end{thm} \begin{proof} For a function $a \in L^2(\Omega)$, let us define the operator $$ K(t)a = \sum_{k=1}^{\infty} t^{\alpha-1} E_{\alpha,\alpha} (-\mu_kt^{\alpha})(a,\va_k)_{L^2(\Omega)}\va_k, \quad t > 0.
$$ Following the lines of \cite{Beck}, one can easily verify the norm estimate $$ \Vert K(t)a\Vert_{\LLL} \le Ct^{\alpha-1}\Vert a\Vert_{\LLL}, \quad a\in \LLL, \quad t>0. \eqno{(4.11)} $$ For $0 \le \gamma \le 1$ one can define the fractional power $L^{\gamma}$ of the operator $L$ defined by (4.5). Then according to \cite{F}, the inequalities $$ \left\{ \begin{array}{rl} &\Vert v\Vert_{H^2(\Omega)} \le C\Vert Lv\Vert_{\LLL}, \quad v \in H^2(\Omega) \cap H^1_0(\Omega), \\ &C^{-1}\Vert L^{\frac{1}{2}}v\Vert_{\LLL} \le \Vert v\Vert_{H^1(\Omega)} \le C\Vert L^{\frac{1}{2}}v\Vert_{\LLL}, \quad v \in H^1_0(\Omega) \end{array}\right. \eqno{(4.12)} $$ hold true. Moreover, for $0 \le \gamma \le 1$, we have $L^{\gamma}K(t)a = K(t)L^{\gamma}a$ for $a \in \mathcal{D}(L^{\gamma})$, and $$ \Vert L^{\gamma}K(t)a\Vert_{\LLL} \le Ct^{\alpha(1-\gamma)-1} \Vert a\Vert_{\LLL}, \quad 0 \le \gamma \le 1, \quad t>0. \eqno{(4.13)} $$ Now we interpret the function $- \sum_{j=1}^n b_{j}(x)\frac{\partial }{\partial x_j} u(x,t) - c(x)u(x,t)$ as a non-homogeneous term in the equation (4.1) and apply Theorem \ref{t41}, so that we have a weak solution $u(t) := u(\cdot,t)$ of the problem (4.10) in the form $$ u(t) = -\int^t_0 K(t-s)\left( \sum_{j=1}^n b_j\ppp_ju(s) + cu(s)\right) ds + \int^t_0 K(t-s)F(s) ds, \quad t > 0. \eqno{(4.14)} $$ First we prove the uniqueness of the weak solution. Let $F=0$ in (4.14). Then, since $u(\cdot,t) \in H^2(\Omega) \cap H^1_0(\Omega)$ for $t >0$, by (4.12) and (4.13) we obtain \begin{align*} & \Vert L^{\frac{1}{2}}u(t)\Vert_{\LLL} \le C\left\Vert \int^t_0 L^{\frac{1}{2}}K(t-s) \left( \sum_{j=1}^n b_j\ppp_ju(s) + cu(s)\right) ds \right\Vert_{\LLL}\\ \le &C \int^t_0 (t-s)^{\frac{1}{2}\alpha-1} \left(\left\Vert \sum_{j=1}^n b_j\ppp_ju(s)\right\Vert_{\LLL} + \Vert cu(s)\Vert_{\LLL} \right) ds. \end{align*} Therefore (4.12) yields $$ \Vert u(t)\Vert_{H^1(\Omega)} \le C\int^t_0 (t-s)^{\frac{1}{2}\alpha-1} \Vert u(s)\Vert_{H^1(\Omega)} ds, \quad t > 0. 
$$ The generalized Gronwall inequality (Theorem 7.2.1 in \cite{H}) yields $u(t) = 0$ for $0 < t < T$, which completes the proof of the uniqueness of the weak solution. Next the existence of the weak solution is proved. First an operator $Q$ from $L^2(0,T;H^2(\Omega))$ to itself is introduced by $$ Qu(t) = -\int^t_0 K(t-s)\left( \sum_{j=1}^n b_j\ppp_ju(s) + cu(s)\right) ds, \quad 0 < t < T. $$ Taking into consideration Theorem \ref{t41}, it is sufficient to prove that the equation $u = Qu + G(t)$ has a unique solution in $L^2(0,T;H^1_0(\Omega))$. Here we set $G(t) = \int^t_0 K(t-s)F(s) ds$ and Theorem \ref{t41} yields $$ \Vert G\Vert_{L^2(0,T;H^2(\Omega))} + \Vert G\Vert_{H^{\alpha}(0,T;L^2(\Omega))} \le C\Vert F\Vert_{L^2(0,T;\LLL)}. \eqno{(4.15)} $$ The estimates (4.12) and (4.13) lead to the inequality $$ \Vert L^{\frac{1}{2}}Qu(t)\Vert_{\LLL} = \left\Vert -\int^t_0 L^{\frac{1}{2}}K(t-s) \left( \sum_{j=1}^n b_j\ppp_ju(s) + cu(s)\right) ds\right\Vert_{\LLL} $$ $$ \le C\int^t_0 (t-s)^{\frac{1}{2}\alpha-1}\Vert L^{\frac{1}{2}}u(s) \Vert_{\LLL} ds, \quad 0 < t < T. \eqno{(4.16)} $$ Applying (4.16), we obtain the following chain of inequalities: \begin{align*} & \Vert L^{\frac{1}{2}}Q^2u(t)\Vert_{\LLL} = \Vert L^{\frac{1}{2}}Q(Qu(t))\Vert_{\LLL}\\ \le& C\int^t_0 (t-s)^{\frac{1}{2}\alpha-1}\Vert L^{\frac{1}{2}}(Qu(s)) \Vert_{\LLL} ds \le C^2\int^t_0 (t-s)^{\frac{1}{2}\alpha-1} \left( \int^s_0 (s-\xi)^{\frac{1}{2}\alpha-1} \Vert L^{\frac{1}{2}}u(\xi)\Vert_{\LLL}d\xi \right) ds\\ =& C^2\int^t_0 \left( \int^t_{\xi} (t-s)^{\frac{1}{2}\alpha-1} (s-\xi)^{\frac{1}{2}\alpha-1} ds \right) \Vert L^{\frac{1}{2}}u(\xi)\Vert_{\LLL}d\xi\\ =& \frac{\left( C\Gamma\left(\frac{1}{2}\alpha\right)\right)^2} {\Gamma(\alpha)} \int^t_0 (t-\xi)^{\alpha-1} \Vert L^{\frac{1}{2}}u(\xi)\Vert_{\LLL}d\xi.
\end{align*} Repeating the last estimate $m$ times, we obtain the inequality $$ \Vert L^{\frac{1}{2}}Q^mu(t)\Vert_{\LLL} \le \frac{\left( C\Gamma\left(\frac{1}{2}\alpha\right)\right)^m} {\Gamma\left(\frac{1}{2}m\alpha\right)} \int^t_0 (t-s)^{\frac{m}{2}\alpha-1} \Vert L^{\frac{1}{2}}u(s)\Vert_{\LLL}ds, \quad 0<t<T, \thinspace m \in \N. $$ Now we choose $m\in \N$ such that $\frac{m}{2}\alpha - 1> 0$ and set $C_m = \frac{\left( C\Gamma\left(\frac{1}{2}\alpha\right)\right)^m} {\Gamma\left(\frac{1}{2}m\alpha\right)}$. Then, since $(t-s)^{\frac{m}{2}\alpha-1} \le T^{\frac{m}{2}\alpha-1}$ for $0\le s\le t\le T$, $$ \Vert Q^mu(t)\Vert_{H^1(\Omega)} \le C_m\int^t_0 (t-s)^{\frac{m}{2}\alpha-1} \Vert u(s)\Vert_{H^1(\Omega)} ds \le T^{\frac{m}{2}\alpha-1}C_m\int^t_0 \Vert u(s)\Vert_{H^1(\Omega)} ds. $$ Hence, setting $\rho_m = T^{\frac{m}{2}\alpha-1}C_m$, we arrive at the estimate $$ \Vert Q^mu(t)\Vert^2_{H^1(\Omega)} \le \rho_m^2\left(\int^t_0 \Vert u(s)\Vert_{H^1(\Omega)} ds\right)^2 \le \rho_m^2T^2\int^T_0 \Vert u(s)\Vert^2_{H^1(\Omega)} ds, $$ which implies the inequality $$ \int^T_0 \Vert Q^mu(t)\Vert^2_{H^1(\Omega)} dt \le \rho_m^2T^2\int^T_0 \Vert u(s)\Vert^2_{H^1(\Omega)} ds. $$ By the asymptotic behavior of the gamma function, it is easy to verify that $$ \lim_{m\to\infty} \rho_m = T^{-1}\lim_{m\to\infty} \frac{\left( T^{\frac{\alpha}{2}} C\Gamma\left(\frac{1}{2}\alpha\right)\right)^m} {\Gamma\left(\frac{1}{2}m\alpha\right)} = 0. \eqno{(4.17)} $$ Hence $T\rho_m < 1$ for large $m\in \N$. Now we set $\widetilde{Q}u = Qu + G$. For $w=u-v$ we have $\widetilde{Q}u - \widetilde{Q}v = Qw$ and $\widetilde{Q}^mu - \widetilde{Q}^mv = Q^mw$, and it follows from (4.17) that $\widetilde{Q}^m$ is a contraction from $L^2(0,T;H^1(\Omega))$ to itself. Hence the mapping $\widetilde{Q}^m$ has a unique fixed point $u_*\in L^2(0,T;H^1(\Omega))$, that is, $\widetilde{Q}^mu_* = u_*$. Because $\widetilde{Q}^m(\widetilde{Q}u_*) = \widetilde{Q}u_*$, the point $\widetilde{Q}u_*$ is also a fixed point of the mapping $\widetilde{Q}^m$.
By the uniqueness of the fixed point of $\widetilde{Q}^m$, we finally see the equality $u_* = \widetilde{Q}u_* = Qu_* + G$. Thus the equation $u=Qu+G$ has a unique solution in $L^2(0,T;H^1_0(\Omega))$ and $\Vert u\Vert_{L^2(0,T;H^1(\Omega))} \le C\Vert G\Vert _{L^2(0,T;H^1(\Omega))}$. Moreover, (4.15) implies $$ \Vert u\Vert_{L^2(0,T;H^1(\Omega))} \le C\Vert G\Vert_{L^2(0,T; H^1(\Omega))} \le C\Vert F\Vert_{L^2(0,T;\LLL)}. $$ Therefore $\left\Vert \sum_{j=1}^n b_j\ppp_ju + cu\right\Vert _{L^2(0,T;\LLL)} \le C\Vert F\Vert_{L^2(0,T;\LLL)}$ and so Theorem \ref{t41} yields the estimate \begin{align*} & \left\Vert Q\left(\sum_{j=1}^n b_j\ppp_ju + cu\right)\right\Vert _{L^2(0,T;H^2(\Omega)) \cap H^{\alpha}(0,T;\LLL)}\\ = &\left\Vert \int^t_0 K(t-s)\left(\sum_{j=1}^n b_j\ppp_ju(s) + cu(s) \right)ds\right\Vert _{L^2(0,T;H^2(\Omega)) \cap H^{\alpha}(0,T;\LLL)}\\ \le& C\left\Vert \sum_{j=1}^n b_j\ppp_ju + cu\right\Vert _{L^2(0,T;\LLL)} \le C\Vert F\Vert_{L^2(0,T;\LLL)}, \end{align*} which proves (4.4) and so the proof of the theorem is completed. \end{proof}
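The limit (4.17) underlying the contraction argument can be checked numerically in log space; the values of $\alpha$, $C$ and $T$ below are arbitrary illustrative choices, not constants taken from the proof.

```python
from math import lgamma, log

def log_rho(m, alpha=0.5, C=2.0, T=1.0):
    # log(rho_m), where rho_m = T^(m*alpha/2 - 1) * (C*Gamma(alpha/2))^m / Gamma(m*alpha/2).
    # Working with lgamma avoids the overflow of Gamma for large arguments.
    return (m * alpha / 2 - 1) * log(T) + m * (log(C) + lgamma(alpha / 2)) - lgamma(m * alpha / 2)
```

Since $\log\Gamma(m\alpha/2)$ grows like $(m\alpha/2)\log m$, it eventually dominates the linear term $m(\log C + \log\Gamma(\alpha/2))$, so $\rho_m \to 0$ and in particular $T\rho_m < 1$ for all sufficiently large $m$, as used above.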
\section{\label{sec:Intro}Introduction} The laser-induced dynamics of pure and doped helium (He) nanodroplets is currently attracting considerable attention~\cite{Mudrich:2008,Gruner:2011,KrishnanPRL:2011,Kornilov:2011,PentlehnerPRL:2013,Ovcharenko:2014,Mudrich:2014}. While the superfluidity of He nanodroplets has been tested in numerous key experiments by probing stationary properties~\cite{Hartmann2:1996,Grebenev:1998,Brauer:2013}, the impact of the quantum nature of the droplets on their dynamic response to impulsive excitation or ionization is much less well established. As a prominent recent example, the rotational dynamics of various molecules embedded in He droplets induced by impulsive alignment was found to be significantly slowed down and rotational recurrences were completely absent~\cite{PentlehnerPRL:2013}. This indicates that substantial transient system-bath interactions are present during the laser pulse. In contrast, the vibrational dynamics of rubidium (Rb) molecules Rb$_2$ attached to the surface of He nanodroplets revealed only slow relaxation and dephasing proceeding on a nanosecond time scale~\cite{Mudrich:2009,Gruner:2011}. Various recent experimental and theoretical studies have addressed the dynamics of solvation and desolvation of ionized or excited metal atoms off the surface of He nanodroplets~\cite{Loginov:2007,Loginov:2012,Fechner:2012,Zhang:2012,Vangerow:2014,Loginov:2014,Mudrich:2014,TheisenImmersion:2011,Theisen:2010,Mateo:2013,Leal:2014}. So far, these studies have concentrated on measuring the total yield and the final velocity of the ejected atoms as a function of the atomic species and the electronic state of excitation. In this paper we present the first time-resolved characterization of the desorption process of Rb atoms off the surface of He nanodroplets upon excitation to the droplet-perturbed states correlating to the 6p atomic orbital. 
The experimental scheme we apply is femtosecond (fs) pump-probe photoionization in combination with time-of-flight mass-spectrometry. We find that the yield of detected Rb$^+$ photoions as a function of delay time $\tau$ between the exciting pump and the ionizing probe pulses is determined by the interplay of the repulsive interaction of excited Rb$^\ast$ with respect to the He surface and the attractive interaction of the Rb$^+$ ion with the He surface induced by photoionization. The Rb$^\ast$-He droplet repulsion initiates the desorption of the Rb$^\ast$ atom off the He droplet surface. Except for the lowest excited state of Rb, 5p$_{1/2}$, all excited states up to high Rydberg levels experience strong repulsion from He droplets~\cite{Aubock:2008,Callegari:2011}. In contrast, the Rb$^+$-He droplet attraction causes the Rb$^+$ ion to fall back into the He droplet when created near the He droplet surface at short delay times~\cite{Theisen:2010,Leal:2014}. Atomic cations are known to form stable ``snowball'' structures consisting of a cationic core which is surrounded by a high density shell of He atoms. As a result, free Rb$^+$ ions appear in the mass spectrum only after a characteristic pump-probe delay time $\tau_D$, which depends on the state the Rb atom is initially excited to. In addition to neat Rb$^+$ atomic ions, the photoionization mass spectra contain Rb$^+$He molecular ions in the full range of laser wavelengths correlating to the droplet-perturbed Rb 6p-state. The occurrence of such molecular ions has previously been interpreted by the formation of metastable `exciplex' molecules~\cite{Droppelmann:2004,Mudrich:2008,Giese:2012,Fechner:2012,Vangerow:2014,Mudrich:2014}. 
These bound states of excited metal atoms and one or a few He atoms can be populated either by a tunneling process~\cite{Reho:2000,Reho2:2000,Loginov:2007,Loginov:2015} or by direct laser-excitation of bound states in the metal atom-He pair potential~\cite{Pascale:1983,Fechner:2012,Vangerow:2014,Loginov:2014}. In the former case, exciplex formation times $\gtrsim 50$~ps are expected~\cite{Reho2:2000,Droppelmann:2004}, whereas in the latter case, exciplexes are created instantaneously. Thus, previous pump-probe measurements revealing exciplex formation times of $8.5$ and $11.6$~ps for Rb$^4$He and Rb$^3$He, respectively, upon excitation into the droplet-perturbed 5p$_{3/2}$-state could not be consistently interpreted~\cite{Droppelmann:2004}. In the present study we observe a time-delayed increase of the Rb$^+$He signal as for Rb$^+$, indicating that the pump-probe dynamics is primarily determined by the competition between desorption of the Rb$^\ast$He exciplex off the He droplet surface and the Rb$^+$He cation falling back into the He droplet interior. Moreover, a pronounced maximum in the Rb$^+$He signal transients indicates that an additional Rb$^+$He formation channel besides photoionization of Rb$^\ast$He exciplexes is active -- photoassociative ionization (PAI) of the desorbing Rb atom and a He atom out of the droplet surface. PAI is a well-known process where a bound cationic molecule or complex is formed by photoionization or photoexcitation into autoionizing states of an atom or molecule of a collision complex~\cite{Shaffer:1999}. PAI is a special case of traditional associative ionization where a bound molecular cation is formed in a binary collision involving an electronically excited atom~\cite{Weiner:1990}. In either case the binding energy is taken away by the electron emitted in the process.
\section{Experimental setup} The experimental setup is similar to the previously used arrangement~\cite{Mudrich:2009,Fechner:2012} except for the ionization and detection schemes. He droplets are produced by a continuous supersonic expansion of He~6.0 through a 5~$\mu$m nozzle at a pressure of $50$~bar. The transversal velocity spread of the beam is reduced by placing a 400~$\mu$m skimmer 13~mm behind the nozzle. Unless otherwise stated, the nozzle temperature is kept at 17~K. This results in a log-normal distribution of the He droplet size with a mean size of $1.1\times 10^4$ He atoms. Subsequently, the droplet beam passes a mechanical chopper and a Rb-filled cell of length 1~cm, stabilized at a temperature of $85~\degree$C. At the corresponding vapor pressure, most droplets pick up on average one Rb atom following Poissonian statistics. By overlapping the droplet beam with the output of the fs laser, we resonantly excite and ionize the dopant atom. In contrast to previous studies, we use amplified fs laser pulses generated by a regenerative amplifier operated at a pulse repetition rate of 5 kHz. At this repetition rate, multiple excitations of Rb atoms by subsequent pulses from the pulse train are safely excluded. The pulses are frequency-doubled in a BBO crystal resulting in a pulse duration of $t_p=120$~fs with a variation for different laser center wavelengths of 20~fs. Two identical, time-delayed pump and probe pulses are generated by means of a mechanical delay line. The laser beam is focused into the vacuum chamber using a 30~cm lens, which leads to a peak intensity in the range of $5\times 10^{12}$ Wcm$^{-2}$. Photoions are detected by a time-of-flight (TOF) mass spectrometer in Wiley-McLaren configuration mounted in-line with the He droplet beam~\cite{Wiley:1955}.
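For orientation, the quoted log-normal droplet-size distribution can be sketched numerically. The log-space width `sigma` below is an assumed illustrative value; the text specifies only the mean size of $1.1\times 10^4$ atoms per droplet.

```python
import math
import random
import statistics

def sample_droplet_sizes(n, mean_size=1.1e4, sigma=0.6, seed=0):
    # Draw log-normal droplet sizes with the quoted mean number of He atoms.
    # sigma (the log-space width) is an assumed value for illustration only.
    rng = random.Random(seed)
    mu = math.log(mean_size) - sigma ** 2 / 2  # ensures E[N] = mean_size
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]
```

The shift of `mu` by `sigma**2 / 2` compensates for the fact that the mean of a log-normal distribution is `exp(mu + sigma**2 / 2)`, not `exp(mu)`.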
At the end of the drift tube a high negative potential is applied to further accelerate the arriving ions, which boosts the efficiency of detecting large cluster masses in the $10^4$~amu range using a Daly-type detector~\cite{Daly:1960}. The latter consists of a Faraday cup, a scintillator with an optical bandpass interference filter and a photomultiplier tube. In the case of electron detection, a simple electrode setup consisting of a repeller, an extractor grid and a channeltron detector with positive entrance potential is used. For both detectors, the resulting pulses are amplified, threshold-discriminated and acquired by a fast digitizer. When detecting heavy masses, a counting unit is used. \section{R\MakeLowercase{b} desorption dynamics} In the present paper we concentrate on the fs pump-probe dynamics of Rb atoms attached to He nanodroplets which are excited to droplet-perturbed states correlating to the atomic 6p-state. These states have previously been studied using nanosecond pulsed excitation and velocity-map imaging of photoions and electrons~\cite{Fechner:2012,Vangerow:2014}. Due to the interaction of the excited Rb atom with the He droplet surface, the 6p-state splits up into the two states 6p$\Sigma$ and 6p$\Pi$ according to the pseudo-diatomic model which treats the whole He droplet, He$_N$, as one constituent atom of the RbHe$_N$ complex~\cite{Stienkemeier:1996,LoginovPRL:2011,Callegari:2011}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig1_Potentials.pdf} \caption{Potential energy diagram of the Rb-He nanodroplet complex. Vertical arrows depict the photo-excitation and ionization processes. The potential curves of the neutral Rb-He$_{2000}$ complex are taken from~\cite{Callegari:2011}, the one of the Rb$^+$-He$_{2000}$ complex is obtained from the Rb$^+$-He pair potential~\cite{Koutselos:1990} on the basis of the He density distribution of the groundstate RbHe$_{2000}$ complex~\cite{Pi}.
The two peaks plotted vertically on the left-hand scale show the expected excitation spectrum based on these potentials.} \label{fig:potentials} \end{figure} Using the RbHe$_N$ pseudo-diatomic potential curves for the 5s$\Sigma$ electronic groundstate and the 6p$\Sigma,\,\Pi$ excited states we compute the Franck-Condon profiles for the expected vertical excitation probability using R. LeRoy's program BCONT~\cite{bcont}. The corresponding transition probability profile is depicted on the left-hand side of Fig.~\ref{fig:potentials}. The experimental excitation spectrum is in good agreement with the calculated one apart from the fact that the experimental peaks are somewhat broader~\cite{Fechner:2012}. Since both 6p$\Sigma$ and 6p$\Pi$ pseudo-diatomic potentials are shifted up in energy by up to 1200 cm$^{-1}$ with respect to the atomic 6p level energy, we expect strong repulsion and therefore fast desorption of the Rb atom off the He droplet surface to occur following the excitation. However, upon ionization of the excited Rb atom by absorption of a second photon (vertical arrow on the right-hand side of Fig.~\ref{fig:potentials}), the interaction potential suddenly turns weakly attractive. Thus, the Rb$^+$ ion may be expected to turn around and to fall back into the He droplet provided ionization occurs at short delay times after excitation such that the desorbing Rb$^\ast$ picks up only little kinetic energy \begin{equation} E_{kin,\,\mathrm{Rb}^\ast}(R)<E_{pot,\,\mathrm{Rb}^+}(R). \label{eq:ineq} \end{equation} Here, $E_{pot,\,\mathrm{Rb}^+}(R)$ denotes the lowering of the potential energy of the Rb$^+$ ion due to the attractive interaction with the He droplet at the distance $R$ from the droplet surface. Eq.~\ref{eq:ineq} holds for distances $R$ below a critical value $R_{c}$. When assuming classical motion, we can infer from Eq.~\ref{eq:ineq} the critical distance $R_{c}$ for the turn-over.
From simulating the classical trajectory $R(t)$ we can then obtain the delay time $\tau_c$ at which the turn-over occurs. In the following we refer to $\tau_c$ as `fall-back time'. Thus, when measuring the number of free Rb$^+$ ions emitted from the He droplets by pump-probe photoionization we may expect vanishing count rates at short delays $\tau <\tau_c$ due to the Rb$^+$ ions falling back into the droplets, followed by a steep increase and subsequent constant level of the Rb$^+$ signal at delays $\tau>\tau_c$. \subsection{Experimental results} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig2_RbPP.pdf} \caption{Pump-probe transient Rb$^+$ ion count rates recorded for various wavelengths $\lambda$ of the fs laser. At $\lambda\gtrsim 409$ nm, excitation occurs predominantly to the 6p$\Pi$-state, at $\lambda\lesssim 409$ nm predominantly the 6p$\Sigma$-state is excited. The thin smooth lines are fits to the data.} \label{fig:RbTransients} \end{figure} Fig.~\ref{fig:RbTransients} shows the transient Rb$^+$ ion signals measured by integrating over the $^{85}$Rb and $^{87}$Rb mass peaks in the time-of-flight mass spectra recorded for each value of the pump-probe delay. The shown data are obtained by subtracting from the measured ion signals the sum of ion counts for pump and probe laser pulses only. The error bars stem from error propagation taking into account the uncertainties associated with the different signal contributions. By tuning the wavelength of the fs laser $\lambda$ we can excite predominantly the 6p$\Pi$ ($\lambda\gtrsim 409$ nm) or the 6p$\Sigma$-states ($\lambda\lesssim 409$ nm) of the RbHe$_N$ complex. As expected, we observe a step-like increase of the Rb$^+$-yield at delays ranging from 600 fs ($\lambda =401$ nm) up to about 1500 fs ($\lambda =415$ nm).
The signal increase occurs at shorter delays when exciting into the more repulsive 6p$\Sigma$-state because the Rb atom moves away from the He droplet surface faster than when it is excited into the shallower 6p$\Pi$-state. The rising edge of the signal jump is extended over a delay period of about 400~fs, partly due to the finite length and bandwidth of the laser pulses. Desorption along the 6p$\Pi$-potential appears as an even smoother signal rise, indicating that a purely classical model is not suitable for reproducing the observed dynamics. For laser wavelengths $\lambda<409$ nm we observe a weakly pronounced double-hump structure with maxima around 800 and 1800~fs, respectively, which we discuss in section~\ref{sec:simulations}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Fig3_MassSpec.pdf} \caption{(a) Typical mass spectra recorded for Rb-doped He nanodroplets by fs photoionization taken at a center wavelength $\lambda=415$ nm and a 5~ps pump probe delay. In addition to the atomic isotopes $^{85}$Rb$^+$ and $^{87}$Rb$^+$ the mass spectra contain Rb$^+$He and Rb$^+$He$_2$ molecular ions. (b) An extended view of mass spectra taken at various nozzle temperatures using single fs pulses at $\lambda=415$ nm reveals the presence of large masses of unfragmented ion-doped He droplets Rb$^+$He$_N$.} \label{fig:massspec} \end{figure} Before discussing our model calculations for these transients, let us first examine the measured time-of-flight mass spectra in more detail. Fig.~\ref{fig:massspec} (a) depicts a representative mass spectrum in the mass range around 100~amu at a pump-probe delay of 5~ps and a center wavelength $\lambda=415$ nm. The spectrum is averaged over 5000 laser shots. Clearly, the dominant fragments in this mass range are neat Rb$^+$ ions at 85 and 87 amu, where the different peak heights reflect the natural abundances of isotopes (72 and 28 \%, respectively). 
Even when ionizing with single laser pulses the mass spectra contain bare Rb$^+$ ions at a low level. We attribute this to a fraction of the Rb atoms desorbing off the droplets and subsequently ionizing within the laser pulse. A contribution to the Rb$^+$ signal may come from free Rb atoms accompanying the droplet beam as a consequence of the detachment of the Rb atom from the droplet during the pick-up process. Aside from neat Rb$^+$ atomic ions, the pump-probe mass spectra feature peaks at 89, 91, and 95 amu, which evidence the formation of Rb$^+$He and Rb$^+$He$_2$ molecular ions. These masses are usually attributed to photoionization of bound metastable Rb$^\ast$He exciplexes~\cite{Droppelmann:2004,Mudrich:2008,Fechner:2012,Loginov:2014}. In addition to these discrete mass peaks, we measure extended mass distributions reaching up to 64,000 amu using our time-of-flight mass spectrometer which is optimized for detecting cluster ions. These distributions are in good agreement with the size distributions of pure He nanodroplets generated in a sub-critical expansion~\cite{Lewerenz:1993,Toennies:2004}. From comparing the peak areas of the light masses Rb$^+$, Rb$^+$He$_n$, $n=1,2$ with those of the heavy droplet ions Rb$^+$He$_N$ we deduce that by ionizing with single pulses a fraction of $\lesssim 10$\% of the doped He droplets fragments into free atomic or molecular ions. The larger part of the ionized Rb-doped He droplets generates unfragmented Rb$^+$He$_N$ due to the sinking of the Rb$^+$ ion into the He droplet and the formation of a stable snowball complex~\cite{Theisen:2010}. When adding an additional time-delayed probe pulse we may expect to alter this ratio by depleting the unfragmented Rb$^+$He$_N$ fraction in favor of creating free Rb$^+$ and Rb$^+$He$_{1,2}$ ions after desorption. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig4_triple.pdf} \caption{Transient ion and electron signals measured at $\lambda=400$ nm.
The ion signal traces (a and b) are obtained from integrating over the free atomic Rb$^+$ ion peaks and over the charged He droplet mass distribution, respectively. The total photoelectron signal (c) is measured using a simple electron detector. The thin smooth lines are fits to the data.} \label{fig:triple} \end{figure} Indeed, the delay-dependent peak integrals of the measured mass peaks at $\lambda=400$ nm confirm this picture, see Fig.~\ref{fig:triple} (a) and (b). While the atomic Rb$^+$ ion signal sharply increases around 600 fs and remains largely constant for longer delays, the Rb$^+$He$_N$ signal displays the opposite behavior. The maximum signal level at zero delay significantly drops around $\tau =600$ fs and remains low for long delay times. In addition to the mass-resolved ion signals we have measured the total yield of photoelectrons, depicted in Fig.~\ref{fig:triple} (c). From comparing the electron counts with and without blocking the He droplet beam we find that for pump-probe ionization $>$79\% of photoelectrons correlate with the Rb-doped He droplet beam, while $<21$\% are attributed to ionization of Rb and other species in the background gas. The observation that the electron count rate remains constant within the experimental scatter in the entire range of pump-probe delays indicates that the photoionization efficiency (cross-section) of a Rb atom is largely independent of its position with respect to the He droplet surface. These observations further support our interpretation of the step-like increase of Rb$^+$ counts in terms of the competition between desorption of excited Rb atoms and solvation of Rb$^+$ cations into the He droplets. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig5_Fallbacktime.pdf} \caption{Simulated (a) and experimental (b) fall-back times as a function of the laser wavelength, derived from the rising edges of the pump-probe transients.
The curves in (a) are obtained for various effective masses $m_{\mathrm{He}_n}=N_\mathrm{eff}m_\mathrm{He}$ of the He droplet in units of the He atomic mass $m_\mathrm{He}$. The different symbols in (b) denote the experimental fit results for Rb$^+$, Rb$^+$+Rb$^+$He, and Rb$^+$He$_N$ signals. Panel (c) shows the exponential decay constants from fits of the Rb$^+$He ion transients with Eq.~(\ref{Fitfunction}).} \label{fig:tcrit} \end{figure} Fig.~\ref{fig:tcrit} (b) displays a compilation of the critical delays for all measured laser wavelengths which we obtain by fitting the experimental data with an error function, \begin{equation} f_{Rb^+}(t)=A\cdot\{\mathrm{erf}\left[(t-\tau_c)/\sigma\right]+1\} \label{Rb_Fitfunction} \end{equation} of variable amplitude $A$, width $\sigma$ and position $\tau_c$. Shown are the results for the raw Rb$^+$ and Rb$^+$He$_N$ transients as well as those obtained by fitting the sum of the transients of Rb$^+$ atomic and Rb$^+$He molecular ions. In particular for the 6p$\Pi$ state, the latter signal more accurately reflects the dynamics of the fall-back process than the individual Rb$^+$ and Rb$^+$He transients since additional transient redistribution of population between Rb$^+$ and Rb$^+$He channels, which we discuss below, cancels out. Correspondingly, the fitted time constants of the summed Rb$^+$ and Rb$^+$He transients and those of Rb$^+$He$_N$ are in good agreement. This confirms our picture that the light ions fall back to produce heavy cluster ions at short delays. Fig.~\ref{fig:tcrit} (c) will be discussed in section~\ref{sec:RbHeDynamics}. \subsection{Simulations} \label{sec:simulations} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig6_Rb_des_fallback.pdf} \caption{Classical trajectories of the excited and ionized Rb atom initially located in a dimple near the He droplet surface.
(a) At 6p$\Pi$-excitation ($\lambda=415$ nm) and long pump-probe delay $\tau =500$~fs the Rb atom fully desorbs off the He droplet and propagates as a free Rb$^+$ cation after ionization. (b) At shorter $\tau =400$~fs the Rb$^+$ ion turns over and falls back into the He droplet. The schemes at the bottom visualize the dynamics for increasing time from left to right.} \label{fig:RbTrajectories} \end{figure} Further support for our interpretation of the experimental findings is provided by classical trajectory simulations of the dynamics of the pump-probe process. In this model, the Rb atom and the He droplet surface are taken as two point-like particles which propagate classically according to the pseudo-diatomic model potentials~\cite{Callegari:2011,Pi}. Note that these potentials were calculated based on the minimum-energy configuration of a droplet consisting of $N = 2000$ He atoms subjected to the external potential of an Rb atom in the electronic ground state. The classical equation of motion \begin{equation} \mu \ddot{R} = -\frac{dV(R)}{dR}, \label{eq:Newton} \end{equation} is solved numerically. Here, $V=V_{\Sigma ,\,\Pi ,\,\mathrm{Rb}^+ }(R)$ denotes the potential curves of the excited and ionic states, and $R(t)$ is the distance between the Rb atom and the He dimple at the droplet surface. The initial value of the Rb-He droplet distance is the position of the minimum of the groundstate potential well (6.4~\AA). Eq.~\ref{eq:Newton} is first solved for the neutral excited state potential $V_{\Sigma}$ or $V_\Pi$ up to the pump-probe delay time $\tau$. Subsequently, the Rb atom is considered to be ionized and the particle is propagated further using the ionic Rb$^+$-He$_N$ potential $V_{\mathrm{Rb}^+ }$. The reduced mass $\mu=m_\mathrm{Rb}m_{\mathrm{He}_n}/(m_\mathrm{Rb} + m_{\mathrm{He}_n})$ is given by the mass of the Rb atom or ion, $m_\mathrm{Rb}$, and the effective mass of the He droplet, $m_{\mathrm{He}_n}$. 
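The two-stage propagation described above (excited-state potential up to the delay $\tau$, ionic potential afterwards) can be sketched with a simple velocity-Verlet integrator. The potentials, masses and thresholds below are crude model assumptions in arbitrary units, not the pseudo-diatomic RbHe$_N$ curves; they serve only to illustrate the desorption versus fall-back dichotomy.

```python
import math

def force(V, R, h=1e-4):
    # numerical force F = -dV/dR via a central difference
    return -(V(R + h) - V(R - h)) / (2 * h)

def outcome(tau, V_exc, V_ion, R0=6.4, mu=27.2, dt=0.05, t_end=400.0):
    """Velocity-Verlet integration of mu * R'' = -dV/dR in two stages:
    excited-state potential for t < tau, ionic potential afterwards.
    Returns 'desorbed' or 'fell back' (all quantities in arbitrary units)."""
    R, v, t = R0, 0.0, 0.0
    a = force(V_exc if t < tau else V_ion, R) / mu
    while t < t_end:
        R += v * dt + 0.5 * a * dt * dt
        t += dt
        a_new = force(V_exc if t < tau else V_ion, R) / mu
        v += 0.5 * (a + a_new) * dt
        a = a_new
        if R < 2.0:    # turned over and re-entered the droplet region
            return "fell back"
        if R > 40.0:   # far from the surface: counted as desorbed
            return "desorbed"
    return "desorbed" if v > 0 else "fell back"

# Model potentials (assumed shapes, not the actual RbHe_N curves):
V_exc = lambda R: 50.0 * math.exp(-R / 1.5)   # repulsive excited state
V_ion = lambda R: -200.0 / R ** 4             # weakly attractive ionic state
```

In this toy model a short delay leaves the ion with negative total energy on the attractive ionic potential, so it turns over and falls back, while a long delay lets it escape, mirroring the step-like onset of the free Rb$^+$ yield.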
We set $m_{\mathrm{He}_n} = 40$~amu for the propagation of the excited atom as well as for the subsequent propagation of the Rb$^+$ ion with respect to the He droplet. This value is based on previous experimental as well as theoretical findings~\cite{Vangerow:2014}. The motion of the excited and subsequently ionized Rb atom with respect to the He droplet surface is illustrated in Fig.~\ref{fig:RbTrajectories} for different initial conditions. The time-dependent positions of the Rb atom and the He surface are depicted as red and blue lines in the upper parts. The lower parts are graphical visualizations of the dynamics. Fig.~\ref{fig:RbTrajectories} (a) depicts the case when the excitation of the Rb atom, which is initially located in the groundstate equilibrium configuration of the RbHe$_N$ complex, occurs at $t=0$ and ionization is delayed to $\tau = 500$ fs. The laser wavelength is set to $\lambda=415$ nm where the motion follows the 6p$\Pi$-potential. In this case the excited Rb atom fully desorbs off the He droplet and continues to move away from the droplet after its conversion into an ion. In the case of a shorter delay, $\tau = 400$~fs, between excitation and ionization, shown in Fig.~\ref{fig:RbTrajectories} (b), the Rb atom turns over upon ionization as a result of Rb$^+$-He$_N$ attraction and falls back into the He droplet. For assessing the effect of an initial spread of Rb-He$_N$ droplet distances $R$ due to the broad laser bandwidth and of the finite length of the laser pulses $t_p$ we extend the classical trajectory calculation to a mixed quantum-classical simulation which includes an approximate description of the quantum wave packet dynamics of the system. The initial wave packet is obtained by transforming the spectral profile of the laser into a distribution as a function of $R$ using the potential energy difference between the initial 5s$\Sigma$ and the final 6p$\Sigma,\,\Pi$ pseudo-diatomic states.
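This mapping from the laser spectrum to an initial distribution over $R$ can be sketched as follows. The Gaussian weighting by the local transition energy is a reflection-principle-style approximation, and the potentials used in the test are illustrative assumptions, not the actual 5s$\Sigma$ and 6p pseudo-diatomic curves.

```python
import math

def initial_weights(Rs, V_ground, V_exc, nu_laser, dnu):
    # Weight each distance R by the Gaussian spectral intensity evaluated at
    # the local transition energy Delta V(R) = V_exc(R) - V_ground(R),
    # then normalize the weights to a discrete probability distribution.
    w = [math.exp(-((V_exc(R) - V_ground(R)) - nu_laser) ** 2 / (2 * dnu ** 2))
         for R in Rs]
    s = sum(w)
    return [x / s for x in w]
```

The resulting weights peak at the distance where the transition energy matches the laser's center frequency, i.e. around the groundstate equilibrium position for resonant excitation.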
We use a Gaussian-shaped laser profile with a full width at half maximum, $\Delta\nu$, inferred from measured spectra. Typically $\Delta\nu\approx$ 2~nm, depending on the center wavelength of the laser. This corresponds to the instantaneous creation of a wave packet in the excited state centered around the minimum of the groundstate potential. For simulating the dynamics the wave packet is approximated by 25 segments $i$ and each segment is propagated individually according to Eq.~\ref{eq:Newton} where $R(t)$ is replaced by $R_i(t)$ representing the Rb-He$_N$ distance for the $i$-th segment. Convergence of the final results with respect to the number of segments has been checked. This simplified description of the wave packet dynamics is justified because no quantum interference effects are expected for this simple dissociation reaction. Comparison with the full quantum simulation of the desorption process yields excellent agreement within the propagation time range relevant to the experiment. Simulated transient yield curves as a function of the pump-probe delay $\tau$ are obtained by taking the weighted sum of the segments which have propagated outwards up to very large distances after long propagation times. This sum we identify with the fraction of desorbed atoms. Those segments which have turned over towards short distances are considered to contribute to the Rb$^+$ ions falling back into the droplet. For those segments the condition formulated initially (inequality~(\ref{eq:ineq})) is fulfilled implicitly. The finite duration of the excitation process is taken into account by convolving the resulting yield curves with the autocorrelation function of the two laser pulses. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig7_Rb_Sim.pdf} \caption{Semiclassical simulations of the yield of free Rb$^+$ ions created by excitation and time-delayed ionization for various center wavelengths $\lambda$ of the laser pulses. 
See text for details.} \label{fig:RbPPsimulation} \end{figure} The resulting simulated yields of free Rb$^+$ ions as a function of $\tau$ are depicted in Fig.~\ref{fig:RbPPsimulation} for various center wavelengths $\lambda$ of the laser pulses. The obtained curves qualitatively resemble the experimental ones in many respects. Excitation at long wavelengths $\lambda > 409$ nm, at which predominantly the more weakly repulsive 6p$\Pi$ state is populated, induces a smooth signal increase at about $\tau = 400$ fs. At $\lambda < 409$ nm, where predominantly the 6p$\Sigma$-state is excited, the signal rise occurs around $\tau = 210$ fs, considerably earlier than for 6p$\Pi$ excitation. This result qualitatively agrees with the experimental finding, see Fig.~\ref{fig:tcrit} (b). Moreover, the superposition of the two rising edges at intermediate wavelengths $\lambda\sim 409$ nm may provide an explanation for the double-hump structure observed in the experimental Rb$^+$ transients at $\lambda < 409$ nm. However, the simulated rising edges occur at significantly shorter delay times than in the experiment, roughly by a factor of 2 for excitations to the 6p$\Sigma$-state and up to a factor of 4 for the 6p$\Pi$-state. The discrepancy between the experimental results and those of the simulations, shown in Fig.~\ref{fig:tcrit} (b), is present even when assuming very large effective masses of the He droplet, $m_{\mathrm{He}_n}>1000$ amu. We attribute this discrepancy to the limited validity of our model assumptions. In particular, the interaction potentials we use were obtained on the basis of the frozen He density distribution for the RbHe$_N$ groundstate equilibrium configuration~\cite{Callegari:2011,Pi}. However, transient deformations of the He droplet surface in the course of the dynamics are likely to significantly modify the effective Rb$^\ast$-He$_N$ interactions.
Recent time-dependent density functional simulations show a complex ultrafast response of the He droplet to the presence of a Rb$^+$ ion near the surface~\cite{Leal:2014}. In particular, when the desorption dynamics is slow ($\Pi$-state), a complex reorganization of the He droplet surface during the Rb desorption process may be expected~\cite{Vangerow:2014}. A clear manifestation of the break-down of the simple pseudo-diatomic model is the formation of Rb$^\ast$He exciplexes which we discuss in the following section. Recently, M. Drabbels and coworkers suggested that the pseudo-diatomic potentials of the excited Na$^\ast$He$_N$ complex may be transiently shifted and even intersect~\cite{Loginov:2014,Loginov:2015}. Detailed three-dimensional simulations including the full spectrum of properties of He droplets are needed to provide an accurate description of this kind of dynamics~\cite{Hernando:2012,Mateo:2013,Vangerow:2014,Leal:2014}. Experimentally, the time evolution of the interaction potential energies will be visualized by means of fs time-resolved photoelectron spectroscopy in the near future. \section{R\MakeLowercase{b}H\MakeLowercase{e}$^+$ dynamics} \label{sec:RbHeDynamics} Aside from free Rb$^+$ ions, fs photoionization of Rb-doped He nanodroplets generates Rb$^+$He$_n$, $n=1,2$, molecular ions. Relative abundances reach up to 31\% and 1.5\%, respectively, measured at $\lambda =415$ nm corresponding to the 6p$\Pi$ excitation, see Fig.~\ref{fig:massspec}. At $\lambda =399$ nm (6p$\Sigma$-excitation), the abundances are 4\% and 1\%, respectively. Free Rb$^+$He ions are associated with bound states in the Rb$^\ast$He excited-state pair potentials, so-called exciplexes. Both the 6p$\Sigma$ and the 6p$\Pi$-states of the RbHe diatom feature potential wells which sustain bound vibrational states that can be directly populated by laser excitation out of the groundstate of the RbHe$_N$ complex~\cite{Pascale:1983,Fechner:2012}.
Thus, exciplexes are directly created in a process akin to photoassociation, in contrast to previously observed Na$^\ast$He and K$^\ast$He exciplexes which were formed by an indirect tunneling process upon excitation of the lowest p$\Pi$-states~\cite{Reho2:2000,Loginov:2015}. Exciplex formation is the only route to producing Rb$^+$He ions by photoionization using continuous-wave or nanosecond lasers, where ionization takes place at long delay times when the dynamics of exciplex formation and desorption off the droplets has long been completed. In fs experiments, however, ionization can be triggered before or during the process of desorption of the excited atom or exciplex off the droplet surface. In this case, due to the attractive Rb$^+$-He potential, a bound Rb$^+$He molecular ion can be formed upon ionization, even if the excited Rb$^\ast$-He interaction does not sustain bound states of the neutral diatom. The process of inducing a molecular bond between two initially unbound neutral species by photoionization is known as photoassociative ionization~\cite{Weiner:1990}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig8_RbHe_PP.pdf} \caption{Experimental yields of Rb$^+$He molecular ions as a function of pump-probe delay for various center wavelengths of the laser pulses. The thin smooth lines are fits to the data.} \label{fig:RbHeTransients} \end{figure} \subsection{Experimental results} The transient yield of Rb$^+$He for various laser wavelengths is displayed in Fig.~\ref{fig:RbHeTransients}. Similarly to the Rb$^+$ transients, we measure vanishing Rb$^+$He pump-probe signal contrast around zero delay. For increasing laser wavelength from $\lambda =399$ up to 418~nm, which corresponds to the crossover from the 6p$\Sigma$ to the 6p$\Pi$ excited pseudo-diatomic states, a step-like increase of the Rb$^+$He ion signal occurs at delays ranging from $\tau =500$~fs up to about $2000$~fs.
In addition, at $\lambda\lesssim 415$ nm we measure a transient overshoot of the Rb$^+$He signal by up to about 100\% of the signal level at long delays. The transient yield of Rb$^+$He is fitted using the model function \begin{equation} f_{Rb^+He}(t)=f_{Rb^+}(t)(Ee^{-t/\tau_E}+1). \label{Fitfunction} \end{equation} As for the Rb$^+$ case, $f_{Rb^+}(t)$ models the fall-back dynamics by Eq.~\ref{Rb_Fitfunction}. Additionally, the exponential function with amplitude $E$ and time constant $\tau_E$ takes the transient overshoot into account, whereas the additive constant accounts for a $\tau$-independent Rb$^+$He formation channel. The exponential time constants $\tau_E$ are plotted as black circles in Fig.~\ref{fig:tcrit} (c). To obtain these values, the parameters $\tau_c$ and $\sigma$ are taken as constants from the fit of the sum of Rb$^+$ and Rb$^+$He signals with Eq.~\ref{Rb_Fitfunction}. Here we make the assumption that the fall-back dynamics is only weakly perturbed by the attachment of a He atom to the Rb atom or ion, which is confirmed by our simulations. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig9_RbHe_des_fallback.pdf} \caption{Classical trajectories of the Rb-He-He$_N$ three-body system at $\lambda=409$ nm. The schemes at the bottom visualize the various dynamics for increasing time from left to right. (a) The excited Rb atom departing from the He droplet surface suddenly experiences Rb$^+$He pair attraction upon ionization at $\tau =400$~fs. Consequently, a He atom attaches to the Rb atom while it leaves the droplet.
(b) For the short delay $\tau=350$~fs at $\lambda=409$~nm a Rb$^+$He molecule forms as in (a), but the attraction towards the He droplet makes it turn over and fall back.} \label{fig:RbHeTrajectories} \end{figure} \subsection{Simulation of photoassociative ionization} For a more quantitative interpretation of the Rb$^+$He transients we extend our classical and mixed quantum-classical models to a one-dimensional three-body problem by including one individual He atom out of the surface layer in the second stage of the calculation, after ionization has occurred. The classical trajectories are now obtained by solving three coupled equations of motion for the individual particles Rb$^+$, He, and He$_N$. The Rb$^\ast$-He$_N$ interaction leading to desorption is represented by the pseudo-diatomic potentials as before. The Rb$^+$-He dynamics is initialized by the velocity and distance of the dissociating Rb$^\ast$He$_N$ complex at the moment of ionization. The Rb$^+$-He pair interaction is given by the Rb$^+$-He pair potential~\cite{Koutselos:1990} augmented by a 16.7~cm$^{-1}$ deep potential step to account for the He-He$_N$ extraction energy as suggested by Reho et al.~\cite{Reho2:2000,Droppelmann:2004,Fechner:2012}.\\ Exemplary trajectories are shown in Fig.~\ref{fig:RbHeTrajectories} for two cases at $\lambda=409$ nm. For long pump-probe delays the Rb$^+$ ion leaves the He droplet without attaching a He atom, as shown in Fig.~\ref{fig:RbTrajectories}. However, there is a range of delays in which the desorbing Rb atom is far enough away from the droplet so that it will not fall back upon ionization, but is still close enough to attract a He atom out of the droplet surface so as to form a bound molecular ion by PAI (Fig.~\ref{fig:RbHeTrajectories} (a)).
Fig.~\ref{fig:RbHeTrajectories} (b) illustrates the dynamics at short delay when the attractive forces acting between the Rb$^+$ ion and the droplet surface prevent the full desorption and Rb$^+$-He pairwise attraction leads to the formation of Rb$^+$He.\\ \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig10_RbHe_SimOverlap.pdf} \caption{Simulations of the yield of free Rb$^+$He-molecules created by excitation and time-delayed ionization for various center wavelengths of the laser pulses. See text for details.} \label{fig:RbHePPsimulation} \end{figure} For simulating the transient Rb$^+$He yields to compare with the experimental data shown in Fig.~\ref{fig:RbHeTransients} we extend the mixed quantum-classical model for the desorption dynamics of bare Rb described in Sec.~\ref{sec:simulations}. It is augmented by computing the probability of populating bound vibrational states of the Rb$^+$He molecule for each segment of the Rb wave packet upon ionization as the sum of spatial overlap integrals \begin{align} p_i^{PAI}(\tau )=\sum_v\left|\int_{-\infty}^{\infty} \phi_v (R) \cdot \psi_i(R,\tau ) \; dR\right|^2. \end{align} Here $\psi_i$ denotes the $i$-th wave packet segment and $\phi_v$ stands for the vibrational wave functions of Rb$^+$He calculated using R. J. LeRoy's LEVEL program~\cite{level} for the 6p$\Sigma ,\,\Pi$ pair potentials of Rb$^+$He~\cite{Pascale:1983}. The identification of bound and free Rb$^+$He ions in the simulation is based on analyzing the final Rb$^+$-He and Rb$^+$He-He$_N$ distances, respectively, after long delays $\tau >10$~ps. The final probability $P$ of detecting a Rb$^+$He molecule is obtained by summing up the detection probabilities for every segment, \begin{equation} P(\tau ) = \sum_i p_i^D(\tau)\cdot (p_i^{PAI}(\tau ) + p_{ex}). \label{eq:Ptau} \end{equation} In agreement with Eq.
\ref{Fitfunction}, $p_i^D$ denotes the desorption probability and $p_{ex}$ is the probability of creating a bound neutral Rb$^\ast$He exciplex, which is assumed to occur instantaneously upon laser excitation and thus does not depend on $\tau$. Since the relative contributions of PAI and direct exciplex formation are not precisely known, we refrain from quantitatively modeling the relative efficiencies of the two pathways leading to free Rb$^+$He. Instead we adjust them to the experimental transients by taking $p_{ex}$ as a free fit parameter. The transient signal $P(\tau )$ is finally convolved with the intensity autocorrelation function of the laser pulses, as for the Rb$^+$ transients. The resulting simulated yields of free Rb$^+$He molecular ions are depicted in Fig.~\ref{fig:RbHePPsimulation}. Clearly, the same general trends as for neat Rb$^+$ ions are recovered: (i) at short delay times $\tau < 200$ fs the appearance of Rb$^+$He is suppressed due to the falling back of the ion into the He droplet; (ii) longer laser wavelengths $\lambda\gtrsim 409$ nm (6p$\Pi$-excitation) lead to weaker repulsion and therefore to the delayed appearance of free ions as compared to 6p$\Sigma$-excitation at $\lambda\lesssim 409$ nm. These results again qualitatively agree with the experimental findings, but the simulated appearance times are shorter by a factor of 2-4, as shown in Fig.~\ref{fig:tcrit}. As in the Rb$^+$ case we attribute these deviations to the use of pseudo-diatomic potentials calculated for the frozen RbHe$_N$ groundstate complex. Moreover, the simulation reproduces a signal overshoot around $\tau=300$ fs at short wavelengths, which is due to the contribution of the photoassociative ionization channel. Association of a bound Rb$^+$He ion is possible only at sufficiently short Rb-He distances, which are reached at delays $\tau\lesssim 600$ fs for 6p$\Sigma$-excitation and $\tau\lesssim 900$ fs for 6p$\Pi$-excitation, respectively.
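The assembly of Eq.~\ref{eq:Ptau} and the final convolution with the pulse autocorrelation can be sketched schematically in Python as follows. The segment probabilities and the Gaussian stand-in for the autocorrelation are placeholder inputs; in the actual simulation they come from the propagated wave packet segments, the overlap integrals, and the measured pulses.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = np.linspace(0.0, 2000.0, 401)   # pump-probe delay grid (fs)
nseg = 25                             # number of wave-packet segments, as in the text

# Placeholder inputs: in the real simulation these come from the propagated
# segments (desorption channel) and the bound-state overlap integrals (PAI).
p_desorb = rng.random((nseg, tau.size))        # p_i^D(tau)
p_pai = 0.3 * rng.random((nseg, tau.size))     # p_i^PAI(tau)
p_ex = 0.1                                     # tau-independent exciplex channel

# Eq. (Ptau): sum over segments of the detection probabilities
P = np.sum(p_desorb * (p_pai + p_ex), axis=0)

# Convolve with a normalized Gaussian standing in for the laser autocorrelation
dt = tau[1] - tau[0]                           # 5 fs grid spacing
t_k = np.arange(-100, 101) * dt                # +/- 500 fs kernel support
kernel = np.exp(-0.5 * (t_k / (200.0 / 2.355)) ** 2)   # FWHM ~ 200 fs
kernel /= kernel.sum()
P_conv = np.convolve(P, kernel, mode="same")
```

With the physical inputs in place, `P_conv` plays the role of the smooth transient curves shown in Fig.~\ref{fig:RbHePPsimulation}.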
At these delay times, the PAI signal adds to the signal due to ionization of Rb$^\ast$He exciplexes formed directly by the pump pulse. Note that we have adjusted the relative contributions to the total Rb$^+$He signal arising from PAI and from exciplex ionization to match the experimental curves. Therefore our model does not permit a quantitative comparison with the experimentally measured signal amplitudes. A more detailed three-dimensional simulation of the dynamics is needed for a fully quantitative interpretation~\cite{Mateo:2013}. Nevertheless, we take the simulation result as a clear indication that PAI is an additional channel producing He-containing ionic complexes which needs to be considered in experiments involving photoionization of dopants attached to He droplets when using ultrashort pulses. We note that in the particular case of exciting into the 6p$\Sigma$-state of the RbHe$_N$ complex, unusual Rb$^+$He signal transients may arise from the peculiar shape of the RbHe pair potential curve, which features a local potential maximum at intermediate Rb-He distance~\cite{Pascale:1983,Fechner:2012}. This potential barrier causes the highest-lying Rb$^\ast$He vibrational states to be predissociative. Semi-classical estimates yield predissociation time constants for the two highest vibrational levels $v=5$ and $6$ of 3.5 ns and 2.2 ps, respectively. However, these values significantly exceed the exponential decay times inferred from the measured transients (see Fig.~\ref{fig:tcrit} (c)). Moreover, we may expect that not only the highest vibrational levels are populated. Note that for the case of Na$^\ast$He and K$^\ast$He formed in the lowest 3p$\Pi$ and 4p$\Pi$ states, respectively, all vibrational levels including the lowest ones were found to be populated to varying extents depending on the laser wavelength~\cite{Reho:2000}.
Therefore we tend to discard predissociation of Rb$^\ast$He exciplexes as the origin of the peculiar shape of the Rb$^+$He transients, although we cannot strictly rule it out. More insight into the Rb$^+$He dynamics may be provided by further measurements using electron and ion imaging detection. \section{Summary} This experimental fs pump-probe study discloses the competing dynamics of desolvation and solvation of excited and ionized states, respectively, of Rb atoms which are initially located at the surface of He nanodroplets. The generic feature of the pump-probe transients -- the time-delayed appearance of photoions -- is shown to result from the falling back of ions into the droplets when the ionization occurs at an early stage of the desorption process. This interpretation is backed by the experimental observation of the opposing trend when measuring the yield of unfragmented He nanodroplets containing a Rb$^+$ ion. Furthermore, mixed quantum-classical model calculations based on one-dimensional pseudo-diatomic potentials confirm this picture qualitatively. The limited quantitative agreement with the experimental results is attributed to the use of model potentials in the calculations, which do not account for the transient response of the He density upon excitation and ionization of the Rb dopant atom. Much better agreement may be expected from three-dimensional time-dependent density functional simulations~\cite{Vangerow:2014,Leal:2014} of the full pump-probe sequence which are currently in preparation. Pump-probe dynamics similar to the Rb$^+$ case is observed when detecting Rb$^+$He molecular ions which primarily result from photoionization of Rb$^\ast$He exciplexes. The peculiar structure of the Rb$^+$He transients as well as extended model calculations indicate that photoassociative ionization is an additional mechanism of forming He-containing ionic complexes in fs experiments. 
However, the dynamics resulting from the additional photoassociative ionization channel cannot unambiguously be distinguished from predissociation of Rb$^\ast$He exciplexes in high-lying vibrational levels of the 6p$\Sigma$-state. These results shed new light on the interpretation of the Rb$^+$He pump-probe transients measured previously by ionizing via the lowest 5p$\Pi$ excited state~\cite{Droppelmann:2004,Mudrich:2008}. The signal increase at short delays was interpreted as the manifestation of the formation dynamics of the Rb$^\ast$He exciplex by a tunnelling process. It may be, however, that the competition between the desorption of the excited neutral atom and the fall-back of the photoion is actually the more crucial factor determining the rise time of the Rb$^+$He signal in those transients. This issue will be elucidated in future experiments using two-color pump-probe ionization via the 5p droplet states and at low laser repetition rate so as to exclude concurrent effects by subsequent laser pulses. Furthermore, we will investigate the photodynamics of metal atom-doped He nanodroplets in more detail by applying refined detection schemes such as ion and electron imaging~\cite{Fechner:2012,Vangerow:2014} and coincidence detection~\cite{Buchta:2013,BuchtaJCP:2013}. \begin{acknowledgments} The authors gratefully acknowledge fruitful discussions with W. Strunz. We thank M. Pi and M. Barranco for providing us with the Rb$^+$-He$_{2000}$ pseudodiatomic potential curve. Furthermore, we thank the Deutsche Forschungsgemeinschaft for financial support. \end{acknowledgments}
\section{Introduction} Let $(X,h)$ be a smooth, compact Riemannian manifold of dimension $n-1$, and consider the cone on $X$, denoted $C(X)$ and defined as $\mathbb{R}^+\times X$ with metric $g$ given by $g = dr^2 + r^2h.$ The corresponding Laplace operator on $C(X)$ is given by \[\Delta_{C(X)} = \partial_r^2 + \frac{n-1}{r}\partial_r + \frac{1}{r^2}\Delta_h,\] where $\Delta_h$ is the Laplacian on $X$, taken with the negative semidefinite sign convention. We take the Friedrichs extension of $\Delta_{C(X)}$ for simplicity. We are interested in dispersive estimates for the Schr\"odinger flow \begin{equation}\label{schrodinger_flow} e^{itH}P_c,\quad H = -\Delta_{C(X)} + V, \end{equation} where $P_c$ denotes projection onto the continuous spectrum of $H.$ Here, we assume that $V$ is a real-valued radial potential satisfying certain decay assumptions at infinity. Besides giving direct insight into the behavior of waves, dispersive bounds also have interesting applications in nonlinear problems. For example, stability questions around static solutions in nonlinear models such as wave maps have been studied using dispersive decay estimates. See the work of Krieger-Schlag \cite{krieger2008renormalization} and more recently Krieger-Miao-Schlag \cite{krieger2020stability} for instance. See also the many works of Lawrie-Oh-Shahshahani \cite{lawrie2018local,lawrie2019asymptotic,lawrie2016cauchy,lawrie2016gap,lawrie2016profile,lawrie2017stability} for treatment of geometric wave and Schr\"odinger equations in hyperbolic space. Pointwise decay estimates also play a role in obtaining enhanced existence times using normal form methods, see for instance recent works of Ifrim-Tataru \cite{ifrim2015global} and Germain-Pusateri-Rousset \cite{germain2018nonlinear}. It is also an intrinsically interesting question to understand the interaction between a background potential and diffraction in order to better characterize the dynamics of waves on manifolds with conic singularities.
Conic manifolds have arisen naturally in the work of Hintz-Vasy and Hafner-Hintz-Vasy on general relativity, see \cite{hafner2019linear,hintz2015semilinear,hintz2018global} and in particular the recent discussion in the work of Hintz \cite{hintz2020resolvents}. Dispersive behavior of Schr\"odinger flows has been studied in a tremendous variety of geometric settings and under many different conditions on the asymptotic decay and regularity properties of the potential $V$. In $\mathbb{R}^n$, some of the first ideas arose in the seminal paper of Journ\'e-Soffer-Sogge \cite{journe1991decay}, who proved dispersive decay for $n \geq 3$ for potentials with fairly strong decay and regularity assumptions, under the condition that the corresponding Schr\"odinger operator has no zero energy eigenvalues or resonances. Since then, decay estimates have been improved in a variety of settings. Early works by Goldberg and collaborators carefully addressed the regularity required of the potential in higher dimensions and decay rates in $3$ dimensions in the absence of embedded resonances and eigenvalues, see \cite{beceanu2012schrodinger,goldberg2006dispersive,goldberg2006dispersivergh,goldberg2004dispersive,goldberg2006counterexample}. Further works for perturbations of the Euclidean Laplacian have extended dispersive decay results to the setting where $-\Delta +V$ has a resonance at zero energy, which results in a weaker decay estimate in time, see for instance especially the works of Erdogan-Schlag in $3$ dimensions \cite{erdougan2010dispersive,ES1}, Erdogan-Green in two dimensions \cite{erdougan2013dispersive,erdougan2013weighted}, Green in $5$ dimensions \cite{green2012dispersive}, as well as Goldberg-Green and Erdogan-Goldberg-Green in odd and even dimensions $\geq 4$ \cite{erdougan2014dispersive,goldberg2014dispersive,goldberg2015dispersive,goldberg2016lp}.
Recent progress by Blair-Sire-Sogge \cite{blair2019quasimode} has pushed the construction of the spectral measure for $-\Delta + V$ to cases where the regularity of the potential $V$ is at very critical levels, though the authors have not explored dispersive decay directly. This is by no means an exhaustive list, but these results are representative of the techniques involved, namely careful control of the free resolvent, the use of resolvent expansions, the role of the regularity of the potential $V$, and the spectral structure of the operator $-\Delta + V$. The survey article by Wilhelm Schlag \cite{schlag2007dispersive} contains an excellent overview of the key ideas involved. Dispersive decay estimates have also been studied in several other geometries. For example, Schr\"odinger operators with potential were studied on hyperbolic space by David Borthwick and the second author in \cite{borthwick2015dispersive}. See the recent article of Bouclet \cite{bouclet2018sharp} for a broad overview of results on the asymptotically Euclidean setting, the article by Hassell-Zhang \cite{hassell2016global} and references therein for results on asymptotically conic manifolds, as well as the articles of Schlag-Soffer-Staubach \cite{schlag2010decay1,schlag2010decay2} for manifolds with conical ends. Analysis of the Laplacian on product cones is related to the analysis of Schr\"odinger operators on $\mathbb{R}^n$ with an inverse square potential which have been studied in various settings, e.g. the works \cite{killip2015energy,miao2013strichartz,miao2014maximal,mizutani2019uniform,zhang2014scattering} by various authors. The study of the Laplacian on product cones has a rich history. See the classical results of Cheeger-Taylor, \cite{Cheeger1979,cheeger1982diffraction}, where the spectral measure was first described. 
As a result, there have been several works that studied evolution equations and their decay estimates on product cones, especially wave equations \cite{baskin2019scattering,blair2012strichartz,blair2013strichartz,Ford2010,ford2015wave,melrose2004propagation,zhang2018strichartz}. See also \cite{baskin2016locating} for information about scattering resonances on hyperbolic cones. Analysis of dispersive estimates for Schr\"odinger equations using the resolvent and spectral measure on a product cone has been studied in the recent results of Zhang-Zheng \cite{zhang2017globalcone,zhang2017global}. These are the most closely related results to ours, but only study specific types of potentials that can be treated more perturbatively, hence they need not fully explore the regularity and decay of the spectral measure in the same fashion undertaken here. See also the very recent work of Chen \cite{chen2020semiclassical} that studies local dispersive behavior on manifolds with non-product conic singularities. On pure product cones, we prove pointwise decay estimates for the mode-by-mode decomposition of the Schr\"odinger flow \eqref{schrodinger_flow}. By this, we mean that if $\{\varphi_j\}_{j=0}^\infty$ is a basis of $L^2(X)$ consisting of eigenfunctions of $\Delta_h$, then the Schr\"odinger flow on $C(X)$ can be formally decomposed as \begin{equation}\label{modebymode} e^{itH}P_c = \sum\limits_{j=0}^\infty e^{it H}P_c E_j, \end{equation} where $E_j:L^2(X)\to L^2(X)$ denotes projection on to the linear span of $\varphi_j.$ We show that if $V\in \rho^{-2\sigma} L^\infty(\mathbb{R}^+)$ for $\sigma$ sufficiently large, where $\rho(r) = 1 + r$ is a weight function, and if the perturbed resolvent \[R_V(z):= (-\Delta_{C(X)} +V - z^2)^{-1}\] does not have a pole at zero, then each component of \eqref{modebymode} satisfies a weighted pointwise estimate. 
In odd dimensions, we prove this with the same $t^{-\tfrac{n}{2}}$ decay rate as in the Euclidean case, while in even dimensions, there is a loss of $t^{\tfrac{1}{2}}$ which we do not expect to be sharp. \begin{theorem}\label{disp_est} Suppose $C(X)$ is of odd dimension $n\ge 3$. Let $V\in \rho^{-2\sigma}L^\infty(\mathbb{R}^+) $ with \[\sigma > 2n\left\lceil\frac{n}{4}\right\rceil.\] If $R_V(z)$ does not have a pole at $z = 0$, then for any $\alpha \ge 2\left\lceil\frac{n}{4}\right\rceil(n-2)- \frac{n-1}{2}+2$, we have \begin{equation}\label{main_disp_est} \|\rho^{-\alpha} e^{itH}P_c E_jf\|_{L^\infty(\mathbb{R}^+)} \le C_{j,\alpha,\sigma} t^{-\frac{n}{2}}\|\rho^{\alpha}E_jf\|_{L^1(\mathbb{R}^+,r^{n-1}\,dr)}, \end{equation} for some $C_{j,\alpha,\sigma} > 0.$ \end{theorem} \begin{rmk}\textnormal{In the case where $n$ is even, the techniques of this article give a slightly weaker estimate of the form \begin{equation}\label{disp_est_even} \|\rho^{-\alpha} e^{itH}P_c E_jf\|_{L^\infty(\mathbb{R}^+)} \le C_{j,\alpha,\sigma} t^{-\frac{n-1}{2}}\|\rho^{\alpha}E_jf\|_{L^1(\mathbb{R}^+,r^{n-1}\,dr)}, \end{equation} for analogous conditions on $V$ and $\alpha,$ where the loss of the $\frac{1}{2}$ power of decay in $t$ arises as a result of regularity issues encountered in the analysis of the spectral measure near zero energy. We expect that with more sophisticated techniques it may be possible to improve this estimate to give the full $t^{-\frac{n}{2}}$ decay rate exhibited in $\mathbb{R}^n$. 
} \end{rmk} The strategy for proving \thmref{disp_est} is to use the representation of the Schr\"odinger flow as an oscillatory integral with respect to the continuous part of the spectral measure of $-\Delta_{C(X)} +V$, denoted $d\Pi_V$, which gives \[e^{itH}P_c = \int\limits_\mathbb{R} e^{it\lambda^2}d\Pi_V(\lambda).\] By Stone's formula, the spectral measure can be expressed in terms of the boundary values of the perturbed resolvent near the continuous spectrum as \begin{align*} d\Pi_V(\lambda;x,y) & = \frac{1}{2\pi i}\left[ R_V(\lambda+i0;x,y) - R_V(\lambda-i0;x,y)\right]\lambda\,d\lambda \\ & = \frac{1}{\pi}\operatorname{Im\,} R_V(\lambda+i0;x,y)\lambda\,d\lambda. \end{align*} Through the use of integration by parts, the $L^1\to L^\infty$ mapping properties of the linear Schr\"odinger flow can be inferred from pointwise estimates on $\Im R_V(\lambda+i0;x,y)$. We accomplish this by decomposing $R_V$ with respect to the basis of eigenfunctions $\{\varphi_j\}$ as \[R_V(z) = \sum\limits_{j=0}^\infty R_{V,j}(z)E_j,\] for an associated collection of resolvent operators $R_{V,j}$ acting in the radial variable. By analyzing these ``radial resolvents," we are able to demonstrate that $-\Delta_{C(X)}+V$ has no eigenvalues or resonances embedded in the continuous spectrum, and thus we establish weighted $L^2$ mapping properties for each $R_{V,j}$. To obtain pointwise bounds, we then expand each $R_{V,j}$ in a Birman-Schwinger series to reduce the proof to establishing estimates on the free resolvent, which is easier to analyze due to its explicit nature. In doing this, we find that these estimates are not necessarily summable in $j$, which is why the projections $E_j$ are required in \thmref{disp_est}.
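For concreteness, with $R_0(z) = (-\Delta_{C(X)} - z^2)^{-1}$ denoting the free resolvent, the Birman-Schwinger expansion referred to above is the formal Neumann series \[R_V(z) = \sum\limits_{k=0}^\infty R_0(z)\bigl(-VR_0(z)\bigr)^k,\] obtained by iterating the resolvent identity $R_V(z) = R_0(z) - R_0(z)VR_V(z)$. Since $V$ is radial, the same formal expansion holds mode-by-mode for each $R_{V,j}$, and each summand involves only the free resolvent, whose kernel is explicit.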
Similar weighted mode-by-mode estimates are obtained in the works of Schlag-Soffer-Staubach \cite{schlag2010decay1,schlag2010decay2} in the case of surfaces of revolution, and related mode-by-mode decay rates were established for the wave equation on the Schwarzschild space-time in Donninger-Schlag-Soffer \cite{donninger2012pointwise}. \subsection{Outline of the Paper} In Section \ref{free}, we discuss the form of the free resolvent on product cones. In Section \ref{R0_estimates}, we prove operator bounds on the free resolvent using properties of Bessel and Hankel functions. We then prove weighted $L^2$ operator bounds on the perturbed resolvent in Section \ref{spectheory}. In Section \ref{spectralresolution}, we prove the full pointwise bounds necessary on the perturbed resolvent using a Birman-Schwinger expansion. Finally, in Section \ref{dispersive}, we use the representation of the spectral measure in terms of the resolvent combined with the estimates from the previous sections to prove Theorem \ref{disp_est}. We also give two appendices at the end of the paper. The first, Appendix \ref{ell2_dispersive}, uses ideas from \cite{Ford2010} to give a modified full dispersive estimate on all frequencies provided we measure in $L^2$ on the cone link. Then, in Appendix \ref{app:embres}, we discuss the absence of embedded eigenvalues for general Schr\"odinger operators on product cones, which is a critical component of the proofs in Section \ref{spectheory}. \subsection*{Acknowledgements} BK is supported by DMS 1900519 and a Sloan Fellowship through his advisor Yaiza Canzani. JLM was supported in part by NSF CAREER Grant DMS-1352353 and NSF Applied Math Grant DMS-1909035. JLM also thanks Duke University and MSRI for hosting him during the outset of this research project.
The authors would like to thank Dean Baskin, David Borthwick, Yaiza Canzani, Michael Goldberg, Andrew Hassell and Jason Metcalfe for very helpful discussions about resolvent estimates on conic manifolds and pointwise estimates in general. \section{The Free Resolvent} \label{free} \noindent In this section, we construct the integral kernel for the free resolvent operator \begin{equation} R_0(z) = (-\Delta_{C(X)} - z^2)^{-1}: L^2(C(X))\to L^2(C(X)), \end{equation} for $\Im z \ne 0,$ closely following the exposition of \cite{baskin2019scattering}. This is equivalent to analyzing solutions of the equation \begin{equation}\label{free_resolvent} (-\Delta_{C(X)} - z^2)u = f \end{equation} for $f\in L^2(C(X)).$ To proceed, we decompose $u$ and $f$ into the basis $\{\varphi_j\}$ of eigenfunctions on $X$ as $$f(r,\theta) = \sum\limits_{j=0}^\infty f_j(r)\varphi_j(\theta), \quad u(r,\theta) = \sum\limits_{j=0}^\infty u_j(r)\varphi_j(\theta).$$ Denote by $-\mu_j^2$ the eigenvalues of $\Delta_h$ associated to each $\varphi_j$. Then, we obtain that \eqref{free_resolvent} is equivalent to the collection of equations \begin{equation}\left(\partial_r^2 + \frac{n-1}{r}\partial_r +z^2 - \frac{\mu_j^2}{r^2}\right) u_j(r) = -f_j(r) ,\hskip 0.2in j = 0,1,2,\dotsc.\label{jth_resolvent_eqn} \end{equation} Therefore, we can express the resolvent $R_0(z)$ as \[R_0(z)f(r,\theta) = \sum\limits_{j=0}^\infty u_j(r)\varphi_j(\theta),\] with $u_j$ as above.
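The behavior of each $u_j$ near the cone tip is dictated by an indicial equation: inserting the Frobenius ansatz $u_j(r)\sim r^s$ into the homogeneous part of \eqref{jth_resolvent_eqn} and collecting the most singular terms, of order $r^{s-2}$, yields \[s(s-1) + (n-1)s - \mu_j^2 = 0, \quad\text{that is,}\quad s = -\frac{n-2}{2} \pm \sqrt{\left(\frac{n-2}{2}\right)^2 + \mu_j^2}.\]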
If we define the $j$th \textit{radial resolvent} $R_{0,j}(z)$ by \begin{equation}\label{radial_resolvents} R_{0,j}(z) = \left(\partial_r^2 + \frac{n-1}{r}\partial_r + z^2 - \frac{\mu_j^2}{r^2}\right)^{-1} \end{equation} as an operator on $L^2(\mathbb{R}^+,r^{n-1}\,dr)$, then the full resolvent is given by \[R_0(z)f(r,\theta) = \sum\limits_{j=0}^\infty R_{0,j}(z)f_j(r)\varphi_j(\theta).\] For each $j$, the defining equation \eqref{jth_resolvent_eqn} for $R_{0,j}(z)f_j$ is an ODE with a regular singular point at zero, and so by applying the Frobenius method we find that the indicial roots of the equation are $-\frac{n-2}{2} \pm \sqrt{\left(\frac{n-2}{2}\right)^2 + \mu_j^2}.$ For this reason, we introduce the notations $\delta = -\frac{n-2}{2}$ and $\nu_j = \sqrt{\left(\frac{n-2}{2}\right)^2 +\mu_j^2}$. The structure of the indicial roots suggests that we rescale by $r^{\delta}$, and so we define $\omega_j$ by $u_j(r) = r^{\delta}\omega_j(r)$, so that the indicial roots of the resulting equation for $\omega_j$ are $\pm\nu_j$. Then, \eqref{jth_resolvent_eqn} becomes \[\partial_r^2\omega_j + \frac{1}{r}\partial_r\omega_j + \left( z^2 - \frac{\nu_j^2}{r^2}\right)\omega_j = -r^{-\delta}f_j(r), \hskip 0.4in j = 0,1,2,\dotsc.\] If $z\ne 0$, we can change variables via $\zeta = zr$ to obtain the following inhomogeneous Bessel equation of order $\nu_j:$ \begin{equation}\label{Bessel_eqn} \widetilde\omega_j'' + \frac{1}{\zeta}\widetilde\omega_j' + \left( 1 - \frac{\nu_j^2}{\zeta^2}\right)\widetilde\omega_j = -\frac{\zeta^{-\delta}}{z^2}f_j(\zeta/z), \end{equation} where $\widetilde\omega_j(\zeta) = z^{-\delta}\omega_j(\zeta/z)$, so that $u_j(r) = (zr)^{\delta}\widetilde\omega_j(zr)$, and the ``prime" notation denotes the complex derivative with respect to $\zeta$. For notational convenience, we define $f_{j,z}(\zeta):= -\frac{\zeta^{-\delta}}{z^2}f_j(\zeta/z)$. The solutions to the homogeneous Bessel equation of order $\nu$ are the well-known Bessel functions of the first and second kind, denoted $J_{\nu}$ and $Y_{\nu}$, respectively.
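Although it plays no role in the argument, the homogeneous case of \eqref{Bessel_eqn} can be checked numerically with SciPy's Bessel routines; the order $\nu$ and sample points below are illustrative.

```python
import numpy as np
from scipy.special import jv, jvp, yv, yvp

# Residual of the homogeneous Bessel operator w'' + w'/x + (1 - nu^2/x^2) w
# applied to J_nu and Y_nu; both residuals should vanish to machine precision.
nu = 2.5
x = np.linspace(0.5, 10.0, 50)
for f, fp in ((jv, jvp), (yv, yvp)):
    residual = fp(nu, x, 2) + fp(nu, x, 1) / x + (1 - nu**2 / x**2) * f(nu, x)
    assert np.max(np.abs(residual)) < 1e-9
```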
Closely related to these are the Hankel functions $H_{\nu}^{(1)}$ and $H_{\nu}^{(2)}$, given by \[H^{(1)}_{\nu} = J_{\nu} + i Y_\nu, \hskip 0.3in H^{(2)}_{\nu} = J_\nu - i Y_\nu.\] Any two of these Bessel and/or Hankel functions can be used to form a fundamental solution set for the homogeneous equation. Given an appropriate choice of fundamental solution set, we use the method of variation of parameters to construct solutions to the inhomogeneous problem. So let $y_1,y_2$ be a fundamental solution set for the homogeneous problem associated to \eqref{Bessel_eqn} for some fixed $j$. We then construct our solution $\widetilde\omega_j$ as \[\widetilde\omega_j = v_1y_1 + v_2y_2,\] where all objects above are functions of $\zeta$. Straightforward calculations show that if \[v_1'(\zeta) = -\frac{y_2(\zeta)f_{j,z}(\zeta)}{\mathscr W(y_1,y_2)(\zeta)}, \hskip 0.3in \text{and }\hskip 0.3in v_2'(\zeta) = \frac{y_1(\zeta)f_{j,z}(\zeta)}{\mathscr W(y_1,y_2)(\zeta)},\] then $\widetilde\omega_j$ as given above solves the inhomogeneous equation \eqref{Bessel_eqn}, where $\mathscr W(y_1,y_2)(\zeta)$ denotes the Wronskian determinant of $y_1$ and $y_2$ evaluated at $\zeta$. Therefore, we may compute $v_1$ and $v_2$ by taking path integrals in the complex plane, which yields \[\widetilde\omega_j(\zeta) = \left(\int_{\mathscr C_1(\zeta)} -\frac{y_2(\xi)f_{j,z}(\xi)}{\mathscr W(y_1,y_2)(\xi)}\,d\xi\right) y_1(\zeta) + \left(\int_{\mathscr C_2(\zeta)} \frac{y_1(\xi)f_{j,z}(\xi)}{\mathscr W(y_1,y_2)(\xi)}\,d\xi\right) y_2(\zeta),\] where $\mathscr C_1(\zeta)$, $\mathscr C_2(\zeta)$ are any complex contours connecting fixed points $c_1,c_2\in \mathbb{C}$ to $\zeta$, respectively. In fact, it suffices to take $c_1,c_2$ on the real line.
We then choose our contours to be the piecewise linear paths defined by \[\mathscr C_1(\zeta) = \{(1-t)c_1 + t \Re \zeta: t\in[0,1]\}\cup \{\Re\zeta + it\Im\zeta : t\in [0,1]\}\] and \[\mathscr C_2(\zeta) = \{(1-t)c_2 + t\Re \zeta:t\in [0,1]\}\cup \{\Re \zeta + it\Im\zeta : t\in [0,1]\}.\] Of particular interest are the boundary values of the resolvent near the continuous spectrum of $-\Delta_{C(X)}$. Therefore, if we consider $z = \lambda \pm i\varepsilon$, we have \begin{align*} \widetilde\omega_j(zr) & = y_1(zr)\left(\int\limits_{c_1}^{\lambda r}\frac{-y_2(t)f_{j,z}(t)}{\mathscr W(y_1,y_2)(t)}\,dt + i\int\limits_0^{\pm \varepsilon r} \frac{-y_2(\lambda r + it)f_{j,z}(\lambda r + it)}{\mathscr W(y_1,y_2)(\lambda r+ it)}\,dt\right) \\ & \hskip 0.5in + y_2(zr) \left(\int\limits_{c_2}^{\lambda r}\frac{y_1(t)f_{j,z}(t)}{\mathscr W(y_1,y_2)(t)}\,dt + i\int\limits_0^{\pm \varepsilon r}\frac{y_1(\lambda r + it)f_{j,z}(\lambda r + it)}{\mathscr W(y_1,y_2)(\lambda r + it)}\,dt\right). \end{align*} All that remains is to determine the appropriate fundamental solution set $y_1$, $y_2$ and constants $c_1,c_2$ so that our solution is a well-defined element of $L^2(C(X))$. If we take $y_2 = J_{\nu_j}$ and $c_1 = 0,$ then $\widetilde\omega_j$ is bounded as $r \to 0$, provided that the coefficient integrals converge. We then choose $y_1$ to be either $H_{\nu_j}^{(1)}$ or $H_{\nu_j}^{(2)}$, depending on the sign of $\Im z$.
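This dichotomy can be previewed numerically: for $\Im z > 0$, $H^{(1)}_{\nu}(zr)$ decays while $H^{(2)}_{\nu}(zr)$ grows as $r\to\infty$. A quick SciPy check at illustrative values (both Hankel routines accept complex arguments):

```python
import numpy as np
from scipy.special import hankel1, hankel2

# For Im z > 0, |H^(1)_nu(z r)| decays and |H^(2)_nu(z r)| grows in r.
nu, z = 1.5, 1.0 + 0.3j          # illustrative order and spectral parameter
r = np.array([5.0, 10.0, 20.0])
h1 = np.abs(hankel1(nu, z * r))
h2 = np.abs(hankel2(nu, z * r))
assert np.all(np.diff(h1) < 0) and np.all(np.diff(h2) > 0)
```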
By the asymptotic forms of the Hankel functions, we have \[H_{\nu_j}^{(1)}(\zeta) \sim \sqrt{\frac{2}{\pi \zeta}} e^{i\left( \zeta - \frac{\nu_j\pi}{2} - \frac{\pi}{4}\right)}\] and \[H_{\nu_j}^{(2)}(\zeta) \sim \sqrt{\frac{2}{\pi \zeta}} e^{-i \left( \zeta - \frac{\nu_j\pi}{2} - \frac{\pi}{4}\right) }\] for $-\pi < \arg \zeta < \pi$, and the branch of the square root is defined by $\zeta^{1/2} = e^{\frac{1}{2}(\ln|\zeta| + i \arg \zeta)}$ for such $\zeta.$ We can now see that if $z = \lambda + i\varepsilon$, then $\zeta = z r$ also has positive imaginary part, and so $H_{\nu_j}^{(1)}(zr)$ decays exponentially as $r\to \infty$, while $H_{\nu_j}^{(2)}(zr)$ exhibits exponential growth. Hence, when $z = \lambda + i\varepsilon$ we take $y_1 = H_{\nu_j}^{(1)}$ and $c_2 = \infty$, which yields \begin{align*} \widetilde\omega_j(zr) & = H_{\nu_j}^{(1)}(zr)\left(\int\limits_{0}^{\lambda r}\frac{J_{\nu_j}(t)f_{j,z}(t)}{2i/(\pi t)}\,dt + i\int\limits_0^{\varepsilon r} \frac{J_{\nu_j}(\lambda r + it)f_{j,z}(\lambda r + it)}{2i/[\pi(\lambda r+ it)]}\,dt\right) \\ & \hskip 0.5in + J_{\nu_j}(zr) \left(\int\limits_{\lambda r}^{\infty}\frac{H_{\nu_j}^{(1)}(t)f_{j,z}(t)}{2i/(\pi t)}\,dt - i\int\limits_0^{\varepsilon r}\frac{H_{\nu_j}^{(1)}(\lambda r + it)f_{j,z}(\lambda r + it)}{2i/[\pi(\lambda r +it )]}\,dt\right), \end{align*} since $\mathscr W(H_{\nu_j}^{(1)},J_{\nu_j})(\xi) = -\frac{2i}{\pi \xi}$.
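The Wronskian value used above, $\mathscr W(H_{\nu}^{(1)},J_{\nu})(\xi) = -\frac{2i}{\pi \xi}$, can likewise be verified numerically with SciPy (the order and evaluation point are illustrative):

```python
import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

# W(y1, y2)(x) = y1(x) y2'(x) - y1'(x) y2(x) with y1 = H^(1)_nu, y2 = J_nu;
# the identity W(H^(1)_nu, J_nu)(x) = -2i/(pi x) should hold at any x > 0.
nu, x = 1.5, 2.3
W = hankel1(nu, x) * jvp(nu, x) - h1vp(nu, x) * jv(nu, x)
assert abs(W - (-2j / (np.pi * x))) < 1e-10
```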
We can then take the limit as $\varepsilon \to 0$ to obtain \[\widetilde\omega_j(\lambda r) =\frac{\pi}{2i}H_{\nu_j}^{(1)}(\lambda r)\int\limits_{0}^{\lambda r}tJ_{\nu_j}(t)f_{j,z}(t)\,dt + \frac{\pi}{2i}J_{\nu_j}(\lambda r) \int\limits_{\lambda r}^{\infty}tH_{\nu_j}^{(1)}(t)f_{j,z}(t)\,dt.\] Recalling that $u_j(r) = (zr)^\delta\widetilde\omega_j(zr)$ and $f_{j,z}(t) = -\frac{t^{\frac{n-2}{2}}}{z^2}f_j(t/z)$, we get that the outgoing solution corresponding to the $j$th resolvent is \begin{align*} & u_j(r) = \frac{\pi i}{2}(\lambda r)^{-\frac{n-2}{2}}H_{\nu_j}^{(1)}(\lambda r)\int\limits_{0}^{\lambda r}\frac{t^{\frac{n}{2}}J_{\nu_j}(t)f_{j}(t/\lambda)}{\lambda^2}\,dt \\ & \hspace{2cm} + \frac{\pi i}{2}(\lambda r)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r) \int\limits_{\lambda r}^{\infty}\frac{t^{\frac{n}{2}}H_{\nu_j}^{(1)}(t)f_{j}(t/\lambda)}{\lambda^2}\,dt. \end{align*} If we then change variables via $t = \lambda s$, we can rewrite the above as \[u_j(r) = \frac{\pi i}{2} r^{-\frac{n-2}{2}}H_{\nu_j}^{(1)}(\lambda r) \int\limits_0^{r} s^{\frac{n}{2}}J_{\nu_j}(\lambda s) f_j(s)\,ds + \frac{\pi i}{2}r^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r)\int\limits_{r}^\infty s^{\frac{n}{2}}H_{\nu_j}^{(1)}(\lambda s)f_j(s)\,ds.\] The integral kernel of $R_{0,j}(\lambda + i0)$ with respect to the measure $s^{n-1}\,ds$ is therefore given by \begin{equation} \label{radial_resolvents_plus}R_{0,j}(\lambda + i0;r,s) = \begin{cases} \frac{\pi i}{2}(rs)^{-\frac{n-2}{2}} J_{\nu_j}(\lambda s)H_{\nu_j}^{(1)}(\lambda r), & s < r\\ \frac{\pi i}{2}(rs)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r)H_{\nu_j}^{(1)}(\lambda s), & s > r, \end{cases} \end{equation} since $s^{\frac{n}{2}} = s^{n-1}s^{-\frac{n-2}{2}}$. We can repeat this analysis for $z = \lambda - i\varepsilon$, and we find that we must use $H_{\nu_j}^{(2)}$ instead of $H_{\nu_j}^{(1)}$ due to the asymptotic behavior at infinity, which also causes the Wronskian to change sign, but otherwise the calculations are identical.
We therefore obtain \begin{equation} \label{radial_resolvents_minus}R_{0,j}(\lambda - i0;r,s) = \begin{cases} \frac{\pi }{2i}(rs)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda s) H_{\nu_j}^{(2)}(\lambda r), & s < r\\ \frac{\pi }{2i}(rs)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r)H_{\nu_j}^{(2)}(\lambda s), & s > r. \end{cases} \end{equation} We can therefore express the imaginary part of the resolvent kernels $R_{0,j}$ as follows. \begin{lemma}\label{stone} For $\lambda$ real, we have \[\operatorname{Im\,} R_{0,j}(\lambda+i0;r,s)= \frac{\pi }{2}(rs)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r)J_{\nu_j}(\lambda s)\] as an integral kernel with respect to the measure $s^{n-1}ds$. \end{lemma} \begin{proof} This follows immediately from the fact that \[H_{\nu_j}^{(1)} + H_{\nu_j}^{(2)} = (J_{\nu_j} + iY_{\nu_j}) + (J_{\nu_j} -i Y_{\nu_j}) = 2J_{\nu_j}.\] \end{proof} We can now write down an expression for the spectral measure of $-\Delta_{C(X)}$ as in \cite{Cheeger1979}, which follows from Stone's formula. \begin{lemma} For $\lambda$ real, \[\operatorname{Im\,} R_{0}(\lambda+i0;x,y) = \frac{\pi}{2}(rs)^{-\frac{n-2}{2}}\sum\limits_{j=0}^\infty J_{\nu_j}(\lambda r)J_{\nu_j}(\lambda s)\varphi_j(\theta_1)\overline{\varphi_j(\theta_2)} \] where $x = (r,\theta_1)$ and $y = (s,\theta_2)$ are points in $C(X)$.
Moreover, the absolutely continuous part of the spectral measure of $-\Delta_{C(X)}$, with the convention that $\lambda^2$ is the spectral parameter, is given by \begin{align*} d\Pi_0(\lambda;x,y) & = \frac{1}{2\pi i}\left[R_0(\lambda+i0;x,y)- R_0(\lambda - i0;x,y)\right]2\lambda\,d\lambda \\ &= \sum\limits_{j=0}^\infty (rs)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r)J_{\nu_j}(\lambda s)\varphi_j(\theta_1)\overline{\varphi_j(\theta_2)}\lambda\,d\lambda \end{align*} \end{lemma} \begin{rmk} \textnormal{ We note that by using the convention that $\lambda^2$ is the spectral parameter, as opposed to $\lambda$, we have the following relations between the boundary values of the resolvent: \[R_{0,j}(\lambda+i0) = R_{0,j}(-\lambda - i0),\quad \text{and}\quad R_{0,j}(\lambda -i0) = R_{0,j}(-\lambda +i0)\] for $\lambda\in\mathbb{R}.$ This allows us to reduce many of the proofs in the following sections to the case where we consider only $R_{0,j}(\lambda +i0)$ with $\lambda > 0$. } \end{rmk} \section{Estimates on the Free Resolvent} \label{R0_estimates} \noindent Here, we prove a variety of weighted estimates on the unperturbed radial resolvents ${R_{0,j}}$. These estimates heavily rely on the asymptotic formulae for the Bessel and Hankel functions near zero and infinity. Of particular interest is the behavior of $R_{0,j}$ measured in the weighted $L^q$ spaces defined by \[ L^{q,\sigma}(\mathbb{R}^+,\,r^{n-1}\,dr) = \{f:\mathbb{R}^+\to\mathbb{C}:\, \int_0^\infty\left|f(r)\right|^q\rho^{q\sigma}(r)\,r^{n-1}\,dr < \infty\}, \] where $\rho(r) = 1+r.$ For ease of notation, we simply write $L^{q,\sigma}$ to denote the space $L^{q,\sigma}(\mathbb{R}^+,\,r^{n-1}\,dr)$ where there can be no confusion. The estimates for the free resolvent on these spaces will prove useful in Sections \ref{spectheory} and \ref{spectralresolution} for establishing the mapping properties of the perturbed resolvent. 
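Before turning to the estimates, the kernel formulas of Section \ref{free} admit quick numerical sanity checks with SciPy; all parameter values below are illustrative and play no role in the proofs.

```python
import numpy as np
from scipy.special import jv, hankel1

# (1) Lemma "stone": for real lam and s < r, the imaginary part of
#     (pi i/2)(rs)^{-(n-2)/2} J_nu(lam s) H^(1)_nu(lam r)
#     equals (pi/2)(rs)^{-(n-2)/2} J_nu(lam r) J_nu(lam s).
n, nu, lam, r, s = 4, 1.5, 2.0, 2.0, 0.7
w = (r * s) ** (-(n - 2) / 2)
kernel = 0.5j * np.pi * w * jv(nu, lam * s) * hankel1(nu, lam * r)
im_formula = 0.5 * np.pi * w * jv(nu, lam * r) * jv(nu, lam * s)
assert abs(kernel.imag - im_formula) < 1e-12

# (2) Flat special case C(S^1) = R^2 (n = 2, nu_j = |j|, phi_j = e^{ij theta}/sqrt(2 pi)):
#     summing the mode kernels recovers the classical free resolvent
#     (i/4) H_0^(1)(lam2 |x - y|), in accordance with Graf's addition theorem.
lam2, r2, s2, dth = 1.3, 2.0, 0.8, 0.6          # s2 < r2
modes = sum(0.25j * hankel1(abs(j), lam2 * r2) * jv(abs(j), lam2 * s2) * np.exp(1j * j * dth)
            for j in range(-40, 41))
dist = np.sqrt(r2**2 + s2**2 - 2 * r2 * s2 * np.cos(dth))
assert abs(modes - 0.25j * hankel1(0, lam2 * dist)) < 1e-10
```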
We begin with a version of the Limiting Absorption Principle for the radial resolvents on a product cone. \begin{proposition} \label{L2_L2_prop} \noindent Let $k\ge 0$ be an integer. Then for any $\sigma > \frac{1}{2} + k$, \begin{equation} \|\partial_\lambda^k R_{0,j}(\lambda \pm i0)\|_{L^{2,\sigma}\to L^{2,-\sigma}} \le \frac{C_{j,k,\sigma}}{|\lambda|} \end{equation} for all $|\lambda| \ge 1.$ \end{proposition} \begin{rmk}\textnormal{ A noteworthy observation here is that the constant $C_{j,k,\sigma}$ in \propref{L2_L2_prop} is not known \emph{a priori} to be bounded as a function of $j$. In the special case where $k = 0$, the statement of \propref{L2_L2_prop} can be shown to hold for the full resolvent $R_0(\lambda+i0)$ with a uniform constant using extremely precise asymptotics for the Bessel and Hankel functions such as those found in \cite{FrankSimon2017}. However, when $k > 0$ this method fails due to the fact that differentiating $J_\nu(\lambda r)H_\nu^{(1)}(\lambda s)$ yields a linear combination of products of Bessel and Hankel functions with mismatched orders, and hence the resulting constants in the estimates for the Hankel functions are not balanced by those of the Bessel functions, in contrast to the $k= 0$ case. } \end{rmk} \begin{proof} Note that it suffices to prove the proposition for $R_{0,j}(\lambda+i0)$ and for $\lambda$ positive, since the choice of sign makes no difference in the proof. 
If $f\in L^{2,\sigma}$, we have \begin{equation*}\|\partial_\lambda^kR_{0,j}(\lambda + i0)f\|_{L^{2,-\sigma}}^2 = \int\limits_0^\infty \left|\int\limits_0^\infty \partial_\lambda^k R_{0,j}(\lambda+i0;r,s)f(s) s^{n-1}\,ds\right|^2 (1+r)^{-2\sigma}r^{n-1}\,dr.\end{equation*} Artificially inserting a factor of $(1+s)^{-\sigma}(1+s)^{\sigma}$ and applying the Cauchy-Schwarz inequality, we see that \begin{align*} \begin{split} \|\partial_\lambda^kR_{0,j}(\lambda + i0)f\|_{L^{2,-\sigma}}^2 & \le \int\limits_0^\infty \|\partial_\lambda^kR_{0,j}(\lambda+i0;r,\cdot)\|_{L^{2,-\sigma}_s}^2\|f\|^2_{L^{2,\sigma}_s}(1+r)^{-2\sigma}r^{n-1}\,dr\\ & = \|\partial_\lambda^kR_{0,j}(\lambda+i0;\cdot,\cdot)\|^2_{L^{2,-\sigma}_s L^{2,-\sigma}_r}\|f\|^2_{L^{2,\sigma}_s}. \end{split} \end{align*} Hence, it suffices to show that the kernel satisfies \[ \|\partial_\lambda^k R_{0,j}(\lambda+i0;\cdot,\cdot)\|^2_{L^{2,-\sigma}_s L^{2,-\sigma}_r} \le \frac{C}{\lambda^2}. \] By definition, \begin{equation} \|\partial_\lambda^kR_{0,j}(\lambda+i0;\cdot,\cdot)\|^2_{L^{2,-\sigma}_s L^{2,-\sigma}_r} = \int\limits_0^\infty\int\limits_0^\infty \left|\partial_\lambda^k R_{0,j}(\lambda+i0;r,s)\right|^2(1+s)^{-2\sigma}(1+r)^{-2\sigma}(rs)^{n-1}\,ds\,dr. \end{equation} Recalling the piecewise formula \eqref{radial_resolvents_plus} for the resolvent kernel, we have that \begin{align}\label{weighted_norm} \begin{split} \|\partial_\lambda^kR_{0,j}(\lambda+i0;\cdot,\cdot)\|^2_{L^{2,-\sigma}_sL^{2,-\sigma}_r} &\\ & \hspace{-1in} =\frac{\pi^2}{4}\int\limits_0^\infty \int\limits_0^r\left|\partial_\lambda^k\left( H_{\nu_j}^{(1)}(\lambda r) J_{\nu_j}(\lambda s)\right)\right|^2(rs)(1 + r)^{-2\sigma}(1+s)^{-2\sigma}\,ds\,dr\\ & \hspace{-.9in} + \frac{\pi^2}{4}\int\limits_0^\infty \int\limits_r^\infty \left|\partial_\lambda^k\left( H_{\nu_j}^{(1)}(\lambda s) J_{\nu_j}(\lambda r)\right)\right|^2(rs)(1 + r)^{-2\sigma}(1+s)^{-2\sigma}\,ds\,dr . 
\end{split} \end{align} By changing the order of integration, we get that the first term on the right-hand side of \eqref{weighted_norm} can be rewritten as \begin{equation} \frac{\pi^2}{4}\int\limits_0^\infty \int\limits_s^\infty \left|\partial_\lambda^k\left( H_{\nu_j}^{(1)}(\lambda r) J_{\nu_j}(\lambda s)\right)\right|^2(rs)(1 + r)^{-2\sigma}(1+s)^{-2\sigma}\,dr\,ds. \end{equation} We note that up to a relabeling of $r,s$, this is exactly equal to the second term in \eqref{weighted_norm}, and hence \begin{equation} \label{plamR0bd} \begin{split} & \|\partial_\lambda^kR_{0,j}(\lambda+i0;\cdot,\cdot)\|^2_{L^{2,-\sigma}_sL^{2,-\sigma}_r} = \\ & \hspace{1cm} \frac{\pi^2}{2}\int\limits_0^\infty \int\limits_r^\infty \left|\partial_\lambda^k\left( H_{\nu_j}^{(1)}(\lambda s) J_{\nu_j}(\lambda r)\right)\right|^2(rs)(1 + r)^{-2\sigma}(1+s)^{-2\sigma}\,ds\,dr. \end{split} \end{equation} Note that if $\mathcal C_\nu(x)$ is either a Bessel or Hankel function of order $\nu,$ we have \begin{equation}\label{Bessel_derivative} \mathcal C_\nu'(x) = \frac{1}{2}\left(\mathcal C_{\nu-1}(x) - \mathcal C_{\nu+1}(x)\right), \end{equation} and so the triangle inequality reduces the proof of \propref{L2_L2_prop} to showing that the following lemma holds. \end{proof} \begin{lemma}\label{L2_L2_lemma} Let $\ell,m,k$ be nonnegative integers with $\ell+m =k$, and suppose $\alpha,\beta\in\mathbb{Z}$ are such that $|\alpha|\le \ell$ and $|\beta| \le m.$ Then for any $\nu \ge \frac{n-2}{2}$, there exists a $C > 0$ depending only on $k$, $\nu$, and $\sigma$ such that \begin{equation}\label{L2_L2_lem_eqn} \int\limits_0^\infty\int\limits_r^\infty |J_{\nu+\alpha}(\lambda r)|^2|H_{\nu+\beta}(\lambda s)|^2 r^{1 + 2\ell}s^{1 + 2m}(1 + s)^{-2\sigma}(1 + r)^{-2\sigma}\,ds\,dr \le \frac{C}{\lambda^2},\hskip 0.2in \lambda\ge 1, \end{equation} provided that $\sigma > \frac{1}{2}+k$, where here and below we abbreviate $H_{\nu+\beta} := H_{\nu+\beta}^{(1)}$. 
\end{lemma} \begin{proof} This proof, and others which follow it, make extensive use of asymptotic estimates for the Bessel and Hankel functions, which we record here for later use. For any $\nu\in\mathbb{R},$ there exist constants $C_\nu,C_\nu'>0$ such that when $0 < |\tau| \le 1$, \begin{equation}\label{small_arg} \left|J_\nu(\tau)\right| \le C_\nu\tau^\nu, \quad \left|H_{\nu}^{(1)}(\tau)\right| \le C_\nu \tau^{-\nu} \end{equation} and when $|\tau| \ge 1$, \begin{equation}\label{large_arg} \left|J_\nu(\tau)\right| \le C_\nu'\tau^{-\tfrac{1}{2}},\quad \left|H_{\nu}^{(1)}(\tau)\right| \le C_\nu'\tau^{-\tfrac{1}{2}}. \end{equation} To prove \lemref{L2_L2_lemma}, let us first write the left-hand side of \eqref{L2_L2_lem_eqn} as $I(\lambda)+I\!\!I(\lambda)$, where each term is obtained by restricting the integral in the $r$ variable to $0 < r < \frac{1}{\lambda}$ and $\frac{1}{\lambda}< r < \infty$, respectively. To estimate $I(\lambda)$, note that by \eqref{large_arg}, we have \[\int\limits_{\frac{1}{\lambda}}^\infty s^{1 + 2m}(1 + s)^{-2\sigma}|H_{\nu+\beta}(\lambda s)|^{2}\,ds \le \frac{C}{\lambda}\int\limits_{\frac{1}{\lambda}}^\infty s^{2m}(1 + s)^{-2\sigma}\,ds \le \frac{C'}{\lambda},\] as long as $\sigma > m + \frac{1}{2}$. 
Combining this with \eqref{small_arg}, we have \begin{align} \label{I_bound_1} \begin{split} \int\limits_0^{\frac{1}{\lambda}}\int\limits_{\frac{1}{\lambda}}^\infty |J_{\nu+\alpha}(\lambda r)|^2|H_{\nu+\beta}(\lambda s)|^2 r^{1 + 2\ell}s^{1 + 2m}(1 + s)^{-2\sigma}(1 + r)^{-2\sigma}\,ds\,dr &\le \frac{C'}{\lambda} \int\limits_{0}^\frac{1}{\lambda}r^{1 + 2\ell}(\lambda r)^{2(\nu+\alpha)}\,dr \\ &\le \frac{C''}{\lambda^2} \end{split} \end{align} for any $\ell\ge 0,$ since $\lambda \ge 1.$ Furthermore, if $0<r \le \frac{1}{\lambda}$, we have \begin{align*} \int\limits_r^{\frac{1}{\lambda}} s^{1 + 2m}(1 + s)^{-2\sigma}|H_{\nu+\beta}(\lambda s)|^{2}\,ds \le \frac{C}{\lambda^{2(\nu+\beta)}}\int\limits_r^{\frac{1}{\lambda}}s^{1 + 2(m-\nu-\beta)}\,ds\\ \hspace{.5cm}\le \frac{C}{\lambda^{2(\nu+\beta)}}r^{1 + 2(m-\nu-\beta)}\left(\frac{1}{\lambda}-r\right) \le \frac{C'}{\lambda}(\lambda r)^{-2(\nu+\beta)}r^{2m} \end{align*} since $1 + 2(m-\beta -\nu) \le 0.$ Hence, if we recall that $k = \ell + m\ge |\alpha|+|\beta|$ and apply \eqref{small_arg}, we have \begin{align*} &\int\limits_0^{\frac{1}{\lambda}}\!\!\!\int\limits_r^{\frac{1}{\lambda}} |J_{\nu+\alpha}(\lambda r)|^2|H_{\nu+\beta}(\lambda s)|^2 r^{1 + 2\ell}s^{1 + 2m}(1 + s)^{-2\sigma}(1 + r)^{-2\sigma}\,ds\,dr\\ &\hspace{1cm} \le \frac{C'}{\lambda}\int\limits_0^{\frac{1}{\lambda}} r^{1 + 2k}(\lambda r)^{2(\alpha-\beta)}dr =\frac{C'}{\lambda^{2k+2}}\int\limits_0^\frac{1}{\lambda}(\lambda r)^{1 + 2(k+\alpha-\beta)}dr\\ &\hspace{2cm}\le \frac{C'}{\lambda^{2k+3}} \le \frac{C'}{\lambda^2} \end{align*} for all $k\ge 0$ when $\lambda \ge 1.$ Combining this with \eqref{I_bound_1} proves that $I(\lambda)\le \frac{C}{\lambda^2}$ for some $C>0$ and all $\lambda \ge 1$. 
Since for any fixed $k$ there are only finitely many possibilities for $\ell,m,\alpha,\beta$, we can choose $C$ to depend only on $k$ and $\nu.$ Now, to estimate $I\!\!I(\lambda)$, we apply \eqref{large_arg} to both the Bessel and Hankel functions to obtain \begin{align*} I\!\!I(\lambda) & \le C\int\limits_{\frac{1}{\lambda}}^\infty\int\limits_{\frac{1}{\lambda}}^\infty r^{1+2\ell}(1 + r)^{-2\sigma}(\lambda r)^{-1}s^{1 + 2m}(1 + s)^{-2\sigma}(\lambda s)^{-1}\,ds\,dr \\ & = \frac{C}{\lambda^2}\int\limits_0^\infty\int\limits_0^\infty (1+r)^{2(\ell -\sigma)} (1 + s)^{2(m-\sigma)}\,ds\,dr\\ &\le \frac{C}{\lambda^2}, \end{align*} provided that $\sigma > k + \frac{1}{2}$, which completes the proof of \lemref{L2_L2_lemma}. \end{proof} It will also prove useful to have a bound on the $L^{2,\sigma}\to L^{2,-\sigma}$ mapping properties of the imaginary part of each $R_{0,j}$ when $\lambda$ is small. In particular, we are able to show that this operator norm has a precise polynomial rate of vanishing as $\lambda \to 0.$ \begin{proposition}\label{L2_L2_imaginary} For any integer $k\ge 0$ and any $\sigma > \frac{n}{2} + k $, we have that \[\|\partial_\lambda^k \operatorname{Im\,} R_{0,j}(\lambda \pm i 0)\|_{L^{2,\sigma}\to L^{2,-\sigma}}\le C_{j,k,\sigma}\lambda^{n-2-k}\] when $0 < |\lambda| \le 1.$ \end{proposition} \begin{proof} By the discussion at the beginning of the proof of \propref{L2_L2_prop}, it is sufficient to show that \begin{equation}\label{imag_kernel} \|\partial_\lambda^k\operatorname{Im\,} R_{0,j}(\lambda+i0;r,s)\|_{L^{2,-\sigma}_r L^{2,-\sigma}_s} \le C\lambda^{n-2-k} \end{equation} for $0<\lambda \le 1.$ By \eqref{plamR0bd} we have \begin{align*} & \|\partial_\lambda^k\operatorname{Im\,} R_{0,j}(\lambda+i0;r,s)\|_{L^{2,-\sigma}_r L^{2,-\sigma}_s}^2 \\ & \hspace{.5cm} =C\int\limits_0^\infty\int\limits_0^\infty \left[\partial_\lambda^k\left( J_{\nu_j}(\lambda r)J_{\nu_j}(\lambda s)\right)\right]^2 (1 + r)^{-2\sigma}(1 + s)^{-2\sigma}(rs)\,dr\,ds. 
\end{align*} Using the recursive formula for derivatives of the Bessel functions as before, we can reduce the proof to showing that \begin{equation} \int\limits_0^\infty\int\limits_0^\infty r^{1+2\ell} s^{1+2m} |J_{\nu_j+\alpha}(\lambda r)|^2|J_{\nu_j+\beta}(\lambda s)|^2(1 + r)^{-2\sigma}(1 + s)^{-2\sigma}\,dr\,ds \le C\lambda^{2(n-2-k)} \end{equation} for any integers $\ell,\,m\ge 0$ with $\ell + m =k$ and integers $\,\alpha,\,\beta$ with $|\alpha|\le \ell$ and $|\beta|\le m$. Since the above integral is separable, it is in fact enough to show \begin{equation}\label{r_imag_kernel} \int\limits_0^\infty r^{1+2\ell} |J_{\nu_j+\alpha}(\lambda r)|^2(1 + r)^{-2\sigma}\,dr \le C\lambda^{n-2-2\alpha} \end{equation} for any $\ell \le k$ and $|\alpha|\le \ell,$ since analogous estimates will apply to the integral in the $s$ variable. First, notice that \eqref{small_arg} implies \begin{align*} & \int\limits_0^\frac{1}{\lambda} r^{1+2\ell} |J_{\nu_j+\alpha}(\lambda r)|^2(1 + r)^{-2\sigma}\,dr \le C\int\limits_0^\frac{1}{\lambda} r^{1+2\ell}(\lambda r)^{2(\nu_j+\alpha)}(1 + r)^{-2\sigma}\,dr \\ & \hspace{.5cm} = C\lambda^{2(\nu_j+\alpha)}\int\limits_0^\frac{1}{\lambda}r^{1 + 2(\ell + \alpha + \nu_j)}(1 + r)^{-2\sigma}\,dr \le C'\lambda^{2(\nu_j+\alpha)}\int\limits_0^\frac{1}{\lambda}(1 + r)^{1 + 2(\ell +\alpha +\nu_j-\sigma)}\,dr\\ & \hspace{1cm} \le C''\lambda^{2(\nu_j+\alpha)}\left|\left( 1 + \frac{1}{\lambda}\right)^{2 + 2(\ell + \alpha +\nu_j - \sigma)} - 1\right| \le C_1\lambda^{2(\sigma - \ell)-2}(\lambda + 1)^{2+ 2(\ell+\alpha+\nu_j-\sigma)} + C_2 \lambda^{2(\nu_j+\alpha)}\\ &\hspace{1.5cm} \le C_1\lambda^{2(\sigma - \ell) - 2} + C_2 \lambda^{2(\nu_j+\alpha)}. \end{align*} Recalling that $\sigma > \frac{n}{2} + k$ and $\ell \le k,$ we have that $2(\sigma - \ell) > n$. Also, we have $2(\nu_j+\alpha) \ge n-2 + 2\alpha$, and since $|\alpha|\le \ell\le k$, we have that the above is bounded by a constant times $\lambda^{n-2-2\alpha}$ for $0 < \lambda \le 1$ as claimed. 
Next, we consider the integral over the region where $\frac{1}{\lambda}\le r < \infty$. For this, we use \eqref{large_arg} to obtain \begin{align*} & \int\limits_\frac{1}{\lambda}^\infty r^{1+2\ell} |J_{\nu_j+\alpha}(\lambda r)|^2(1 + r)^{-2\sigma}\,dr \le C\int\limits_\frac{1}{\lambda}^\infty r^{1+2\ell}(\lambda r)^{-1}(1 + r)^{-2\sigma}\,dr\\ & \hspace{.5cm} \le \frac{C}{\lambda}\int\limits_{\frac{1}{\lambda}}^\infty (1 + r)^{2(\ell - \sigma)}\,dr \le \frac{C'}{\lambda}\left( 1 + \frac{1}{\lambda}\right)^{1 + 2(\ell-\sigma)}\\ & \hspace{1.5cm} = C'\lambda^{2(\sigma - \ell)-2}(\lambda + 1)^{1 + 2(\ell - \sigma)} \le C''\lambda^{2(\sigma - \ell) - 2}. \end{align*} The restrictions on $\sigma$ guarantee that the above is bounded by a constant times $\lambda^{n-2}$ for $0 < \lambda \le 1.$ Therefore, \eqref{r_imag_kernel} holds, and the proof is complete. \end{proof} Next, we aim to prove weighted $L^q$ estimates on the free radial resolvent kernels $R_{0,j}$, which enables us to control the terms in the Birman-Schwinger series for $R_{V,j}$ when applied iteratively. First, we make note of a technical lemma. \begin{lemma} \label{bessel_lemma} Let $\nu \ge \frac{n-2}{2}$, and $\lambda > 0$. Suppose that $\beta,m\in\mathbb{Z}$ are such that $|\beta|\le m$ and $\nu + \beta \ge 0$. Assume also that $1\le q < \infty$ and that $\sigma > \frac{n}{q} + m$. Then there exist constants $C,\,C_1,\,C_2 > 0$ such that \begin{equation}\label{J_bound} \int\limits_0^\infty (\lambda s)^{q\left( m - \frac{n-2}{2}\right)}| J_{\nu+\beta}(\lambda s)|^q (1+s)^{-q\sigma} s^{n-1}\,ds \le \begin{cases} C_1\lambda^{-n} + C_2\lambda^{q\left( m - \frac{n-1}{2}\right)}, & 1 \le \lambda < \infty\\ C\lambda^{q\sigma - n}, & 0 < \lambda \le 1. \end{cases} \end{equation} \end{lemma} \begin{proof} Let us denote by $I(\lambda)$ the integral in the statement above, and observe that $I(\lambda)$ is clearly nonnegative for all $\lambda > 0$. 
If we split the integral into the regions where $0 < s < \frac{1}{\lambda}$ and $\frac{1}{\lambda} < s < \infty,$ we can apply \eqref{small_arg} and \eqref{large_arg} to $J_{\nu+\beta}$ to obtain that \[I(\lambda) \le C\int\limits_0^{\frac{1}{\lambda}}(\lambda s)^{q\left(\nu - \frac{n-2}{2} +\beta + m\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds + C\int\limits_{\frac{1}{\lambda}}^\infty (\lambda s)^{q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds\] for some constant $C>0$. To estimate these integrals, we treat the cases $\lambda \ge 1$ and $\lambda \le 1$ separately. First suppose that $\lambda \ge 1$. Then we see that \begin{equation} \int\limits_0^{\frac{1}{\lambda}}(\lambda s)^{q\left(\nu - \frac{n-2}{2} +\beta + m\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds \le \lambda^{1-n}\int\limits_0^{\frac{1}{\lambda}}(1 + s)^{-q\sigma}\,ds \le C \lambda^{-n}, \end{equation} since $\nu - \frac{n-2}{2} + \beta +m \ge 0$ and $q\sigma > 0$. For the integral over $\frac{1}{\lambda} < s < \infty$, we have \begin{align*} \int\limits_{\frac{1}{\lambda}}^\infty (\lambda s)^{q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds & = \lambda^{q\left( m - \frac{n-1}{2}\right)}\int\limits_{\frac{1}{\lambda}}^\infty s^{n-1 + q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}\,ds. 
\end{align*} Under the hypothesis that $\sigma > \frac{n}{q} + m,$ the integral \[\int\limits_{1}^\infty s^{n-1 + q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}\,ds\] converges and is bounded by a constant which is independent of $\lambda.$ For the region where $\frac{1}{\lambda} < s < 1$, we have \[\int\limits_{\frac{1}{\lambda}}^1 s^{n-1+q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}\,ds \le C\lambda^{-n - q\left( m - \frac{n-1}{2}\right)}.\] Thus, \[\lambda^{q\left( m - \frac{n-1}{2}\right)}\int\limits_{\frac{1}{\lambda}}^\infty s^{q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds \le \max\{\lambda^{-n},\lambda^{q\left( m -\frac{n-1}{2}\right)}\}\] when $\lambda \ge 1.$ Now take the case where $0 < \lambda \le 1.$ Then, since $|\beta| \le m$ and $\nu \ge \frac{n-2}{2}$, we have \begin{align*} & \int\limits_0^{\frac{1}{\lambda}}(\lambda s)^{q\left( \nu - \frac{n-2}{2} +\beta + m\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds \le C\int\limits_0^{\frac{1}{\lambda}}(1 + s)^{n - 1 -q\sigma}\,ds\\ & \hspace{1.5cm} = C' \left( 1 + \frac{1}{\lambda}\right)^{n - q\sigma} \le C'' \lambda^{q\sigma - n}. \end{align*} For the integral over $\frac{1}{\lambda} \le s < \infty$, we notice that \[ \int\limits_{\frac{1}{\lambda}}^\infty (\lambda s)^{q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds \le \lambda^{q\left( m - \frac{n-1}{2}\right)}\int\limits_{\frac{1}{\lambda}}^\infty (1 + s)^{n-1 + q\left( m - \frac{n-1}{2} - \sigma\right)}\,ds \] since $1 \le \frac{1}{\lambda} \le s$. 
Recalling the assumption that $\sigma > \frac{n}{q}+ m$, we can see that \[ n - 1 + q\left( m - \frac{n-1}{2} - \sigma\right) < - 1,\] and therefore, \[ \begin{split} & \lambda^{q\left( m - \frac{n-1}{2}\right)}\int\limits_{\frac{1}{\lambda}}^\infty (1 + s)^{n-1 + q\left( m - \frac{n-1}{2} - \sigma\right)}\,ds \le C'\lambda^{q\left( m - \frac{n-1}{2}\right)}\left( 1 + \frac{1}{\lambda}\right)^{n+q\left( m - \frac{n-1}{2}-\sigma\right)} \\ & \hspace{3cm} \le C''\lambda^{q\sigma - n}. \end{split} \] Therefore, $I(\lambda)\le C\lambda^{q\sigma - n}$ for $0< \lambda \le 1$. \end{proof} Next, we establish some estimates on the $L^{q,\sigma}$ norms of $R_{0,j}(\lambda\pm i0)(r,s)$ when the norm is only taken with respect to one variable. \begin{proposition} \label{Lq_imaginary} Let $k \ge 0$ be an integer. Also assume that $1 \le q < \infty$ and \begin{equation} \label{sigma_condition} \sigma > \frac{n}{q} + k. \end{equation} Then for $|\lambda| \ge 1$, we have \begin{equation}\label{imag_high_freq} \begin{split} & \|\partial_\lambda^k\Im R_{0,j}(\lambda \pm i0;r,\cdot)\|_{L^{q,\sigma}} \\ & \hspace{.5cm}\le C_{j,q,\sigma,k}|\lambda|^{n-2-k}\sum\limits_{\ell+m = k}\left[(1 + |\lambda| r)^{\ell - \frac{n-1}{2}}\left( C_1|\lambda|^{-\frac{n}{q}} + C_2|\lambda|^{m-\frac{n-1}{2}}\right)\right] \end{split} \end{equation} for some $C_{j,q,\sigma,k}> 0$. Furthermore, when $|\lambda| \le 1$, we have \begin{equation}\label{imag_low_freq} \|\partial_\lambda^k\Im R_{0,j}(\lambda \pm i0;r,\cdot)\|_{L^{q,\sigma}} \le C_{j,q,\sigma,k}|\lambda|^{n-2}(1 + |\lambda| r)^{k - \frac{n-1}{2}}. 
\end{equation} By symmetry, we also have the analogous estimates \begin{align} \begin{split} & \|\partial_\lambda^k\Im R_{0,j}(\lambda \pm i0;\cdot,s)\|_{L^{q,\sigma}} \\ & \hspace{1.5cm} \le C_{j,q,\sigma,k}|\lambda|^{n-2-k}\sum\limits_{\ell+m = k}\left[(1 + |\lambda| s)^{\ell - \frac{n-1}{2}}\left( C_1|\lambda|^{-\frac{n}{q}} + C_2|\lambda|^{m-\frac{n-1}{2}}\right)\right] \end{split} \end{align} for $|\lambda| \ge 1$, and \begin{equation} \|\partial_\lambda^k\Im R_{0,j}(\lambda \pm i0;\cdot,s)\|_{L^{q,\sigma}} \le C_{j,q,\sigma,k}|\lambda|^{n-2}(1 + |\lambda| s)^{k - \frac{n-1}{2}} \end{equation} when $|\lambda| \le 1.$ \end{proposition} \begin{rmk} \textnormal{ Note that in the special case where the order of differentiation is less than or equal to $\frac{n-1}{2}$, these formulas can be simplified to show simple polynomial behavior in $\lambda$. However, if the number of derivatives exceeds this threshold value, we begin to see a non-uniformity with respect to the secondary radial variable. This phenomenon is why the spatial weights are necessary in the statement of \thmref{disp_est}. } \end{rmk} \begin{proof} As before, we consider only $\Im R_{0,j}(\lambda + i0)$ for $\lambda>0$, since the proof is identical for the other choice of sign. Recall that the kernel of $\Im R_{0,j}(\lambda +i0)$ has the explicit expression \[\Im R_{0,j}(\lambda + i0;r,s) = \frac{\pi}{2}(rs)^{-\frac{n-2}{2}}J_{\nu_j}(\lambda r)J_{\nu_j}(\lambda s).\] Since Bessel functions satisfy the recursion relation \[J_\nu'(x) = \frac{1}{2}\left( J_{\nu-1}(x) - J_{\nu+1}(x)\right),\] we see that $\partial_\lambda^k\Im R_{0,j}(\lambda +i0)$ can be written as a finite linear combination of terms of the form \begin{equation}\label{J_nu_alphabeta} (rs)^{-\frac{n-2}{2}}r^\ell s^m J_{\nu_j+\alpha}(\lambda r)J_{\nu_j+\beta}(\lambda s), \end{equation} where $\ell,m,\alpha,\beta$ are integers satisfying $\ell + m = k$, $|\alpha|\le \ell,$ and $|\beta|\le m$. 
Therefore, by the triangle inequality, it suffices to estimate the weighted $L^q$ norms of such terms. Taking the $q$th power of the $L^{q,\sigma}$ norm with respect to the $s$ variable in \eqref{J_nu_alphabeta} yields \[ \lambda^{q(n-2 -k)}(\lambda r)^{q\left( \ell - \frac{n-2}{2}\right)}|J_{\nu_j + \alpha}(\lambda r)|^q \int\limits_0^\infty (\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|J_{\nu_j + \beta}(\lambda s)|^q (1 + s)^{-q\sigma}s^{n-1}\,ds. \] Note that since $|\alpha|\le \ell$, we have that the product $(\lambda r)^{\ell - \frac{n-2}{2}}|J_{\nu_j+\alpha}(\lambda r)|$ is a continuous function of $\lambda r$, and thus by \eqref{large_arg} we obtain \[(\lambda r)^{q\left( \ell - \frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q \le C(1 + \lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}.\] Thus, we have that the $q$th power of the $L^{q,\sigma}$ norm of \eqref{J_nu_alphabeta} is bounded by \begin{equation} \label{bessel_linear_comb} C \lambda^{q(n-2 -k)}(1 + \lambda r)^{q\left( \ell - \frac{n-1}{2}\right)} \int\limits_0^\infty (\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|J_{\nu_j + \beta}(\lambda s)|^q (1 + s)^{-q\sigma}s^{n-1}\,ds. \end{equation} Now, observe that the integral above is in exactly the right form for us to apply \lemref{bessel_lemma}. Hence, we have that \eqref{bessel_linear_comb} is bounded by \[\begin{cases} C\lambda^{q(n-2-k)}(1 + \lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}\max\{\lambda^{-n},\lambda^{q\left( m - \frac{n-1}{2}\right)}\}, & \lambda \ge 1\\ C\lambda^{q(n-2-k) + q\sigma - n}(1 + \lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}, & \lambda \le 1, \end{cases}\] for some possibly larger constant $C.$ In the case where $\lambda \ge 1$, simply taking $q$th roots gives estimate \eqref{imag_high_freq}. When $\lambda \le 1$, we can use that $\sigma$ satisfies \eqref{sigma_condition} to obtain that $q(n - 2 - k) + q\sigma - n >q(n - 2).$ Once again, taking $q$th roots gives \eqref{imag_low_freq}.
\end{proof} Next, we estimate the $L^q$ norm of the resolvent when we do not take the imaginary part. \begin{proposition} \label{Lq_nonimaginary} Let $k\ge 0$ be an integer and suppose $1 \le q\le \frac{n}{n-2}$. Then, if $\sigma$ satisfies \eqref{sigma_condition}, we have that when $|\lambda| \ge 1,$ \begin{equation} \|\partial_\lambda^k R_{0,j}(\lambda \pm i0;r,\cdot)\|_{L^{q,\sigma}} \le C|\lambda|^{n-2-k}\sum\limits_{\ell+m = k}\left[(1 + |\lambda| r)^{\ell - \frac{n-1}{2}}\left( C_1 + C_2|\lambda|^{m-\frac{n-1}{2}}\right)\right] \end{equation} for some $C,C_1,C_2 > 0$. If $0 <|\lambda| \le 1$, then we have \begin{equation} \|\partial_\lambda^k R_{0,j}(\lambda \pm i0;r,\cdot)\|_{L^{q,\sigma}} \le C_1|\lambda|^{-k} + C_2|\lambda|^{n-2-k}(1 + |\lambda| r)^{k - \frac{n-1}{2}}. \end{equation} Under the same assumptions on $\sigma,$ we also have \begin{equation} \|\partial_\lambda^k R_{0,j}(\lambda \pm i0;\cdot,s)\|_{L^{q,\sigma}} \le C|\lambda|^{n-2-k}\sum\limits_{\ell+m = k}\left[(1 + |\lambda| s)^{\ell - \frac{n-1}{2}}\left( C_1 + C_2|\lambda|^{m-\frac{n-1}{2}}\right) \right] \end{equation} when $|\lambda| \ge 1$, and \begin{equation} \|\partial_\lambda^k R_{0,j}(\lambda \pm i0;\cdot,s)\|_{L^{q,\sigma}} \le C_1|\lambda|^{-k} + C_2|\lambda|^{n-2-k}(1 + |\lambda| s)^{k - \frac{n-1}{2}} \end{equation} when $0 < |\lambda| \le 1.$ \end{proposition} \begin{proof} Once again, we consider only the case of $R_{0,j}(\lambda+i0)$ for $\lambda > 0.$ Recalling that \[R_{0,j}(\lambda + i0)(r,s) = \begin{cases} \frac{\pi i}{2}J_{\nu_j}(\lambda s)H^{(1)}_{\nu_j}(\lambda r), & s < r\\ \frac{\pi i}{2}J_{\nu_j}(\lambda r)H^{(1)}_{\nu_j}(\lambda s), & s > r, \end{cases}\] and \eqref{Bessel_derivative}, we see that when $s < r$, $\partial_\lambda^k R_{0,j}(\lambda + i 0;r,s)$ can be written as a finite linear combination of terms of the form \[(rs)^{-\frac{n-2}{2}} r^\ell s^m J_{\nu_j +\beta}(\lambda s)H^{(1)}_{\nu_j + \alpha}(\lambda r),\] where, as in the proof of
\propref{Lq_imaginary}, $\ell,m$ are nonnegative integers with $\ell + m = k$ and $\alpha,\beta$ are any integers with $|\alpha|\le \ell$ and $|\beta|\le m.$ Similarly, when $r < s$, we can write $\partial_\lambda^k R_{0,j}(\lambda + i0)(r,s)$ as a combination of terms of the same form, but with the roles of $r$ and $s$ reversed. Therefore, it suffices to estimate \begin{equation}\label{s<r} I(\lambda ,r) : =\|(rs)^{-\frac{n-2}{2}}r^\ell s^m J_{\nu_j+ \beta}(\lambda s)H^{(1)}_{\nu_j+\alpha}(\lambda r)\rho^{-\sigma}(s)\mathds 1_{\{s < r\}}\|_{L_s^q}^q \end{equation} and \begin{equation}\label{s>r} I\!\!I(\lambda,r) : =\|(rs)^{-\frac{n-2}{2}}r^\ell s^m J_{\nu_j + \alpha}(\lambda r)H^{(1)}_{\nu_j+\beta}(\lambda s)\rho^{-\sigma}(s)\mathds 1_{\{s > r\}}\|_{L^q_s}^q \end{equation} for any $\ell,m,\alpha,\beta$ as above. We first estimate $I(\lambda,r)$ in the case where $\lambda r \ge 1.$ Under this hypothesis, we can apply \eqref{large_arg} to obtain \[ I(\lambda,r) \le C\lambda^{q(n-2 -k)}(\lambda r)^{q\left( \ell-\frac{n-1}{2}\right)}\int\limits_0^r (\lambda s)^{q\left( m-\frac{n-2}{2}\right)}|J_{\nu_j + \beta}(\lambda s)|^q(1 + s)^{-q\sigma}s^{n-1}\,ds. \] We now apply \lemref{bessel_lemma} to the integral above, which gives \begin{equation}\label{I_eqn_1} I(\lambda, r) \le \begin{cases} C\lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}\max\{\lambda^{-n},\lambda^{q\left( m - \frac{n-1}{2}\right)}\}, &\lambda r \ge 1,\,\lambda \ge 1\\ C\lambda^{q(n-2 -k + \sigma) - n}(\lambda r)^{q\left(\ell - \frac{n-1}{2}\right)},& \lambda r \ge 1,\, 0<\lambda \le 1. \end{cases} \end{equation} Now let us consider the case where $\lambda r \le 1$. Here we can apply \eqref{small_arg}, which gives \begin{equation}\label{I_eqn2} I(\lambda,r) \le C\lambda^{q(n-2-k)}(\lambda r)^{q\left( \ell-\frac{n-2}{2} - \nu_j-\alpha\right)}\int\limits_0^r (\lambda s)^{q\left( m -\frac{n-2}{2} +\nu_j +\beta\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds. 
\end{equation} If $r \le 1,$ we can bound the right-hand side of \eqref{I_eqn2} by \begin{align*} \begin{split} & C\lambda^{q(n-2- k)}(\lambda r)^{q\left( k - \alpha + \beta - (n-2)\right)}r^{n-1}\int\limits_0^r(1 + s)^{-q\sigma}\,ds \le \widetilde C\lambda^{q(n-2-k)}(\lambda r)^{ -q(n-2)} r^n \\ & \hspace{2cm} = \widetilde C\lambda^{-qk}r^{n-q(n-2)}, \end{split} \end{align*} since $k - \alpha + \beta \ge 0$ and $\int\limits_0^r(1 + s)^{-q\sigma}\,ds \le C'r $ for some $C' > 0.$ Recalling that $q \le \frac{n}{n-2}$, we obtain \begin{equation}\label{I_eqn3} I(\lambda,r) \le C\lambda^{-qk}, \hskip 0.2in \lambda r \le 1,\, r\le 1. \end{equation} Now, if $r \ge 1$, we can bound the right-hand side of \eqref{I_eqn2} by \begin{equation}\label{I_eqn4} C\lambda^{q(n-2-k)}(\lambda r)^{q\left( k-\alpha+\beta - (n-2)\right)}\int\limits_0^r (1 + s)^{-q\sigma}s^{n-1}\,ds \le C\lambda^{q(n-2-k)}(\lambda r)^{-q(n-2)}(1 + r)^{n-q\sigma} \end{equation} since $k-\alpha + \beta \ge 0$ and $\lambda r \le 1.$ Recalling that $\sigma > \frac{n}{q}$, we see that the right-hand side of \eqref{I_eqn4} is bounded by \begin{equation}\label{I_eqn5} C\lambda^{-qk}r^{-q(n-2)} \le C\lambda^{-qk}. \end{equation} Combining \eqref{I_eqn3} and \eqref{I_eqn5}, we have that \begin{equation}\label{I_eqn6} I(\lambda,r)\le C\lambda^{-qk},\quad \lambda r \le 1. \end{equation} Combining \eqref{I_eqn6} with \eqref{I_eqn_1}, we have that \begin{equation}\label{I_final_estimate} I(\lambda,r) \le \begin{cases}C\lambda^{q(n-2-k)}(1 + \lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}\max\{\lambda^{-n},\lambda^{q\left( m - \frac{n-1}{2}\right)}\}, & \lambda \ge 1\\ C_1\lambda^{-qk} + C_2\lambda^{q(n-2)}(1 + \lambda r)^{q\left(\ell-\frac{n-1}{2}\right)}, & 0 < \lambda \le 1. \end{cases} \end{equation} Next, we move on to estimating $I\!\!I(\lambda ,r)$. Again we consider the cases $\lambda r \ge 1$ and $\lambda r \le 1$ separately.
For $\lambda r \ge 1,$ we apply \eqref{small_arg} and \eqref{large_arg} to obtain \begin{align*} I\!\!I(\lambda,r) &\le C\lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}\int\limits_{\frac{1}{\lambda}}^\infty (\lambda s)^{q\left( m - \frac{n-1}{2}\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds.\\ \end{align*} We can then repeat arguments from the proof of \lemref{bessel_lemma} to obtain \begin{equation}\label{II_eqn_1} I\!\!I(\lambda,r) \le\begin{cases} C\lambda^{q(n-2-k)}(\lambda r)^{q\left( \ell - \frac{n-1}{2}\right)}\max\{\lambda^{-n},\lambda^{q\left( m - \frac{n-1}{2}\right)}\}, & \lambda r\ge 1 ,\,\lambda \ge 1\\ C\lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}, & \lambda r\ge 1, \,\lambda \le 1. \end{cases} \end{equation} Now consider the case where $\lambda r \le 1$. Here we rewrite $I\!\!I(\lambda ,r)$ as \[ \lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell-\frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q\left(\int\limits_r^{\frac{1}{\lambda}}+\int\limits_{\frac{1}{\lambda}}^\infty\right)(\lambda s)^{q\left( m -\frac{n-2}{2}\right)}|H^{(1)}_{\nu_j + \beta}(\lambda s)|^q(1+s)^{-q\sigma}s^{n-1}\,ds. 
\] For the integral over $\frac{1}{\lambda} < s < \infty$, we can apply \eqref{large_arg} to $H^{(1)}_{\nu_j+\beta}$ and \eqref{small_arg} to $J_{\nu_j+\alpha}$ and repeat previous calculations to show that \begin{align}\label{II_eqn_2} \begin{split} \lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell-\frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q\int\limits_{\frac{1}{\lambda}}^\infty(\lambda s)^{q\left( m -\frac{n-2}{2}\right)}|H^{(1)}_{\nu_j + \beta}(\lambda s)|^q(1+s)^{-q\sigma}s^{n-1}\,ds\\ & \hspace{-4in}\le\begin{cases} C\lambda^{q(n-2-k)}\max\{\lambda^{-n},\lambda^{q\left( m -\frac{n-1}{2}\right)}\}, & \lambda r \le 1, \,\lambda \ge 1\\ C\lambda^{q(n-2-k)}\lambda^{q\sigma - n}, & \lambda r\le 1,\,\lambda \le 1 \end{cases}\\ & \hspace{-4in}\le \begin{cases} C\lambda^{q(n-2-k)}\max\{\lambda^{-n},\lambda^{q\left( m -\frac{n-1}{2}\right)}\}, & \lambda r \le 1, \,\lambda \ge 1\\ C\lambda^{q(n-2)}, & \lambda r\le 1,\,\lambda \le 1, \end{cases} \end{split} \end{align} where the last inequality follows since $\sigma > \frac{n}{q}+k.$ Now, in the region where $r < s < \frac{1}{\lambda}$, we must apply \eqref{small_arg}, which yields \begin{align*} \int\limits_r^{\frac{1}{\lambda}}(\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|H^{(1)}_{\nu_j+\beta}(\lambda s)|^q(1+s)^{-q\sigma}s^{n-1}\,ds &\le C \int\limits_r^{\frac{1}{\lambda}}(\lambda s)^{q\left( m - \frac{n-2}{2} - \nu_j -\beta\right)}(1+s)^{-q\sigma}s^{n-1}\,ds\\ &\hspace{-0.9in} = C\lambda^{q\left( m - \frac{n-2}{2}-\nu_j - \beta\right)}\int\limits_r^{\frac{1}{\lambda}}s^{n - 1 +q\left( m - \frac{n-2}{2} - \nu_j -\beta\right)}(1 + s)^{-q\sigma}\,ds. \end{align*} If $\lambda \ge 1,$ then $(1 + s)^{-q\sigma}$ is bounded by a uniform constant for all $r < s < \frac{1}{\lambda}$, and so the above is bounded by \[C\left( \lambda^{-n} - r^n(\lambda r)^{q\left( m - \frac{n-2}{2} -\nu_j-\beta\right)}\right)\] after possibly increasing $C$. 
We note that under our assumptions on $r$ and $\lambda,$ this quantity is still nonnegative. Combining this with \eqref{small_arg} applied to $J_{\nu_j+\alpha},$ we obtain \begin{align} \begin{split} \lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell - \frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q\int\limits_r^{\frac{1}{\lambda}}(\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|H^{(1)}_{\nu_j+\beta}(\lambda s)|^q(1+s)^{-q\sigma}s^{n-1}\,ds\\ & \hspace{-4in}\le C\lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell - \frac{n-2}{2} + \nu_j + \alpha\right)}\left(\lambda^{-n} - r^n(\lambda r)^{q\left( m - \frac{n-2}{2} - \nu_j - \beta\right)}\right)\\ & \hspace{-4in} \le \lambda^{q(n-2-k)}\left[C_1\lambda^{-n}(\lambda r)^{q\left(\ell - \frac{n-2}{2} + \nu_j + \alpha\right)} +C_2 r^{n - q(n-2)}(\lambda r)^{q(k + \alpha - \beta)}\right], \end{split} \end{align} for some $C_1,C_2 > 0.$ Recalling that $|\alpha|\le \ell$, $\nu_j \ge \frac{n-2}{2}$, and $|\alpha| + |\beta|\le k$, we obtain \begin{align}\label{II_eqn_3} \begin{split} \lambda^{q(n-2-k)}(\lambda r)^{q\left(\ell - \frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q\int\limits_r^{\frac{1}{\lambda}}(\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|H^{(1)}_{\nu_j+\beta}(\lambda s)|^q(1+s)^{-q\sigma}s^{n-1}\,ds\\ & \hspace{-3in} \le \lambda^{q(n-2-k)}\left[ C_1 \lambda^{-n} + C_2 r^{n - q(n-2)}\right]\\ & \hspace{-3in}\le C \lambda^{q(n-2-k)} \end{split} \end{align} for $\lambda r \le 1$ and $\lambda \ge 1,$ since $q \le \frac{n}{n-2}$. Finally, we consider the same integral over $r < s < \frac{1}{\lambda}$, once again where $0<\lambda r \le 1$ but with $\lambda \le 1.$ For this, we further subdivide into the cases where $r \le 1$ and $r\ge 1$. If $r \le 1$, we split the integral into the regions where $r < s < 1$ and $1 < s < \frac{1}{\lambda}$. For the integral over $r < s < 1$, we can repeat the above argument to obtain the same bound as in \eqref{II_eqn_3}. 
To bound the integral over $1 < s < \frac{1}{\lambda}$, we use \eqref{small_arg} to obtain \begin{align*} \lambda^{q(n-2-k)}(\lambda r)^{q\left( \ell - \frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q\int\limits_1^{\frac{1}{\lambda}}(\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|H^{(1)}_{\nu_j+\beta}(\lambda s)|^q(1 + s)^{-q\sigma}s^{n-1}\,ds & \\ & \hspace{-4.5in}\le \lambda^{q(n-2-k)}(\lambda r)^{q\left( \ell - \frac{n-2}{2} + \nu_j + \alpha\right)}\int\limits_1^{\frac{1}{\lambda}}(\lambda s)^{q\left( m -\frac{n-2}{2} - \nu_j -\beta\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds\\ & \hspace{-4.5in}= \lambda^{q(\alpha - \beta)}r^{q\left(\ell - \frac{n-2}{2}+ \nu_j + \alpha\right)}\int\limits_1^\frac{1}{\lambda}s^{q\left( m - \frac{n-2}{2} -\nu_j-\beta\right) + n-1}(1 + s)^{-q\sigma}\,ds\\ & \hspace{-4.5in}\le \lambda^{-qk}\int\limits_1^\infty s^{qk+n-1}(1 + s)^{-q\sigma}\,ds\\ & \hspace{-4.5in}\le C\lambda^{-qk}, \end{align*} where the last inequality follows from the fact that $\sigma > \frac{n}{q} + k.$ Now, if $r \ge 1$, we have \begin{align*} \lambda^{q(n-2-k)}(\lambda r)^{q\left( \ell - \frac{n-2}{2}\right)}|J_{\nu_j+\alpha}(\lambda r)|^q\int\limits_r^{\frac{1}{\lambda}}(\lambda s)^{q\left( m - \frac{n-2}{2}\right)}|H^{(1)}_{\nu_j+\beta}(\lambda s)|^q(1 + s)^{-q\sigma}s^{n-1}\,ds & \\ & \hspace{-4.5in}\le \lambda^{q(n-2-k)}(\lambda r)^{q\left( \ell - \frac{n-2}{2} + \nu_j + \alpha\right)}\int\limits_r^{\frac{1}{\lambda}}(\lambda s)^{q\left( m -\frac{n-2}{2} - \nu_j -\beta\right)}(1 + s)^{-q\sigma}s^{n-1}\,ds\\ & \hspace{-4.5in} \le C\lambda^{q(\alpha - \beta)}r^{q\left(\ell - \frac{n-2}{2} +\nu_j+\alpha\right)}\int\limits_r^\infty s^{q\left( m - \frac{n-2}{2}-\nu_j-\beta\right)-q\sigma +n-1}\,ds\\ & \hspace{-4.5in}\le C\lambda^{q(\alpha - \beta)}r^{q\left( k +\alpha - \beta - (n-2)\right) - q\sigma +n}\\ & \hspace{-4.5in}\le C\lambda^{q(\alpha - \beta)}r^{q\left(\alpha-\beta - (n-2)\right)}, \end{align*} where in the last inequality we once again used that $\sigma >
\frac{n}{q} + k.$ Now, since $r \le \frac{1}{\lambda}$, we have that the above is bounded by a constant times $\lambda^{q(n-2)}$ for $\lambda \le 1.$ Combining this with \eqref{II_eqn_1}, \eqref{II_eqn_2}, and \eqref{II_eqn_3}, we obtain \begin{equation}\label{II_final_estimate} I\!\!I(\lambda,r) \le \begin{cases}C\lambda^{q(n-2-k)}(1 + \lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}\max\{1,\lambda^{q\left( m-\frac{n-1}{2} \right)}\}, & \lambda \ge 1\\ C_1\lambda^{-qk} + C_2\lambda^{q\left( n-2-k\right)}(1 + \lambda r)^{q\left(\ell - \frac{n-1}{2}\right)}, & 0<\lambda \le 1. \end{cases} \end{equation} In light of \eqref{I_final_estimate} and \eqref{II_final_estimate}, taking $q$th roots completes the proof of \propref{Lq_nonimaginary}. \end{proof} \section{Operator estimates for $R_V$} \label{spectheory} In this section we establish some weighted operator norm estimates for the perturbed radial resolvents $R_{V,j}$, defined via the mode-by-mode decomposition of $R_V(z)$: \[R_V(z) = \sum\limits_{j=0}^\infty R_{V,j}(z)E_j.\] Since $V$ is radial, it follows that we can write \begin{equation}\label{RV_j} R_{V,j}(z) = \left(\partial_r^2 + \frac{n-1}{r}\partial_r +z^2 - \frac{\mu_j^2}{r^2} + V(r)\right)^{-1}, \end{equation} wherever this inverse is well defined. Here, we prove that the mapping properties established for $R_{0,j}$ in \propref{L2_L2_prop} and \propref{L2_L2_imaginary} extend to $R_{V,j}$. Similar weighted estimates for Schr\"odinger operators on hyperbolic space are given in Section $4$ of \cite{borthwick2015dispersive}, and the techniques therein follow an analogous structure. For a potential $V \in \rho^{-2\sigma} L^\infty ( \mathbb{R}^+)$ with $\sigma > \frac{1}{2}$ (recall that we take $\rho(r) = 1+r$), the operator norm $\norm{VR_0(z)}_{L^2\to L^2}$ is small for $\Im z$ large by the standard resolvent norm estimate on $R_0(z)$, which is computable in a similar fashion to that discussed in \propref{L2_L2_prop}. 
Hence, the operator $1 + VR_0(z)$ is invertible by Neumann series for large $\Im z$. For $z$ in this range, we can write \[ \begin{split} R_{V,j}(z)= R_{0,j}(z)(1 + VR_{0,j}(z))^{-1}. \end{split} \] We begin our analysis of these perturbed resolvents by proving that each $R_{V,j}$ admits a meromorphic continuation which extends below the real axis. Results of this type are somewhat standard, but we require a more quantitative version, so we provide a careful proof here. \begin{theorem}\label{mer.thm} For $V \in \rho^{-2 \sigma}L^\infty(\mathbb{R}^+)$ with $ \sigma > \frac12$, the resolvent $R_{V,j}$, chosen with outgoing or incoming boundary conditions admits a meromorphic continuation to $\Im z\ge - \delta$ as a bounded operator \[ R_{V,j}(z): L^{2,\frac{1}{2}+\delta}(\mathbb{R}^+,r^{n-1}\,dr) \to L^{2,-\frac{1}{2}-\delta} (\mathbb{R}^+,r^{n-1}\,dr), \] for $0 < \delta < \sigma-\frac{1}{2}$. \end{theorem} \begin{proof} For the meromorphic extension of an operator of the form \[ \chi R_{0,j} \chi \] with $\chi$ a smooth, compactly supported function, we refer to the work of Melrose-Wunsch \cite{melrose2004propagation}, Graham-Zworski \cite{graham2003scattering}, and the recent discussions in Baskin-Marzuola \cite{baskin2016locating} for a general proof of meromorphic continuation for conic manifolds and Baskin-Yang \cite{baskin2019scattering} for the case of product cones and truncated product cones. The meromorphic continuation of $\chi R_{V,j} (\lambda) \chi$ follows from the work of Guillop{\'e}--Zworski~\cite{guillope1995upper} and the compactness of the resolvent on a compact manifold with a conic singularity, which can be seen for instance in the treatment of domains for conic operators in the work of Melrose-Wunsch \cite{melrose2004propagation}. These results are for the cut-off free resolvent. Some care is required to extend them to the perturbed case. 
It follows from a modification of Proposition \ref{L2_L2_prop} that $ \rho(r)^{- \eta} R_{0,j}(z)$ is compact as an operator on $ L^{2,\frac{1}{2} + \delta}$ provided that $\Im z > - \delta$ and $\eta > 2\delta + 1$, see Proposition $3.29$ of \cite{rafe1991elliptic} for a very general statement. Since $V\in\rho^{-2\sigma}L^\infty$ and $2\sigma > 2\delta + 1$, we have that $VR_{0,j}(z)$ is compact on $ L^{2,\frac{1}{2}+\delta}$ under the same conditions. Therefore, the analytic Fredholm theorem gives a meromorphic continuation of $R_{V,j}(z)$ to the half-plane $\Im z > - \delta$. \end{proof} Next, we rule out the possibility of embedded eigenvalues/resonances in the continuous spectrum of $-\Delta_{C(X)} + V.$ \begin{theorem}\label{absence.thm} For $V\in\rho^{-2 \sigma}L^\infty(\mathbb{R}^+)$ with $\sigma>\frac12$, the operator $-\Delta_{C(X)} + V$ has continuous spectrum $[0, \infty)$, with no embedded eigenvalues or resonances in the range $(0, \infty)$. Moreover, the continued resolvent $R_{V,j} (z)$ has no poles that stem from eigenvalues on the critical line $\Re z = 0$ except possibly at $z = 0$. \end{theorem} We postpone the proof of this result to Appendix \ref{app:embres}, where it is discussed alongside a deeper analysis of the absence of embedded eigenvalues and resonances for the case of more general non-radial potentials. \subsection{The weighted resolvent estimates} Following \cite{goldberg2004dispersive}, we observe that mapping properties of $R_{V,j}$ follow from the estimates established for $R_{0,j}$, provided we can rule out embedded resonances. We now show that for the class of potentials considered here, the high-frequency behavior of the resolvent on the critical line is unaffected by the potential. 
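Since the factorization $R_{V,j}(z) = R_{0,j}(z)(1 + VR_{0,j}(z))^{-1}$ is invoked repeatedly in what follows, it may be useful to record the formal computation behind it. The following display is only a sketch: it suppresses all domain questions, writes $P_j(z)$ as a temporary shorthand (not used elsewhere) for the operator inverted by $R_{0,j}(z)$, and assumes $z$ lies in a region where $\|VR_{0,j}(z)\| < 1$.

```latex
% Formal check of the factorization: since P_j(z) R_{0,j}(z) = I,
\begin{align*}
\bigl(P_j(z) + V\bigr)\, R_{0,j}(z)\bigl(1 + VR_{0,j}(z)\bigr)^{-1}
  &= \bigl(I + VR_{0,j}(z)\bigr)\bigl(1 + VR_{0,j}(z)\bigr)^{-1} = I,
\end{align*}
% and when \|VR_{0,j}(z)\| < 1 the inverse is given by the convergent
% Neumann series
\[
\bigl(1 + VR_{0,j}(z)\bigr)^{-1}
  = \sum_{\ell = 0}^{\infty}\bigl(-VR_{0,j}(z)\bigr)^{\ell}.
\]
```

Truncating this series after finitely many terms is also the mechanism behind the Birman--Schwinger expansion used in Section \ref{spectralresolution}.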
\begin{proposition} \label{prop:resest} For $V\in\rho^{-2\sigma}L^\infty(\mathbb{R}^+)$ with $\sigma>\frac12 +k$, there exists a constant $M_{V}>0$ such that for $\lambda \in {\mathbb R}$ with $\abs{\lambda} \ge M_{V}$, \begin{equation} \label{eqn:resest} \norm{ \partial_\lambda^k R_{V,j}(\lambda \pm i0)}_{L^{2,\eta} \to L^{2,-\eta}} \le C_{j,k,\sigma} \abs{\lambda}^{-1} \end{equation} for $\frac{1}{2} +k< \eta< \sigma$. In particular, there are no resonances on the critical line for $\abs{\lambda} \ge M_V$. \end{proposition} \begin{proof} By the resolvent identity \[ R_{0,j}(z) = R_{V,j}(z) + R_{V,j}(z)VR_{0,j}(z), \] we can write \[ R_{0,j}(z)\rho^{-\sigma} = R_{V,j}(z)\rho^{-\sigma}(1 + \rho^{\sigma}VR_{0,j}(z)\rho^{-\sigma}) \] for $\rho (r) =1 + r$. The factor on the right is meromorphically invertible by the analytic Fredholm theorem, so that \begin{equation}\label{Rvp} R_{V,j}(z)\rho^{-\sigma} = R_{0,j}(z)\rho^{-\sigma} (1 + \rho^{\sigma}VR_{0,j}(z)\rho^{-\sigma})^{-1}. \end{equation} By Proposition \ref{L2_L2_prop} and the fact that $\rho^{\sigma}V = \rho^{2\sigma}V\rho^{-\sigma}$, we have \[ \norm{\rho^{\sigma}VR_{0,j}(\lambda\pm i0)\rho^{-\sigma}}_{L^2\to L^2} \le C \norm{\rho^{2\sigma} V}_{L^\infty} \abs{\lambda}^{-1}. \] Hence for $V \in \rho^{-2\sigma} L^\infty$, there exists a constant $M_{V}$ such that for $\abs{\lambda} \ge M_{V}$, \[ \norm{\rho^{\sigma}VR_{0,j}(\lambda\pm i0)\rho^{-\sigma}}_{L^2\to L^2}\le \frac12, \] implying that $(1 + \rho^{\sigma}VR_{0,j}(\lambda\pm i0)\rho^{-\sigma})^{-1}$ exists and satisfies \[ \norm{(1 + \rho^{\sigma}VR_{0,j}(\lambda\pm i0)\rho^{-\sigma})^{-1}}_{L^2\to L^2} \le 2. \] The estimates then follow from \eqref{Rvp} and Proposition \ref{L2_L2_prop}. \end{proof} \begin{rmk}\textnormal{ We note that this quantitative estimate can be used to establish an absence of embedded resonances for large values of $\lambda$.
However, to get all the way down to $\lambda =0$, the more refined techniques in Appendix \ref{app:embres} are required.} \end{rmk} \begin{proposition} \label{prop:RV_imaginarypart} Let $V \in \rho^{-2 \sigma}L^\infty(\mathbb{R}^+)$ with $\sigma>\frac12 + k$ be such that $-\Delta_{C(X)} + V$ does not have a resonance at $0$ energy. Then for any $\frac{1}{2} + k< \eta <\sigma$ and all $\lambda \in \mathbb{R}$, we have \begin{equation} \label{eqn:RV_L2bd} \norm{\partial_\lambda^k R_{V,j}(\lambda\pm i0 )}_{L^{2,\eta} \to L^{2,-\eta}} \le C_{j,k,\sigma} \langle \lambda \rangle^{-1}. \end{equation} Under the stronger hypothesis that $\sigma> \frac{n}{2} + k$, we also have the low-energy estimate \begin{equation} \label{RV_imaginarypart} \norm{ \partial_\lambda^k \Im R_{V,j}(\lambda \pm i0) }_{L^{2,\eta} \to L^{2,-\eta}} \le C_{j,k,\sigma} |\lambda|^{n-2-k}, \end{equation} for $0 \le |\lambda|\le 1.$ \end{proposition} \begin{proof} We prove \eqref{eqn:RV_L2bd} only for $R_{V,j}(\lambda+i0)$, since the proof is analogous for $R_{V,j}(\lambda-i0).$ Using a resolvent expansion motivated by \cite{borthwick2015dispersive}, we observe that \[ R_{V,j}(\lambda+ i0) = R_{0,j}(\lambda+ i0) [ I + V R_{0,j} (\lambda+ i0)]^{-1}. \] Hence, if we can establish boundedness and regularity of $[ I + V R_{0,j}(\lambda+i0)]^{-1}$ through $\lambda = 0$, then \eqref{eqn:RV_L2bd} follows immediately from \propref{L2_L2_prop}. We observe that boundedness and regularity of the operator $(I + R_{0,j} (\lambda+i0) V)^{-1}$ follow from the assumption that $0$ is not a resonance or an eigenvalue of $-\Delta_{C(X)} + V$ and hence the boundedness of \[ (I + VR_{0,j}(\lambda - i0))^{-1} = [(I + R_{0,j}(\lambda + i0) V)^{-1}]^* \] follows from analytic Fredholm theory. Thus, we may extend the results of Proposition \ref{prop:resest} through $\lambda = 0$ to arrive at \eqref{eqn:RV_L2bd}. To prove \eqref{RV_imaginarypart}, we first consider the case $k = 0$ and establish the pointwise bounds in $\lambda$.
We require the following resolvent identity \begin{align} \begin{split} \label{resid} &R_{V,j} (\lambda + i0 ) - R_{V,j} (\lambda - i0)\\ & \hspace{1cm}= (I + R_{0,j} (\lambda + i0 )V)^{-1} [R_{0,j}(\lambda + i0) - R_{0,j} (\lambda - i 0)] (I + VR_{0,j} (\lambda - i0))^{-1}. \end{split} \end{align} This shows that the behavior of $\operatorname{Im\,} R_{V,j}(\lambda+i0)$ is the same as that of $\Im R_{0,j}(\lambda+i0)$ near $\lambda = 0,$ provided the operators \[ (I + R_{0,j} (\lambda+i 0)V)^{-1} \ \ \text{and} \ \ (I + VR_{0,j}(\lambda-i0))^{-1} \] are bounded for $\lambda$ in a neighborhood of $0$, which we have already observed earlier in the proof. As a result, the $k=0$ bound in \eqref{RV_imaginarypart} clearly follows. The results for $k > 0$ then follow by differentiating term by term and applying \propref{L2_L2_imaginary}. \end{proof} \section{Full spectral resolution estimates} \label{spectralresolution} \begin{proposition}\label{RV_pointwise} Suppose $V\in \rho^{-2\sigma}L^\infty(\mathbb{R}^+)$ with \begin{equation}\label{RV_sigmacondition}\sigma > 4\left\lceil\frac{n}{4}\right\rceil - 2+k. \end{equation} Then for any $\alpha \ge \max\{k - \frac{n-1}{2},0\}$, \begin{equation}\label{RV_highfreq_weighted} \sup\limits_{r,s>0}\left| \rho^{-\alpha}(r)\partial_\lambda^k\operatorname{Im\,} R_{V,j}(\lambda\pm i 0;r,s)\rho^{-\alpha}(s)\right| \le C_{j,k,V}\lambda^{2\lceil\frac{n}{4}\rceil(n-2)-1} \end{equation} for all $\lambda\ge 1$ and some $C_{j,k,V}>0$. Furthermore, if $0 < \lambda \le 1,$ we have that \begin{equation}\label{RV_lowfreq_weighted} \sup\limits_{r,s>0}\left| \rho^{-\alpha}(r)\partial_\lambda^k\operatorname{Im\,} R_{V,j}(\lambda\pm i 0;r,s)\rho^{-\alpha}(s)\right| \le C_{j,k,V}\lambda^{n-2-k}, \end{equation} under the same restrictions on $\alpha.$ \end{proposition} The proof proceeds similarly to \cite[\S6]{borthwick2015dispersive}, which utilizes the following modified version of Young's inequality.
\begin{lemma}\label{young} Suppose that on a measure space $(Y,\mu)$ the integral kernels $K_j(z,w)$, $j=1,2,$ satisfy \[\|K_1(z,\cdot)\|_{L^{q_1}} \le A,\hskip 0.2in \|K_1(\cdot,w)\|_{L^{q_1}}\le A, \hskip 0.2in \|K_2(\cdot,w')\|_{L^{q_2}} \le B\] uniformly in $z,w,w'$ for $q_1,q_2\in [1,\infty]$. Then if $\frac{1}{q_1} + \frac{1}{q_2} = \frac{1}{p} + 1$, we have that \[\left\|\int K_1(\cdot,w)K_2(w,w')\,d\mu(w)\right\|_{L^p} \le AB\] uniformly in $w'.$ The bound on $\|K_1(\cdot,w)\|_{L^{q_1}}$ is not required if $p = \infty.$ \end{lemma} With this lemma in hand, we proceed to the proof of \propref{RV_pointwise}. \begin{proof}[Proof of \propref{RV_pointwise}] We begin by expanding $R_{V,j}$ in a Birman-Schwinger series at all frequencies, as in \cite{goldberg2004dispersive}, which gives \begin{equation}\label{Birman-Schwinger} R_{V,j}(\tau) = \sum\limits_{\ell = 0}^{2M - 1}R_{0,j}(\tau)(-VR_{0,j}(\tau))^\ell + [R_{0,j}(\tau)V]^M R_{V,j}(\tau)[V R_{0,j}(\tau)]^M. \end{equation} \noindent As previously discussed, it suffices to consider only the case where we choose $\lambda+i0$ with $\lambda > 0.$ For simplicity, we write $R_{0,j}$ for $R_{0,j}(\lambda+i0)$ and $R_{V,j}$ for $R_{V,j}(\lambda+i0)$. We first consider the remainder term $[R_{0,j}V]^MR_{V,j}[V R_{0,j}]^M$. Since $V\in \rho^{-2\sigma}L^\infty$, we may write $V(r) = \rho^{-2\sigma}(r)f(r)$ for some $f\in L^\infty(\mathbb{R}^+).$ Also, note that for any two operators with Schwartz kernels $A(r,s),\,B(r,s)$, the kernel of their composition is given by \[\langle A(r,\cdot),\overline{B(\cdot,s)}\rangle_{L^2(\mathbb{R}^+)} = \langle B(\cdot,s),\overline{A(r,\cdot)}\rangle_{L^2(\mathbb{R}^+)},\] provided the composition makes sense. 
Therefore, we can write \begin{align}\label{remainder_pairing} \begin{split} \rho^{-\alpha}[R_{0,j}V]^M R_{V,j}[V R_{0,j}]^M\rho^{-\alpha}(r,s) = \left\langle (\rho^{-\sigma}R_{V,j}\rho^{-\sigma})A(\cdot,s),A^*(r,\cdot)\right\rangle_{L^2}, \end{split} \end{align} where \[A(r,s) = (\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{M-1}(\rho^{-\sigma}f R_{0,j}\rho^{-\alpha})(r,s),\] and $A^*$ denotes the adjoint with respect to the $L^2$ pairing. However, we know that $A^*(r,s) = \overline{A(s,r)}$, and hence we can express the right-hand side of \eqref{remainder_pairing} as \[\langle (\rho^{-\sigma}R_{V,j}\rho^{-\sigma})A(\cdot,s),\overline{A(\cdot,r)}\rangle_{L^2}.\] By \eqref{eqn:RV_L2bd}, we have that \begin{equation}\label{pairing_est} \left|\partial_\lambda^k\langle(\rho^{-\sigma}R_{V,j}\rho^{-\sigma})A(\cdot,s), \overline{A(\cdot,r)}\rangle_{L^2}\right| \le \frac{C}{\langle\lambda\rangle}\max\limits_{k_1+k_2\le k}\left(\ltwo{\partial_\lambda^{k_1}A(\cdot,s)}\ltwo{\partial_\lambda^{k_2}A(\cdot,r)}\right). \end{equation} To estimate the norms on the right, we wish to iteratively apply \lemref{young} to each factor in the definition of $A$. For this we consider the high and low frequency cases separately. First, suppose $\lambda \ge 1$. By \propref{Lq_nonimaginary}, we have that for $1\le q\le \frac{n}{n-2}$ and any $0 \le \widetilde k\le k_1,$ \begin{equation}\label{Lq_est} \lpn{\rho^{-\sigma}(r)\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\sigma}(r,\cdot)}{q} \le C\rho^{-\sigma}(r) \lambda ^{n-2-\widetilde k}\sum\limits_{\ell+m = \widetilde k}\left[(1 + \lambda r)^{\ell - \frac{n-1}{2}}\left( C_1 + C_2\lambda^{m-\frac{n-1}{2}}\right)\right]. \end{equation} Note that if $\ell\le \frac{n-1}{2}$, we can see that the corresponding term in \eqref{Lq_est} is bounded by a constant times \[ \lambda ^{n-2-\widetilde k}\max\{1,\lambda^{(\widetilde k-\ell) - \frac{n-1}{2}}\} \le \max\{\lambda^{n-2-\widetilde k},\lambda^{\frac{n-3}{2}-\ell}\}\] uniformly for $r\in[0,\infty)$. 
On the other hand, if $\ell > \frac{n-1}{2}$ then we have that \begin{equation}\label{nonuniform} \rho^{-\sigma}(r)(1 + \lambda r)^{\ell-\frac{n-1}{2}}\lambda^{n-2-\widetilde k}\le (1 + \lambda)^{\ell - \frac{n-1}{2}}(1 + r)^{\ell - \frac{n-1}{2}-\sigma}\lambda^{n-2-\widetilde k} \le C(1 + r)^{\widetilde k-\sigma-\frac{n-1}{2}}\lambda^{\frac{n-3}{2} -(\widetilde k-\ell)} \end{equation} by the elementary inequality $1 + \lambda r \le (1 + \lambda)(1 + r)$. Recalling our conditions on $\sigma$, we see that $\widetilde k - \sigma - \frac{n-1}{2} < 0$. Therefore, the corresponding term in \eqref{Lq_est} is bounded by a constant times \[\lambda^{\frac{n-3}{2}-(\widetilde k-\ell)}\max\{1,\lambda^{(\widetilde k-\ell) - \frac{n-1}{2}}\} = \max\{\lambda^{\frac{n-3}{2} - (\widetilde k - \ell)},\lambda^{-1}\}\] uniformly in $r$. Maximizing over the possible combinations of $\ell,m$ with $\ell + m = \widetilde k$, we have that \begin{equation}\label{right_var} \lpn{\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\sigma}(r,\cdot)}{q} \le C\max\{\lambda^{n-2-\widetilde k},\lambda^{\frac{n-3}{2}}\} \end{equation} for some $C > 0$, uniformly in $r$. A similar argument gives \begin{equation}\label{left_var} \lpn{\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\sigma}(\cdot,s)}{q}\le C\max\{\lambda^{n-2-\widetilde k},\lambda^{\frac{n-3}{2}}\} \end{equation} uniformly in $s.$ For the final factor in the definition of $A$, which has asymmetric weights, we only need an estimate in the left variable in order to apply \lemref{young}. By \propref{Lq_nonimaginary} we have, for $1\le q\le \frac{n}{n-2},$ \begin{align*} & \lpn{\rho^{-\sigma}f\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\alpha}(\cdot,s)}{q} \\ & \hspace{1.5cm} \le C\rho^{-\alpha}(s) \lambda ^{n-2-\widetilde k}\sum\limits_{\ell+m = \widetilde k}\left[(1 + \lambda s)^{\ell - \frac{n-1}{2}}\left( C_1 + C_2\lambda^{m-\frac{n-1}{2}}\right)\right]. \end{align*} We may repeat our previous argument almost exactly in order to bound this quantity.
The only difference here is that the analogue of \eqref{nonuniform} has a factor of $\rho^{-\alpha}$ instead of $\rho^{-\sigma}.$ So in order to obtain an estimate which is uniform in $s$, we must enforce the condition that $\alpha \ge \max\{k - \frac{n-1}{2},0\}$ and recall that $\widetilde k \le k.$ Aside from this, the rest of the argument is identical, and so we have \begin{equation}\label{R0_uniform} \lpn{\rho^{-\sigma}f\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\alpha}(\cdot,s)}{q}\le C\max\{\lambda^{n-2-\widetilde k},\lambda^{\frac{n-3}{2}}\} \end{equation} uniformly in $s$, provided that $\alpha \ge \max\{k - \frac{n-1}{2},0\}.$ We can now iteratively apply \lemref{young} to $\|\partial_\lambda^{k_1}A(\cdot,s)\|_{L^2}$. To do this, we must choose $q = \frac{2M}{2M-1}$ so that $\frac{M}{q} = \frac{1}{2}+(M-1)$. We also require $1\le q \le \frac{n}{n-2}$, which is equivalent to taking $M \ge \frac{n}{4}$. This then implies that we must take $\sigma > \frac{n(2M-1)}{2M}+k_1$ in order for \propref{Lq_imaginary} and \propref{Lq_nonimaginary} to apply. In particular, we can take $M = \left\lceil\frac{n}{4}\right\rceil$, the smallest integer greater than or equal to $\frac{n}{4}$. Using \eqref{RV_sigmacondition}, we see that \[\sigma > 4\left\lceil\frac{n}{4}\right\rceil - 2 + k = n\left(\frac{4M-2}{n}\right) + k \ge n\left(\frac{4M-2}{4M}\right)+k \ge n\left(\frac{2M-1}{2M}\right)+k_1,\] and so the following argument holds under this condition on $\sigma.$ Repeatedly applying \lemref{young} to $\ltwo{\partial_\lambda^{k_1} A(\cdot,s)}$ and using that $f$ is uniformly bounded, we obtain \[\ltwo{\partial_\lambda^{k_1}A(\cdot,s)} \le C\lambda^{M(n-2)}.
\] The analogous estimate for $\|\partial_\lambda^{k_2}A(\cdot, r)\|_{L^2}$ combined with \eqref{pairing_est} gives \begin{equation}\label{remainder_highfreq} \left|\partial_\lambda^k\langle(\rho^{-\sigma}R_{V,j}\rho^{-\sigma})A(\cdot,s), \overline{A(\cdot,r)}\rangle_{L^2} \right| \le C\lambda^{2M(n-2)-1} \end{equation} for $\lambda \ge 1$, and this estimate holds uniformly in $r$ and $s.$ Next, we consider the remainder term in \eqref{Birman-Schwinger} when $0 < \lambda\le 1.$ In this case, taking the imaginary part in the left-hand side of \eqref{pairing_est} is essential, so we must estimate \begin{equation} \partial_\lambda^k\operatorname{Im\,}\langle \rho^{-\sigma} R_{V,j}\rho^{-\sigma}A(\cdot,s),A(\cdot,r)\rangle_{L^2}. \end{equation} First, we note that the above can be written as a finite linear combination of terms where the imaginary part falls on either $R_{V,j}$ or at least one of the factors of $A$. Thus, by \eqref{eqn:RV_L2bd} and \eqref{RV_imaginarypart}, we can write \begin{align}\label{pairing_imag} \begin{split} \left|\partial_\lambda^k\operatorname{Im\,}\langle \rho^{-\sigma} R_{V,j}\rho^{-\sigma}A(\cdot,s),A(\cdot,r)\rangle_{L^2} \right|&\le C\max\limits_{k_1+k_2 + k_3\le k}\lambda^{n-2-k_3}\|\partial_\lambda^{k_1}A(\cdot,s)\|_{L^2}\|\partial_\lambda^{k_2}A(\cdot,r)\|_{L^2}\\ & + C\max\limits_{k_1+k_2\le k}\|\partial_\lambda^{k_1}\operatorname{Im\,} A(\cdot,s)\|_{L^2}\|\partial_\lambda^{k_2}A(\cdot,r)\|_{L^2} \end{split} \end{align} for $0 < \lambda \le 1.$ To estimate the first term on the right-hand side of \eqref{pairing_imag}, we can argue analogously to the $\lambda \ge 1$ case, but now we use the low-frequency estimates from \propref{Lq_nonimaginary}, which give \[\|\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^q}\le C_1\lambda^{-\widetilde k}\rho^{-\sigma}(r) + C_2\lambda^{n-2-\widetilde k}(1 + \lambda r)^{\widetilde k - \frac{n-1}{2}}\rho^{-\sigma}(r) \le C\lambda^{-\widetilde k}\] for any $\widetilde k \le k$ and 
$1\le q\le \frac{n}{n-2}$ as before. Similarly, we have \begin{equation}\label{R0_nonim_term} \|\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\alpha}(\cdot,s)\|_{L^q} \le C\lambda^{-\widetilde k}, \end{equation} for $\alpha \ge \max\{k-\frac{n-1}{2},0\}.$ Therefore, using \lemref{young}, we have that \[\|\partial_\lambda^{\widetilde k}A(\cdot,s)\|_{L^2} \le C\lambda^{-\widetilde k} \] uniformly in $s$, for $0 < \lambda \le 1.$ Consequently, we have \begin{equation}\label{pairing_1} \max\limits_{k_1+k_2+k_3\le k}\lambda^{n-2-k_3}\|\partial_\lambda^{k_1}A(\cdot,s)\|_{L^2}\|\partial_\lambda^{k_2}A(\cdot,r)\|_{L^2} \le C\lambda^{n-2-k}. \end{equation} Now, to handle the second term on the right-hand side of \eqref{pairing_imag}, we note that one may expand $\operatorname{Im\,} A(\cdot,s)$ into a linear combination of terms in which the imaginary part falls on at least one factor of $R_{0,j}.$ Therefore, we can use \propref{Lq_imaginary} to obtain that \begin{equation}\label{R0_im_term} \|\rho^{-\sigma}\partial_\lambda^{\widetilde k}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^q} \le C\lambda^{n-2}(1 + \lambda r)^{k-\frac{n-1}{2}}\rho^{-\sigma}(r) \le C\lambda^{n-2} \end{equation} for any $\widetilde k\le k.$ Thus, applying \lemref{young} in combination with \eqref{R0_nonim_term} and \eqref{R0_im_term} gives \[\|\partial_\lambda^{\widetilde k}\operatorname{Im\,} A(\cdot,s)\|_{L^2}\le C\lambda^{n-2-\widetilde k}\] for any $\widetilde k \le k.$ Hence, we have \begin{equation}\label{pairing_2} \max\limits_{k_1+k_2\le k}\|\partial_\lambda^{k_1}\operatorname{Im\,} A(\cdot,s)\|_{L^2}\|\partial_\lambda^{k_2}A(\cdot,r)\|_{L^2} \le C\lambda^{n-2-k} \end{equation} uniformly in $r,s.$ Combining \eqref{pairing_1} and \eqref{pairing_2} with \eqref{pairing_imag} yields \begin{equation}\label{remainder_lowfreq} \left|\partial_\lambda^k\operatorname{Im\,}\langle \rho^{-\sigma} R_{V,j}\rho^{-\sigma}A(\cdot,s),A(\cdot,r)\rangle_{L^2}\right| \le C\lambda^{n-2-k}, \end{equation} and so the remainder in the 
Birman-Schwinger expansion of $R_{V,j}$ satisfies the claimed estimate for $0<\lambda\le 1.$ Now we consider a generic term in the sum in \eqref{Birman-Schwinger} for $1\le \ell \le 2M-1$. As before, we use the fact that $V = \rho^{-2\sigma}(r)f(r)$ for some $f\in L^\infty(\mathbb{R}^+)$ to write \begin{align} \begin{split} \rho^{-\alpha}\partial_\lambda^k R_{0,j}(VR_{0,j})^\ell\rho^{-\alpha}(r,s) = \partial_\lambda^k (\rho^{-\alpha}R_{0,j}\rho^{-\sigma})(\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{\ell - 1}(\rho^{-\sigma}fR_{0,j}\rho^{-\alpha})(r,s). \end{split}\end{align} Writing the above as an $L^2$-pairing, we have \begin{equation}\label{L2_pairing}\rho^{-\alpha} \partial_\lambda^kR_{0,j}(VR_{0,j})^\ell\rho^{-\alpha}(r,s) =\partial_\lambda^k \langle\rho^{-\alpha}R_{0,j}\rho^{-\sigma}(r,\cdot),(\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{\ell-1}(\rho^{-\sigma}fR_{0,j}\rho^{-\alpha})(\cdot,s)\rangle_{L^2}. \end{equation} Upon taking the imaginary part, we obtain a finite linear combination of terms of the form \eqref{L2_pairing} where at least one factor of $R_{0,j}$ has the imaginary part acting on it. 
We assume without loss of generality that the leftmost factor on the right-hand side of \eqref{L2_pairing} has the imaginary part, and thus we can apply H\"older's inequality to obtain \begin{align}\label{holder} \begin{split} \left|\partial_\lambda^k \langle\rho^{-\alpha}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot),(\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{\ell-1}(\rho^{-\sigma}fR_{0,j}\rho^{-\alpha})(\cdot,s)\rangle_{L^2}\right|\\ & \hspace{-3.7in}\le C\|\rho^{-\alpha}\partial_\lambda^{k_1}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^{q'}}\\ & \hspace{-3.5in}\times\|(\rho^{-\sigma}f\partial_\lambda^{k_2}R_{0,j}\rho^{-\sigma})\dotsm(\rho^{-\sigma}f\partial_\lambda^{k_{\ell-1}}R_{0,j}\rho^{-\sigma})(\rho^{-\sigma}f\partial_\lambda^{k_\ell}R_{0,j}\rho^{-\alpha})(\cdot,s)\|_{L^p} \end{split} \end{align} for some $1\le q' < \infty$ to be determined, and $p$ given by $\frac{1}{q'} + \frac{1}{p} = 1,$ where $k_1 + k_2 +\dotsm + k_\ell = k.$ If $0 < \lambda \le 1$, we recall that by \propref{Lq_imaginary}, \begin{equation}\label{imag_Lq} \|\rho^{-\alpha}\partial_\lambda^{k_1}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^{q'}} \le C\lambda^{n-2} \end{equation} provided that $\sigma > \frac{n}{q'} + k$ and $\alpha \ge \max\{ k_1 - \frac{n-1}{2},0\}$. Similarly, for any $\widetilde k\le k$, we have by \propref{Lq_nonimaginary} that for any $1\le q\le \frac{n}{n-2}$, \begin{equation}\label{nonimag_Lq} \|\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^q} \le C\lambda^{-\widetilde k}, \end{equation} if $\sigma > \frac{n}{q}+k$, along with the analogous estimate when the norm is taken with respect to the other variable. 
Using \eqref{imag_Lq}, \eqref{nonimag_Lq}, and repeated applications of \lemref{young} to the right-hand side of \eqref{holder}, we obtain \begin{equation}\label{nonremainder_lowfreq} \left|\partial_\lambda^k \langle\rho^{-\alpha}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot),(\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{\ell-1}(\rho^{-\sigma}fR_{0,j}\rho^{-\alpha})(\cdot,s)\rangle_{L^2}\right|\le C\lambda^{n-2 - k} \end{equation} when $0 < \lambda \le 1,$ as long as we choose $q,q'$ such that $\frac{1}{q'} + \frac{\ell}{q} = \ell$ and provided that $\sigma >\max\{\frac{n}{q'},\frac{n}{q}\}+k$. Since $\frac{n}{q} \ge n-2-k$, we have by \eqref{RV_sigmacondition} that \[\sigma > 4\left\lceil\frac{n}{4}\right\rceil - 2 + k \ge n-2+k,\] and so we can ensure that $\sigma > \frac{n}{q} + k$ if $q$ is chosen sufficiently close to, but just below $\frac{n}{n-2}.$ Given this choice of $q,$ we also have that $q'$ lies just above $\frac{n}{2\ell}$, and so \[\sigma > 4\left\lceil\frac{n}{4}\right\rceil - 2+k = 2(2M-1) + k,\] and since $1\le \ell \le 2M-1$, we can ensure that $\sigma > \frac{n}{q'} + k$, since $\frac{n}{q'}+k$ can be made arbitrarily close to $2\ell + k.$ Therefore, under the claimed conditions on $\sigma$ and $\alpha$, we have that \[\left|\rho^{-\alpha}\partial_\lambda^k\operatorname{Im\,}\left[R_{0,j}(VR_{0,j})^\ell\right]\rho^{-\alpha}(r,s)\right| \le C\lambda^{n-2-k}\] for $0 < \lambda\le 1$ and any $1\le \ell \le 2M-1.$ In the case where $\lambda \ge 1$, we have that \[\|\rho^{-\alpha}\partial_\lambda^{k_1}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^{q'}} \le C\max\{ \lambda ^{n - 2- k_1-\frac{n}{q'}},\lambda^{\frac{n-3}{2}}\}\] uniformly in $r$ as before, provided that $\alpha \ge \max\{k-\frac{n-1}{2},0\}$ and $\sigma > \frac{n}{q'} + k$. Next, choose some $q$ which lies just below $\frac{n}{n-2}$ as above. 
Then, for any $\widetilde k \le k,$ \[\|\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\sigma}(r,\cdot)\|_{L^q}\le C\max\{\lambda^{n-2 - \widetilde k},\lambda^{\frac{n-3}{2}}\}, \] uniformly in $r$, provided that $\sigma > \frac{n}{q} + k$, along with the analogous estimate when the $L^q$ norm is taken over the second variable. We also note that for the rightmost factor in \eqref{holder}, we have \[\|\rho^{-\sigma}\partial_\lambda^{\widetilde k}R_{0,j}\rho^{-\alpha}(\cdot,s)\|_{L^q}\le C\max\{\lambda^{n-2-\widetilde k},\lambda^{\frac{n-3}{2}}\},\] uniformly in $s,$ provided $\alpha \ge \max\{k - \frac{n-1}{2},0\}.$ Given these estimates and \lemref{young}, we can maximize over the possible combinations of $k_1,\dotsc,k_\ell$ to see from \eqref{holder} that \begin{equation}\label{nonremainder_highfreq} \begin{split} \left|\partial_\lambda^k \langle\rho^{-\alpha}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot),(\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{\ell-1}(\rho^{-\sigma}fR_{0,j}\rho^{-\alpha})(\cdot,s)\rangle_{L^2} \right|\\ & \hspace{-2in}\le C\lambda^{\ell( n-2)-k}\max\{\lambda^{n-2-\frac{n}{q'}},\lambda^{\frac{n-3}{2}}\} \end{split} \end{equation} provided that $\frac{1}{q'} + \frac{\ell}{q} = \ell$ and $\sigma >\max\{\frac{n}{q'},\frac{n}{q}\}+k$. As shown previously, this condition on $\sigma$ is satisfied under the hypothesis \eqref{RV_sigmacondition} if $q$ is chosen close enough to $\frac{n}{n-2}.$ Furthermore, for this choice of $q$, we have that $q'$ lies just above $\frac{n}{2\ell}$. We claim that this implies that the bound \eqref{nonremainder_highfreq} is smaller than the estimate \eqref{remainder_highfreq}. To see this, note that if $q'$ is chosen sufficiently close to $\frac{n}{2\ell}$, then $\frac{n}{q'} = 2\ell + \varepsilon$ for some $\varepsilon > 0$. 
Then, we have \[\ell(n-2) + n-2 - \frac{n}{q'} = \ell(n-2) + (n-2) - 2\ell -\varepsilon \le \ell(n-4) + (n-2).\] If $n\le 4$, then the above is at most $n-2$ for all $\ell = 1,\dotsc,2M-1$. If $n\ge 4,$ then we have \[\ell(n-4)+(n-2) \le (2M-1)(n-4) +(n-2) = 2M(n-2) -2(2M-1) \le 2M(n-2) - 1.\] Furthermore, we note that \[\ell(n-2) + \frac{n-3}{2} \le (2M-1)(n-2) + \frac{n-3}{2} = 2M(n-2) - \frac{n-1}{2} \le 2M(n-2) - 1.\] Therefore, the exponent on $\lambda$ in \eqref{nonremainder_highfreq} is smaller than that of \eqref{remainder_highfreq} for any $\ell,$ and hence we have \begin{equation}\label{nonremainder_highfreq2} \left|\partial_\lambda^k \langle\rho^{-\alpha}\operatorname{Im\,} R_{0,j}\rho^{-\sigma}(r,\cdot),(\rho^{-\sigma}fR_{0,j}\rho^{-\sigma})^{\ell-1}(\rho^{-\sigma}fR_{0,j}\rho^{-\alpha})(\cdot,s)\rangle_{L^2} \right|\le C\lambda^{2M(n-2)-1} \end{equation} when $\lambda \ge 1$. Now, if the imaginary part falls on any factor other than the first on the right-hand side of \eqref{L2_pairing}, we simply repeat the preceding argument, but with the $L^{q'}$ norm on that factor. Finally, we consider the case where $\ell = 0$ in \eqref{Birman-Schwinger}. For this term, we must simply obtain pointwise bounds on $\rho^{-\alpha}(r)\partial_\lambda^k\operatorname{Im\,} R_{0,j}(r,s)\rho^{-\alpha}(s)$. Recall that by \lemref{stone}, we have \[\operatorname{Im\,} R_{0,j}(r,s) = \frac{\pi }{2} \lambda^{n-2}(\lambda r\lambda s)^{-\frac{n-2}{2}} J_{\nu_j}(\lambda r)J_{\nu_j}(\lambda s).\] Therefore, $\partial_\lambda^k \operatorname{Im\,} R_{0,j}(r,s)$ can be written as a finite linear combination of terms of the form \begin{equation}\label{R0_linearcomb} \lambda^{n-2-k}(\lambda r)^{\ell-\frac{n-2}{2}}(\lambda s)^{m - \frac{n-2}{2}}J_{\nu_j+a}(\lambda r)J_{\nu_j+b}(\lambda s) \end{equation} for $\ell + m = k$, $|a|\le \ell$, and $|b| \le m$. 
Using the standard asymptotics of the Bessel functions, we have that the above is bounded in absolute value by a constant times \begin{equation}\label{R0_unweighted} \lambda^{n-2-k}(1 + \lambda r)^{\ell - \frac{n-1}{2}}(1 + \lambda s)^{m - \frac{n-1}{2}}. \end{equation} Next, we note that \[\rho^{-\alpha}(r)(1 + \lambda r)^{\ell - \frac{n-1}{2}} \le C(1 + \lambda)^\ell,\] for all $\lambda,$ uniformly in $r$, under the assumption that $\alpha \ge \max\{k - \frac{n-1}{2},0\}$. The analogous estimate holds for $\rho^{-\alpha}(s)(1 + \lambda s)^{m-\frac{n-1}{2}}$, and therefore, we have that \begin{equation}\label{R0_pointwisebound} \left|\rho^{-\alpha}(r)\partial_\lambda^k R_{0,j}(r,s)\rho^{-\alpha}(s)\right| \le C\lambda^{n-2-k}(1 + \lambda)^k. \end{equation} Combining \eqref{R0_pointwisebound} with \eqref{remainder_lowfreq}, \eqref{remainder_highfreq}, \eqref{nonremainder_lowfreq}, and \eqref{nonremainder_highfreq}, the proof of \propref{RV_pointwise} is complete. \end{proof} \section{Dispersive estimates} \label{dispersive} In this section, we prove the main estimate in \thmref{disp_est}. To accomplish this, we write the continuous part of the spectral measure for $-\Delta_{C(X)} + V$ as \[d\Pi_V(\lambda;x,y) = \frac{1}{2\pi i}[R_V(\lambda+i0;x,y) - R_V(\lambda-i0;x,y)]\lambda\,d\lambda = \frac{1}{\pi}\Im R_V(\lambda + i0;x,y)\lambda\,d\lambda,\] using that $R_V(\lambda-i0;x,y) = \overline{R_V(\lambda+i0;x,y)}.$ Then, we can write \[\left[e^{it(-\Delta_{C(X)} + V)}P_c\right](x,y) = \int\limits_{-\infty}^\infty e^{it\lambda^2}d\Pi_V(\lambda;x,y)= \frac{1}{\pi }\int\limits_{-\infty}^\infty e^{it\lambda^2}\operatorname{Im\,} R_{V}(\lambda +i0;x,y) \lambda\,d\lambda,\] where $P_c$ denotes projection onto the continuous spectrum of $-\Delta_{C(X)}+V.$ Projecting further onto the span of $\varphi_j$, we obtain \begin{equation} \frac{1}{\pi}\int\limits_{-\infty}^\infty e^{it\lambda^2}\Im R_{V,j}(\lambda +i0;r,s)\lambda\,d\lambda \end{equation} since $V$ is radial. 
Therefore, the proof of \thmref{disp_est} is equivalent to showing that \begin{equation}\label{spectral_measure_est} \left|\frac{1}{\pi }\int\limits_{-\infty}^\infty e^{it\lambda^2} \rho^{-\alpha}(r)\Im R_{V,j}(\lambda +i0;r,s)\rho^{-\alpha}(s)\lambda\,d\lambda\right| \le Ct^{-\frac{n}{2}} \end{equation} for $\alpha > 2\left\lceil\frac{n}{4}\right\rceil(n-2) - \frac{n-1}{2} + 2$. \begin{proof}[Proof of \thmref{disp_est}] Assume that $n$ is odd, and let $\chi\in C_0^\infty(\mathbb{R})$ be a cutoff function which is identically one on $[-1/2,1/2]$ and zero outside $[-1,1]$. We then consider the low-frequency component of the left-hand side of \eqref{spectral_measure_est}, given by \begin{equation}\label{low_frequency} \frac{1}{\pi }\int\limits_\mathbb{R} e^{it\lambda^2} \,\chi(\lambda)\rho^{-\alpha}(r)\Im R_{V,j}(\lambda+i0;r,s)\rho^{-\alpha}(s)\lambda\,d\lambda. \end{equation} Noting that the operator $\frac{1}{2it\lambda}\partial_\lambda$ preserves $e^{it\lambda^2}$, we may integrate by parts $N$ times in $\lambda$ to obtain \begin{equation}\label{IBP} \frac{C_N}{t^N}\int\limits_{-\infty}^\infty e^{it\lambda^2}\partial_\lambda\left(\frac{1}{\lambda}\partial_\lambda\right)^{N-1}\left[\chi(\lambda)\rho^{-\alpha}(r)\Im R_{V,j}(\lambda+i0;r,s)\rho^{-\alpha}(s)\right]\,d\lambda \end{equation} for some $C_N\in\mathbb{C}\setminus\{0\}.$ Now, observe that when expanding the integrand via the product rule, any terms in which a derivative falls on the factor of $\chi(\lambda)$ can be written as \[C_Nt^{-N}\int\limits_{-\infty}^\infty e^{it\lambda^2}G(\lambda;r,s)\,d\lambda\] for some $G(\lambda;r,s)$ which is smooth and compactly supported away from $0$ in $\lambda,$ and bounded uniformly in $r,s$ by \propref{RV_pointwise}. 
Applying the standard dispersive estimate for the Schr\"odinger equation on $\mathbb{R},$ we have \begin{equation}\label{G_terms} t^{-N}\left|\int\limits_{-\infty}^\infty e^{it\lambda^2}G(\lambda;r,s)\,d\lambda\right|\le Ct^{-N-\tfrac{1}{2}}\|\widehat G(\cdot\,;r,s)\|_{L^1}, \end{equation} where $\widehat G$ denotes the Fourier transform in $\lambda.$ Since $n$ is odd, we may choose $N = \frac{n-1}{2}$, so the right-hand side of \eqref{G_terms} is bounded by $C t^{-\frac{n}{2}}$ as claimed, after possibly increasing $C$. Now, any terms obtained from expanding \eqref{IBP} where no derivatives fall on the factor of $\chi$ must be of the form \begin{equation}\label{lowfreq_linearcomb} \lambda^{1-2N + k}\chi(\lambda)\rho^{-\alpha}(r)\partial_\lambda^k\Im R_{V,j}(\lambda+i0;r,s)\rho^{-\alpha}(s) \end{equation} for some $k = 1,2,\dotsc,N$ since at least one derivative always falls on the factor of $\Im R_{V,j}.$ By \propref{RV_pointwise}, we have that each of the above terms is bounded in absolute value by a constant times \[\lambda^{1-2N + k}\chi(\lambda)\lambda^{n-2-k} = \lambda^{n-1 - 2N}\chi(\lambda)\] uniformly for $r,s >0.$ For our choice of $N = \frac{n-1}{2}$, we have that $\lambda^{n-1 - 2N} = 1$, and hence \eqref{lowfreq_linearcomb} is a smooth function of $\lambda$, and so its Fourier transform is bounded in $L^1$. 
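As a concrete check of this exponent bookkeeping (a worked instance supplied here for illustration), take $n = 3$. Then $N = \frac{n-1}{2} = 1$, so a single integration by parts produces the factor $t^{-1}$, while the one-dimensional dispersive estimate contributes $t^{-\frac{1}{2}}$, giving \[t^{-N}\cdot t^{-\frac{1}{2}} = t^{-\frac{3}{2}} = t^{-\frac{n}{2}}.\] Moreover, the surviving power of $\lambda$ in \eqref{lowfreq_linearcomb} is $\lambda^{n-1-2N} = \lambda^{3-1-2} = \lambda^{0} = 1$, so the integrand is indeed uniformly bounded near $\lambda = 0$. 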
Once again, using the standard $L^1\to L^\infty$ dispersive estimate for the free one-dimensional Schr\"odinger equation, we have that \begin{align}\label{low_frequency_bound} \begin{split} t^{-N}\left|\int\limits_{-\infty}^\infty e^{it\lambda^2}\lambda^{1-2N + k}\chi(\lambda)\rho^{-\alpha}(r)\partial_\lambda^k\Im R_{V,j}(\lambda+i0)(r,s)\rho^{-\alpha}(s)\,d\lambda\right| \le C t^{-N-\frac{1}{2}} = Ct^{-\frac{n}{2}}, \end{split} \end{align} uniformly in $r,s.$ We remark that it is in this calculation that the choice of $N = \frac{n-1}{2}$, and hence the power of $t^{-\frac{n}{2}}$, cannot be improved, since any additional derivatives which fall on $\Im R_{V,j}(\lambda+i0)$ would yield an integrand which is not bounded near $\lambda = 0$. We also note that for this portion of the argument, we only require that $\alpha \ge 0$, since we did not differentiate $\Im R_{V,j}$ more than $\frac{n-1}{2}$ times. Next, we consider the ``high-frequency" component of \eqref{spectral_measure_est}, which we define by \begin{equation}\label{high_frequency} \frac{1}{\pi}\int\limits_{-\infty}^\infty e^{it\lambda^2}\chi(\lambda/R)(1 - \chi(\lambda)) \rho^{-\alpha}(r)\Im R_{V,j}(\lambda +i0;r,s)\rho^{-\alpha}(s)\,\lambda\,d\lambda \end{equation} for any $R\in[1,\infty)$. To control this term, we integrate by parts as before to obtain \begin{align*} &\frac{1}{\pi}\int\limits_{-\infty}^\infty e^{it\lambda^2}\chi(\lambda/R)(1 - \chi(\lambda)) \rho^{-\alpha}(r)\Im R_{V,j}(\lambda +i0;r,s)\rho^{-\alpha}(s)\,\lambda\,d\lambda\\ & \hskip .05in = C_N t^{-N}\int\limits_{-\infty}^\infty e^{it\lambda^2}\partial_\lambda\left(\frac{1}{\lambda}\partial_\lambda\right)^{N-1}\left[\chi(\lambda/R)(1 - \chi(\lambda))\rho^{-\alpha}(r)\Im R_{V,j}(\lambda + i0;r,s)\rho^{-\alpha}(s)\right]\,d\lambda \end{align*} for any $N > 0$ and some corresponding constant $C_N$. 
We aim to show that the integrand can be bounded uniformly in $L^1(\mathbb{R},d\lambda)$ as $R\to\infty.$ We also claim that it is sufficient to consider the case where all the derivatives in $\lambda$ fall on the factor of $\operatorname{Im\,} R_{V,j}$. To see this, note that $\partial_\lambda(1-\chi(\lambda))$ is supported in a fixed compact set which is bounded away from $\lambda = 0,$ and that $\partial_\lambda\chi(\lambda/R) = \frac{1}{R}\chi'(\lambda/R)$ is supported away from $\lambda = 0$ in a set of size $\mathcal O(R).$ Therefore, we need only show that \begin{equation}\label{high_freq_integrand}\frac{1}{\lambda^{N-1}}\chi(\lambda/R)(1-\chi(\lambda))\rho^{-\alpha}(r)\partial_\lambda^N\operatorname{Im\,} R_{V,j}(\lambda+i0;r,s)\rho^{-\alpha}(s) \end{equation} has bounded $L^1$ norm, and that the estimate is uniform with respect to $r,\,s,$ and $\,R$. For this, we utilize \propref{RV_pointwise}, which implies that if $\alpha \ge \max\{N -\frac{n-1}{2},0\}$ and $\sigma > 4\left\lceil\frac{n}{4}\right\rceil -2 + N$, then \eqref{high_freq_integrand} is bounded by a constant times $\langle\lambda\rangle^{1-N+L},$ uniformly in $r,\,s,$ and $R$, where $L = 2\left\lceil\frac{n}{4}\right\rceil(n-2)-1.$ Thus, choosing \[N = 2\left\lceil\frac{n}{4}\right\rceil(n-2)+2,\] guarantees that \eqref{high_freq_integrand} is uniformly bounded in $L^1$. 
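To see at a glance why this choice of $N$ suffices (an illustrative check of the exponents), write $X = 2\left\lceil\frac{n}{4}\right\rceil(n-2)$, so that $L = X - 1$ and $N = X + 2$. Then the integrand in \eqref{high_freq_integrand} is bounded by \[\langle\lambda\rangle^{1-N+L} = \langle\lambda\rangle^{1-(X+2)+(X-1)} = \langle\lambda\rangle^{-2},\] which is integrable on $\mathbb{R}$, uniformly in $r,\,s,$ and $R$. For $n = 3$, this reads $N = 4$ and $L = 1$. 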
Noting that $N\ge \frac{n}{2}$ if $N$ is chosen as above, we obtain \begin{equation}\label{high_frequency_bound} \lim\limits_{R\to\infty}\left|\frac{1}{\pi}\int\limits_{-\infty}^\infty e^{it\lambda^2}\chi(\lambda/R)(1 - \chi(\lambda))\lambda \Im R_{V,j}(\lambda +i0;r,s)\,d\lambda\right|\le Ct^{-\frac{n}{2}}, \end{equation} where $C > 0$ is independent of $r,\,s$ and $R.$ Our choice of $N$ also determines the maximum number of derivatives of $\Im R_{V,j}(\lambda + i0;r,s)$ that must be taken, which yields \[\sigma > 4\left\lceil\frac{n}{4}\right\rceil -2 + 2\left\lceil\frac{n}{4}\right\rceil(n-2)+2 = 2n\left\lceil\frac{n}{4}\right\rceil \] as the sufficient condition on the decay rate of $V$. Also, the condition on $\alpha$ becomes \[\alpha > N - \frac{n-1}{2} = 2\left\lceil\frac{n}{4}\right\rceil(n-2) +2 - \frac{n-1}{2},\] as stated in \thmref{disp_est}. Under these conditions on the weights, we can combine \eqref{low_frequency_bound} and \eqref{high_frequency_bound} to obtain \eqref{spectral_measure_est}, which completes the proof of \thmref{disp_est}. \end{proof} \begin{rmk}\textnormal{ In the case where $n$ is even, we find that in repeating the argument just prior to \eqref{low_frequency_bound}, the largest $N$ we can choose is $\frac{n-2}{2}$, which leads to a decay rate of $t^{-\frac{n-1}{2}}$ in the $L^1\to L^\infty$ estimate. The remainder of the argument goes through without modification, yielding \eqref{disp_est_even}. } \end{rmk}
\section{Introduction} \label{sec:introduction} \Glspl{CNN} have achieved significant breakthroughs in almost all \clarity{Artificial Intelligence (AI)} tasks in recent years, thanks to large, comprehensive, and publicly available data sets, easy-to-use frameworks, and the newly available vast compute resources. Nevertheless, state-of-the-art neural networks come with high computational costs ($>$GFLOPs/inference) and memory requirements ($>$10 MB), which has led to an explosion of \glspl{DSA} in datacenters~\cite{TPU2016, liao2019davinci} and edge devices~\cite{kang2021benchmarking}. Typically, floating-point numbers have been used to flexibly represent \gls{CNN} computations, but floating-point \clarity{datapaths are} very area- and power-hungry due to the large intermediate values, re-normalization, and exception handling, which require adequate hardware support. Thanks to the intrinsic error tolerance of neural networks~\cite{zhu2020towards, guo2018survey}, 8-bit integer operations can be used for most inference applications with no to minimal accuracy degradation. As integer operations---like additions and multiplications---are one order of magnitude more energy-efficient than their floating-point counterparts~\cite{horowitz2014energy}, integer-based accelerators can achieve much higher peak throughput and energy efficiency. In order to reduce operation count and memory footprint even further, recent works have proposed to adopt smaller convolutional kernel sizes~\cite{Krizhevsky2012a}, exploit sparsity~\cite{Han2015}, use group convolutions~\cite{Krizhevsky2012a, xie2017aggregated}, channel shuffling~\cite{Zhang2017}, and depthwise separable convolutions~\cite{howard2017mobilenets}. Nevertheless, compute-heavy, dense $3{\times}3$ convolutional layers are still widely used in many state-of-the-art computer vision models~\cite{ssd300_vgg16, redmon2018yolov3, unet}. 
\clarity{Thus, the adoption} of the Winograd algorithm~\cite{winograd1980arithmetic} represents an interesting \clarity{optimization} opportunity as it converts the $3{\times}3$ convolution operation into a much less expensive elementwise multiplication. The Winograd convolution algorithm extends the Toom-Cook algorithm to support convolutions by applying the (polynomial) Chinese remainder theorem, minimizing the number of required multiplications \cite{winograd1980arithmetic,blahut2010fast}. Specifically, the 2D convolution of an $(m+2){\times}(m+2)$ feature-map tile $x$ with a convolution kernel $f$ of size $3{\times}3$, producing an $m{\times}m$ output tile, is calculated with the Winograd (convolution) algorithm $F_m = F(m{\times}m, 3{\times}3)$ as follows: \begin{align}\label{eq:winograd} Y=A^T \left[\left(G f G^T\right)\odot\left(B^T x B\right)\right]A \end{align} \clarity{where $G\in\mathbb{R}^{(m+2){\times}3}$, $B^T\in\mathbb{R}^{(m+2){\times}(m+2)}$, and $A^T\in\mathbb{R}^{m{\times}(m+2)}$ are called transformation matrices. Specifically, the $G$ and $B^T$ matrices transform the weights and the input feature maps, respectively, from the spatial domain to the Winograd domain. Here, the convolution becomes an $(m+2)^2$-sized element-wise multiplication of the feature maps with the filter weights such that the number of multiplications is reduced from $m^2\cdot9$ to $(m+2)^2$. Then, the $A^T$ matrix transforms the output feature maps back to the spatial domain.} While larger feature map tile sizes ($m$ $\uparrow$) reduce the number of required multiply-and-accumulate operations (MACs $\downarrow$) compared to the standard convolution algorithm, this comes at the cost of more complex transformation matrices, higher sensitivity to numerical inaccuracies, and \clarity{so, in practice,} diminishing returns~\cite{barabasz2020error}. 
Thus, the focus of actual implementations has been mainly put towards $m\in\{2,4\}$, resulting in a \clarity{potential} reduction of the number of \glspl{MAC} by 2.25$\times$ for $F_2$ and by 4$\times$ for $F_4$. Unfortunately, the numerical instability of $F_4$ prevents a straightforward adoption of \verb|int8| operations~\cite{Lavin2015a,fernandez2020searching,barabasz_2019_winogradbeyondlinear}. Moreover, the challenges of processing more complex transformation matrices in a programmable AI accelerator have not been addressed in previous works~\cite{liu2021winocnn, lu2018spwa, yang2021biswsrbs}. This work aims primarily at enabling \verb|int8| Winograd $F_4$ inference on a domain-specific accelerator. \clarity{Particularly,} we propose a novel tap-wise quantization algorithm to overcome the numerical issue of Winograd $F_4$ and an architectural and micro-architectural design space exploration for efficient hardware implementation. \clarity{An extensive evaluation demonstrates that the tap-wise quantization algorithm guarantees negligible accuracy drop on several state-of-the-art \glspl{CNN}. Moreover, the domain-specific accelerator enhanced to support Winograd $F_4$ with tap-wise quantization runs up to \revised{3.42\bm{x}{}} faster on compute-intensive convolutional layers, and up to \revised{1.83\bm{x}{}} faster on an entire network with a gain up to \revised{1.85\bm{x}{}} in energy efficiency.} In the following section, we introduce \clarity{the Winograd $F_4$ algorithm} in more detail, highlighting its challenges and our proposed solutions. 
\section{Beyond Winograd $2{\times}2$} The transformation matrices for Winograd \clarity{$F_2$} can be derived from the polynomial roots $\{0,1,-1\}$ s.t.: \smallEquationFont \begin{align} B^T &= \begin{bmatrix} 1 & \phantom{-}0 & -1 & \phantom{-}0 \\ 0 & \phantom{-}1 & \phantom{-}1 & \phantom{-}0 \\ 0 & -1 & \phantom{-}1 & \phantom{-}0 \\ 0 & \phantom{-}1 & \phantom{-}0 & -1 \end{bmatrix} & G=\frac{1}{2}\begin{bmatrix} 2 & \phantom{-}0 & 0 \\ 1 & \phantom{-}1 & 1 \\ 1 & -1 & 1 \\ 0 & \phantom{-}0 & 2 \end{bmatrix} \nonumber \end{align} \begin{equation} A^T=\begin{bmatrix} 1 & 1 & 1 & 0 \\ 0&1&-1&-1 \end{bmatrix} \nonumber \end{equation} \normalsize The matrices are relatively sparse and contain only $\pm1$ in $B^T$ and $A^T$, requiring \clarity{just} additions and subtractions. The weight transformation matrix has mostly $\pm\frac{1}{2}$ values, which can be implemented as a shift-by-1-and-add: $c=\frac{1}{2}a+\frac{1}{2}b=(a+b)$\texttt{>}\texttt{>}$1$. To guarantee bit-true computation in the Winograd domain, $B^TxB$ requires only 2 extra bits ($k=2^2$), and $GfG^T$ requires 3 extra bits ($k=3^2$) as the sum of $k$ \verb|int|$n$ values results in a $\lceil\text{log}_2 (k(2^n-1)+1)\rceil$-bit integer in the worst case. However, as weights and activation value distributions in \glspl{CNN} usually follow a Gaussian distribution centered around zero, in practice, 8 bits are sufficient to keep the same accuracy. 
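To make the $F_2$ data flow concrete, the following minimal NumPy sketch (our illustration, not the accelerator code) checks the matrices above against a direct $3{\times}3$ cross-correlation, i.e., a CNN-style convolution, on a single $4{\times}4$ tile:

```python
import numpy as np

# F(2x2, 3x3) transformation matrices, copied from the text above.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = 0.5 * np.array([[2,  0, 0],
                    [1,  1, 1],
                    [1, -1, 1],
                    [0,  0, 2]], dtype=float)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f2(x, f):
    """Y = A^T [(G f G^T) .* (B^T x B)] A for one 4x4 input tile."""
    u = G @ f @ G.T          # weight transform, 4x4
    v = BT @ x @ BT.T        # input transform, 4x4
    return AT @ (u * v) @ AT.T   # 16 elementwise multiplies -> 2x2 output

def direct_conv(x, f):
    """Reference 'valid' cross-correlation: 4 outputs x 9 MACs = 36 multiplies."""
    return np.array([[np.sum(x[i:i + 3, j:j + 3] * f) for j in range(2)]
                     for i in range(2)])

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
f = rng.standard_normal((3, 3))
assert np.allclose(winograd_f2(x, f), direct_conv(x, f))
```

The element-wise stage uses $4^2=16$ multiplications per tile instead of $2^2\cdot 9=36$, the $2.25\times$ reduction quoted above.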
The most common Winograd $F_4$ algorithm uses the root points $\{0,1,-1,2,-2\}$, s.t.: \scriptsize \begin{align} B^T = \begin{bmatrix} 4 & \phantom{-}0 & -5 & \phantom{-}0 & 1 & 0 \\ 0 & -4 & -4 & \phantom{-}1 & 1 & 0 \\ 0 & \phantom{-}4 & -4 & -1 & 1 & 0 \\ 0 & -2 & -1 & \phantom{-}2 & 1 & 0 \\ 0 & \phantom{-}2 & -1 & -2 & 1 & 0 \\ 0 & \phantom{-}4 & \phantom{-}0 & -5 & 0 & 1 \end{bmatrix},\quad G=\frac{1}{3}\begin{bmatrix} \phantom{-}3/4 & \phantom{-}0 & \phantom{-}0 \\ -1/2 & -1/2 & -1/2 \\ -1/2 & \phantom{-}1/2 & -1/2 \\ \phantom{-}1/8 & \phantom{-}1/4 & \phantom{-}1/2 \\ \phantom{-}1/8 & -1/4 & \phantom{-}1/2 \\ \phantom{-}0 & \phantom{-}0 & \phantom{-}3 \end{bmatrix} \nonumber \end{align}\ifdefined\LONG\else\vspace{-2mm}\fi \begin{align} A^T=\begin{bmatrix} 1 & 1 & \phantom{-}1 & 1 & \phantom{-}1 & 0 \\ 0 & 1 & -1 & 2 & -2 & 0 \\ 0 & 1 & \phantom{-}1 & 4 & \phantom{-}4 & 0 \\ 0 & 1 & -1 & 8 & -8 & 1 \end{bmatrix} \nonumber \end{align} \normalsize The transformation matrices of $F_4$ are much less sparse, contain a wider range of coefficients, and require more computing effort. Therefore, while $F_4$ further reduces the number of MACs, it also introduces three significant challenges. \begin{figure} \ifdefined\LONG \includegraphics[width=\linewidth]{figs/valuedistribution__GfG_rebuttal.pdf} \else \centering\includegraphics[width=0.8\linewidth,trim={0cm 0.3cm 0 0},clip]{figs/valuedistribution__GfG_rebuttal.pdf}\vspace{-2mm} \fi \caption{Weight Distribution in Winograd domain $GfG^T$ for 3 selected taps \revised{and their combined distribution }for ResNet-34 on ImageNet} \label{fig:GfGdistribution} \end{figure} \textbf{Challenge I: Non-uniform dynamic range.} A bit-true $F_4$ Winograd algorithm requires 10 extra bits for the weights and 8 extra bits for both the input and output feature-map transformations. 
Clearly, such an increased bitwidth represents an unfeasible requirement for a high-throughput hardware accelerator as it significantly raises the power and area costs. However, quantizing all the taps to \verb|int8| in a traditional fashion, i.e., using the same scaling factor for all the taps, has a disruptive effect on the accuracy of the network~\cite{fernandez2020searching}. We \clarity{found out} that the Winograd transformation matrices significantly change the dynamic range of each output tap, as shown in \cref{fig:GfGdistribution} for three taps of the weights in Winograd domain $GfG^T$ of ResNet-34. To this end, we propose a \textit{tap-wise quantization algorithm} to enable \verb|int8| inference with the $F_4$ Winograd algorithm. Specifically, we present a training method that learns hardware-friendly powers-of-two scaling factors needed to independently quantize each tap based on its post-transformation dynamic range. \textbf{Challenge II: Complex transformation operations.} As the heart of virtually all modern accelerators is a large and high-throughput 2D or 3D data path for \gls{GEMM}~\cite{TPU2016, liao2019davinci, nvidiaa100}, translating the lower computational complexity of the Winograd algorithm into wall-clock time speed-up is not straightforward. Among all the steps involved in the Winograd algorithm, only the tap-wise multiplications can be efficiently processed as a batched \gls{GEMM}. On the other hand, the input, output, and weight transformations involve several small \glspl{GEMM} and data-layout rearrangement operations, which cannot be processed at high throughput on a 2D or 3D \gls{GEMM} engine. Thus, increasing the Winograd tile size moves ``ops'' from cheap, high-arithmetic intensity operations to more sparse, low-arithmetic intensity ones. 
To address this challenge, we present a design space exploration of custom hardwired modules that implement the low-arithmetic-intensity ``ops'' of the Winograd transformation operations in an area- and power-efficient way. \textbf{Challenge III: Orchestrating heterogeneous operations.} One of the major challenges of developing a high-performance CNN accelerator is balancing memory bandwidth and compute throughput. By adding a new class of operations, the Winograd algorithm increases the heterogeneity of the compute operations, making the orchestration of data movements and computations much more complex. Moreover, the Winograd algorithm lowers the computational complexity of the Conv2D operation compared to the \clarity{standard} implementation but at the cost of substantially reducing the data reuse opportunities. This characteristic inevitably puts more pressure on the memory bandwidth requirements and calls for careful dataflow design. With this in mind, we show how to integrate the Winograd transformation engines in an industrial-grade, programmable AI accelerator and how to \clarity{tune the microarchitecture of such blocks} to match the throughput of data movement, \clarity{Winograd} transformation, and compute operations, maximizing the overall compute efficiency. \clarity{The presented methodology can serve as a guideline for DSA designers willing to exploit the Winograd transformation engines in other accelerators.} \section{Tap-Wise Quantization} \textbf{Quantization.} Neural networks are commonly trained with floating-point numbers with 16--32\,bits. However, this comes with a significant power and area cost. Quantization to integer numbers for inference has become popular as most neural networks can be quantized to \verb|int8| with minimal or no accuracy loss \cite{zhu2020towards, guo2018survey}.
Floating-point numbers are approximated as integer numbers with a shared FP32 scale factor $s$, s.t., $x_{\text{\texttt{float32}}}\approx\hat x_{\text{\texttt{int}}n}\cdot s$, where $s = \frac{x_\text{max}}{2^{n-1}}$ and $x_\text{max}$ is the largest representable value; the quantized value is: \begin{equation} \hat x_{\text{\texttt{int}}n}=\nint*{x/s}_{\text{\texttt{int}}n}=\text{clamp}\left(\nint*{x/s}, -2^{n-1}, 2^{n-1}-1\right)\label{eq:quant} \end{equation} We calibrate \clarity{$x_\text{max}$} by calculating a running average of the maximum values obtained during the training of the full network. After scaling, the data is rounded to the next integer value and \clarity{clamped} within the $n$-bit integer number range, i.e., [-128,127] for \verb|int8|, denoted by the function $\nint*{x}_{\text{\texttt{int}}n}$. Previously, several works have proposed to quantize directly in the Winograd domain\cite{JiongGong2018,li2020lance,meng2019efficient,fernandez2020searching,barabasz2020quantaized}. Even though this helped to improve the performance of Winograd $F_2$ ($m=2$), it is not sufficient for $F_4$ and larger tile sizes. Specifically, looking at the value distributions, we \clarity{found} that the weights and feature maps in the Winograd domain heavily depend on their tap index, as shown in \cref{fig:GfGdistribution}. For this reason, we propose to independently quantize each tap.
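To make the per-tap idea concrete, the sketch below quantizes each tap of (1-D, 6-tap) Winograd-domain tiles with its own scale, accumulates integer products over the input channels, and rescales once with $s_B \cdot s_G$ per tap; the function names and the 1-D simplification are ours, for illustration only.

```python
import math

def q(x, s, n=8):
    """Eq. (quant): scale, round to nearest, clamp to the int-n range."""
    v = math.floor(x / s + 0.5)
    return max(-2 ** (n - 1), min(v, 2 ** (n - 1) - 1))

def tapwise_dot(V_tiles, U_tiles, s_B, s_G, n=8):
    """V_tiles/U_tiles: per-channel lists of Winograd-domain taps (floats);
    s_B/s_G: one scale per tap. Returns the rescaled element-wise
    accumulation, i.e., the S_BG (.) sum_{C_in}[...] term that precedes
    the back-transformation."""
    acc = [0] * len(s_B)                    # widened integer accumulator
    for V, U in zip(V_tiles, U_tiles):      # reduction over input channels
        for t in range(len(acc)):
            acc[t] += q(V[t], s_B[t], n) * q(U[t], s_G[t], n)
    return [a * sb * sg for a, sb, sg in zip(acc, s_B, s_G)]  # S_BG rescale
```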
\textbf{Tap-wise Quantization.} Based on the formulas of Winograd (\cref{eq:winograd}), and the general quantization (\cref{eq:quant}), \shepherd{tap-wisely} quantized Winograd (for a single \gls{oFM} and single tile) can be described as follows: \begin{adjustbox}{width=\linewidth,center} \begin{minipage}{\linewidth} \vspace{4mm} \begin{align} y&=\sum_{C_{in}} A^T\left( s_B\nint*{B^T \hat x_{C_{in}} B/s_B}_{{\text{\texttt{int}}b}} \odot s_G\nint*{G \hat f_{C_{in}}G^T/s_G}_{{\text{\texttt{int}}b}}\right)A \nonumber \end{align} \vspace{0.01mm} \end{minipage} \end{adjustbox} We then replace the linear scaling factors $s_B$ and $s_G$ with tap-wise scaling matrices $S_G, S_B \in \mathbb{R}^{(m+r-1)\times(m+r-1)}$ for the weights and the \acrfull{iFM}, respectively. We define $S_{BG}=S_G\odot S_B$ for the \acrfull{oFM}. The multiplications/divisions with scalars are substituted with their element-wise counterparts $\odot$ and $\oslash$. Finally, we apply the distributivity law and rearrange the linear operations to obtain the following quantization scheme: \begin{adjustbox}{width=\linewidth,center} \begin{minipage}{\linewidth} \vspace{2mm} \begin{align*} A^T\!\left(\underbrace{S_B\!\odot\!S_G}_{S_{BG}}\!\odot\!\!\sum_{C_{in}} \underbrace{ \nint*{B^T \hat x_{C_{in}} B\!\oslash\!S_B}_{{\text{\texttt{int}}b}} \odot \nint*{G \hat f_{C_{in}}G^T\!\oslash\!S_G}_{{\text{\texttt{int}}b}}}_{{\text{\texttt{int}}2b}}\right)\!A. \end{align*} \vspace{0.01mm} \end{minipage} \end{adjustbox} The multiplications and accumulations over the \glspl{iFM} are calculated in the integer domain, and rescaling is applied just once before the back-transformation, as an element-wise multiplication with ${S_{BG}}$. \subsection{Winograd-aware Training}\label{sec:winogradaware} We use stochastic gradient descent to train the network.
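The per-tap scales used by this training are calibrated from the observed dynamic ranges, i.e., a running average of per-tap maxima as described for \cref{eq:quant}; a minimal sketch (the class and parameter names are illustrative, not taken from the actual training code):

```python
class TapCalibrator:
    """Tracks a running average of the per-tap absolute maxima observed in
    the Winograd domain and derives one quantization scale per tap."""

    def __init__(self, n_taps=36, momentum=0.9):
        self.max = [0.0] * n_taps       # one running maximum per tap
        self.momentum = momentum

    def observe(self, tile):
        """tile: flattened Winograd-domain taps of one transformed tile."""
        for t, v in enumerate(tile):
            self.max[t] = (self.momentum * self.max[t]
                           + (1 - self.momentum) * abs(v))

    def scales(self, n_bits=8):
        """Per-tap scale s = x_max / 2**(n-1), as in eq. (quant)."""
        return [m / 2 ** (n_bits - 1) for m in self.max]
```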
To improve the training outcome, we also adopt the static Winograd-aware training method \cite{fernandez2020searching}, \clarity{which propagates the gradients through the Winograd domain}. Fernandez et al.~\cite{fernandez2020searching} also proposed the \texttt{flex} variant, in which they learn the transformation matrices as a network parameter, and thus propagate the gradients back to $G, B, \text{and } A$. We do not consider this option because it introduces significantly more floating-point operations by making the transformation matrices dense and by preventing the use of HW-friendly shift-and-add operations. \begin{figure*}[t] \centering \includegraphics[width=.95\textwidth]{figs/da_vinci_core_v3.pdf} \caption{High-level overview of the inference accelerator with the proposed extensions.} \label{fig:system_overview} \end{figure*} \subsection{Power-of-two Tap-wise Quantization}\label{sec:power2} The scaling operations cannot be moved outside of the Winograd domain and therefore introduce one multiplication with an FP32 value per transformation and tap. These multiplications can be shared among the output channels for the \glspl{iFM} transformation and the input channels for the \glspl{oFM} transformation. Nevertheless, it is favorable to restrict the scaling values to powers of two, such that the transformations can be performed with shift-and-add operations only, including the rescaling to adapt to the dynamic range. We evaluate and combine three approaches to learn the \clarity{power-of-two} scaling factors. \textbf{Straightforward power-of-two quantization.} All scaling factors calculated from the calibrated maximum values are rounded to the next power of two: $\tilde s_{i,j} := 2^{\lceil{\log_2 s_{i,j}\rceil}}$, such that the quantized value is $q_{\text{\texttt{int}}b}(x) := \nint*{x / 2^{\lceil{\log_2 s\rceil}}}_{\text{\texttt{int}}b}$.
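Rounding the scales up to powers of two turns every rescale into a bit shift; a small sketch of both steps, with round-to-nearest implemented as an add followed by an arithmetic right shift, the way a hardware shifter would do it (illustrative only):

```python
import math

def pow2_scale(s):
    """Round a calibrated scale factor up to the next power of two:
    2 ** ceil(log2(s)), as in the straightforward method."""
    return 2.0 ** math.ceil(math.log2(s))

def shift_rescale(acc, k):
    """Divide an integer accumulator by 2**k (k >= 1) with round-to-nearest,
    using only an add and an arithmetic right shift."""
    return (acc + (1 << (k - 1))) >> k
```

Note that Python's `>>` on negative integers is an arithmetic (sign-preserving) shift, matching the hardware behavior.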
\textbf{Learned power-of-two quantization.} \clarity{The scaling factor can be learned to find a more optimal representation range, particularly as improving the precision of smaller values might be more important for the end-to-end accuracy compared to having fewer clamped values}. The quantization function is a multi-step function and its derivative is zero almost everywhere, thus preventing meaningful training. We approximate the gradient with the straight-through estimator~\cite{bengio2013estimating}: $\frac{\partial}{\partial x} \lceil{x}\rceil=\frac{\partial}{\partial x} \lceil{x}\rfloor=\frac{\partial}{\partial x} \lfloor{x}\rfloor=1$. Instead of training the scale value $s$ directly, we calculate the gradient with respect to the logarithm $\log_2 t$, s.t., $s=2^{\lceil{\log_2 t\rceil}}$ \cite{jain2019trained}. \begin{align} \frac{\partial q(x)}{\partial \log_2(t) }=s\;\text{ln}(2)\cdot\text{clamp}\left(\nint*{\frac{x}{s}}-\frac{x}{s}, -2^{b-1}, 2^{b-1}-1\right) \end{align} Furthermore, gradients need to be normalized for better convergence and to guarantee scale invariance for data and scaling factors; otherwise, they depend heavily on the relative distance of the input to the clamping threshold of the quantization function. For the scaling factors, we use the Adam optimizer with its built-in gradient normalization ($\beta_1 =0.9$, $\beta_2=0.99$) \cite{kingma2014adam}. For the other parameters, we stick to standard SGD with a separate learning rate. \textbf{Knowledge distillation.} Knowledge distillation (KD) has been used to train a compact network (student) \clarity{by minimizing the distance to a} larger network (teacher). We also use KD in the proposed training flow by setting the power-of-two tap-wise quantized network as the student and the floating-point baseline model as the teacher. We adopt the Kullback-Leibler divergence loss and the tempered softmax activation function \cite{hinton2015distilling}.
\revised{As not all convolutional layers can be implemented efficiently with the Winograd algorithm, we only replace those with a $3{\times}3$ kernel size and unitary stride, whereas all the others, like the $1{\times}1$ pointwise convolutions, are processed using a standard algorithm.} Although strided convolution can be implemented with the Winograd algorithm~\cite{yepez2020stride, yang2020stride}, the control and compute overhead dominates the potential \glspl{MAC} reduction (i.e., stride-2 $F_4$ \clarity{leads only to} a 1.8$\times$ \glspl{MAC} reduction). \section{Hardware Acceleration} \label{sec:hardware_acceleration}\label{sec:hardware_acc} \subsection{Baseline Accelerator} \label{sec:baseline_acc} \Cref{fig:system_overview} shows the architecture of our baseline inference DSA, featuring two AI cores (AIC0 and AIC1) inspired by the DaVinci architecture~\cite{liao2019davinci}. Each core exposes a custom instruction set architecture (ISA) and implements all functionalities necessary for processing CNN layers. The datapath of the AI core comprises a \unit{Cube Unit} for \glspl{GEMM}, a \unit{Vector Unit} for vector operations, and a \unit{Scalar Unit} to handle scalar tasks. The \unit{Cube Unit} performs a \gls{GEMM} between two \verb|int8| matrices of size $[16{\times}32]$ and $[32{\times}16]$ to produce an \verb|int32| $[16{\times}16]$ output matrix, which can be optionally accumulated to a third input operand. The memory access patterns are simplified by storing the input and output tensors for the \unit{Cube Unit} in the fractal format~\cite{optimizing_cnn_model_cpus_usenix19}, where the dimension of the tensor used as the reduction axis ($C$) is split into a sub-dimension of size $C_0=32$ and a super-dimension of size $C_1=\frac{C}{32}$. 
Thus, for instance, the data layout of the \glspl{iFM} for a convolutional operation is \tensorDim{N}{C_1}{H}{W}{C_0}. The \unit{Vector Unit} is 256B wide and comprises multiple parallel lanes with a peak throughput of 128 FP16 or 256 \verb|int8| operations per cycle. It performs general arithmetic operations between vectors besides more specific ones needed in CNN workloads, e.g., data type conversion, ReLU, and pairwise reductions. The number of parallel lanes ensures that the throughput of the \unit{Vector Unit} matches the output data rate of the \unit{Cube Unit} for relevant CNN workloads. The on-chip memory hierarchy follows a multi-level organization, where \unit{L0A} and \unit{L0B} serve as the input buffers for the \unit{Cube Unit} and \unit{L0C} as its output buffer, \unit{UB} as the input and output buffer for the \unit{Vector Unit}, and \unit{L1} as the second-level memory. The memory hierarchy is fully software-managed by the memory transfer engines (MTEs), which perform data movements and layout transformations. Specifically, the \unit{MTE2} is in charge of transferring chunks of data from global memory (\unit{GM}) to \unit{L1} or \unit{UB}, whereas the \unit{MTE3} transfers data from \unit{UB} to \unit{GM} or to \unit{L1} for fusing multiple consecutive layers. The \unit{MTE1} transfers input tiles from \unit{L1} to \unit{L0A} or \unit{L0B} and can optionally perform the im2col transformation~\cite{im2col_paper} to lower a 2D convolution into a \gls{GEMM}. The \unit{im2col} engine supports $3$, $5$, and $7$ as kernel sizes and $1$ and $2$ as stride parameters. The \unit{FixPipe} module within the \unit{Vector Unit} transfers the output of the \unit{Cube Unit} from \unit{L0C} to \unit{UB}, potentially performing re-quantization operations on-the-fly. The size and the number of banks of the on-chip memories are tuned to minimize the area while having enough bandwidth and capacity to avoid blocking the computational units.
Specifically, \unit{L0A} and \unit{L0B} can feed one operand per cycle to the \unit{Cube Unit} without incurring bank conflicts. Similarly, \unit{L0C} can sustain write and read operations from the Cube Unit at the potential rate of one output tile per cycle, and it also has an additional read port towards the \unit{FixPipe} module. \unit{L1} has a rather complex addressing scheme and multiple read and write ports, managing bank conflicts at run time. The idle cycles caused by bank conflicts can be excluded from the critical path by exploiting data reuse in other memories. The AI core relies on an in-order scalar front-end to offload instructions to the \unit{MTEs}, the \unit{Vector Unit}, and the \unit{Cube Unit}. All units have a private instruction queue and a \textmu-sequencer to repeat the same instruction on different data and reduce the dispatching overhead. Specifically, the current ISA of the core requires an instruction repetition factor and one stride parameter for each operand. Furthermore, the core also implements a form of \emph{decoupled access/execute strategy}~\cite{dae_arch} as the different units are synchronized with an explicit token exchanging mechanism~\cite{vta_tvm}. Such a mechanism allows the programmer to control the overlap between data movements and compute operations. \revised{We decided to use an accelerator based on a \gls{GEMM} engine as our baseline since it represents a versatile and flexible design choice compared to a fully spatial architecture like Eyeriss~\cite{Chen2016} or MAERI~\cite{maeri}. Specifically, having an AI core grounded in linear algebra eases the development of the compiler infrastructure needed to support the growing diversity of AI workloads~\cite{norrie2021tpuv2v3}. 
However, in the following section, we will present the microarchitectural space of the Winograd transformation blocks such that they can be tuned for the characteristics of the target accelerator system.} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figs/engines.pdf} \caption{Winograd Transformation Engines.} \label{fig:wino_xform_engines} \end{figure*} \subsection{Winograd Extensions} \label{sec:wino_extensions} \subsubsection{Winograd Transformation Engines} \label{sec:xform_engine} The Winograd transformations $B^TxB$, $GfG^T$, and $A^TYA$ from \cref{eq:winograd} can be generalized as follows: \begin{equation} s_w = T^T \times s \times T = T^T \times \Tilde{s}, \label{eq:general_wino_xform} \end{equation} where $T$ is a generic constant $[h_T \times w_T]$ transformation matrix, $s$ is a $[h_T \times h_T]$ tile of the \glspl{iFM}, \glspl{oFM}, or weights, depending on the specific transformation. The operations in \cref{eq:general_wino_xform} are repeated multiple times to transform the entire input tensor, namely, $\frac{H}{m}{\times}\frac{W}{m}{\times}C_{in/out}$ for the input/output transformations, and $C_{in}{\times}C_{out}$ for the weight transformation. To design an area- and power-efficient transformation engine, we unroll the whole Winograd transformation (\cref{eq:general_wino_xform}) into a flat data flow graph (DFG), which can be heavily optimized by exploiting that we only use integer operations and that the $T$ matrix is constant and known at design time. Specifically, as the transformation matrices have many symmetries and common terms, we apply common subexpression elimination (CSE) to share computations in time or space, reducing the number of cycles or resources needed. \clarity{Moreover,} as most values in the transformation matrices are powers-of-two, we avoid using multipliers and carry out the computation using only shifters and adders. 
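The shifter-and-adder substitution just described generalizes to any constant coefficient: one add per set bit of the constant. A hypothetical helper (`mul_const` is ours, not a block of the actual RTL) sketches the decomposition applied to coefficients like the $4$ and $-5$ in $B^T$:

```python
def mul_const(a, c):
    """Multiply integer a by a known non-negative constant c using only
    shifts and adds, e.g., 5*a = (a << 2) + a. One add per set bit of c;
    a negative coefficient additionally flips the sign of the result."""
    acc, shift = 0, 0
    while c:
        if c & 1:
            acc += a << shift   # add the shifted partial product
        c >>= 1
        shift += 1
    return acc
```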
The few multiplications with non-power-of-two numbers are split into multiple shift-and-add operations, e.g., $c=5\cdot a=(a\ll 2)+a$. The bitwidth is kept to the minimum for each intermediate operation. Finally, we perform the scheduling and resource allocation of the DFG, exploring different area-throughput trade-offs, and selecting the solution that works best based on the requirements of the target transformation. Specifically, we devise two high-level implementation strategies, which we denote as \textit{row-by-row} and \textit{tap-by-tap} transformation engines. In the \textit{row-by-row transformation engine} (\cref{fig:wino_xform_engines}a), we decompose the transformation operation (\cref{eq:general_wino_xform}) as a series of vector-matrix multiplications $s[y,:] \times T$, which we map on a spatial processing element (PE). Specifically, the PE reads one row at a time of the matrix $s$ and performs the entire operation $\Tilde{s} = s \times T$ in multiple cycles. The PE hardcodes the multiplication of the input vector with the matrix $T$, using only adders and fixed shifters. The second part of the Winograd transformation, namely, $T^T \times \tilde s$, can be computed using the already allocated resources (\textit{slow solution}) or using additional spatial resources inside the PE (\textit{fast solution}). The former saves resources at the cost of higher latency, requiring only an additional set of $h_T{\cdot}w_T$ registers to store the intermediate results. The latter requires additional $w_T\cdot w_T$ lanes to compute $s_w$ in an output-stationary fashion, reducing the number of required cycles but increasing the number of adders needed. Moreover, the former solution produces one row of the output matrix at a time, whereas the latter produces the entire output matrix. As shown in \cref{fig:wino_xform_engines}a, the PE can be replicated to perform multiple transformations in parallel.
$P_c$ and $P_s$ represent the two factors controlling the number of parallel transformations along the channels and the spatial dimension, \clarity{respectively}. The parallelization strategy is not only constrained by the area \clarity{budget} but also by the memory bandwidth and access pattern requirements. Specifically, the row-by-row engine requires all the elements of a spatial row to be contiguous in memory and the memory bandwidth to be sufficient for reading multiple rows of different tiles in the spatial dimension. Similar considerations also apply to the input channels dimension. The \textit{tap-by-tap transformation engine} (\cref{fig:wino_xform_engines}b) represents a different point of the optimization space, where the DFG of \cref{eq:general_wino_xform} is completely unrolled in time. The PE is very simple in this solution, comprising only a configurable shifter, an adder/subtractor, and an accumulator register. Thus, in the worst case, the PE takes $h_T {\cdot} h_T$ cycles to compute one tap. Luckily, we can exploit two properties of the transformation matrices to reduce the total number of cycles: i) the transformation matrices are sparse, lowering the average number of cycles needed per tap; ii) some taps share a significant fraction of computations with other taps, so we can apply CSE in time to avoid recomputations. Higher throughput can be achieved by replicating the PEs to perform multiple transformations in parallel. Apart from $P_c$ and $P_s$, we can use the number of parallel taps in a single PE, $P_t$, as an additional parallelization axis. Since one input value can be read once and shared to compute multiple taps in parallel, increasing $P_t$ does not affect the input bandwidth requirements. Moreover, by splitting the write back into multiple sub-writes, $P_t$ does not affect the output bandwidth requirements either. 
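The effect of sparsity on the tap-by-tap schedule can be quantified directly from $B^T$; the count below is for the 1-D transform (one matrix row per tap) and ignores the additional savings from CSE across taps, so it is a conservative, illustrative estimate:

```python
# B^T of F(4,3), as given in the text.
BT = [[4, 0, -5, 0, 1, 0], [0, -4, -4, 1, 1, 0], [0, 4, -4, -1, 1, 0],
      [0, -2, -1, 2, 1, 0], [0, 2, -1, -2, 1, 0], [0, 4, 0, -5, 0, 1]]

# Worst case is one shift-accumulate cycle per matrix entry (6 per 1-D tap);
# skipping the zero coefficients, each tap only needs nnz(row) cycles.
cycles_per_tap = [sum(c != 0 for c in row) for row in BT]  # [3, 4, 4, 4, 4, 3]
avg_cycles = sum(cycles_per_tap) / len(cycles_per_tap)     # 22/6, not 6
```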
\input{tables/engines.tex} We enhance the PE with an input or an output stage comprising a configurable shifter and a rounding module to support tap-wise quantization. The number of quantization stages depends on the number of taps produced or consumed in parallel per cycle. The overall performance and requirements for the two implementation styles are summarized in \cref{tab:engines_requirements}. In the following section, we will detail the data flow of the Winograd Operator, illustrating the motivations behind our specific design choices. \subsubsection{Winograd Convolutional Operator} \label{sec:winograd_operator} The baseline is extended with the transformation engines (reported in bold in \cref{fig:system_overview}) needed for the \gls{iFM} and weight transformations in the \unit{MTE1}, and for the \gls{oFM} transformation in the \unit{FixPipe} module. The data flow of the Winograd operator for a $3{\times}3$ Conv2D layer is reported in \cref{lst:wino_operator}. It refers to the computation of a subset of the channels of the \glspl{oFM}, and it works as follows. First, a tile of the weights is transferred from \unit{GM} to \unit{L1} and transformed to the Winograd domain (lines~2 -- 6). Each core operates on different output channels of the weights (line~2). \unit{L0B} is used as an intermediate buffer to store the weights before the transformations. Thus, the data transfer is carried out in tiles (line~5), each tile is transformed using the weight transformation engine within the \unit{MTE1} module, and the output is stored in \unit{L1} (line~6). In order to overlap data transfers and the weight transformation, \unit{L0B} is double-buffered (line~4), and proper token exchanging instructions synchronize the transfers performed by the \unit{MTE2} with the transformations performed by the \unit{MTE1}. For simplicity, such synchronization instructions are not shown in the pseudocode. 
The transformed weights are kept stationary in \unit{L1} and reused for all the \glspl{iFM}. Three levels of loop blocking are used to perform double-buffering across the entire on-chip memory hierarchy, maximizing the concurrency between compute, data movement, and \clarity{Winograd} transformation operations. Specifically, in the outer-most loop block (lines~8 -- 10), the load of the \glspl{iFM} from \unit{GM} (line~11) is overlapped with all the core-level compute and data-movement operations (lines~13 -- 26). In the loop block in the middle (lines~13 -- 15), the input transformation and the cube operations (lines~17 -- 23) are done in parallel with the output transformation (line~24), the re-quantization step (line~25), and the write to \unit{GM} (line~26). In the innermost loop block (lines~17 -- 20), the input transformation (line~21) is overlapped with the batched \glspl{GEMM} performed in the \unit{Cube Unit} (lines~22 -- 23). \input{algos/operator} Matching the production and consumption rates is key to maximizing the \unit{Cube Unit} utilization, and hence the compute efficiency. As any overhead in the innermost loop will be multiplied by the total number of outer iterations, matching the input transformation production rate with the \unit{Cube Unit} consumption rate has the highest priority. As reported in \cref{sec:baseline_acc}, the fractal data layout \tensorDim{N}{C_1}{H}{W}{32} is used for the \glspl{iFM} in \unit{L1}, making 32 input channels and the spatial dimension $W$ contiguous in memory. This represents a perfect fit for the row-by-row engine as, given the read bandwidth of \unit{L1} and the write bandwidth of \unit{L0A}, it allows us to replicate the PEs 32 times along the $C_{in}$ dimension and two times along the $\frac{W}{4}$ spatial dimension, performing up to 64 transformations in parallel.
With this parallelism, the transformation engine has a production rate of $64 {\cdot} \frac{36}{12} \SI{}{B/cycle}$, which is 4\bm{x}{} slower than the consumption rate of the \unit{Cube Unit}. Thus, to avoid blocking the \unit{Cube Unit}, we need to reuse the transformed \glspl{iFM} four times across the output channels, with two main consequences. First, we need to compute at least $4{\cdot} 16$ output channels at a time. Second, \unit{L0C} should store at least $64{\cdot} 16 {\cdot} 36$ \verb|int32| elements, that is a total size of $288\,kB$ considering double-buffering. Note that the row-by-row engine produces multiple taps per cycle, which are read by the \unit{Cube Unit} in different cycles. Thus, we enhance the addressing capabilities of \unit{L0A} with a \textit{diagonal} write mode such that different rows of different banks can be accessed with one single memory access. As reported in the experimental results section, this modification has a negligible area and power overhead. To keep the overall pipeline busy, the computations related to an input tile must be overlapped with the output transformation of the previous tile. For the output transformation, the choice of engine was mainly driven by the need to minimize the number of memory accesses. The tap-by-tap engine reads the same data multiple times from the input memory, which is too costly in the case of \unit{L0C} as data is stored in \verb|int32|. Thus, we rely on the row-by-row engine for the output transformation. With the available \unit{L0C} bandwidth, up to $16$ transformations can be performed in parallel along the output channel dimension.
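The \unit{L0C} sizing mentioned above follows directly from the reuse requirement; a back-of-the-envelope check:

```python
# 64 transformed input tiles * 16 output channels * 36 taps, int32 accumulators.
tiles, c_out, taps, bytes_per_elem = 64, 16, 36, 4
single_buffer_kb = tiles * c_out * taps * bytes_per_elem / 1024  # 144 kB
total_kb = 2 * single_buffer_kb   # 288 kB with double-buffering, as in the text
```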
Thus, a volume of $36 \cdot C_{out} \cdot \frac{H}{4}\cdot\frac{W}{4}$ feature-map taps in the Winograd domain will be transformed in \revised{$\frac{C_{out}}{16}\cdot\frac{H}{4}\cdot\frac{W}{4}\cdot10$} or $\frac{C_{out}}{16}\cdot\frac{H}{4}\cdot\frac{W}{4}\cdot6$ cycles, depending on the chosen row-by-row engine solution. The same volume of data is produced by the \unit{Cube Unit} in $\frac{C_{out}}{16} {\cdot} \frac{C_{in}}{32} {\cdot} \frac{H}{4} {\cdot}\frac{W}{4} {\cdot} \frac{1}{16} {\cdot} 36$ cycles. To match the two operations, we need at least $3$ fractal input channel tiles ($C_{in}{=}96$) for the fast engine and $6$ fractal input channel tiles for the slow one ($C_{in}{=}192$). As many layers of SoA networks have less than $192$ input channels, we decide to use the fast engine. Moreover, as the \unit{Cube Unit} writes the taps in different rows of \unit{L0C}, the output transformation engine performs a gather operation to collect one row of the input matrices. Finally, the read and write operations to and from \unit{GM} must match the processing time of all the core-level operations. As the weights are read from \unit{GM} and transformed on the fly (lines~2 -- 6), the throughput of the weight transformation engine should match the external bandwidth. In this case, we rely on the tap-by-tap transformation engine as it produces the output data precisely in the data layout expected by the \unit{Cube Unit} for the weights. Moreover, the layout of the weights can be reorganized offline such that we can avoid gathering operations when the PE reads the weights from the memory. With two available AI cores, we need to read at least $2\cdot 18 \cdot 18 \cdot C_{in}$ B of \glspl{iFM} and to write $2\cdot 16 \cdot 16\cdot 64$ B of \glspl{oFM} to match the cores' throughput, which corresponds to a \unit{GM} bandwidth of $\approx 2\cdot72 + \frac{7281}{C_{in}} \SI{}{B/cycle}$ assuming peak compute efficiency for the core-level operations (lines~13 -- 26).
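The bandwidth expression above can be reproduced numerically; the sketch below assumes the same shapes as the text (two cores, $18{\times}18$ input tiles, $16{\times}16{\times}64$ \verb|int8| outputs per core, cores running in parallel at peak efficiency):

```python
def gm_bandwidth(c_in, c_out=64, h=16, w=16):
    """Approximate GM traffic (B/cycle) needed to keep both cores busy,
    before the broadcast/halo optimizations. Per-core Cube time for one
    16x16 spatial region of c_out output channels; both cores in parallel."""
    cube_cycles = (c_out / 16) * (c_in / 32) * (h / 4) * (w / 4) * 36 / 16
    ifm_bytes = 2 * (h + 2) * (w + 2) * c_in  # both cores read 18x18 tiles
    ofm_bytes = 2 * h * w * c_out             # int8 outputs from both cores
    return (ifm_bytes + ofm_bytes) / cube_cycles  # ~ 2*72 + 7281/c_in
```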
As this requirement is hardly met when the AI core clock frequency is in the order of hundreds of MHz, we apply three system-level and data flow optimizations. First, as the two cores work on different sets of output channels, the \glspl{iFM} can be shared between the two cores, almost halving the required bandwidth. To this end, the cores are connected to the memory controllers via a \unit{Broadcast Unit} (\unit{BU} in Fig.~\ref{fig:system_overview}). The \unit{BU} can either accept independent memory requests from the \unit{MTEs} of the two cores and transfer them to the memory controllers or process special broadcast requests in the form of a streaming access pattern~\cite{streaming_nowatzki}. When the \unit{BU} receives two broadcast memory requests from the two cores, it acts as a DMA and broadcasts data from \unit{GM} to the \unit{MTEs} of the cores. To avoid deadlock, the \unit{BU} has two separate queues for non-broadcast and broadcast requests, where the latter are served with higher priority. Second, when the \glspl{iFM} shape is larger than $18{\times}18$, the volume to be transferred can be reduced by exploiting the halo regions that characterize the unitary-stride $3{\times}3$ Conv2D operator. Third, by prefetching input tiles and allocating multiple output buffers (instead of just two for double buffering), it is possible to decouple read and write operations, prioritizing the more critical read transfers. \section{Experimental Evaluation} \subsection{Tap-wise Quantization Algorithm} \subsubsection{Datasets and Baseline Networks} We use two common image classification datasets, namely, CIFAR-10 \cite{krizhevsky2009learning} and ImageNet ILSVRC \cite{Russakovsky2014}, to compare \clarity{the proposed quantization flow} with other Winograd-based algorithms. CIFAR-10 has 60k 32\bm{x}{}32 RGB images \clarity{divided into} 10 classes, and ImageNet has 1.4M 224\bm{x}{}224 RGB images \clarity{divided into} 1k classes.
We split the datasets into training (90\% of 50k/1.3M training set), validation (10\% of 50k/1.3M training set, used for the learning rate scheduler), and test set (10k/100k, inference only). We use the standard preprocessing methods: random horizontal flip (training set only) and color normalization for CIFAR-10; resize, random crop, and color normalization for ImageNet. We benchmark ResNet-34 and ResNet-50 for the ImageNet dataset, where we use the pre-trained networks from Torchvision. The baseline networks (im2col/FP32) achieve 72.6\%/75.5\% Top-1 and 90.7\%/92.6\% Top-5 accuracy on the test set. For CIFAR-10, we re-implement the ResNet-20 \cite{He2015} and train it from scratch (94.4\%). Furthermore, we use a light version of VGG \cite{Nagadomi2014}, used by Liu et al. \cite{liu2018efficientsparse} and Li et al. \cite{li2020lance}, and replace all but the last dropout layers with batch normalization layers (92.2\%). We trained the networks using PyTorch while extending the Winograd-Aware Training \cite{fernandez2020searching} with tap-wise quantization support. \subsubsection{Tap-Wise Quantization Evaluation} \label{sec:tap-wise_quant_eval} We retrain the network as a quantized \verb|int8| network from the FP32 baseline, whereas the weights and feature maps are quantized as described in \cref{eq:quant}. All networks can be trained using 8-bit integers without any loss of precision. \clarity{We train the Winograd $F_2$ ResNet-34 on ImageNet following the (static) Winograd-Aware algorithm (\cref{sec:winogradaware}), achieving} 71.4\% (-1.2\% drop) with 8-bit quantization. As expected, extending the weights and feature maps to 10 bits in the Winograd domain achieves the full accuracy because just 3\,bits are required for a bit-true calculation. Furthermore, we train \clarity{the Winograd $F_4$ version} with the Winograd-Aware method and \clarity{KD}.
The baseline Winograd $F_4$ shows a significant drop of 13.6\%; even with two extra bits, the accuracy drops by at least 3.5\%. \input{tables/winograd_quant_resnet34} \Cref{tab:overviewapproaches} gives an overview of the performance of ResNet-34 on the ImageNet dataset, comparing several training methods and configurations. In the second section of the table, we evaluate the \clarity{tap-wise} quantization with unrestricted quantization scaling factors, i.e., $s_{i,j}\in \text{FP32}$. We further relax the numerical pressure by adding 2 extra bits in the Winograd domain, but keeping 8 bits in the spatial domain, denoted with \verb|int8/10|. Using Winograd-aware static training and straightforward threshold calibration, \clarity{the accuracy loss is small for full-}\verb|int8| (-1.2\%), \clarity{and the final accuracy is even closer} to the baseline (72.0\%, -0.6\%) with the 2\,bit extension in the Winograd domain. With knowledge distillation, \clarity{we can get the best performance even with} the full-\verb|int8| configuration. As explained in \cref{sec:power2}, we prefer to have powers-of-two quantization as all re-quantization and de-quantization operations become shifts within the Winograd domain. \clarity{The results with powers-of-two scaling factors are summarized in the third section of the table. The straightforward method leads to} a 1.7/0.5\% drop with \verb|int8| and \verb|int8/10|. With knowledge distillation, the drop can be reduced to 0.4\% for the \verb|int8/10|. The best accuracy is achieved with $\log_2$ gradients and knowledge distillation, namely, 71.1\% (-1.5\%) for \verb|int8| and 72.3\% (-0.3\%) for \verb|int8/10|. \revised{The training with $\log_2$ gradients without knowledge distillation shows worse performance than the straightforward calibration method due to convergence issues.
Knowledge distillation stabilizes the $\log_2$ training by acting as an implicit regularizer~\cite{saglietti2022solvable}.} We further investigate the bit shifts learned by the network. The feature maps are shifted right by 1 to 5 bits (i.e., $s \in \{2,2^2,\dots,2^5\}$) and the weights by 2 to 10 bits. In particular, the large \clarity{difference} between the shift values explains why \clarity{quantizing with} a single scalar cannot work well. Within the same tap, the shift values lie within a range of 2--3 bits, but they need to be learned independently per layer. \input{tables/winograd_soa} \subsubsection{SoA Winograd-aware Quantization Methods} \label{sec:qat_eval} Recently, several approaches for Winograd-aware quantization have been presented. \Cref{tab:overviewSoA} gives a full overview of the main methods and of our solution. As the baseline accuracy varies across related works due to different training methods and implementation details, we \clarity{report their baseline accuracy and compare relative performance}. Our results are trained with the Winograd-aware method and powers-of-two tap-wise quantization, $\log_2$ gradients, and KD. The first network is ResNet-20 on CIFAR-10. Without adding any extra compute, we improve the (static) Winograd-Aware (WA) accuracy from 84.3\% to the full accuracy of 94.4\% with the powers-of-two tap-wise quantized $F_4$ \cite{fernandez2020searching}. Moreover, we also outperform both the WA-flex method, which trains the transformation matrices to recover some of the quantization losses \cite{fernandez2020searching}, and the Legendre-$F_4$ \cite{barabasz2020quantaized} method, by 1.9\% and 2.1\%, respectively. We retrain the light version of VGG (VGG-nagadomi \cite{Nagadomi2014}). Our baseline network reaches 92.0\% accuracy; the tap-wise, powers-of-two quantized $F_4$ shows a drop of 1.2\%, 0.1\%, or no drop (for \verb|int8|, \verb|int8/9|, and \verb|int8/10|, respectively).
For this network, no $F_4$ numbers have been presented in previous work, so we compare to $F_2$ results. Liu et al. \clarity{prune the weights} in the Winograd domain to further reduce the compute intensity. They can retain their baseline performance of 93.3\% \cite{liu2018efficientsparse}. Li et al. proposed to quantize to \verb|int8| in the Winograd domain, where they achieve 90.3\% (-0.1\%) \cite{li2020lance}. Finally, we train and compare ResNet-50 on ImageNet. We achieve 75.2\%/92.3\% (-0.3\%/-0.3\%) with 8 bits and 75.5\%/92.5\% (0.0\%/-0.1\%) when extending the Winograd domain to 10 bits (\verb|int8/10|). \clarity{On such a benchmark, there are three main related} works. Meng et al. use complex root points, leading to numerically more stable but complex-valued transformation matrices \cite{meng2019efficient}, with a small drop, but from a lower baseline (i.e., 73.2\%, -0.1\%). Liu et al. use the Residue Number System (RNS) with a very large tile size (i.e., $14\times 14$) to compensate for the transformation overhead. Even though the RNS could perform the \verb|int8| operations losslessly (transformations and elementwise multiplications), the accuracy drops by 1.0\% to 75.1\%. This is likely due to the quantization of the transformation matrices; furthermore, it is known \cite{alam2022winograd, Lavin2015a} that very large tiles introduce very high numerical error even for FP32. LoWino is a very interesting approach: it operates on FP32 weights and feature maps, but quantizes linearly to 8\,bits before and after the elementwise multiplication in the Winograd domain \cite{li2021lowino}. It achieves an accuracy of 75.5\%, identical to our method with \verb|int8/10|, although with an accuracy drop of 0.6\% instead of 0\%, as it starts from a higher baseline. While LoWino provides the same reduction in operation count, it requires a 4\bm{x}{} higher bandwidth than our proposed method, which eliminates any benefits, as shown in \cref{sec:throughputanalysis}.
Notably, only our method with \verb|int8/10| can avoid any accuracy drop with ResNet-50. \begin{figure*} \centering \subfloat[Spatial Quantization]{ \includegraphics[width=0.40\textwidth]{figs/error_dist_spatial.pdf}\label{fig:quanterror_spatial} } \qquad \subfloat[Quantization in Winograd domain]{ \includegraphics[width=0.40\textwidth]{figs/error_dist.pdf}\label{fig:quanterror_wino} } \caption{\revised{Quantization error for the weights in (a) spatial and (b) Winograd domain on ResNet-34 using different strategies: layer-wise quantization, channel-wise quantization, tap-wise quantization, and channel- \& tap-wise quantization.}} \end{figure*} \subsubsection{\revised{Tap-wise vs. Channel-wise Quantization}}\label{sec:finegrained} \revised{Previous works have shown that fine-grain quantization strategies, particularly (output) channel-wise quantization, can significantly improve the accuracy of quantized networks \cite{krishnamoorthi2018quantizing,liang2021pruning}. Therefore, in this section, we compare the tap-wise with the channel-wise quantization strategy. We use a pre-trained ResNet-34 from Torchvision, and we evaluate the quantization error on the weights, although similar trends can also be observed for the feature maps. The scaling factors $s$ are determined as follows:} \begin{align*} \hat{\gamma}=\arg\min_{\gamma} \sum_f |\mathrm{Quant}_{\mu,s}(f)-f|/|f|, \quad s=\gamma\sigma/2^{n-1}, \end{align*} \revised{where $\mathrm{Quant}_{\mu,s}(x) = \mu+s\nint*{(x-\mu)/s}_{\text{\texttt{int}}n}$, and the mean $\mu$, the standard deviation $\sigma$, and the optimized scaling factor $\hat{\gamma}$ are obtained per layer (uniform quantization strategy), per channel, or per tap.} \revised{Fig.~\ref{fig:quanterror_spatial} shows the distribution of the relative quantization error (in $\log_2$ scale) of all layers with kernel size $3\times 3$ for a uniform and for a channel-wise quantization strategy in the spatial domain for $n=8$ bits.
Channel-wise quantization reduces the mean relative error by $1.7\times{}$, namely, from $2^{-6.01}$ to $2^{-6.72}$. Fig.~\ref{fig:quanterror_wino} shows, instead, the error distribution for uniform, channel-wise, and tap-wise quantization in the Winograd domain. Specifically, we quantize in the Winograd domain (i.e., $\mathrm{Quant}(GfG^T)$), and then we transform the values back to the spatial domain to compare the error introduced by the quantization process. We calculate the Moore-Penrose inverse of the transformation matrices based on SVD to transform the quantized weights back to the spatial domain. In this case, the improvement of channel-wise quantization is significantly smaller, as the mean relative error reduces only from $2^{-5.58}$ to $2^{-5.62}$. On the other hand, tap-wise quantization shows much better performance ($2.3\times$), leading to a mean error as low as $2^{-6.78}$. Combining channel-wise with tap-wise quantization further improves the average error by $1.06\times$ at the cost of a much more complicated compute phase. For networks with significantly different channel distributions, the combined quantization strategy might achieve better performance.} \subsection{System Evaluation} \label{sec:system_eval} \subsubsection{Experimental Setup} \label{sec:system_eval:setup} \textbf{Area and Power.} To assess the area and power consumption of the accelerator, we developed the RTL of the parts of the AI core most affected by the Winograd extensions, specifically the \unit{Cube Unit}, the \unit{MTE1}, and the \unit{FixPipe} module. {{We have implemented the design with a high-$k$ metal gate (HKMG) 28\,nm CMOS technology, and a corresponding multi-VT standard cell library at a supply voltage of 0.8\,V in the typical corner. We have synthesized the design and performed place-and-route, targeting a clock frequency of $500$\,MHz in typical operating conditions. We used an industrial-grade memory compiler for the SRAM and register file macros.
We have selected input data segments from the first 3\bm{x}{}3 layer of a ResNet-34 quantized using our method. Then we run timing-annotated (SDF) post-place-and-route gate-level simulations with a cycle-accurate event-based RTL simulator. Finally, we simulate the power consumption based on the extracted switching activities (VCD).}} \textbf{Performance Profiling}. An event-based simulator~\cite{villa2021need} was developed to model and profile the overall system. Besides modeling timing behavior, the simulator also models data movements and computation to check the correctness of the results. It was validated with micro-benchmarking against the parts of the system developed in RTL. \revised{Specifically, we compared the number of cycles obtained from the RTL simulation with that estimated by the simulator. On several small and medium-sized Conv2D operations, the simulator shows a $5\%$ worst-case difference.} The simulator implements a \clarity{simple} model for the DRAM subsystem in which memory requests are served in order. Moreover, the completion time of a memory request depends on the maximum bandwidth ($\SI{81.2}{B/cycle}$), which corresponds to \revised{$\approx0.8\cdot\SI{51.2}{GB/s}$ given the clock frequency of the core, and on a fixed average latency (150 AI core cycles) with a jitter extracted from a zero-mean Gaussian distribution with a variance of 5 cycles. Such memory bandwidth and latency characteristics meet the expected performance of an LPDDR4x-3200 memory with two channels \cite{steiner2021exploration,hackenberg2015energy}}. Although we do not use a detailed model of all the DRAM resources (e.g., command scheduling, channel bandwidth, bank, and row-buffer conflicts), their effects on the performance of the cases under analysis are minimal~\cite{villa2021need}, as the memory accesses are regular and follow a streaming pattern.
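The in-order DRAM model described above can be sketched as follows. The function and parameter names are ours, and the sketch only captures the behavior stated in the text: transfer time at the peak bandwidth, a fixed average latency, and a zero-mean Gaussian jitter with a variance of 5 cycles.

```python
import random

# Parameters from the simulator description (names are ours).
PEAK_BW_BYTES_PER_CYCLE = 81.2   # ~0.8 * 51.2 GB/s at the core clock
AVG_LATENCY_CYCLES = 150         # fixed average latency, in AI core cycles
JITTER_STD_CYCLES = 5 ** 0.5     # std dev for a variance of 5 cycles

def request_completion(issue_cycle, size_bytes, channel_free_at, rng):
    """Model one in-order memory request.

    Returns (completion_cycle, cycle_at_which_channel_is_free_again).
    Requests are served in order: a request cannot start before the
    previous transfer has finished.
    """
    start = max(issue_cycle, channel_free_at)
    transfer = size_bytes / PEAK_BW_BYTES_PER_CYCLE
    latency = AVG_LATENCY_CYCLES + rng.gauss(0.0, JITTER_STD_CYCLES)
    return start + transfer + latency, start + transfer
```

A full simulator would additionally track the queue of outstanding requests; this fragment only illustrates how a single request's completion time is computed.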
We estimated the energy consumption by projecting the power consumption of computational units and memory obtained from the back-annotated gate-level simulations. For \unit{L1}, we estimated its area and energy cost by multiplying the values obtained from the memory compiler by 1.5$\times$ to take into account the logic needed to manage bank conflicts and the arbitration between read and write ports. \textbf{Workloads}. To evaluate the system performance, we adopt two sets of benchmarks. The first is a synthetic benchmark suite comprising 63 $3{\times}3$ Conv2D layers built using common values for the batch size (B), height (H), and width (W) of the \glspl{oFM}, and the number of input and output channels ($C_{in}$, $C_{out}$). The second is a benchmark suite comprising \clarity{the Conv2D layers of} $7$ state-of-the-art CNNs to quantify the speed-up and the energy savings on models with different architectures. Within the selected benchmarks, ResNet-34 and ResNet-50~\cite{He2015} are taken as representative of computationally intensive networks used for classification tasks; RetinaNet-ResNet50-fpn~\cite{Lin_2017_ICCV}, SSD-VGG16~\cite{ssd300_vgg16}, and YOLOv3~\cite{redmon2018yolov3} for object detection tasks; and UNet~\cite{unet} for high-resolution semantic segmentation tasks. We used the implementation of the networks available in the Torchvision Python package. \begin{table*}[ht] \caption{Throughput of the Winograd operator normalized to the im2col operator for different $3{\times}3$ Conv2D layers with stride equal to 1 and padding \textit{same}.
$H,W$ refer to the output resolution.} \label{tab:throughput_wino4_norm_im2col} \centering \begin{minipage}{0.8\textwidth} \centering \includegraphics[width=\linewidth]{figs/conv2d_winoHW_m4_memc_medium_cube_normal_bw_ddr4_vs_im2col_memc_medium_cube_normal_bw_ddr4_2_xformWeight_True.pdf} \hspace{-1.5cm} \end{minipage}% \begin{minipage}{0.145\textwidth} \centering \includegraphics{figs/colormap_vertical_min_0.0_max_4.0.pdf} \end{minipage} \end{table*} \input{tables/accelerator_breakdown.tex} \subsubsection{Area and Power Analysis} \Cref{tab:area_power_breakdown} reports the detailed area and power breakdown of the AI core and the physical layout of the implemented hardware extensions. The \unit{Cube Unit} dominates both the area and the power of the compute modules, being at least \revised{$6.4\times$} larger and requiring \revised{$6.7\times$} more power than a single Winograd transformation engine. Overall, all the Winograd transformation engines occupy \revised{merely $6.1\%$} of the core area. Note that, most of the time, only the input and the output transformation engines are active simultaneously, whereas the power cost of the weight transformation engine is amortized over the computations of all activations. Thus, the Winograd extension adds \revised{$\approx 17\%$} of power overhead to the \unit{Cube Unit}, but it also reduces its number of active cycles to one-fourth compared to the im2col. \revised{As shown in \Cref{tab:area_power_breakdown}, the power consumption of the \unit{Cube Unit} and of the \unit{L0C-Port B} increases for the Winograd kernel by $1.26\times$ and $2.22\times$, respectively. The lower sparsity of activations and weights in the Winograd domain~\cite{liu2018efficientsparse} increases the switching power consumption. Nevertheless, the compute datapath is $\approx3\times$ more energy efficient when using the Winograd kernel instead of the im2col}.
On the memory side, both the area and the energy per access are highly correlated with the size of the memory. Although \unit{L0A} exposes a more complex access pattern compared to \unit{L0B} (see \cref{sec:xform_engine}), the overhead on the area and energy access cost is negligible. On the other hand, the rotation logic needed on the output port (\textit{PortB}) of \unit{L0C} remarkably affects the power consumption. However, \textit{PortB} of \unit{L0C} is, on average, less utilized than \textit{PortA}, making the average access cost to \unit{L0C} much less expensive in practice. \subsubsection{Throughput Analysis}\label{sec:throughputanalysis} \Cref{tab:throughput_wino4_norm_im2col} shows the speed-up of the Winograd operator compared to the im2col for different parameters of a $3{\times}3$ Conv2D layer. Although the performance of the Winograd algorithm is highly dependent on the characteristics of the workload, we can identify two macro-trends. \textit{Larger resolution or batch size $\rightarrow$ higher speed-up}. As explained in \cref{sec:winograd_operator}, we adopt a weight-stationary dataflow, where the weights are reused for all the \glspl{iFM}. Thus, when the reuse of the weights is small, the performance is limited by the transfer of the weights. For example, the speed-up increases from \revised{$1.98\times$ to $3.30\times$} when the resolution increases from $32{\times}32$ to $128{\times}128$ with $256$ input and output channels at a batch size equal to $1$, and from $1.98\times$ to $3.18\times$ when changing the batch size from $1$ to $8$ at iso-resolution ($32{\times}32$). To better visualize the bottlenecks for different workloads, \cref{fig:critical_path_sampled} shows the cycle usage breakdown of the critical path of the Winograd operator normalized to the im2col (hatched bar).
Specifically, comparing the first and the third workloads in \cref{fig:critical_path_sampled}, \clarity{a batch size of 8 instead of 1} decreases the normalized percentage of cycles occupied by weight transfers and weight transformations from $13\%$ to $2\%$. Note that the throughput of the weight transformation engine has been tuned to match the external weight transfers while occupying the minimum area. Thus, removing the contribution of the weight transformation engine would reveal another critical path, where the weight data transfer takes the place of the transformations. \revised{This analysis also shows the need for transforming the weights on-the-fly instead of reading the transformed weights from the external memory. As the weights in the Winograd domain are $4\times$ larger than in the spatial domain, the load overhead would be much higher and difficult to amortize.} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figs/critical_path_winoHW_m4_vs_im2col_ddr4.pdf} \caption{Cycle Breakdown for im2col vs. Winograd $F_4$.} \label{fig:critical_path_sampled} \end{figure} \textit{Larger number of input channels $\rightarrow$ higher speed-up}. A larger number of input channels increases the output reuse within the core, reducing the memory bandwidth occupied by the write operations of the \glspl{oFM}. This increased data reuse frees bandwidth for the transfer of the \glspl{iFM}, with a remarkable effect on performance, as the \glspl{iFM} are broadcast to the two cores. For example, the speed-up increases from \revised{$2.62\times$} to \revised{$3.18\times$} when increasing the number of input channels from $128$ to $256$, with a batch size equal to $8$, a spatial resolution equal to $32{\times}32$, and output channels equal to $256$.
In \cref{fig:critical_path_sampled}, having more bandwidth to reserve for the \glspl{iFM} transfer reduces the cycles occupied by the MTE2 from \revised{$5\%$} to \revised{$2\%$} for the first two workloads and from \revised{$15\%$} to \revised{$6\%$} for the last ones. The lack of bandwidth represents, in fact, the main reason why the $F_4$ Winograd operator does not achieve the theoretical $4\times$ speed-up on our system. As shown in \cref{fig:critical_path_sampled}, the input and the output transformation engines never become the bottleneck of the operator, as their throughput was sized to exactly match the input and output data rate of the \unit{Cube Unit}. \subsubsection{Comparison with NVDLA} \shepherd{ In \cref{tab:nvdla_comparison}, we compare our accelerator system with the open-source NVDLA accelerator version 1, which supports direct convolution (in FP16 and INT8) and Winograd $F_2$ (FP16 only) with an on-chip memory of 512\,kB per engine~\cite{nvdlaprimer}. As NVDLA does not provide any tools to convert a model that can be used with its compiler into a format accepted by its verification infrastructure, we have modified the Linux driver used in the virtual platform to write out the sequence of reads and writes from/to the control and status registers of the accelerator, which we then use to simulate the RTL for performance benchmarking. The results are compared with the expected values for functional correctness. } \shepherd{ The results are summarized in \cref{tab:nvdla_comparison}. As a single NVDLA core has a peak throughput of 1\,TOp/s at 1\,GHz, we use 8 NVDLA engines to match the peak throughput of our system, namely, 8\,TOp/s.
We consider two different configurations for the NVDLA-based system: the leftmost column in~\cref{tab:nvdla_comparison} refers to a system with quasi-infinite bandwidth, whereas the middle column refers to a system with 42.7\,Gword/s, i.e., 85.4\,GB/s in FP16, to match the more realistic bandwidth constraints of our system, i.e., 41\,Gword/s (41\,GB/s with INT8 for our system, 82\,GB/s with FP16 for NVDLA). We use words rather than bytes for the iso-bandwidth evaluation, as the public NVDLA version only supports FP16, and it can be expected that the performance scales with the word width. Even though NVDLA with quasi-infinite bandwidth gets close to the theoretical 2.25\bm{x}{} speed-up, our accelerator still outperforms NVDLA by 21 to 50\%. In the more realistic scenario of the system with limited external bandwidth, the Winograd convolutional algorithm on NVDLA becomes strongly memory-bound, which significantly reduces its benefits over the direct convolutional algorithm. One of the reasons behind this degradation is that NVDLA needs the weights to be transformed offline, increasing the transferred weight volume by $4^2/3^2=1.78$\bm{x}. Furthermore, if the input feature maps of a single layer cannot be stored entirely on-chip, they need to be transferred multiple times from external memory, which leads to cases where the Winograd kernel performs even worse than the direct convolution. Overall, our accelerator system runs between 1.5 and 3.3\bm{x}{} faster than NVDLA at the same peak throughput and same external bandwidth, thanks to using Winograd $F_4$ vs. $F_2$, bandwidth optimization through on-the-fly weight transformation, and higher utilization. } \subsubsection{Full Network Evaluation} \label{sec:full_network_eval} \input{tables/nvdla_comparison} \Cref{tab:full_network_evaluation} reports the evaluation of the proposed system on various state-of-the-art CNNs.
The $F_4$ Winograd operator increases the throughput of the $3{\times}3$ compute-heavy Conv2D layers by \revised{$1.9\times$} on average and up to \revised{$2.60\times$}. The gain in the throughput of the entire network depends on the specific architecture. In fact, the benefits of the Winograd algorithm are lower for networks with many $1{\times}1$ convolutions, like ResNet-50, compared to networks dominated by $3{\times}3$ convolutions, like U-Net or YOLOv3. However, the Winograd algorithm becomes remarkably beneficial when increasing the batch size or the input resolution. For example, for ResNet-34, the achieved speed-up increases from \revised{$1.07\times$} to \revised{$1.36\times$} when using a batch size of $16$ instead of $1$. Even more remarkable is the improvement on \revised{SSD-VGG-16 when increasing the batch size, with the speed-up going from $1.55\times$ to $1.83\times$}. \input{tables/full_network_assessment_v4.tex} In \cref{tab:full_network_evaluation}, we also report the throughput of the $F_2$ Winograd operator implemented following the same methodology described in \cref{sec:hardware_acc} for $F_4$ and the same dataflow reported in \cref{lst:wino_operator}. When the $2.25\times$ computational reduction introduced by the $F_2$ operator makes the workloads of the layers memory-bound, $F_2$ and $F_4$ achieve similar performance\revised{, although the $F_4$ configuration always outperforms the $F_2$ one.} However, when increasing the batch size or the input resolution, or for very compute-heavy networks such as SSD-VGG-16, YOLOv3, and UNet, $F_4$ increases the throughput w.r.t. $F_2$ by up to \revised{$1.4\times$}. \revised{To highlight the benefits of the Winograd $F_4$ algorithm, \cref{tab:full_network_evaluation} also shows the speed-up w.r.t. the im2col algorithm for a system with a higher bandwidth ($1.5\times$, i.e., the ratio between a DDR5 and a DDR4 memory).
In this case, while Winograd $F_2$ hits a plateau at $\approx1.8\times$ speed-up, Winograd $F_4$ exploits the additional external memory bandwidth to double the end-to-end throughput compared to the baseline.} \shepherd{ Overall, Winograd $F_4$ outperforms both im2col and $F_2$ in most cases, even though the improvements over $F_2$ are not always remarkable and are highly dependent on the shapes of the specific layers of the network. Specifically, the presence of the Winograd transformation engines not only constrains the dataflow within a single AI core but also limits the feasible loop transformations, e.g., reordering and blocking, that can be applied on the outer loops of the convolution operation. Moreover, the spatial resolution of the output activation tiles must be a multiple of $4$, limiting the choice of tiling factors and, in some cases, requiring zero-padding and adding ineffective computations. All these additional constraints affect the data reuse in the core and the access patterns to the external memory, making the bandwidth limitations even more visible. Further proof is given by the layer-wise analysis in~\cref{tab:full_network_evaluation}, which reveals that different layers of the network are mapped onto either Winograd $F_2$ or Winograd $F_4$ depending on the available extension. For example, for YOLOv3 with an input resolution of $256$ and batch size $1$, Winograd $F_2$ outperforms Winograd $F_4$ because it is used to process the deep layers of the network, where the small spatial resolution ($\leq 16{\times}16$) makes Winograd $F_4$ perform worse than the im2col algorithm. However, for YOLOv3 with an input resolution of $256$ and batch size $8$, Winograd $F_4$ results in a $1.4\times$ higher throughput than Winograd $F_2$ (i.e., with DDR4).
Apart from the throughput gain, which can be lower than the theoretical $4\times$ FLOPs reduction, the Winograd $F_4$ still reduces the utilization of the \gls{GEMM} engine, usually the most power-hungry computational resource. Therefore, the overall energy efficiency is improved, as analyzed in the following paragraphs. Thus, depending on the application use cases and the area budget, accelerator designers can use the methodology presented in~\cref{sec:hardware_acceleration} to develop the transformation engines for Winograd $F_2$ and to integrate them together with the Winograd $F_4$ ones, allowing the compiler to select the best computational kernel for each layer of the network. } \begin{figure}[t] \centering \includegraphics[width=0.96\columnwidth]{figs/eval_winoHW_m4_memc_medium_cube_normal_bw_ddr4_vs_im2col_memc_medium_cube_normal_bw_ddr4_energy_barchart.pdf} \caption{Number of memory accesses (left) and energy breakdown (right) for Winograd $F_4$ w.r.t. im2col.} \label{fig:memory_accesses} \end{figure} \Cref{fig:memory_accesses} reports, on the left, the average number of read and write accesses and, on the right, the average energy breakdown of the Winograd $F_4$ operator for the Winograd layers only of the networks reported in \cref{tab:full_network_evaluation}. All values are normalized to the im2col Conv2D operator. The read accesses to the weights in \unit{GM} are the same as for the im2col, as the weights are transformed on the fly in the core. On the other hand, the write accesses to \unit{L1} increase due to the expansion factor caused by the Winograd transformation, namely, $\frac{(m+2)^2}{9}{=}4$ in the case of $F_4$. The read accesses to the weights in \unit{L1} increase significantly, as the \unit{Cube Unit} directly reads the weights from \unit{L1} instead of storing and reusing them in \unit{L0B}, as the im2col operator does.
Nevertheless, the $F_4$ Winograd algorithm reduces the total number of weight reads to one-fourth, and, as the \unit{L1} energy access cost is only $3\times$ higher than that of \unit{L0B}, it also lowers the overall energy consumption. All the accesses to \unit{L0B} are due to the weight transformations only, so their cost is highly amortized over time. The read accesses to the \glspl{iFM} in \unit{GM} slightly increase, as do the write accesses to \unit{L1}, because the data reuse factor, i.e., the number of output channels, is limited to 64. The read accesses to the \glspl{iFM} in \unit{L1} and the write accesses to \unit{L0A} decrease, as the Winograd transformation increases the volume of the \glspl{iFM} only by a factor of $\frac{(m+2)^2}{m^2}{=}2.25$ for $m=4$, instead of the factor of 9 incurred by the im2col for a $3{\times}3$ convolution. As the total number of \unit{Cube Unit} active cycles decreases, so does the number of read accesses to \unit{L0A}. The number of read and write accesses to \unit{L0C} is higher, as the \glspl{oFM} are in the Winograd domain. Overall, the energy spent on the memory subsystem is comparable between the $F_4$ Winograd and the im2col operator, yet the Winograd $F_4$ algorithm lowers the total energy consumption by more than $2\times$, as it reduces the active cycles of the \unit{Cube Unit}, which, as shown in \cref{fig:memory_accesses}, dominates the energy consumption of the core. \shepherd{This analysis reveals another key advantage of the Winograd $F_4$ algorithm compared to the im2col and the Winograd $F_2$ algorithms}: although the theoretical $4\times$ \glspl{MAC} reduction may not always translate into an equivalent throughput increase, it guarantees a higher energy efficiency, which makes it a perfect fit for inference DSAs.
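The expansion and reduction factors used throughout this analysis ($\frac{(m+2)^2}{9}{=}4$ for the weights, $\frac{(m+2)^2}{m^2}{=}2.25$ for the activations, and the $4\times$ MAC reduction of $F_4$) all follow from the same tile arithmetic, which can be checked with a short script (illustrative, with our own naming):

```python
# Tile arithmetic for Winograd F_m with 3x3 kernels: input tiles are
# (m+2)x(m+2), output tiles are mxm, so one multiply per tap replaces
# m*m*9 direct-convolution MACs per output tile.
def winograd_factors(m, k=3):
    t = m + k - 1                        # transformed-tile side
    macs_direct = (m * m) * (k * k)      # direct conv MACs per output tile
    macs_wino = t * t                    # one multiply per Winograd tap
    return {
        'mac_reduction': macs_direct / macs_wino,   # 4.0   for m=4
        'ifm_expansion': t * t / (m * m),           # 2.25  for m=4
        'weight_expansion': t * t / (k * k),        # 4.0   for m=4
    }
```

The same function with `m=2` reproduces the $2.25\times$ computational reduction quoted for the $F_2$ operator.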
\section{Related Work} \label{sec:related_work} \textbf{Winograd Algorithm.} Several works have been proposed to extend the original Winograd algorithm~\cite{winograd1980arithmetic} to work on general 2D convolution~\cite{Lavin2015a, yepez2020stride, yang2020stride}, and to improve its performance by combining it with the Strassen algorithm~\cite{zhao2018faster} or its numerical accuracy by using higher-order polynomials~\cite{barabasz_2019_winogradbeyondlinear} and better polynomial root points for $m>4$~\cite{barabasz2020error, alam2022winograd}. Li et al.~\cite{li2021winograd} combined the Winograd algorithm with AdderNet, which uses the $\ell_1$ instead of the $\ell_2$ norm for feature extraction, thereby replacing all MAC operations with additions. However, on CIFAR-10/ResNet-20, the proposed method introduces an accuracy drop from 92.3\% for the FP32 baseline to 91.6\%. Sparsity has been extensively used to reduce the computational complexity of CNNs. Liu et al.~\cite{liu2018efficientsparse} and Li et al.~\cite{li2017enabling} proposed to move the ReLU activation layer after the input transformation and to prune the weights in the Winograd domain. However, they use FP32 networks and only report the reduction of the number of MACs. Combining pruning with tap-wise quantization and assessing its benefit on a hardware accelerator represents an interesting direction for future work. \textbf{Quantized Winograd.} Gong et al.~\cite{JiongGong2018} and Li et al.~\cite{li2020lance} proposed to quantize $F_2$ in the Winograd domain with a single quantization scalar per transformation. Meng et al.~\cite{meng2019efficient} extended the algorithm to use complex root points, increasing the number of valid root points for Winograd $F_4$. Liu et al.~\cite{liu2020efficientRNS} proposed to combine Winograd and the Residue Number System (RNS), selecting the 8\,bit moduli 251, 241, and 239 and Winograd $F_{14}$.
Fernandez et al.~\cite{fernandez2020searching} proposed Winograd-Aware Training for Quantized Neural Networks, where gradients are propagated through the Winograd domain. In the case of $F_4$, they had to re-train the transformation matrices (WA-flex), making the transformation operation dense and introducing FP32 MACs. Barabasz et al.~\cite{barabasz_2019_winogradbeyondlinear} extended the work of Fernandez et al.~\cite{fernandez2020searching} using Legendre polynomial bases, where 6 additional sparse diagonal matrix multiplications are required. Li et al.~\cite{li2021lowino} proposed to use FP32 feature maps and weights but to quantize the weights and feature maps in the Winograd domain. In this way, the elementwise multiplication can be performed using \verb|int8|, whereas the input and output transformations are carried out in FP32. \textbf{Custom Winograd Accelerators.} Several custom accelerators targeting FPGAs were proposed to accelerate the Winograd algorithm~\cite{liu2021winocnn, lu2018spwa, yang2021biswsrbs}. They comprise a spatial architecture capable of performing only the Winograd algorithm, whereas we propose a methodology to integrate Winograd support in a programmable AI accelerator based on a high-throughput MatMul unit, which is the most adopted solution for ASIC accelerators. Wang et al.~\cite{wang2021customized} proposed a RISC-V extension to support Winograd transformations efficiently. Xygkis et al.~\cite{xygkis2018efficient} proposed a solution to map the $F_2$ Winograd operator on a general-purpose edge device with Vector Units. \revised{The closest proposal to our work is WinDConv~\cite{mahale2020windconv}, an accelerator based on NVDLA~\cite{nvdlaprimer}, which supports the $F_2$ Winograd operator with a fused datapath.
Unfortunately, a one-to-one comparison is difficult, as they targeted a mobile application scenario, reported a post-synthesis-only evaluation using a much newer technology node, and did not consider the effects of external memory on the performance. However, their Winograd extension leads to an increase in energy efficiency over their baseline of $1.82\times$ in the best case, i.e., considering $100\%$ utilization, whereas our proposal achieves a $2.1\times$ increase in energy efficiency on average for 12 state-of-the-art CNNs. Moreover, they also quantize to 6 bits in the spatial domain, which leads to a higher accuracy drop~\cite{nayak2019bit} than the proposed tap-wise quantization flow}. \textbf{Winograd SW optimizations.} Several efficient SW implementations of the Winograd algorithm were recently proposed for GPGPUs~\cite{liu2021optimizing, castro2021opencnn, kim2021performance} and CPUs~\cite{li2021optimizing, maji2019efficient}. All these papers adopt similar loop-level optimizations, such as loop unrolling, parallelization, or vectorization, to make the most of the targeted platforms, whose characteristics and constraints differ significantly from ours. \section{Conclusion} We presented the tap-wise quantization algorithm to enable efficient quantized Winograd on 4\bm{x}{}4 tiles ($F_4$). Using 8-bit integers for the feature maps and weights and 10-bit integers in the Winograd domain, the $F_4$ Winograd network achieves the same accuracy as the FP32 baseline for ResNet-20 and VGG-nagadomi on the CIFAR-10 benchmark and for ResNet-50 on the ImageNet classification task. The proposed method outperforms the state-of-the-art integer-only and $F_4$-aware quantization methods on all the tested networks and tasks. Furthermore, we presented a custom HW extension to efficiently process $F_4$ integer Winograd layers and its integration into an industrial-grade AI accelerator.
\shepherd{Our proposed system outperforms NVDLA with its Winograd $F_2$ extension by 1.5 to $3.3\times$ at the same compute throughput and bandwidth constraints, thanks to the higher computational reduction of $F_4$, reduced bandwidth requirements from on-the-fly transformations, and higher utilization enabled by the optimized dataflow.} The proposed hardware extensions have a small area (\revised{6.1\%} of the core area) and power (\revised{17\%} compared to the MatMul engine) overhead over the baseline architecture while achieving up to \revised{$3.42\times$} speed-up on compute-intensive convolutional layers. An extensive evaluation over several state-of-the-art computer-vision benchmarks revealed up to \revised{$1.83\times$} end-to-end inference speed-up and \revised{$1.85\times$} energy efficiency improvement.
\section{Introduction} Light pseudoscalar particles appear in many extensions of the standard model. The most typical example is the axion, which was introduced as a consequence of the Peccei-Quinn mechanism to solve the puzzle of the absence of CP violation in quantum chromodynamics~\cite{Peccei:1977hh, Peccei:1977ur}. The axion is a hypothetical light particle that has a two-photon vertex described by the interaction term \begin{equation} \mathcal{L}_{a\gamma}\;\;=\;\;-\frac{1}{4}gF_{\mu\nu}\tilde{F}^{\mu\nu}a\;\; = \;\;g\;\vec{E}\cdot\vec{B}\;a\;\;, \end{equation} where $g$ is the axion-photon coupling constant, $F$ is the electromagnetic tensor, $\tilde{F}$ its dual, $\vec{E}$ the electric field, $\vec{B}$ the magnetic field and $a$ the axion field. This term implies the possibility of photon-axion oscillations in an external magnetic field~\cite{1983PhRvL..51.1415S, 1988PhRvD..37.1237R}. This coupling is used experimentally to search for axions that would be thermally produced in the Sun~\cite{2011PhRvL.107z1302A}, or axion dark matter~\cite{2010PhRvL.105q1801W}. In the case of the Peccei-Quinn axion, the photon-axion coupling is predicted to scale with the axion mass; however, other models predict light pseudoscalar particles with the same coupling to the electromagnetic field but {\it a priori} unrelated to their mass~\cite{1987PhR...150....1K}. Those are called axionlike particles (ALPs), the phenomenology of which is similar to standard axions. Astrophysical environments can offer ideal conditions for photon-ALP oscillations, with the possibility of long baseline experiments involving magnetic fields~\cite{1996slfp.book.....R}. Progress over the last decade in $\gamma$-ray astronomy allowed one to consider searching for the imprints of $\gamma$-ALP oscillations in the energy spectra of high energy $\gamma$-ray sources~\cite{2007PhRvD..76b3001M}. 
The effect of $\gamma$-ALP oscillation is usually assumed to be twofold: it is expected to induce a dimming of the fluxes above a given threshold~\cite{2007PhRvL..99w1102H, 2007PhRvD..76l3011H}, and possibly a decrease of the opacity due to gamma-ray pair production at high energy. The opacity can be that of the intergalactic medium~\cite{2007PhRvD..76l1301D, 2009PhRvD..79l3511S} or of the sources themselves~\cite{2012arXiv1202.6529T}. A crucial point is the turbulent nature of the magnetic fields the photon beam travels through, which implies substantial randomness in the prediction of the observable effects. This has been pointed out in~\cite{2009JCAP...12..004M} in the case of the change of opacity due to $\gamma$-ALP oscillations. The authors of~\cite{2009JCAP...12..004M, 2009PhRvL.102t1101B} showed that because of the random nature of the intergalactic magnetic fields, the effect of $\gamma$-ALP mixing should be very different from one source to another. Such an observable is then useless for ALP searches through the observation of a single source, leaving only the possibility of a population study in order to average the effect over many sources. This type of study has been conducted in {\it e.g.}~\cite{Horns:2012fx}, showing a hint of an anomaly in the transparency of the Universe. Though their number is rapidly increasing with the advent of the latest generation of Cherenkov telescope arrays such as HESS, MAGIC and VERITAS, only a handful of high-energy sources are effectively affected by extragalactic absorption. It is thus interesting to point out an effect of the $\gamma$-ALP mixing that relies on neither stacking nor averaging, in order to exploit observations of single sources. Here, for the first time, an effect is pointed out that potentially applies to single observations. This article is organized as follows. First, we briefly recall the formalism of $\gamma$-ALP mixing and apply the results to a single coherent magnetic domain.
As a second step we show the results of a simulation of photons traveling through a set of magnetic domains. In particular we show that, contrary to what is stated in the literature, a sharp drop in the energy spectrum of high-energy $\gamma$-ray sources is not a robust observable and is not what should be searched for. Instead, the $\gamma$-ALP mixing would produce an anomalous dispersion of the spectra, which would no longer appear smooth in a limited energy range. We then give an explicit example of how the effect could appear in the data in a specific situation, namely an extragalactic TeV emitter whose photons travel through the intergalactic magnetic field (IGMF), and we discuss the robustness of the method. \section{The photon/axion system in a magnetic field} The $\gamma$-ALP system is described following the approach of~\cite{1988PhRvD..37.1237R}. A three-state wave function is used, with two states of polarization for the photon and one for the ALP. Let $\theta$ be the angle between the magnetic field direction and the photon momentum. Since only the $\vec{B}$ component transverse to the propagation couples photons and ALPs, the strength of the magnetic field involved in the coupling is $B\sin\theta$. Moreover, for parity reasons, only the polarization state parallel to the field is involved in the interaction. This is accounted for by introducing the angle $\phi$ between the transverse component of the field and the direction of the polarization state $A_1$.
The $\gamma$-ALP system is then propagated using the following linearized equations of motion assuming relativistic axions: \begin{equation} \left(E - i\partial_z + \mathcal{M} \right ) \left(\begin{array}{c} A_1 \\ A_2 \\ a \end{array} \right) = 0 \;\;, \end{equation} with the mixing matrix \begin{equation} \mathcal{M}\;\;=\;\;\left(\begin{array}{ccc} \Delta_{11}-i\Delta_{\mathrm{abs}} & \Delta_{12} & \Delta_\mathrm{B}\cos\phi \\ \Delta_{21} & \Delta_{22}-i\Delta_{\mathrm{abs}} & \Delta_\mathrm{B}\sin\phi \\ \Delta_\mathrm{B}\cos\phi & \Delta_\mathrm{B}\sin\phi\ & \Delta_\mathrm{a} \end{array}\right)\;\;, \end{equation} where $\Delta_\mathrm{B} = gB\sin\theta/2$ is the coupling term, and $\Delta_\mathrm{a} = -m_\mathrm{a}^2/2E$ is the ALP mass term. Here we neglect the Faraday effect and the vacuum Cotton-Mouton term, as the low magnetic field strength considered in the following makes the corresponding contribution irrelevant for this study. This implies $\Delta_{12} = \Delta_{21} = 0$ and that the other diagonal terms are $\Delta_{11} = \Delta_{22} = -\omega^2_\mathrm{pl}/2E$, $\omega_{\rm pl}$ being the plasma frequency accounting for the effective photon mass. As in~\cite{2003JCAP...05..005C}, absorption of photons on their way is introduced with the $\Delta_{\mathrm{abs}} = \tau/2s$ term where $\tau$ is the optical depth assuming a propagation over a domain of size $s$ within which the opacity is homogeneous. Because of that term, the matrix is no longer Hermitian and unitarity is lost. In the following, this term will be used to model the absorption of photons on the extragalactic background light (EBL) while propagating in IGMFs. After diagonalization of the mixing matrix, the equations of motion can be analytically solved and the transfer matrix of the system is obtained. 
The probability of $\gamma\rightarrow a$ conversion after crossing one coherent magnetic field domain of size $s$ in the simplest case, without absorption and neglecting the plasma term, yields \begin{equation}\label{prob} P_{\gamma\rightarrow a} \;=\; \frac{2 \Delta_\mathrm{B}^2}{\Delta_{\mathrm{osc}}^2}\sin^2\frac{\Delta_{\mathrm{osc}}s}{2}\;\;, \end{equation} with $\Delta_{\mathrm{osc}} = \sqrt{\Delta_\mathrm{a}^2+4\Delta_\mathrm{B}^2}$. The energy dependence of the mass terms in $\Delta_{\mathrm{osc}}$ implies an energy threshold above which the conversion becomes efficient, \begin{equation}\label{thresh} E_\mathrm{thr} = \frac{m_{\mathrm{eff}}^2}{2gB\sin\theta} \;\; , \end{equation} $m_\mathrm{eff}$ being the effective ALP mass in the presence of charges ({\it e.g.} in a plasma). For $E \ll E_\mathrm{thr}$, $\Delta_{\mathrm{osc}} \gg \Delta_\mathrm{B}$ and no conversion occurs. For $E \sim E_{\mathrm{thr}}$, spectral oscillations appear due to the energy-dependent $\sin^2(\Delta_{\mathrm{osc}}s/2)$ term. For $E \gg E_\mathrm{thr}$, $\Delta_{\mathrm{osc}} \sim 2\Delta_\mathrm{B}$ and the conversion probability is no longer energy dependent. The conversion probability of Eq.~\ref{prob} can be parameterized in terms of $E_\mathrm{thr}$ and \begin{equation}\label{delta} \delta = gBs\sin\theta/2 \end{equation} instead of $B\sin\theta$ and $s$. Then $\frac{1}{2}\sin^2\delta$ is the conversion probability at very high energy (VHE, $E \gg E_\mathrm{thr}$). The condition required for a significant conversion to occur, $\delta \gtrsim 1$, is similar to the Hillas criterion for the acceleration of ultra-high-energy cosmic rays, as pointed out in~\cite{2007PhRvL..99w1102H}. Figure~\ref{fig:1domain} shows the evolution of the photon survival probability as a function of energy for three values of $\delta$.
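The single-domain probability is easy to evaluate numerically; the sketch below (illustrative units in which $s=1$; the parameter values are ours, not the paper's) reproduces the three regimes just described: no conversion well below $E_{\rm thr}$, oscillations around it, and an energy-independent plateau far above it.

```python
import numpy as np

def p_gamma_to_alp(E, delta_B, delta_a_coeff, s):
    """Single-domain conversion probability, Eq. (prob):
    P = 2 (Delta_B / Delta_osc)^2 * sin^2(Delta_osc * s / 2),
    with Delta_a = -delta_a_coeff / E (the ALP mass term, m_eff^2 / 2
    folded into delta_a_coeff) and Delta_osc = sqrt(Delta_a^2 + 4 Delta_B^2)."""
    delta_a = -delta_a_coeff / E
    delta_osc = np.sqrt(delta_a**2 + 4.0 * delta_B**2)
    return 2.0 * (delta_B / delta_osc)**2 * np.sin(delta_osc * s / 2.0)**2

# Illustrative choice: delta_B = 1.5, s = 1, delta_a_coeff = 3
# puts the threshold E_thr = delta_a_coeff / (2 * delta_B) at E = 1.
E_grid = np.geomspace(1e-2, 1e2, 7)
print(p_gamma_to_alp(E_grid, 1.5, 3.0, 1.0))
```

Far above threshold the returned value approaches the plateau $\frac{1}{2}\sin^2\delta$ with $\delta=\Delta_{\rm B}s$.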
For allowed large IGMF values of order 1 nG, an ALP mass of 2 neV and a coupling $g=8\times10^{-11}\;\rm GeV^{-1}$ at the limit of current experimental constraints~\cite{2011PhRvL.107z1302A}, $E_\mathrm{thr}$ lies at about 1 TeV. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Fig1} \caption{Survival probability of an unpolarized photon as a function of energy for three values of $\delta$.} \label{fig:1domain} \end{figure} The asymptotic value of $1-P_{\gamma \rightarrow a}$ gives the level of dimming of the photon flux (independently of any additional EBL absorption). One can see that this attenuation is hardly predictable given the uncertainties on the environmental parameters $\vec{B}$ and $s$, as it depends sinusoidally on the value of $\delta$. In astrophysical environments, magnetic fields are usually not coherent. In the case of propagation through a turbulent magnetic field, the beam path can be divided into coherent domains whose size is the coherence length of the field (the validity of this simple model is discussed in Sec.~\ref{discussion}). For each domain, a transfer matrix is generated with a random orientation of the magnetic field, yielding a specific value of $\delta$. The total transfer matrix associated with this realization of the turbulent magnetic field is the product of all individual transfer matrices. The spectral shape of the global conversion probability for one single realization is the result of the interference of all the oscillation patterns such as those displayed in Fig.~\ref{fig:1domain}. As the pseudo-period is different in each domain, the photon survival probability has a very complex energy dependence. As an illustration, the survival probability of a photon from a source at redshift $z=0.1$ traveling through a single realization of a 1 nG IGMF with coherent domains of size $s_0=1\;\rm Mpc$ is displayed in Fig.~\ref{fig:turbulent}.
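The domain-by-domain construction described above can be prototyped with a reduced two-state system (one photon polarization plus the ALP, no EBL absorption, arbitrary units); this is a simplified sketch of the procedure, not the full three-state computation behind Fig.~\ref{fig:turbulent}:

```python
import numpy as np

def survival_probability(E, n_domains, s, delta_B0, m2_over_2, seed=0):
    """Photon survival probability after crossing n_domains coherent cells.
    In each cell the transverse field projection is drawn at random
    (delta_B = delta_B0 * cos(phi)); the 2x2 mixing matrix is exponentiated
    exactly through its eigendecomposition (it is real symmetric)."""
    rng = np.random.default_rng(seed)
    psi = np.array([1.0 + 0j, 0.0 + 0j])        # start as a pure photon
    for _ in range(n_domains):
        dB = delta_B0 * np.cos(rng.uniform(0.0, 2.0 * np.pi))
        M = np.array([[0.0, dB], [dB, -m2_over_2 / E]])
        w, v = np.linalg.eigh(M)
        U = v @ np.diag(np.exp(-1j * w * s)) @ v.conj().T   # U = exp(-i M s)
        psi = U @ psi
    return abs(psi[0])**2

# One realization: irregular, realization-dependent P(E) around E_thr ~ 1.
E_grid = np.geomspace(0.1, 10.0, 200)
P = np.array([survival_probability(E, 50, 1.0, 0.3, 1.0) for E in E_grid])
```

Changing the seed changes the detailed pattern but not its qualitative character, which is the point made in the text.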
A plasma density of $n_e = 10^{-7} \mathrm{cm}^{-3}$, typical of the intergalactic medium, is assumed. Under these conditions and for ALP masses of order neV, $m_{\rm eff}=m_{\rm a}$. For illustration, the upper panel shows the survival probability without absorption on the EBL, whereas the lower panel includes this effect. Conservatively, the EBL density model used here is the lower-limit model from~\cite{2010A&A...515A..19K}. To account for redshifting, a flat $\rm \Lambda CDM$ Universe with $(\Omega_{\rm \Lambda},\,\Omega_{\rm m})=(0.73,\,0.27)$ and $H_0=71\,\rm km/s/Mpc$ is assumed. Here the dashed red line is the prediction without ALPs, so that the dimming is only due to the EBL. From Fig.~\ref{fig:turbulent} one can see that the prediction of the model including ALPs is the presence of a significant level of noise in the energy spectrum over one decade or so around $E_{\rm thr}$. Because of the unknown orientation of the magnetic field within the domains, the exact shape of the spectrum in this region is unpredictable. However, as we shall see in the following, the noise level is a prediction of the model. This prediction differs significantly from what usually appears in the literature, namely a smooth transition between no dimming below $E_{\rm thr}$ and a fixed level of attenuation above it. It has been shown in~\cite{2002PhLB..543...23G} that averaging over a large number of realizations of $N$ domains, in each of which the conversion probability is $P_0$, yields an overall conversion probability \begin{equation}\label{paveraged} P_{\gamma\rightarrow a} = \frac{1}{3}\left (1-{\rm e}^{-3NP_0} \right )\;\; . \end{equation} This means that the effect as it has been studied so far is valid for an average over a collection of sources. In the case of the observation of one source only, if $N$ is very large and the energy spectrum is binned, then the smooth behavior can be retrieved in principle.
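One heuristic way to recover Eq.~\ref{paveraged} is an incoherent per-domain recurrence in which a photon converts with probability $P_0$ while an ALP back-converts into either of the two photon polarizations with probability $2P_0$, so that the ALP fraction obeys $P_{n+1}=P_n+P_0(1-3P_n)$ with equilibrium value $1/3$; this is our reading, not a derivation given in the text. A quick numeric check:

```python
import math

def averaged_conversion(N, P0):
    """Closed form of Eq. (paveraged)."""
    return (1.0 - math.exp(-3.0 * N * P0)) / 3.0

def iterated_conversion(N, P0):
    """Heuristic incoherent recurrence P -> P + P0 * (1 - 3P):
    gamma -> a conversion with probability P0, balanced against
    a -> gamma back-conversion into two photon polarizations (2 * P0)."""
    P = 0.0
    for _ in range(N):
        P += P0 * (1.0 - 3.0 * P)
    return P

# For small P0 the recurrence and the closed form agree closely.
print(averaged_conversion(1000, 1e-3), iterated_conversion(1000, 1e-3))
```

Both saturate at $1/3$ for $NP_0\gg 1$, the equipartition value among the three states.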
In practice $N$ is not large enough, as we shall see in the following. The results presented in Fig.~\ref{fig:turbulent} are obtained with a single realization. By averaging the results of Fig.~\ref{fig:turbulent} over a large number of realizations, the value given by Eq.~\ref{paveraged} is retrieved. From one realization to another, only the orientations of the magnetic fields vary; the number of domains and their sizes are kept fixed. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Fig2} \caption{Photon survival probability as a function of energy for a realization of a source at $z = 0.1$ using B = 1 nG, $s_0$ = 1 Mpc, $g=8\times10^{-11}$ GeV$^{-1}$ and $m_{\rm a}$ = 2 neV without absorption (upper panel) and with EBL absorption (lower panel). } \label{fig:turbulent} \end{figure} Note that above 5 TeV the survival probability for this specific realization is higher with ALPs than with EBL only. This is the so-called opacity effect: because photons are untouched by the EBL while they travel disguised as axions, the Universe appears to be more transparent. This result should be taken with care, however, because, as shown in~\cite{2009JCAP...12..004M}, there exist realizations of the IGMF where the opposite effect is obtained, basically when most ALPs do not convert back to photons before detection. \section{Observational effects}\label{observational} The experimental relevance of the proposed signature is now studied in the particular case of a source at redshift $z = 0.1$ with the same parameters as above. The intrinsic spectrum of the source is simulated following a log-parabola shape with an integrated flux in the TeV band at the Crab level. A 50 h observation time is assumed, with an energy resolution of 15\% and a constant effective area of $10^5 \;\rm m^2$, these values being typical of current-generation Cherenkov observatories.
The intrinsic spectrum is multiplied by one randomly generated photon survival probability and eventually binned to obtain the spectrum that would be observed in this model. The result of this simulation is displayed in the left panel of Fig.~\ref{fig:simu}. A fit of the simulated experimental data by a log-parabola shape convolved with EBL absorption is also shown, as it would be performed by observers. The right panel of Fig.~\ref{fig:simu} displays the residuals of that fit. It appears that in the case without ALPs the residuals would spread evenly around 0, whereas they would show anomalously strong and chaotic deviations from 0 in the case of $\gamma$-ALP mixing. This is the expected signature of ALPs in the spectrum, induced by the noisy spectral shape of the photon survival probability. \begin{figure*}[t] \centering \includegraphics[width=.75\columnwidth]{Fig3a} \includegraphics[width=.75\columnwidth]{Fig3b} \caption{Simulation of the observation of one $\gamma$-ray source at $z$=0.1, with the effect of $\gamma$-ALP mixing (left), and distribution of the residuals of fits to a conventional model and a model with ALPs (right).} \label{fig:simu} \end{figure*} The approach considered here corresponds to what an observer would do: first fit a smooth shape (be it a log-parabola, a broken power law or an emission-model-inspired one) and then pick the shape providing the best $\chi^2$ and a decent residual distribution ({\it e.g.} centered on zero, without obvious biases, etc.). The crucial point is that the observer would fit the data with a smooth function. This is motivated by the fact that no TeV source emission model predicts spectra with local extrema or a noisy shape. So this result holds for any smooth spectral shape provided it gives the best possible fit. For a given observation, a given emission model and a given fit function, the noisy energy range and the variance of the residuals are a prediction of the ALP model.
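The observer's procedure described above (fit a smooth shape, then inspect the residual scatter) can be prototyped in a few lines; the sketch below replaces the full ALP transfer computation with toy multiplicative bin-to-bin noise, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
logE = np.linspace(-0.5, 1.0, 40)                  # log10(E / TeV)
log_phi = -2.5 * logE - 0.3 * logE**2              # intrinsic log-parabola

def residual_variance(bin_noise_sigma):
    """Fit a parabola in log-log space (i.e. a log-parabola) and return
    the residual variance. bin_noise_sigma mimics either pure measurement
    scatter (small) or chaotic ALP-induced modulation (large)."""
    observed = log_phi + rng.normal(0.0, bin_noise_sigma, size=logE.size)
    coeffs = np.polyfit(logE, observed, deg=2)      # the smooth "observer" fit
    residuals = observed - np.polyval(coeffs, logE)
    return residuals.var()

var_null = residual_variance(0.02)   # no ALP: statistical noise only
var_alp = residual_variance(0.15)    # ALP-like spectral irregularity
print(var_null, var_alp)             # the ALP case shows much larger scatter
```

A smooth fit cannot absorb bin-to-bin irregularity, so the residual variance cleanly separates the two hypotheses, which is the estimator used in Tab.~\ref{tab1}.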
This is illustrated in Tab.~\ref{tab1}, which displays the variance of the fit residuals under different hypotheses: no ALP and two values of $g$, with an ALP mass still yielding $E_{\rm thr}=1\;\rm TeV$. The exact value of the variance of the fit residuals depends on the analysis that would be performed, in particular the energy range chosen by the observer. The use of the variance of the fit residuals is only an example, as observers might choose a more sophisticated estimator of the noise in their data sets. Additionally, because of the random nature of the predicted effect, it is important to verify that the scatter in the prediction among realizations is smaller than the effect itself. The predicted uncertainty on the variance of the fit residuals due to the random nature of the prediction is also shown in Tab.~\ref{tab1} for each considered scenario. These results are obtained by averaging over 5000 realizations. It appears that for the considered parameters the effect is significant, as the variance anomaly in the presence of ALPs is predicted to be significantly above the conventional value. The observation of a Crab-level source for 50 h was chosen here as an illustration. For the same redshift and energy range, the effect would still be visible, though with less significance, with only 5 h of observation. \begin{table} \begin{tabular}{|c|c|} \hline \footnotesize Model & Variance of the fit residuals \\ \hline No ALP & $0.04\pm 0.01$ \\ \hline $g=10^{-11}$, $m=0.7$ & $0.11\pm 0.04$ \\ \hline $g=8\times10^{-11}$, $m=2$ & $0.20\pm 0.05$\\ \hline \end{tabular} \caption{Values of the variance of the fit residuals to mock data with different assumptions for $g$ and $m$ (in units of $\rm GeV^{-1}$ and neV resp.), for constant-size magnetic field domains.\label{tab1}} \end{table} The observational signature discussed here occurs for energies around $E_{\rm thr}$ given in Eq.~\ref{thresh}.
Therefore the range of ALP parameters accessible with this method depends on the value of the magnetic field and the energy range of the experiment. For instance, considering TeV $\gamma$-rays and nG IGMF, the above results show that a typical IACT would be sensitive to ALPs with $g\sim10^{-11}\; \rm GeV^{-1}$ in a mass range from 0.1 neV to 10 neV. In that range of mass the most stringent constraint currently comes from the CAST helioscope, with an upper limit on $g$ of order $10^{-10}\;\rm GeV^{-1}$~\cite{2011PhRvL.107z1302A}. So in principle this method should allow improving current constraints in this range of mass. To go to larger masses, one has to consider larger magnetic fields (in principle the method discussed in this article is valid for any $\gamma$-ray source behind a turbulent magnetic field) and/or higher energies, as the relevant mass for a given $g$ goes as $\sqrt{E\times B}$. \section{Discussion}\label{discussion} One important point is that, should anomalous dispersion be observed some day, one would know how to falsify the interpretation in terms of new physics. This can be done for instance by observing the same object with more exposure. If the ALP interpretation is wrong, local extrema would not hold and all the residual points would be redistributed around zero. If the interpretation is correct, though, two effects are predicted due to the increased statistics: {\it i}) the significance of the deviant bins would strengthen, and {\it ii}) the irregularity would disappear at VHE as expected from ALP models. The first point is justified by the fact that a magnetic field that is coherent over a scale $s$ should remain coherent over times of order $s/c$. For scales of order 1 Mpc as relevant here, this time scale is of order $3\times 10^6$ yrs.
Concerning possible effects that could produce similarly irregular spectra, one could imagine a complex landscape of background UV-IR photons that produces non-trivial absorption features in the energy spectra and mimics the effect. In the event of a positive detection, this would therefore require studying the effect over more sources and how it depends on $z$, for instance. For observers interested in putting constraints on ALP models, though, this is not an issue, since such an effect would add to the irregularity of the spectrum and in no way could it cancel it. The modeling of the IGMF as done here, with domains of the same size, is the simplest model one could think of. It has been used here as it is widely used in the literature. To describe the magnetic field turbulence more precisely, it is possible to account for the power distribution of the modes. The turbulent field can be modeled as a Gaussian random field with each Fourier mode scaling with the wave number as $k^{-\alpha}$. In the generic case of isotropic and homogeneous Kolmogorov-like turbulence, $\alpha=5/3$. As shown in~\cite{2007PhRvD..76b3001M}, this leads to a variation of the rms intensity of the magnetic field $B$ as a function of the scale $s$ such that $B\propto s^{1/3}$. Before discussing the effect of such a magnetic field on $\gamma$-ray source spectra, let us remark that magnetic fields that are coherent on small scales should have negligible effects on the spectrum in comparison with the larger scales. For small values of $\delta$, the conversion probability $P_0$ over a scale $s$ is expected to be of order $\delta^2/2\sim g^2B^2s^2/8$ (see Eq.~\ref{delta}). In that case, the averaged formula of Eq.~\ref{paveraged} reduces to $P_{\gamma\rightarrow a}\simeq N\delta^2/2$. All in all, for a given $g$, this probability is proportional to $N B^2 s^2$.
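Combining $P\propto NB^2s^2$ with $B\propto s^{1/3}$ and, at fixed path length, $N\propto 1/s$ gives $P\propto s^{5/3}$; under this reading (our arithmetic, not a formula given explicitly in the text), modes ten times smaller are suppressed to roughly the few-percent level used in the following:

```python
# P ∝ N * B^2 * s^2, with N ∝ 1/s (fixed total path length) and
# B ∝ s^(1/3) (Kolmogorov), gives P ∝ s^(5/3).
# Suppression for modes ten times smaller:
suppression = 0.1 ** (5.0 / 3.0)
print(f"P(s/10) / P(s) ≈ {suppression:.3f}")   # a few percent
```

This is why the largest scales dominate the effect in the Kolmogorov case.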
If $P_s$ is the probability of photon conversion for modes of size $s$, the above-mentioned law for the magnetic field strength gives $P_{s/10}\sim 2.5\%\times P_s$ for the conversion probability in a magnetic field mode corresponding to a scale $s/10$. This means that the small scales rapidly become irrelevant for this study, and one can safely consider that the largest scales contribute the most in the power distribution of modes. Concerning larger scales, the effect on the noise level is limited by the ratio between the considered scale and the distance to the source. Speaking in terms of domains, if there are only a few equivalent domains, little interference will happen and the noise in the energy spectra will have wider fluctuations. \begin{table} \begin{tabular}{|c|c|} \hline \footnotesize Model & Variance of the fit residuals \\ \hline $g=10^{-11}$, $m=0.35$ & $0.18\pm 0.05$ \\ \hline $g=8\times10^{-11}$, $m=1$ & $0.42\pm 0.14$\\ \hline \end{tabular} \caption{Values of the variance of the fit residuals to mock data with different assumptions for $g$ and $m$ (in units of $\rm GeV^{-1}$ and neV resp.), in the case of a Kolmogorov-like turbulent magnetic field.\label{tab2}} \end{table} To be more quantitative, the study of Sec.~\ref{observational} has been repeated using a Kolmogorov-like turbulent magnetic field inspired by the modeling used in~\cite{1999ApJ...520..204G}. As in the previous study, 5000 realizations of the turbulent magnetic field are performed, with scales ranging from 0.1 Mpc to 100 Mpc and an rms intensity of $B$ of 1 nG at 100 Mpc. The exact same kind of noise in the $\gamma$-ray spectra is obtained. To illustrate this, the results of these simulations are shown in Table~\ref{tab2}; in particular, the variance of the fit residuals is still larger than in the no-ALP situation, in a statistically significant way. It has been checked that this result is stable when larger scales are used for the lowest wave number.
\section{Conclusion} In this study we showed a new possible signature of $\gamma$-ALP mixing in the form of an anomalous dispersion in the energy spectra of $\gamma$-ray sources. The smooth-noisy-smooth alternation in the energy spectrum is a peculiar prediction of ALP models that could hardly be mimicked by known astrophysical processes. It has been shown that this effect can be used to constrain ALP models from the observation of single sources. An explicit example has been given in the case of oscillations in the IGMF; however, such a signature can be searched for in any source for which a turbulent magnetic field is present along the line of sight. \begin{acknowledgments} We would like to thank Gilles Henri and Mathieu Langer for interesting discussions about the project, and Pasquale Serpico, Jean-Fran\c{c}ois Glicenstein, Aion Viana, Emmanuel Moulin and Fabian Sch\"ussler for reading and improving the manuscript. \end{acknowledgments}
\section{Introduction} Confinement in QCD is a general phenomenon which establishes the main features of our Universe, yielding more than 90 percent of its visible mass. The theory of colormagnetic confinement (CMC) and colorelectric confinement (CEC) based on the Field Correlator Method (FCM) has been formulated as an analytical approach \cite{1,2,3,4,5} and studied numerically via lattice data \cite{5*,6,7,8,9}, which support the good convergence of the method. Since that time the CEC has been studied in detail, and its basic mechanism -- the FCM, where correlators of the field strength (FS) are calculated self-consistently via integrals of the FS -- was exploited in numerous analyses of experimental and lattice data; see \cite{5} for a recent review. The role of the CMC is less evident, since at $T=0$ it appears as the spin-dependent corrections in hadron spectra and reactions. Moreover, the CMC also defines the basic interaction of quarks and gluons at high temperature and in the quark-gluon plasma. The CMC is provided by the colormagnetic field correlators in the same way as the standard CEC arises from the colorelectric correlators, and at zero temperature both correlators coincide. However, they yield completely different contributions to the hadron dynamics: the CEC establishes the main part of the visible hadron mass of the Universe, while at zero temperature the main role of the CMC is providing one half of the vacuum field energy and establishing the spin- and momentum-dependent terms in the hadron Hamiltonian. This provides important corrections in the hadron spectra, as will be discussed below. With increasing temperature the roles of both confining forces change drastically: the CEC decreases and finally disappears above the critical temperature, $T\geq T_c$, while the CMC grows (with the CMC string tension increasing as $T^2$) and plays an important role in the quark-gluon dynamics.
For that reason the analysis of the quark-gluon plasma requires accounting for the CMC. An important role in the analysis of the CMC was always played by lattice studies \cite{1a,2a,3a}, which revealed from the very beginning that the CMC does not behave like the CEC for temperatures $T>T_c$, and moreover that the colormagnetic string tension $\sigma_s$ at large $T$ is proportional to $T^2$ \cite{2a,3a}. It was understood that the CMC could be analyzed in the $3d$ model with the adjoint Higgs field \cite{4a,5a,6a,7a}. Moreover, the analysis of the gluon screening mass has allowed the lattice measurement of the nonperturbative Debye mass $m_D(T)$ \cite{8a,9a,10a}. We shall demonstrate below the analytic calculation of both the CMC string tension $\sigma_s$ and $m_D(T)$ in the framework of the FCM and display good agreement with lattice data. At this point it is important to stress that the FCM enables one to calculate the field correlators (both CM and CE) as two-gluon Green's functions (gluelumps) $G_{gl}$, where the gluons interact via CM and CE confinement, and the resulting equation for the string tension is an integral of $G_{gl}$ with gluons interacting via the same $\sigma$. This provides a check of self-consistency of the whole method and, as we shall show below, it enables one to calculate $\sigma_s(T)$ without extra parameters, in agreement with lattice data. Summarizing the additional features of the CMC (an important part of the general nonperturbative FCM), one discovers the strong spin-orbit force (``the Thomas term'') in hadron spectroscopy, the strong coupling effect in the qgp, the origin of the effective screening mass at $T>T_c$, and the resolution of the Linde problem in high-$T$ perturbation theory. It is the purpose of the present paper to summarize the existing knowledge of the CMC and to propose possible developments in this field, which can be checked both numerically and experimentally.
For many years the theory of confinement also used different ideas based on geometrical or quasiclassical objects in the QCD vacuum, such as monopoles or center vortices (see \cite{11a,12a} and \cite{13a,14a} for reviews). In principle this can be accommodated in the FCM as an additional (possibly unnecessary) refinement of the FCM correlators, while the method keeps its form. In this sense one may consider this approach as an attempt to understand why field-strength correlators have nonzero vacuum averages at all in the confining phase. The present paper gives an answer to this question and predicts the CM correlators at low and high temperatures, both in the confining and in the deconfined phase. The plan of the paper is as follows. In the next section we introduce the field correlators responsible for the colorelectric (CE) and colormagnetic (CM) confinement and construct the hadron Hamiltonian containing both effects. In section 3 we specifically study the CM effects in hadrons in the phase of CE confinement. Section 4 is devoted to the CM interactions in the CE-deconfined phase, where we discuss the analysis of the CM effects in the quark-gluon plasma, which yield a CMC string tension growing as $T^2$. We also analyze the standard perturbation theory in the qgp and, using the CMC, we identify and resolve the famous Linde problem. The concluding section contains an overall discussion of the results and an outlook. \section{The colormagnetic and colorelectric correlators and the QCD Hamiltonian} The CEC in the framework of the FCM was exploited as a basic dynamical theory for the hadron spectra and wave functions \cite{10,11,12,13,14}, with numerous applications \cite{15,16,17,18,19,19*,20,20*,20**,21,22,23,24}. To give a simple idea of the FCM we can describe the following picture of the hadron in QCD.
In QCD the quarks and gluons propagate along Wilson lines, and the propagation of all hadrons can be described by the corresponding Wilson loops, which according to the nonabelian Stokes theorem contain inside them numerous field fluxes, which in a certain gauge (``the generalized contour gauge''; see \cite{3} for details and discussion) can be written simply as $F_{\mu\nu}(z)\, d\sigma_{\mu\nu}(z)$ (more precisely, the integral of those). In the FCM one considers these fluxes, with all $z$ inside the Wilson loop, as a statistical medium with field correlators defined by the average values $\langle F(x)F(y)\rangle, \langle F(x)F(y)F(z)\rangle, \ldots$. It was proved that in the FCM the lowest correlators $\langle F(x)F(y)\rangle$ are dominant, while the higher ones contribute less than 5 percent, in agreement with detailed lattice data \cite{5}. This result refers to the time-like field strengths $F_{i4}= E_i$ as well as to the space-like ones $F_{ik}= e_{ikl} H_{l}$. This stochastic picture, fully supported by existing data, will be the basis of our analysis here, which is mainly devoted to the CMC, described by the colormagnetic field correlators $\langle H_i(x) H_k(y)\rangle$, and the resulting physical phenomena. We start with the definition of the field correlators, both colorelectric and colormagnetic.
$$ \frac{g^2}{N_c}\langle\langle Tr E_i(z)\Phi(z,z') E_j(z')\Phi(z',z)\rangle\rangle= $$\begin{equation} =\delta_{ij} \left[D^E(u) + D^E_1(u) + u^2_4 \frac{\partial D^E_1}{\partial u^2}\right] + u_i u_j \frac{\partial D^E_1}{\partial u^2}, \label{1}\end{equation} $$ \frac{g^2}{N_c} \langle\langle Tr H_i(z)\Phi(z,z') H_j(z')\Phi(z',z)\rangle\rangle=$$\begin{equation}= \delta_{ij} \left[D^H(u) + D^H_1(u) + \mbox{\boldmath${\rm u}$}^2 \frac{\partial D^H_1}{\partial u^2}\right] - u_i u_j \frac{\partial D^H_1}{\partial u^2}, \label{2}\end{equation} \begin{equation}\frac{g^2}{N_c} \langle\langle Tr H_i(z)\Phi(z,z') E_j(z')\Phi(z',z)\rangle\rangle= e_{ijk} u_4 u_k \frac{\partial D^{EH}_1}{\partial u^2}.\label{3}\end{equation} Here the correlators $D^E(u),D^H(u)$ define the confinement interaction -- the string tensions -- in the planes $(i4),(ik)$, namely, \begin{equation} \sigma^{E(H)}= \frac{1}{2} \int d^2 z\, D^{E(H)}(z). \label{4}\end{equation} It is important that at zero temperature all Euclidean planes are equivalent, so the colormagnetic (CM) and colorelectric (CE) correlators coincide, as do the string tensions, and each hadronic system is subject to both colorelectric and colormagnetic forces. However, above the critical temperature $T_c$ the colorelectric correlators vanish and the QCD vacuum is fully in the realm of the CM correlators (apart from the perturbative interactions). It is the purpose of the present paper to study specifically the effects of the colormagnetic interactions both below $T_c$ -- in the region of CE confinement -- and above $T_c$ -- in the CMC region. In this section we derive the Hamiltonian with the CEC and CMC for quark-antiquark systems. The Hamiltonian for heavy quarks in terms of the field correlators was written in \cite{10}.
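To make eq.~(\ref{4}) concrete, the following numerical check is illustrative (not part of the formalism): for a Gaussian ansatz $D(z)=D(0)\exp(-z^2/T_g^2)$, with the normalization $D(0)$ and correlation length $T_g$ chosen here purely as assumed example values, the two-dimensional integral gives $\sigma = \pi D(0) T_g^2/2$.

```python
import math

def D_gauss(r, D0, Tg):
    # Gaussian ansatz for the confining correlator D(z); illustrative only
    return D0 * math.exp(-r**2 / Tg**2)

def sigma_numeric(D0, Tg, rmax=20.0, n=200000):
    # sigma = (1/2) * int d^2z D(z) = pi * int_0^inf r D(r) dr  (trapezoid rule)
    h = rmax / n
    s = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * r * D_gauss(r, D0, Tg)
    return math.pi * s * h

D0, Tg = 0.1146, 1.0                           # GeV^4, GeV^-1 (assumed values)
sigma_analytic = math.pi * D0 * Tg**2 / 2.0    # ~ 0.18 GeV^2, a typical string tension
```

With these assumed inputs the numerical integral reproduces the closed form, confirming the normalization $1/2$ in eq.~(\ref{4}).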
To derive the Hamiltonian in the case of light quarks one can use the relativistic Fock--Feynman--Schwinger path integral method \cite{11}, which relates the integral representation of the $q\bar q$ Green's function to the Hamiltonian in terms of the virtual quark (antiquark) energies $\omega_1$ ($\omega_2$). Its general form was elaborated in \cite{14,16}. We follow below the form of \cite{15}, where the result is presented in terms of $\sigma_H,\sigma_E$. The general form of the Hamiltonian consists of the radial kinetic term $H_0$, the orbital motion term $H_l$, the spin-dependent term $H_{sd}$, the perturbative contribution $H_{pert}$ and the self-energy term $H_{se}$. To make the complicated general form of the Hamiltonian simpler it is convenient to introduce extra parameters (called ``einbeins''), which are defined via the solution of the subsidiary stationarity conditions for the resulting energy (mass) eigenvalues, \begin{equation} \frac{\partial M(\lambda_i)}{\partial \lambda_i}=0, \quad \lambda_i= \omega_1, \omega_2, \nu(\beta), \eta. \label{5} \end{equation} The Hamiltonian can be written as \begin{equation} H(\omega_1,\omega_2,\nu)= H_0 + H_{int} + H_l + H_{sd} + H_{pert} + H_{se}. \label{6} \end{equation} Here $H_0$ contains only the radial kinetic motion and is written in terms of the quark and antiquark effective energies $\omega_1,\omega_2$. In what follows we shall discuss the case of equal masses, with correspondingly $\omega_1= \omega_2= \omega$; the case of general mass relations can be found in \cite{14}. \begin{equation} H_0= \omega + \frac{p^2_r + m^2}{\omega}, \label{7} \end{equation} \begin{equation} H_{int}= \int^1_0d\beta \left[\frac{\sigma_1^2 r^2}{2\nu} + \frac{\nu}{2} + \sigma_2 r\right]. \label{8} \end{equation} Here $\sigma_1= \sigma_H + \eta^2(\sigma_H- \sigma_E)$ and $\sigma_2= 2 \eta (\sigma_E- \sigma_H)$.
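The einbein procedure of eqs.~(\ref{5}),(\ref{7}) can be illustrated by an elementary check (our own sketch, with arbitrary example values of $p^2,m^2$): minimizing $H_0=\omega+(p^2+m^2)/\omega$ over $\omega$ gives the stationary value $\omega_0=\sqrt{p^2+m^2}$ and $H_0=2\sqrt{p^2+m^2}$, i.e. the relativistic energy of a free quark pair.

```python
import math

def H0(omega, p2, m2):
    # radial kinetic term, eq.(7): H0 = omega + (p^2 + m^2)/omega
    return omega + (p2 + m2) / omega

def minimize_omega(p2, m2, lo=1e-3, hi=50.0, iters=200):
    # golden-section search for the stationary einbein value, eq.(5)
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if H0(c, p2, m2) < H0(d, p2, m2):
            b = d
        else:
            a = c
    return (a + b) / 2

p2, m2 = 0.25, 0.01     # GeV^2, illustrative values only
w = minimize_omega(p2, m2)
# stationary point: omega = sqrt(p^2 + m^2), H0 = 2*sqrt(p^2 + m^2)
```

The numerical minimum reproduces the analytic stationary point, which is the content of the einbein trick: the auxiliary parameter removes the square root while keeping the spectrum intact.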
The orbital part of the Hamiltonian, $H_l$, depends not only on the effective energies but also on the colorelectric and colormagnetic string tensions, expressed via the einbein factor $\nu(\beta)$, see \cite{14,14*}, \begin{equation} H_l= \frac{\mbox{\boldmath${\rm L}$}^2}{r^2 [\omega + 2 \int^1_0d\beta (\beta- 1/2)^2 \nu(\beta)]}. \label{9} \end{equation} The most complicated term of the Hamiltonian is the spin-dependent part, derived in \cite{10,19*,20*,20**}, $$ H_{sd}=\left (\frac{\sigma_i^{1}}{4\omega_1^2}+\frac{\sigma_i^{2}}{4\omega_2^2}\right)L_i \frac{1}{r} (V'_0(r) + 2 V'_1(r)) + \frac{\sigma_i^{1}+ \sigma_i^{2}}{2 r \omega_1 \omega_2} V'_2(r) +$$\begin{equation}+ \frac{3 \sigma_i^{1} n_i \sigma_k^{2} n_k - \sigma_i^{1} \sigma_i^{2}}{12\omega_1 \omega_2} V_3(r) + \frac{\sigma_i^{1}\sigma_i^{2}}{12\omega_1 \omega_2} V_4(r). \label{10}\end{equation} Here the spin-dependent potentials are expressed via the field correlators $D^E,D^E_1,D^H,D^H_1$, where the last two appear due to the CMC, namely, \begin{equation} V'_0(r)= 2 \int_0^\infty d\nu \int^r_0 d\lambda D^E(\lambda,\nu) +r\int_0^\infty d\nu D^E_1(r,\nu), \label{11}\end{equation} \begin{equation} V'_1(r)= -2\int_0^\infty d\nu \int_0^r d\lambda (1- \lambda/r) D^H(\lambda,\nu), \label{12}\end{equation} \begin{equation} V'_2(r)= \frac{2}{r}\int_0^\infty d\nu \int_0^r\lambda d\lambda D^H(\lambda,\nu) + r\int_0^\infty d\nu D^H_1(r,\nu), \label{13}\end{equation} \begin{equation} V_3(r)= -2r^2\frac{\partial}{\partial r^2} \int_0^\infty d\nu D^H_1(r,\nu), \label{14}\end{equation} \begin{equation} V_4(r)= 6\int_0^\infty d\nu \left[D^H(r,\nu)+ \left(1+ \frac{2r^2}{3}\frac{\partial}{\partial \nu^2}\right)D^H_1(r,\nu)\right]. \label{15}\end{equation} Here the field correlators depend on only one variable: $D(x,y)=D(\sqrt{x^2+ y^2})$.
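The large-$r$ asymptotics of eqs.~(\ref{11}),(\ref{12}) can be checked numerically with a model correlator. As a sketch (the exponential ansatz $D=D_0\exp(-|z|/T_g)$ and the parameter values are our illustrative assumptions, not inputs of the paper), the double integrals approach $V'_0\to\sigma^E$ and $V'_1\to-\sigma^H$, so that $V'_0+2V'_1\to-\sigma$ at $T=0$:

```python
import math

D0, Tg = 0.0573, 1.0          # illustrative normalization (GeV^4) and range (GeV^-1)
sigma = math.pi * D0 * Tg**2  # = (1/2) int d^2z D(z) for D = D0*exp(-|z|/Tg), ~0.18 GeV^2

def D(lam, nu):
    # exponential ansatz for the confining correlator; illustrative only
    return D0 * math.exp(-math.hypot(lam, nu) / Tg)

def trap(f, a, b, n=400):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def V0p(r):
    # eq.(11) without the D_1 piece
    return 2 * trap(lambda nu: trap(lambda l: D(l, nu), 0.0, r), 0.0, 20.0)

def V1p(r):
    # eq.(12)
    return -2 * trap(lambda nu: trap(lambda l: (1 - l / r) * D(l, nu), 0.0, r),
                     0.0, 20.0)

r = 30.0                      # "large r", in units of Tg
v0, v1 = V0p(r), V1p(r)
thomas = v0 + 2 * v1          # -> sigma^E - 2 sigma^H = -sigma at T=0
```

The residual deviations decay as the $O(1/r)$ corrections indicated in eqs.~(\ref{16}),(\ref{17}); the combination $V'_0+2V'_1$ is already clearly negative, which is the Thomas-term sign discussed below.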
One can see the important contribution of the CMC terms, $D^H$ and $D^H_1$, which define the spin-spin forces, and one may wonder what the contribution of their purely nonperturbative parts is. To this end we use (\ref{11}),(\ref{12}) in the large-$r$ region and obtain the estimates \begin{equation} V'_0/r= \sigma^E/r + O(1/r^2), \label{16}\end{equation} \begin{equation} V'_1/r= - \sigma^H/r + O(1/r^2), \label{17}\end{equation} while the $V_3,V_4$ terms decay exponentially at large $r$. As a result, at large $r$ one obtains the dominant contribution to the spin-orbit force $V_{ls}$ in the case of equal quark and antiquark masses (the first two terms in (\ref{10})), \begin{equation} V_{ls}= \frac{\mbox{\boldmath${\rm S}$} \mbox{\boldmath${\rm L}$}}{2 \omega^2 r} (\sigma^E- 2\sigma^H) + O(1/r^2). \label{18} \end{equation} This expression can be compared with the purely perturbative contributions to the spin-dependent interactions, in which the CMC and CEC do not appear and which can be derived from the correlators $D^{E,H}_1$. To this end we can identify the purely perturbative spin-dependent contributions $V_{ip}~(i=1,2,3,4)$, namely \cite{15*}, \begin{equation} V_{1p}=0,~~ V'_{2p}= \frac{4\alpha_s}{3r^2}, ~~V_{3p}= \frac{4 \alpha_s}{r^3}, ~~V_{4p}= \frac{32\pi \alpha_s \delta^{3}(\mbox{\boldmath${\rm r}$})}{3}. \label{19}\end{equation} Here we have suppressed the indices $E,H$ in the perturbative expressions. Note that the strong coupling constant $\alpha_s$ is well defined in coordinate space, since the QCD constant $\Lambda_{\overline{MS}}(n_f)$ is now known from experiment \cite{24*,24**}. Finally, we need to take into account the self-energy contribution to the Hamiltonian, $H_{se}$, which is a definite negative constant, produced by the $\sigma_{\mu\nu} F_{\mu\nu}$ part in the Green's function \cite{25,26}, \begin{equation} H_{se}= -\frac{4\sigma_E}{\pi \omega^{0}} \chi(m_q),
\label{20} \end{equation} where $\omega^{0}$ is the stationary value of the effective quark energy, obtained as in (\ref{5}), and $\chi(m_q)= 0.9$ for zero quark mass and $\chi= 0.80$ for the $s$ quark. At this point we stress that the resulting Hamiltonian (\ref{6}) does not contain any fitting constants and is fully defined by the field correlators $D^E,D^E_1,D^H,D^H_1$, while its spin-independent part is defined by $\sigma_E,\sigma_H$ only. This is specific to the FCM Hamiltonian, whereas all other existing approaches exploit numerical fitting constants or functions. In the next section we shall compare our theoretical results with experimental and lattice data, placing special emphasis on the role of the CMC contributions. \section{The colormagnetic interaction in hadrons at zero temperature} We start our analysis of the resulting Hamiltonian (\ref{6}) with the spin-independent part and first consider the case $L=0$. At $T=0$ both string tensions are equal, $\sigma_E= \sigma_H$, giving $H_l=0$, and varying over $\nu,\eta$ one obtains the simple result \begin{equation} H_{int}= (\sigma_1 + \sigma_2) r = \sigma_E r . \label{21} \end{equation} However, taking into account that $H_l$ also contains $\nu$ and therefore should participate in the variational (optimization) procedure, and keeping $\sigma_E,\sigma_H$ unequal, one obtains approximately \cite{15} \begin{equation} H_{int} + H_l= \eta \sigma_E r + (1/\eta -\eta) \sigma_H r + \omega y^2, \quad \eta=\frac{y}{\arcsin y}, \label{22} \end{equation} where $y$ is a solution of the equation \begin{equation} \frac{\sqrt{L(L+1)}}{\sigma_H r^2}= \frac{1}{4y} (1 + \eta^2(1-\sigma_E/\sigma_H))(1/\eta -\sqrt{1-y^2}) + \frac{\omega y}{\sigma_H r}.
\label{23} \end{equation} One can see from (\ref{23}) that for $\sigma_E=\sigma_H$ and $L=0$ one has $y=0$ and the resulting $H_{int} + H_l= \sigma r$, while for $L > 0$ the presence of the parameter $\nu$ in the denominator of (\ref{9}) (which represents the string contribution to the rotating mass) brings the so-called string correction into the Hamiltonian. For example, in a heavy quark system this gives $\Delta H_l= -\frac{\sigma \mbox{\boldmath${\rm L}$}^2}{6m^2 r}$. One can see that $\sigma_H$ plays an important role in hadron dynamics at $T=0$. Turning to the spin-dependent dynamics, one can single out the most important nonperturbative contribution in (\ref{10}); the analysis of the resulting expressions for the spin-dependent potentials, made in \cite{15*,19*}, shows that the nonperturbative CMC contributions to the tensor and spin-spin forces are strongly suppressed, while the spin-orbit forces are dominated by them. Indeed, writing the spin-orbit term from (\ref{11}),(\ref{12}) and neglecting the terms $D^E_1,D^H_1$, which produce small contributions at low values of $r$, one has \begin{equation} V_{so}(r)= \left(\frac{\mbox{\boldmath${\rm S}$}_1\mbox{\boldmath${\rm L}$}_1}{2\omega_1^2}- \frac{\mbox{\boldmath${\rm S}$}_2\mbox{\boldmath${\rm L}$}_2}{2\omega_2^2}\right)\frac{1}{r}(V'_0 +2V'_1), \label{24}\end{equation} \begin{equation} \frac{1}{r} V'_0= \frac{2}{r} \int^\infty_0 d\tau \int^r_0d\lambda D^E(\tau,\lambda), \quad \frac{2}{r} V'_1= -\frac{4}{r} \int^\infty_0 d\tau \int^r_0 d\lambda D^H(\tau,\lambda) (1- \lambda/r). \label{25}\end{equation} At zero temperature $\sigma^E= \sigma^H$, and due to $D^H$ one recovers in full the famous negative Thomas term \cite{27}, which was the object of numerous studies, see e.g. \cite{20} for the field correlator treatment and \cite{28} for the string dynamics approach.
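For $L>0$ and unequal tensions, eq.~(\ref{23}) is transcendental in $y$ and is easily solved numerically. The following sketch uses our own illustrative parameter values (string tensions, $\omega$, $r$ in GeV units; none of them are quoted in the text) and a simple bisection:

```python
import math

# assumed illustrative parameters: sigma_E = sigma_H at T=0
sE = sH = 0.18               # string tensions, GeV^2
omega, r, L = 0.4, 2.5, 1    # quark energy (GeV), separation (GeV^-1), orbital momentum

def eta(y):
    # einbein relation of eq.(22)
    return y / math.asin(y)

def residual(y):
    # RHS minus LHS of eq.(23)
    lhs = math.sqrt(L * (L + 1)) / (sH * r**2)
    rhs = (1.0 / (4 * y)) * (1 + eta(y)**2 * (1 - sE / sH)) \
          * (1 / eta(y) - math.sqrt(1 - y**2)) + omega * y / (sH * r)
    return rhs - lhs

# bisection on a bracket where the residual changes sign
a, b = 0.9, 0.9999
for _ in range(100):
    m = 0.5 * (a + b)
    if residual(a) * residual(m) <= 0:
        b = m
    else:
        a = m
y = 0.5 * (a + b)
```

For $\sigma_E=\sigma_H$ and $L=0$ the equation is solved by $y\to 0$ (both sides vanish), recovering $H_{int}+H_l=\sigma r$; for $L>0$ the solver returns $y$ close to 1 with these parameters.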
Indeed, the phenomenological Thomas term for heavy quarks \cite{27}, $V_{so}= -\frac{\sigma \mbox{\boldmath${\rm S}$}\mbox{\boldmath${\rm L}$}}{2 m^2 r}$, is produced by both the CMC and the CEC, connected by the Gromes relation \cite{29} at $T=0$, with $\sigma=\sigma^E=\sigma^H$ (see \cite{10,20**,19*} for more details). In this way one can see that the CMC ensures the correct behavior of the spin-orbit forces in hadrons. To understand how it works in reality we can calculate the nonperturbative spin-orbit matrix element for the $nP$ states with radial excitation $n$: $a_{so}(nP)= - \frac{\sigma_H \langle 1/r\rangle}{2 \omega^2(nP)}$, where $\omega(nP)$ is the effective quark energy, defined in (\ref{5}). The analytically computed values of $a_{so}(nP)$ for the ground ($n=1$) states and different $q\bar q$ systems are given below in Table 1. \begin{table}[!htb] \caption{Nonperturbative spin-orbit splitting of the $1P$ quark-antiquark states} \begin{center} \label{tab.01} \begin{tabular}{|l|c|c|c|} \hline $ q\bar q$ & $n\bar n$& $c\bar c$& $ b\bar b$\\\hline $ a_{so}^{np}(1P)$ (in MeV) & -88 & -13.3 & -2.3 \\ $\langle 1/r\rangle (1P)$ (in GeV) & 0.241 & 0.394 & 0.559 \\ $\langle r^{-3}\rangle$ (in GeV$^3$) & 0.0271 & 0.120 & 0.448 \\ & th (exp) & th (exp) & th (exp) \\ $a_{so}^{tot}(1P)$ (in MeV) & 41 (abs) & $34.0 (35.0\pm 0.2)$& $13.3 (13.6\pm 0.7)$\\ \hline \end{tabular} \end{center} \end{table} Here one can see a strong decrease of the nonperturbative spin-orbit term with growing quark mass: it is very small in bottomonium, whereas in a light meson its magnitude is large, reducing the fine-structure splitting in agreement with the experimental data. It is now interesting to compare our results for glueballs with the lattice data \cite{30}, as was done in \cite{20**}. For glueballs the total scheme of the spin-dependent forces is the same as for mesons, given above, except that all field correlators and the string tension are $9/4$ times larger.
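The light-quark entry of Table 1 can be reproduced by elementary arithmetic. As a sketch, we take $\sigma_H=0.18$~GeV$^2$ and $\omega(1P)\approx 0.5$~GeV as assumed typical values (neither is quoted in this section) together with $\langle 1/r\rangle$ from the table:

```python
sigma_H = 0.18    # GeV^2, assumed typical string tension at T=0
inv_r   = 0.241   # <1/r> for the 1P light-quark (n nbar) state, from Table 1 (GeV)
omega   = 0.5     # GeV, assumed effective light-quark energy omega(1P)

# a_so(nP) = - sigma_H <1/r> / (2 omega^2), converted to MeV
a_so_MeV = -sigma_H * inv_r / (2.0 * omega**2) * 1000.0   # ~ -87 MeV
```

The result is within a few MeV of the $-88$~MeV quoted for $n\bar n$, showing the $1/\omega^2$ suppression that makes the term tiny in bottomonium.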
The comparison of the FCM predictions for the states $0^{-+},2^{-+}$, split by the spin-orbit interaction, is as follows \cite{20**} (in GeV): $M_{glb}(0^{-+})= 2.56$, $M_{glb}(2^{-+})= 3.03$, which can be compared with the lattice data \cite{31}: $2.59$~GeV and $3.1$~GeV, respectively. Note that the FCM calculations do not contain any fitting parameters, while the overall negative constant in the Hamiltonian (\ref{6}) is calculated via the string tension \cite{25,26}. Also, in the FCM there are no fitting parameters for all low-lying mesons \cite{32}, and only the highest states need corrections due to the so-called ``flattening'' of the confinement potential, which occurs due to holes in the film produced by the pair creation process \cite{32}. This is in contrast to the well-known calculations of hadron masses \cite{33}, where multiple fitting constants are used and an overall subtraction constant is introduced. We would like to underline that in the FCM the negative correction $H_{se}$ is calculated via the string tension and the quark kinetic energies. Summarizing, one can say that at zero temperature the CMC defines an important part of the strong spin-orbit interaction in hadrons, while the CEC defines the linear confinement interaction, and perturbative QCD is mostly responsible for the short-range spin-spin forces. \section{The CMC at finite temperature and in the quark-gluon plasma} One can consider the $T>0$ region in two aspects: \begin{enumerate} \item the physics of individual hadrons in the regions with $\sigma_E < \sigma_H$ and in the deconfined region $\sigma_E=0$; \item the role of the CMC in the thermodynamics of the quark-gluon plasma (qgp). \end{enumerate} Below we discuss these points in this order. 1.
~ It was shown in \cite{15} that the resulting spin-orbit potential (\ref{24}),(\ref{25}) has the form of the attractive Thomas potential at large $r$ and of a strong repulsive core at small distances, which ensures weakly bound $q\bar q$ states for quark masses $m_q>0.22$ GeV (due to the CMC contribution); e.g. for the $s$ quark with $m_s=0.22$~GeV the $s\bar s$ binding energy is $-45$ MeV, and it is much smaller for the $c$ and $b$ quarks. The situation for light quarks in the deconfined region is even more complicated and seems to be similar to the $Z>137$ critical phenomena in QED, when the central charge $Z$ is surrounded by a plasma-like vacuum \cite{15}. It is difficult to develop a quantitative theory of the corresponding medium at deconfining temperatures around $150$ MeV, but one can expect that these effects give relatively small corrections at this temperature. 2.~ We now turn to the most important topic of the role of the CMC in the quark-gluon thermodynamics at $T>T_c$ and show that the CMC provides the following basic features in this region: A) the growth of $\sigma_H(T)$ with the temperature, $\sigma_H(T)={\rm const}~ T^2$; B) the effects of the CMC on the quark-gluon medium, which give a special CMC factor in the pressure of quarks and gluons; C) the mass correlation parameter (the Debye mass) defining the gluon exchange forces in the qgp on the background of the CMC vacuum; D) the violation of the standard perturbation theory in the qgp, when the $g^6$ term contains an infinite series of contributions -- the Linde problem. We shall discuss these topics below one by one. \subsection {(A). The colormagnetic string tension $\sigma_H$ at nonzero temperature.} As was discussed in the Introduction, this topic was actively studied on the lattice \cite{1a,2a,3a,4a}, where also a model containing an adjoint Higgs field was exploited, with similar results \cite{5a,6a,7a}.
On the theoretical side, in the framework of the FCM one can express the CM string tension via the gluelump Green's function, where the gluelump is a system of two gluons and an adjoint Wilson line connected by adjoint strings. Actually, gluelumps define the confining dynamics of both CE and CM strings in a self-consistent way, since the CE and CM string tensions are expressed via integrals of the corresponding gluelump Green's functions, where the interaction is given again by the CE and CM string tensions. The behavior of $\sigma_H$ near $T_c$ was found in \cite{34} in good agreement with lattice data. In the large-$T$ region the FCM allows one to define it analytically \cite{35}; a comparison with the lattice data \cite{36}, performed in \cite{36*}, shows good agreement. We shall be interested in the region of temperatures $T>T_c$ and exploit the standard definition (\ref{4}) of $\sigma_H$ via the CM correlator, $\sigma_H= \frac12 \int d^2z D^H(z)$, where $D^H$ is expressed via the two-gluon Green's function and finally via the product of interacting $4d$ one-gluon Green's functions, $D^H(z)\sim G^{(2g)}_{4d}(z)\sim(G^{(g)}_{4d})^2$. It is important \cite{34,35} that the path integral along the 4-th axis does not contain interaction, and therefore at large $T$ one arrives at the result \begin{equation} G^{(g)}_{4d}= T G^{(g)}_{3d}(z) + K_{3d}(z); ~~ D^H(z)= \frac{g^4 (N^2_c -1)T^2}{2} \langle G^{2g}_{3d}(z)\rangle + ..., \label{26}\end{equation} where the neglected terms are subleading at large $T$. As a result, the CMC string tension at large $T$ can be written as \begin{equation} \sqrt{\sigma_H(T)}= g^2 T c_{\sigma} + {\rm const}~, ~~c^2_{\sigma}= \frac{N^2_c-1}{4}\int d^2 w \langle G^{2g}_{3d}(w)\rangle. \label{27}\end{equation} Numerically, the lattice data \cite{36} yield $c_{\sigma}= 0.566 \pm 0.013$. In the FCM, using (\ref{27}), the integral was calculated approximately, yielding the lower limit $c_{\sigma}= 0.47$, in reasonable agreement with the lattice.
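To get a feel for the magnitudes in eq.~(\ref{27}): with the quoted lattice value $c_\sigma=0.566$ and an assumed coupling $g^2(T)\approx 2.5$ (i.e. $\alpha_s\sim 0.2$ at a few $T_c$; this value is our illustrative choice, not taken from the text), and dropping the additive constant, the $T^2$ growth of the colormagnetic tension is

```python
c_sigma = 0.566   # lattice value of c_sigma quoted in the text
g2 = 2.5          # assumed g^2(T) at a few T_c (alpha_s ~ 0.2); purely illustrative

def sigma_H(T):
    # leading large-T behaviour of eq.(27), additive constant dropped
    return (c_sigma * g2 * T)**2

coef = sigma_H(1.0)   # sigma_H / T^2 ~ 2 with these assumed inputs
```

so $\sigma_H\approx 2\,T^2$ for this coupling, and the ratio $\sigma_H(2T)/\sigma_H(T)=4$ exhibits the quadratic law directly.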
We now turn to the region $0<T<T_c$, where one can generalize the form of $D^H(z)$ to the nonzero-$T$ region by summing the infinite series over $n\beta$ ($\beta= 1/T$) \cite{34}, which yields at small $T$ \begin{equation} \sigma_H(T)/\sigma_H(0)= \frac{\sinh(M/T)+ M/T}{\cosh(M/T)- 1}= 1+ 2(1+M/T)e^{-M/T}+ O(e^{-2M/T}). \label{28}\end{equation} \subsection{(B). The CMC pressure in the quark-gluon plasma} Using the relativistic path integral for the quark and gluon pressures $P_q,P_g$ \cite{37,13}, one expresses them via the spatial loop integrals of the thermal Green's functions of $q,g$, respectively, $S_3(s),G_3(s)$, \begin{equation} P_{gl}= \frac{N^2_c-1}{\sqrt{4\pi}}\int^\infty_0\frac{ds}{s^{3/2}} G_3(s) \sum_{n=1}^{\infty}\exp\left(-\frac{n^2}{4T^2 s}\right) L^{n}_{adj}, \label{29}\end{equation} where $L^{n}_{adj}$ is the adjoint Polyakov loop and $G_3(s)$ is the 3d closed-loop gluon Green's function as a function of the proper time $s$. It is clear that for the 3d closed loop the confinement is colormagnetic, and the result for the $q$ and $g$ Green's functions can be written as \cite{38} \begin{equation} G_3(s)= \frac{1}{(4\pi s)^{3/2}} \left(\frac{M_{adj}^2 s}{\sinh(M_{adj}^2 s)}\right)^{1/2}. \label{30} \end{equation} Here $M_{adj}= 12 \sqrt{\sigma_H(T)}$. For the quark function $S_3(s)$ one should replace $M_{adj}$ by $M_f= \frac13 M_{adj}$. Substituting (\ref{30}) into (\ref{29}) one obtains the gluon pressure as \begin{equation} P_{gl}= \frac{2(N^2_c-1)}{(4\pi)^2}\sum_{n=1}^{\infty}L^{n}_{adj} \int^{\infty}_0 ds \frac{1}{s^3} \exp\left(-\frac{n^2}{4T^2s}\right) \sqrt{\frac{M^2_{adj} s}{\sinh(M^2_{adj}s)}}. \label{31}\end{equation} Here $L^{n}_{adj}$ can be taken from lattice \cite{39} or analytic \cite{38} expressions.
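The low-temperature expansion in eq.~(\ref{28}) can be verified directly; a short numerical check (the values of $x=M/T$ below are arbitrary test points) confirms that the difference between the exact ratio and $1+2(1+x)e^{-x}$ decreases like $O(e^{-2x})$:

```python
import math

def exact(x):
    # the exact ratio in eq.(28), with x = M/T
    return (math.sinh(x) + x) / (math.cosh(x) - 1.0)

def expanded(x):
    # its small-T (large-x) expansion: 1 + 2(1+x) e^{-x}
    return 1.0 + 2.0 * (1.0 + x) * math.exp(-x)

# the gap shrinks roughly like e^{-2x} as x grows
gaps = [abs(exact(x) - expanded(x)) for x in (4.0, 6.0, 8.0)]
```

At $x=8$ the two expressions agree to a few parts in $10^6$, illustrating why $\sigma_H(T)$ is essentially frozen at its $T=0$ value well below $T_c$.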
{ \begin{figure}[htb] \setlength{\unitlength}{1.0cm} \centering \begin{picture}(6.8,6.8) \put(0.5,0.5){\includegraphics[height=5.7cm]{eps/Fig01_Figure1_v1.eps}} \put(6.35,0.1){$T/T_c$} \put(3.95,0.1){\footnotesize $T_c$} \put(0.1,5.25){\footnotesize \rotatebox{90}{$P/T^4$}} \put(5.265,6.35){$P_{gl}$} \put(7.08,4.3){$P_{gb}$} \end{picture} \caption{Pressure $P(T)$ as a function of temperature $T$ for the confined phase (glueballs) -- solid line, and for the deconfined phase (dashed line). The intersection point is at the critical temperature $T_{c}$.} \label{fig:fig01} \end{figure} }\medskip In Fig.~\ref{fig:fig01} we show how the transition from the confined phase of glueballs to the deconfined phase of gluons with the CMC interaction in the gluon plasma proceeds, in comparison with the lattice data from \cite{40} (Fig. 1 from \cite{38}). { \begin{figure}[!htb] \setlength{\unitlength}{1.0cm} \centering \begin{picture}(8.0,4.1) \put(0.5,0.5){\includegraphics[height=3.65cm]{eps/Fig04_PglT4_v1.eps}} \put(7.0,0.1){\footnotesize $T/T_c$} \put(0.1,3.25){\footnotesize \rotatebox{90}{$P/T^4$}} \end{picture} \caption{The pressure ${P(T)}/{T^4}$ in the $SU(3)$ theory in the deconfined phase. The solid line is for the modified oscillator confinement, Eq. (\ref{30}), and the filled dots are the lattice data \cite{40}.} \label{fig:fig04} \end{figure} }\medskip In Fig.~\ref{fig:fig04} we demonstrate the behavior of the gluon plasma at large $T$ vs the lattice data from \cite{40} (Fig. 3 from \cite{38}). \subsection{(C). The Debye mass in the qgp} In this section we shall show that the only gauge-invariant definition of the Debye mass in the qgp is via the CM mass, i.e. via the square root of the CMC string tension $\sigma_H$, and we shall demonstrate good agreement between the resulting theoretical and lattice data \cite{41,42}.
The problem with the Debye mass in QCD standard perturbation theory (SPT) is that it cannot be defined in a gauge-invariant way, and therefore one uses approximate definitions with fitting constants; e.g. in \cite{36} the ansatz $m_D(T)= A g(T) T \sqrt{1+ N_f/6}$ was exploited, with $A= 1.51, 1.42$ for $N_f= 0,2$, respectively. Instead, in the nonperturbative FCM one can calculate the Debye mass with good accuracy \cite{41,42}. To this end one defines the Debye mass from the gluon-exchange diagram between the trajectories of two charges, see Fig. 1 from \cite{42}. It is clear that the gluon distorts the Wilson loop surface of the two charges, and this additional piece (its 3$d$ projection), multiplied by $\sigma_H$, contributes to the gluon action. In this way one understands that the exchanged gluon, together with its projection on the unperturbed plane of the two charges, forms a gluelump \cite{43,44} -- the system of one gluon plus another, static, of infinite mass. The $T$-dependent Hamiltonian for the gluelumps was derived in \cite{41,42} as \begin{equation} H_n= \sqrt{\mbox{\boldmath${\rm p}$}_{perp}^2 + (2\pi n T)^2}+ \sigma_H^{adj} r_{perp}, \quad n= 0,1,2,... \label{32}\end{equation} The corresponding gluelump screening mass spectrum was found in \cite{41}. The lowest eigenvalue of $H_0$ is $\epsilon_0= 2.82 \sqrt{\sigma_H}$. In the next approximation one should take into account the OGE interaction in the gluelump, which yields $\Delta \epsilon_0= -5.08 \alpha_s^{eff} \sqrt{\sigma_H}$. As a result one obtains for the Debye mass \begin{equation} m_D= \epsilon_0 + \Delta \epsilon_0= 2.06 \sqrt{\sigma_H}. \label{33}\end{equation} \subsection{(D). CMC in perturbative thermodynamics of QCD} In this approach one of the problems is the resummation of the infinite series of infrared-divergent gluon-loop diagrams, known as the hard thermal loop (HTL) resummation.
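As a parenthetical consistency check of the coefficients entering eq.~(\ref{33}) above: the quoted numbers $2.82$, $5.08$ and $2.06$ imply an effective coupling $\alpha_s^{eff}\approx 0.15$ (this value is our inference from the quoted coefficients, not stated in the text):

```python
eps0_coef = 2.82   # lowest gluelump eigenvalue coefficient (quoted in the text)
oge_coef  = 5.08   # OGE correction coefficient (quoted in the text)
mD_coef   = 2.06   # net Debye-mass coefficient of eq.(33)

# effective coupling implied by the quoted numbers (our inference, ~0.15)
alpha_eff = (eps0_coef - mD_coef) / oge_coef

def m_D(sigma_H):
    # Debye mass from eq.(33): sigma_H in GeV^2, result in GeV
    return mD_coef * sigma_H**0.5
```

For instance, $\sigma_H=0.36$~GeV$^2$ gives $m_D\approx 1.24$~GeV, entirely fixed by $\sigma_H$ with no fitting constant.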
The perturbation theory of the qgp, or of the purely gluonic plasma at $T>T_c$, operates with amplitudes $A_n$ corresponding to diagrams with $n$ vertices, which are produced by the term $L_3= g \partial_\mu a^a_\nu f^{abc} a^b_\mu a^c_\nu$ in the Lagrangian. There are numerous studies in this field, see e.g. \cite{45,46} and the recent review \cite{47}. In this perturbative approach one does not exploit the notion of the CMC in the deconfined phase of QCD, probably not realizing that the deconfined phase implies the absence of the CEC but not of the CMC. Instead, one introduces in this area the notion of the ``magnetic mass'' of the gluon to prevent the basic divergences of the theory without the CMC. The latter were identified by Linde \cite{48} and are known as the ``Linde problems''. The resolution of these problems with account of the CMC was given in \cite{35} and can be briefly described as follows. The main problem of perturbative QCD thermodynamics (PQCDTh), which essentially operates in 3$d$ space, is the IR, or large-distance, divergence, since the gluon propagator $G(x,y) \sim \frac{T}{|\mbox{\boldmath${\rm x}$}-\mbox{\boldmath${\rm y}$}|}$ is a slowly decreasing function at large distances $X$. Correspondingly, the $n$-th order amplitude behaves as $A_n \sim g^n T^{n/2 +1} X^{n/2-3}$. One can see that the diagrams with $n > 6$ gluon vertices diverge at large $X$ -- this is the Linde problem 1. One can also see that the CMC easily solves this problem. Indeed, in 3$d$ the Wilson loops, which cover the whole diagram surface, obey the screening law $W(C)= \exp(-\sigma_H S_{\min})$, where $S_{\min}$ is the area of the surface, and $S_{\min} \sim X^2$. As a result the amplitude acquires the form \begin{equation} A^{\rm conf}_n= g^n T^{n/2+ 1} \int (dX)^{n/2- 3} \exp(-\sigma_H X^2) \sim g^n T^{n/2+1} (\sqrt{\sigma_H})^{-(n/2-3)} {\rm const}.
\label{34}\end{equation} Now, taking into account (\ref{27}), $\sqrt{\sigma_H}= g^2 T\, {\rm const}$, one comes to the conclusion that all diagrams with $n> 6$ yield $A^{\rm conf}_n= g^6 T^4 c_n$ (the Linde problem 2). As a result one should sum up all the diagrams with $n> 6$, as is done in \cite{35}, which are made finite due to the CMC. \section{Conclusions} We have considered the basic picture of the confined and deconfined matter, which is well described in terms of the colorelectric and colormagnetic field correlators. The latter are obtained self-consistently from the nonperturbative QCD vacuum with its basic characteristic -- the gluon condensate $ G_2= \frac{\alpha_s}{\pi} \langle(F_{\mu \nu})^2\rangle$, which can be taken at the standard value $ G_2= 0.012$ GeV$^4$ \cite{49}. As was shown in \cite{50}, $G_2$ defines the confinement characteristics in the confined phase (CEC and CMC), with the energy density \cite{49} $\epsilon_{\rm vac}= - \frac{11-2/3 n_f}{32} G_2$, while the energy density in the deconfined (only the CMC) region is $\frac12 \epsilon_{\rm vac}$. The corresponding pressure $P= -F$ in the confined phase can be written as $P({\rm conf})= |\epsilon_{\rm vac}|+ T^4 p_{\rm hadr}$, and the pressure in the deconfined region is $P({\rm deconf})= \frac12 |\epsilon_{\rm vac}| + T^4 (p_q + p_g)$. Now from the relation $P_{\rm conf}(T_c)= P_{\rm deconf}(T_c)$ one obtains the equation for the transition temperature $T_c$ \cite{13,3} via the standard expressions for $p_{\rm hadr},p_q,p_g$ (with or without additional interactions, which induce only small corrections in the $T_c$ values, since $\epsilon_{\rm vac}$ is the dominant magnitude), \begin{equation} T_c= \left( \frac{\frac12 |\epsilon_{\rm vac}|}{p_q + p_g - p_{\rm hadr}}\right)^{1/4}. \label{35}\end{equation} As a result (taking the free quark, gluon and hadron pressures), one obtains \cite{13,3} $T_c= 240,150,134$ MeV for $n_f= 0,2,4$, which is very close to the lattice data $T_c=240,146,131$ MeV.
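Returning to the Linde counting of eq.~(\ref{34}): the statement that every diagram with $n>6$ vertices collapses to the same order $g^6 T^4$ is a matter of exponent bookkeeping, which can be checked mechanically (exact rational arithmetic; the range of $n$ is arbitrary):

```python
from fractions import Fraction

def exponents(n):
    # A_n ~ g^n T^{n/2+1} (sqrt(sigma_H))^{-(n/2-3)}, eq.(34),
    # with sqrt(sigma_H) ~ g^2 T from eq.(27)
    half_n = Fraction(n, 2)
    g_exp = n - 2 * (half_n - 3)          # g^n times g^{-2(n/2-3)}
    T_exp = (half_n + 1) - (half_n - 3)   # T^{n/2+1} times T^{-(n/2-3)}
    return g_exp, T_exp

# every n > 6 gives g^6 T^4: the whole tower contributes at the same order
collapsed = {n: exponents(n) for n in range(7, 31)}
```

All powers of $g$ and $T$ cancel except $g^6 T^4$, which is precisely why the infinite set of graphs must be summed, and why the CMC screening factor that renders each of them finite is essential.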
One can see that $\epsilon({\rm CMC})= \frac12 \epsilon_{\rm vac}$ plays the main role in determining the deconfinement phase transition. As was shown above, the role of the CMC is even more important. Namely, as shown in section 3, in the confined region the CMC ensures an important part of the interaction: (1) without the CMC the nonperturbative part of the spin-orbit force (the Thomas term) would have the opposite sign (see (\ref{18}),(\ref{25})); (2) the CMC yields the important string correction $\Delta H_l= - \frac{\sigma \mbox{\boldmath${\rm L}$}^2}{6 m^2 r}$. We also discussed the possibility of weakly bound hadrons due to the CMC above $T_c$ (section 4). Finally, in the deconfined region the CMC ensures three major effects in the qgp physics: ~~I. The CMC creates its own factor $G_3(s)$, (\ref{30}), in the qgp pressure (\ref{31}), which is the main contribution (along with the Polyakov line) to QCD thermodynamics, as supported by the lattice calculations (Figs. 1, 2 in section 4). ~~ II. The CMC ($\sigma_H(T)= {\rm const}~ T^2$) creates the Debye mass $m_D= 2.06 \sqrt{\sigma_H}$ (\ref{33}). ~~ III. The CMC resolves the Linde problem \cite{35}, which allows one to sum up the infinite set of graphs and makes the total sum finite. The author is greatly indebted to A. M. Badalian for advice and contributions to section 3 of the paper, and to N. P. Igumnova for help in preparing the manuscript.
\section{Introduction} \label{sec:intro} Face alignment is aimed at locating a group of pre-defined facial landmarks in images. Robust face alignment based on deep learning has attracted increasing attention in recent years, and it is a fundamental algorithm in many face-related applications such as face reenactment \cite{FreeNet}, face swapping \cite{AHF} and driver fatigue detection \cite{Fatigue}. Despite recent progress, it still remains a challenging problem, especially for images with heavy occlusion, profile view and illumination variation. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Fig1.pdf} \caption{The proposed coarse-to-fine framework leverages sparse local patches for robust face alignment. The sparse local patches are cropped according to the landmarks of the previous stage and fed into the same SLPT to predict the facial landmarks. Moreover, the patch size narrows down as the stages proceed, enabling the local features to evolve in a pyramidal form.} \label{fig1} \end{figure} The inherent relation between facial landmarks plays an important role in face alignment, since the human face has a regular structure. Although heatmap regression methods have achieved impressive performance \cite{MOHP, LUVLI, Awing, DeCaFA, HRnet} in recent years, they still ignore the inherent relation, because convolutional neural network (CNN) kernels focus locally and thus fail to capture the relations of landmarks farther away in a global manner. In particular, they take the pixel coordinate with the highest intensity in the output heatmap as the optimal landmark, which inevitably introduces a quantization error, especially for the commonly used downsampled heatmaps. Coordinate regression methods \cite{SCDF, SAN, Wing, TCDNN, LAB, DVLN, SRT} have an innate potential to learn the relation, since they regress the coordinates directly from the global feature via fully-connected (FC) layers.
Nevertheless, a coherent relation should be learned together with the local appearance, while coordinate regression methods lose the local feature by projecting the global feature into FC layers. To address the aforementioned problems, we propose a Sparse Local Patch Transformer (SLPT). Instead of predicting the coordinates from the full feature map like DETR \cite{DETR}, the SLPT first generates the representation for each landmark from a local patch. Then, a series of learnable queries, called \textit{landmark queries}, are used to aggregate the representations. Based on the cross-attention mechanism of the transformer, the SLPT learns an adaptive adjacency matrix in each layer. Finally, the subpixel coordinate of each landmark within its corresponding patch is predicted independently by an MLP. Due to the use of sparse local patches, the number of input tokens decreases significantly compared to other vision transformers \cite{DETR, VIT}. To further improve the performance, a coarse-to-fine framework is introduced to work with the SLPT, as shown in Fig.~\ref{fig1}. Similar to cascaded shape regression methods \cite{CFSS, DAN, DAC-CSR}, the proposed framework optimizes a group of initial landmarks toward the target landmarks over several stages. The local patches in each stage are cropped based on the initial landmarks or the landmarks predicted in the former stage, and the patch size for a given stage is $1/2$ that of its former stage. As a result, the local patches evolve in a pyramidal form and get closer to the target landmarks for fine-grained local features. To verify the effectiveness of the SLPT and the proposed framework, we carry out experiments on three popular benchmarks, WFLW \cite{LAB}, 300W \cite{300W} and COFW \cite{COFW}. The results show that the proposed method significantly outperforms other state-of-the-art methods in terms of diverse metrics with much lower computational complexity.
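The core aggregation step described above can be sketched in a few lines. This is a minimal single-head NumPy illustration of landmark queries attending to patch representations (the dimensions, random stand-in tensors and function names are our own; the actual SLPT uses trained multi-head layers):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 98, 64                        # 98 WFLW landmarks; embedding dim is our choice

patch_emb = rng.normal(size=(N, d))  # stand-ins for the embedded local patches
queries   = rng.normal(size=(N, d))  # stand-ins for the learnable landmark queries

def cross_attention(Q, K, V):
    # single-head scaled dot-product attention; each row of W acts as an
    # adaptive adjacency over all landmark representations
    S = Q @ K.T / np.sqrt(Q.shape[1])
    S = np.exp(S - S.max(axis=1, keepdims=True))   # numerically stable softmax
    W = S / S.sum(axis=1, keepdims=True)
    return W @ V, W

out, adj = cross_attention(queries, patch_emb, patch_emb)
# adj[i] is landmark i's (input-dependent) relation weight to every landmark
```

Unlike a GCN with a fixed adjacency matrix, the weights in `adj` are recomputed from the input for every sample, which is exactly the ``adaptive'' property exploited by the SLPT.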
Moreover, we visualize the attention maps of the SLPT and the inner product matrix of the landmark queries to demonstrate that the SLPT learns the inherent relation of facial landmarks. The main contributions of this work can be summarized as: \begin{itemize} \item We introduce a novel transformer, the Sparse Local Patch Transformer, to explore the inherent relation between facial landmarks based on the attention mechanism. The adaptive inherent relation learned by the SLPT enables the model to achieve SOTA performance with much lower computational complexity. \item We introduce a coarse-to-fine framework to work with the SLPT, which enables the local patches to evolve in a pyramidal form and get closer to the target landmarks for fine-grained features. \item Extensive experiments are conducted on three popular benchmarks: WFLW, 300W and COFW. The results illustrate that the proposed method learns the inherent relation of facial landmarks via the attention mechanism and performs at the SOTA level. \end{itemize} \section{Related Work} In the early stage of face alignment, the mainstream methods \cite{CLM, SDM, Face3000, CFSS, SCDF, DAC-CSR, COFW, DCFE} regressed facial landmarks directly from local features with classical machine learning algorithms such as random forests. With the development of CNNs, CNN-based face alignment methods have achieved impressive performance. They can be roughly divided into two categories: heatmap regression methods and coordinate regression methods. \subsection{Coordinate Regression Method} Coordinate regression methods \cite{TCDNN, MTCNN, DVLN, Wing} regress the coordinates of landmarks directly from the feature map via FC layers. To further improve the robustness, diverse cascaded networks \cite{MDM, DAN} and recurrent networks \cite{RAR} have been proposed to perform face alignment in multiple stages. Although coordinate regression methods have an innate potential to learn the inherent relation, they commonly require a huge number of samples for training.
To address this problem, Qian et al. \cite{AVS} and Dong et al. \cite{SAN} expand the number of training samples by style transfer; Browatzki et al. \cite{3FabRec} and Dong et al. \cite{SRT} leverage unlabeled datasets to train the model. In recent years, state-of-the-art works have employed the structure information of the face as prior knowledge for better performance. Lin et al. \cite{SCDF} and Li et al. \cite{SDL} model the interaction between landmarks with a graph convolutional network (GCN). However, the adjacency matrix of a GCN is fixed during inference and cannot adapt on a case-by-case basis. Learning an \textit{adaptive} inherent relation is crucial for robust face alignment. Unfortunately, there is no work yet on this topic, and we propose a method to fill this gap. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Fig2.pdf} \caption{An overview of the SLPT. The SLPT crops local patches from the feature map according to the facial landmarks of the previous stage. Each patch is then embedded into a vector that can be viewed as the representation of the corresponding landmark. Subsequently, the representations are supplemented with the structure encoding to obtain the relative position in a regular face. A fixed number of landmark queries are then input into the decoder, attending to the vectors to learn the inherent relation between landmarks. Finally, the outputs are fed into a shared MLP to estimate the position of each facial landmark independently. The rightmost images demonstrate the adaptive inherent relation of different samples: we connect each point to the point with the highest cross-attention weight in the first inherent relation layer.} \label{fig2} \end{figure*} \subsection{Heatmap Regression Method} Heatmap regression methods \cite{Hourglass, Dunet, HRnet, DeCaFA} output an intermediate heatmap for each landmark and take the pixel with the highest intensity as the optimal output.
This leads to quantization errors, since the heatmap is commonly much smaller than the input image. To eliminate the error, Kumar et al. \cite{LUVLI} estimate the uncertainty of predicted landmark locations; Lan et al. \cite{HIH} adopt an additional decimal heatmap for subpixel estimation; Huang et al. \cite{ADNet} further regress the coordinate from an anisotropic attention mask generated from heatmaps. Moreover, heatmap regression methods also ignore the relation between landmarks. To construct the relation between neighboring points, Wu et al. \cite{LAB} and Wang et al. \cite{Awing} take advantage of facial boundaries as prior knowledge; Zou et al. \cite{HLSE} cluster landmarks with a graph model to provide structural constraints. However, these methods still cannot explicitly model an inherent relation between distant landmarks. The recently proposed vision transformer \cite{VIT} enables the model to attend to areas over long distances. Moreover, the attention mechanism in the transformer can generate adaptive global attention for different tasks, such as object detection \cite{DETR, Deformable_DETR} and human pose estimation \cite{TokenPose}, and in principle it can also learn an adaptive inherent relation for face alignment. In this paper, we demonstrate the capability of the SLPT for learning this relation. \section{Method} \subsection{Sparse Local Patch Transformer} As shown in Fig.~\ref{fig2}, the Sparse Local Patch Transformer (SLPT) consists of three parts: patch embedding \& structure encoding, inherent relation layers and prediction heads. \textbf{Patch embedding \& structure encoding}: ViT \cite{VIT} divides an image or a feature map $\bm{I} \in \mathbb{R}^{H_I \times W_I \times C}$ into a grid of $\frac{H_I}{P_h} \times \frac{W_I}{P_w}$ patches of size $P_h \times P_w$ and maps each patch into a $d$-dimensional vector as the input.
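As a point of reference for the contrast drawn next, the ViT-style grid partition and linear embedding just described can be sketched as follows (a minimal NumPy illustration; the sizes and the random projection matrix are illustrative stand-ins, not trained weights):

```python
import numpy as np

# Illustrative sizes: a 64x64 feature map with C = 256 channels,
# split into a 16x16 grid of 4x4 patches, each embedded into d = 192 dims.
H_I, W_I, C = 64, 64, 256
P_h, P_w, d = 4, 4, 192

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((H_I, W_I, C))
W_embed = rng.standard_normal((P_h * P_w * C, d)) * 0.01  # linear projection

# Partition into (H_I/P_h) x (W_I/P_w) patches and flatten each one.
patches = feature_map.reshape(H_I // P_h, P_h, W_I // P_w, P_w, C)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, P_h * P_w * C)

tokens = patches @ W_embed   # one d-dimensional token per grid patch
print(tokens.shape)          # (256, 192): 16*16 tokens of dimension d
```

The point of the sketch is that the token count is fixed by the grid, i.e. $\frac{H_I}{P_h} \times \frac{W_I}{P_w}$ tokens regardless of how many landmarks are predicted.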
Different from ViT, for each landmark, the SLPT crops a local patch of fixed size $\left( P_h, P_w \right)$ from the feature map as its supporting patch, whose center is located at the landmark. Then, the patches are resized to $K \times K$ by linear interpolation and mapped into a series of vectors by a CNN layer. Hence, each vector can be viewed as the representation of the corresponding landmark. Besides, to retain the relative position of the landmarks in a regular face shape (the structure information), we supplement the representations with a series of learnable parameters called \textit{structure encodings}. As shown in Fig.~\ref{fig3}, the SLPT learns to encode the distance between landmarks within the regular facial structure in the similarity of the encodings: each encoding has a high similarity with the encodings of neighboring landmarks (e.g., the left eye and the right eye). \textbf{Inherent relation layer}: Inspired by the Transformer \cite{Transformer}, we propose inherent relation layers to model the relation between landmarks. Each layer consists of three blocks, a multi-head self-attention (MSA) block, a multi-head cross-attention (MCA) block and a multilayer perceptron (MLP) block, and an additional LayerNorm (LN) is applied before every block. Based on the self-attention mechanism of the MSA block, the information of the queries interacts adaptively, learning a \textit{query-query} inherent relation. Supposing the $l$-th MSA block has $H$ heads, the $C_I$-dimensional input $\bm{T}^l$ and landmark queries $\bm{Q}$ are divided equally into $H$ sequences ($\bm{T}^l$ is a zero matrix in the first layer). The self-attention weight of the $h$-th head $\bm{A}_h$ is calculated by: \begin{equation} \bm{A}_h=\mathrm{softmax}\left ( \frac{\left(\bm{T}_h^{l} + \bm{Q}_h\right) \bm{W}^q_h \left( \left(\bm{T}_h^{l} + \bm{Q}_h\right)\bm{W}^k_h\right)^T}{\sqrt{C_h}}\right ), \end{equation} where $\bm{W}^q_h$ and $\bm{W}^k_h$ $\in \mathbb{R} ^ {C_h \times C_h}$ are the learnable parameters of two linear layers.
$\bm{T}^{l}_{h} \in \mathbb{R} ^{N \times C_h}$ and $\bm{Q}_h \in \mathbb{R} ^{N \times C_h}$ are the input and the landmark queries of the $h$-th head, respectively, with dimension $C_h=C_I/H$. Then, the MSA block can be formulated as: \begin{equation} \mathrm{MSA}\left(\bm{T}^{l} \right) = \left[ \bm{A}_1\bm{T}^l_1\bm{W}^v_1;...;\bm{A}_H\bm{T}^l_H\bm{W}^v_H \right]\bm{W}_P, \end{equation} where $\bm{W}_h^v \in \mathbb{R}^{C_h \times C_h}$ and $\bm{W}_P \in \mathbb{R}^{C_I \times C_I}$ are also the learnable parameters of linear layers. The MCA block aggregates the representations of the facial landmarks based on the cross-attention mechanism, learning an adaptive \textit{representation-query} relation. As shown in the rightmost images of Fig.~\ref{fig2}, by taking advantage of the cross attention, each landmark can employ neighboring landmarks for a coherent prediction, and an occluded landmark can be predicted according to the representations of the visible landmarks. Similar to the MSA block, the MCA block also has $H$ heads, and the attention weight of the $h$-th head $\bm{A}_h^\prime$ is calculated by: \begin{equation} \bm{A}_h^\prime=\mathrm{softmax}\left ( \frac{\left(\bm{T}_h^{\prime l} + \bm{Q}_h\right) \bm{W}^{\prime q}_h \left( \left(\bm{R}_h + \bm{P}_h\right)\bm{W}^{\prime k}_h\right)^T}{\sqrt{C_h}}\right ), \end{equation} where $\bm{W}^{\prime q}_h$ and $\bm{W}^{\prime k}_h \in \mathbb{R} ^ {C_h \times C_h}$ are the learnable parameters of two linear layers in the $h$-th head, $\bm{T}_h^{\prime l} \in \mathbb{R} ^{N \times C_h}$ is the input of the $l$-th MCA block, $\bm{P}_h \in \mathbb{R} ^{N \times C_h}$ are the structure encodings and $\bm{R}_h \in \mathbb{R} ^{N \times C_h}$ are the landmark representations.
The MCA block can be formulated as: \begin{equation} \mathrm{MCA}\left(\bm{T}^{\prime l} \right) = \left[ \bm{A}^{\prime}_1\bm{T}^{\prime l}_1\bm{W}^{\prime v}_1;...;\bm{A}^{\prime}_H\bm{T}^{\prime l}_H\bm{W}^{\prime v}_H \right]\bm{W}^{\prime}_P, \end{equation} where $\bm{W}_h^{\prime v} \in \mathbb{R}^{C_h \times C_h}$ and $\bm{W}^{\prime}_P \in \mathbb{R}^{C_I \times C_I}$ are also the learnable parameters of linear layers in the MCA block. Supposing $N$ pre-defined landmarks are predicted, the computational complexities of the MCA with sparse local patches, $\Omega(S)$, and with the full feature map, $\Omega(F)$, are: \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Fig3.pdf} \caption{Cosine similarity of the structure encodings of the SLPT learned from a dataset with 98 landmark annotations. High cosine similarities are observed for points that are close in the regular face structure.} \label{fig3} \end{figure} \begin{equation} \Omega(S)=4HNC_h^2 + 2HN^2C_h, \end{equation} \begin{equation} \Omega(F)=\left(2N+2\frac{W_IH_I}{P_wP_h} \right)HC_h^2 + 2NH\frac{W_IH_I}{P_wP_h}C_h. \end{equation} Compared to using the full feature map, the number of representations decreases from $\frac{H_I}{P_h} \times \frac{W_I}{P_w}$ to $N$ (with the same input size, $\frac{H_I}{P_h} \times \frac{W_I}{P_w}$ is $16 \times 16$ in the related framework \cite{DETR}), which decreases the computational complexity significantly. For a 29-landmark dataset \cite{COFW}, $\Omega(S)$ is only about $1/5$ of $\Omega(F)$ ($H = 8$ and $C_h = 32$ in our experiments). \textbf{Prediction head}: The prediction head consists of a LayerNorm to normalize the input and an MLP to predict the result. The output of the inherent relation layer is the local position of the landmark with respect to its supporting patch.
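As a quick arithmetic check of the complexity comparison above, Eqs. (5) and (6) can be evaluated directly with the stated settings ($N=29$, $H=8$, $C_h=32$, and a $16 \times 16$ token grid for the full-feature-map variant):

```python
# Evaluate the MCA complexity formulas (Eqs. (5)-(6)) for a 29-landmark setup.
N, H, C_h = 29, 8, 32        # landmarks, attention heads, per-head dimension
tokens = 16 * 16             # W_I * H_I / (P_w * P_h) for the full feature map

omega_S = 4 * H * N * C_h ** 2 + 2 * H * N ** 2 * C_h
omega_F = (2 * N + 2 * tokens) * H * C_h ** 2 + 2 * N * H * tokens * C_h

print(omega_S, omega_F, omega_S / omega_F)
```

With these constants the ratio comes out below $1/5$, in line with the reduction claimed above; the saving grows as the landmark count $N$ shrinks relative to the token-grid size.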
Based on the local position on the $i$-th patch $\left(t_x^i, t_y^i\right)$, the global coordinate of the $i$-th landmark $\left( x^i, y^i\right)$ can be calculated by: \begin{equation} \begin{aligned} x^i &= x^i_{lt} + w^i t^i_x,\\ y^i &= y^i_{lt} + h^i t^i_y, \end{aligned} \end{equation} where $\left(x^i_{lt}, y^i_{lt}\right)$ is the top-left corner of the $i$-th supporting patch and $(w^i, h^i)$ is its size. \begin{algorithm}[t!] \caption{Training pipeline of the coarse-to-fine framework} \label{alg1} \begin{algorithmic}[1] \REQUIRE Training image $\bm{I}$, initial landmarks $\bm{S}_0$, backbone network $B$, SLPT $T$, loss function $L$, ground truth $\bm{S}_{gt}$, stage number $N_{stage}$ \WHILE{the training epoch is less than a specific number} \STATE Forward $B$ for the feature map by $\bm{F}=B\left( \bm{I}\right)$; \STATE Initialize the local patch size $\left(P_w, P_h\right) \leftarrow \left(\frac{W}{4}, \frac{H}{4}\right)$ \FOR{ $i$ $\leftarrow$ 1 \TO $N_{stage}$ } \STATE Crop local patches $\bm{P}$ from $\bm{F}$ according to the former landmarks $\bm{S}_{i-1}$; \STATE Resize the patches from $\left(P_w, P_h\right)$ to $K \times K$; \STATE Forward $T$ for the landmarks by $\bm{S}_{i}=T\left( \bm{P} \right)$; \STATE Reduce the patch size $\left(P_w, P_h\right)$ by half; \ENDFOR \STATE Minimize $L\left(\bm{S_{gt}}, \bm{S_{1}}, \bm{S_{2}}, \cdots , \bm{S}_{N_{stage}} \right)$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Coarse-to-fine locating} To further improve the performance and robustness of the SLPT, we introduce a coarse-to-fine framework, trained in an end-to-end manner, to work with the SLPT. The pseudo-code in Algorithm~\ref{alg1} shows the training pipeline of the framework. It enables a group of initial facial landmarks $\bm{S}_0$, calculated from the mean face of the training set, to converge gradually to the target facial landmarks over several stages. Each stage takes the previous landmarks as centers to crop a series of patches.
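The inference-time version of this loop, together with the local-to-global mapping of Eq. (7), can be sketched as follows (a minimal NumPy sketch; `slpt_predict` is a hypothetical stand-in for the trained SLPT and here simply returns the patch centers):

```python
import numpy as np

def slpt_predict(left_top):
    """Hypothetical stand-in for the SLPT: predict local positions
    t in [0, 1]^2 for every patch; returning 0.5 keeps each landmark
    at its patch center."""
    return np.full((len(left_top), 2), 0.5)

def coarse_to_fine(S0, img_size=256, n_stages=3):
    S = S0.copy()                      # (N, 2) initial landmarks (mean face)
    w = h = img_size / 4               # initial patch size W/4 x H/4
    for _ in range(n_stages):
        # Crop a (w, h) patch centered at each current landmark.
        left_top = S - np.array([w / 2, h / 2])
        t = slpt_predict(left_top)     # local prediction per patch
        # Eq. (7): map local patch coordinates back to image coordinates.
        S = left_top + t * np.array([w, h])
        w, h = w / 2, h / 2            # halve the patch size each stage
    return S

S0 = np.array([[100.0, 120.0], [150.0, 118.0]])
print(coarse_to_fine(S0))   # the center-predicting stand-in leaves S0 fixed
```

With a trained predictor, each stage moves the landmarks within a shrinking patch, which is exactly the pyramidal refinement described next.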
Then, the patches are resized to a fixed size $K \times K$ and fed into the SLPT to predict the local points on the supporting patches. The large patch size in the initial stage gives the SLPT a large receptive field, which prevents the patches from deviating from the target landmarks. The patch size in each following stage is $1/2$ of that of its former stage, which enables the local patches to extract fine-grained features and evolve into a pyramidal form. By taking advantage of the pyramidal form, we observe a significant improvement for the SLPT (see Section 4.5). \begin{table}[t!] \centering \begin{tabular}{m{2.3cm}<{\centering}|m{1.3cm}<{\centering}m{1.3cm}<{\centering}m{1.3cm}<{\centering}} \hline Method & NME(\%)$\downarrow$ & FR$_{0.1}$(\%)$\downarrow$ & AUC$_{0.1}$$\uparrow$\\ \hline LAB \cite{LAB} & 5.27 & 7.56 & 0.532 \\ SAN \cite{SAN} & 5.22 & 6.32 & 0.535 \\ Coord$^\star$ \cite{HRnet} & 4.76 & 5.04 & 0.549 \\ DETR$^\dag$ \cite{DETR} & 4.71 & 5.00 & 0.552 \\ Heatmap$^\star$ \cite{HRnet} & 4.60 & 4.64 & 0.524 \\ AVS+SAN \cite{AVS} & 4.39 & 4.08 & 0.591 \\ LUVLi \cite{LUVLI} & 4.37 & 3.12 & 0.557 \\ AWing \cite{Awing} & 4.36 & 2.84 & 0.572 \\ SDFL$^\star$ \cite{SCDF} & 4.35 & {\color{red}$\bm{2.72}$} & 0.576 \\ SDL$^\star$ \cite{SDL} & 4.21 & 3.04 & 0.589 \\ HIH \cite{HIH} & 4.18 & 2.84 & {\color{blue}$\bm{0.597}$} \\ ADNet \cite{ADNet} & {\color{red}$\bm{4.14}$} & {\color{red}$\bm{2.72}$} & {\color{red}$\bm{0.602}$} \\ \hline SLPT$^\ddag$ & {\color{blue} $\bm{4.20}$} & 3.04 & 0.588 \\ SLPT$^\dag$ & {\color{red}$\bm{4.14}$} & {\color{blue} $\bm{2.76}$} & 0.595 \\ \hline \end{tabular} \caption{Performance comparison of the SLPT and the state-of-the-art methods on WFLW. The normalization factor is inter-ocular and the threshold for FR is set to 0.1.
Key: [{\color{red} \textbf{Best}}, {\color{blue} \textbf{Second Best}}, $^\star$=HRNetW18C, $^\dag$=HRNetW18C-lite, $^\ddag$=ResNet34]} \label{Tabal1} \end{table} \subsection{Loss Function} We employ the normalized L2 loss to provide the supervision for all stages of the coarse-to-fine framework. Moreover, similar to other works \cite{Hourglass, Dunet}, providing additional supervision for the intermediate outputs during training is also helpful. Therefore, we feed the intermediate output of each inherent relation layer into a shared prediction head. The loss function is written as: \begin{equation} L = \frac{1}{SDN} \sum_{i=1}^{S} \sum_{j=1}^{D} \sum_{k=1}^{N} \frac{\left\| \left( x_{gt}^k, y_{gt}^k\right) - \left( x^{ijk}, y^{ijk}\right) \right\|_2}{d}, \end{equation} where $S$ and $D$ indicate the number of coarse-to-fine stages and inherent relation layers, respectively. $\left(x_{gt}^k, y_{gt}^k\right)$ is the labeled coordinate of the $k$-th point, and $\left(x^{ijk}, y^{ijk}\right)$ is the coordinate of the $k$-th point predicted by the $j$-th inherent relation layer in the $i$-th stage. $d$ is the distance between the outer eye corners, which acts as a normalization factor. \section{Experiment} \subsection{Datasets} Experiments are conducted on three popular benchmarks: WFLW \cite{LAB}, 300W \cite{300W} and COFW \cite{COFW}. \textbf{WFLW} is a very challenging dataset that consists of 10,000 images: 7,500 for training and 2,500 for testing. It provides 98 manually annotated landmarks and rich attribute labels, such as profile face, heavy occlusion, make-up and illumination. \textbf{300W} is the most commonly used dataset; it includes 3,148 images for training and 689 images for testing. The training set consists of the full set of AFW \cite{AFW} and the training subsets of HELEN \cite{HELEN} and LFPW \cite{LFPW}.
The test set is further divided into a challenging subset of 135 images (the IBUG fullset \cite{300W}) and a common subset of 554 images (the test subsets of HELEN and LFPW). Each image in 300W is annotated with 68 facial landmarks. \textbf{COFW} mainly consists of samples with heavy occlusion and profile faces. The training set includes 1,345 images, and each image is provided with 29 annotated landmarks. The test set has two variants: one provides 29 annotated landmarks per face image (COFW), while the other provides 68 annotated landmarks per face image (COFW68 \cite{COFW68}). Both contain 507 images. We employ the COFW68 set for \textit{cross}-dataset validation. \begin{table}[t!] \begin{tabular}{m{2.2cm}<{\centering}|m{1.2cm}<{\centering}m{1.6cm}<{\centering}m{1.2cm}<{\centering}} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Inter-Ocular NME (\%) $\downarrow$} \\ & Common & Challenging & Fullset \\ \hline SAN \cite{SAN} & 3.34 & 6.60 & 3.98 \\ Coord$^\star$ \cite{HRnet} & 3.05 & 5.39 & 3.51 \\ LAB \cite{LAB} & 2.98 & 5.19 & 3.49 \\ DeCaFA \cite{DeCaFA} & 2.93 & 5.26 & 3.39 \\ HIH \cite{HIH} & 2.93 & 5.00 & 3.33 \\ Heatmap$^\star$ \cite{HRnet} & 2.87 & 5.15 & 3.32 \\ SDFL$^\star$ \cite{SCDF} & 2.88 & 4.93 & 3.28 \\ HG-HSLE \cite{HLSE} & 2.85 & 5.03 & 3.28 \\ LUVLi \cite{LUVLI} & 2.76 & 5.16 & 3.23 \\ AWing \cite{Awing} & 2.72 & {\color{red} \textbf{4.53}} & 3.07 \\ SDL$^\star$ \cite{SDL} & {\color{blue} \textbf{2.62}} & 4.77 & {\color{blue} \textbf{3.04}} \\ ADNet \cite{ADNet} & {\color{red} \textbf{2.53}} & {\color{blue} \textbf{4.58}} & {\color{red} \textbf{2.93}} \\ \hline SLPT$^\ddag$ & 2.78 & 4.93 & 3.20 \\ SLPT$^\dag$ & 2.75 & 4.90 & 3.17 \\ \hline \end{tabular} \caption{Performance comparison of the SLPT and the state-of-the-art methods on the 300W common subset, challenging subset and fullset.
Key: [{\color{red} \textbf{Best}}, {\color{blue} \textbf{Second Best}}, $^\star$=HRNetW18C, $^\dag$=HRNetW18C-lite, $^\ddag$=ResNet34]} \label{Tabal2} \end{table} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Fig4.pdf} \caption{Visualization of the ground truth and the face alignment results of the SLPT, a heatmap regression method (HRNetW18C) and a coordinate regression method (HRNetW18C) on faces with blur, heavy occlusion and profile view.} \label{fig4} \end{figure} \subsection{Evaluation Metrics} Following related works \cite{LUVLI, Awing, SCDF}, we evaluate the proposed method with the standard metrics: Normalized Mean Error (NME), Failure Rate (FR) and Area Under Curve (AUC). \textbf{NME} is defined as: \begin{equation} NME\left(\bm{S}, \bm{S}_{gt}\right) = \frac{1}{N}\sum_{i=1}^{N}\frac{\left\|\bm{p}^i-\bm{p}_{gt}^i\right\|_2}{d} \times 100\%, \end{equation} where $\bm{S}$ and $\bm{S}_{gt}$ denote the predicted and annotated coordinates of the landmarks, respectively, and $\bm{p}^i$ and $\bm{p}^i_{gt}$ indicate the coordinate of the $i$-th landmark in $\bm{S}$ and $\bm{S}_{gt}$. $N$ is the number of landmarks, and $d$ is the reference distance used to normalize the error; $d$ can be the distance between the outer eye corners (inter-ocular) or between the pupil centers (inter-pupil). \textbf{FR} indicates the percentage of images in the test set whose NME is higher than a certain threshold. \textbf{AUC} is calculated based on the Cumulative Error Distribution (CED) curve, which indicates the fraction of test images whose NME (\%) is less than or equal to the value on the horizontal axis. The AUC is the area under the CED curve from zero to the FR threshold. \begin{table}[t!]
\centering \begin{tabular}{m{2.2cm}<{\centering}|m{1.05cm}<{\centering}m{0.95cm}<{\centering}|m{1.05cm}<{\centering}m{0.95cm}<{\centering}} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Inter-Ocular} & \multicolumn{2}{c}{Inter-Pupil} \\ & NME(\%)$\downarrow$ & FR(\%)$\downarrow$ & NME(\%)$\downarrow$ & FR(\%)$\downarrow$ \\ \hline DAC-CSR \cite{DAC-CSR} & 6.03 & 4.73 & - & - \\ LAB \cite{LAB} & 3.92 & 0.39 & - & - \\ Coord$^\star$ \cite{HRnet} & 3.73 & 0.39 & - & - \\ SDFL$^\star$ \cite{SCDF} & 3.63 & {\color{red} $\bm{0.00}$} & - & - \\ Heatmap$^\star$ \cite{HRnet} & 3.45 & {\color{blue} $\bm{0.20}$} & - & - \\ Human \cite{COFW} & - & - & 5.60 & - \\ TCDCN \cite{TCDNN} & - & - & 8.05 & - \\ Wing \cite{Wing} & - & - & 5.44 & 3.75 \\ DCFE \cite{DCFE} & - & - & 5.27 & 7.29 \\ AWing \cite{Awing} & - & - & 4.94 & {\color{blue} $\bm{0.99}$} \\ ADNet \cite{ADNet} & - & - & {\color{red} $\bm{4.68}$} & {\color{red} $\bm{0.59}$} \\ \hline SLPT$^\ddag$ & {\color{blue} $\bm{3.36}$} & 0.59 & 4.85 & 1.18 \\ SLPT$^\dag$ & {\color{red} $\bm{3.32}$} & {\color{red} $\bm{0.00}$} & {\color{blue} $\bm{4.79}$} & 1.18 \\ \hline \end{tabular} \caption{NME and FR$_{0.1}$ comparisons under inter-ocular and inter-pupil normalization for \textit{within}-dataset validation. The threshold for failure rate (FR) is set to 0.1. Key: [{\color{red} \textbf{Best}}, {\color{blue} \textbf{Second Best}}, $^\star$=HRNetW18C, $^\dag$=HRNetW18C-lite, $^\ddag$=ResNet34]} \label{Tabal3} \end{table} \subsection{Implementation Details} Each input image is cropped and resized to $256 \times 256$. We train the proposed framework with Adam \cite{Adam}, setting the initial learning rate to $1\times10^{-3}$. Unless otherwise specified, the size of the resized patch is set to $7 \times 7$, and the framework has 6 inherent relation layers and 3 coarse-to-fine stages.
Besides, we augment the training set with random horizontal flipping ($50\%$), grayscale conversion ($20\%$), occlusion ($33\%$), scaling ($\pm5\%$), rotation ($\pm30^{\circ}$) and translation ($\pm 10$ px). We implement our method with two different backbones: a light HRNetW18C \cite{HRnet} (the modularized block number in each stage is set to 1) and ResNet34 \cite{Resnet}. For HRNetW18C-lite, the resolution of the feature map is $64 \times 64$; for ResNet34, we extract representations from the output feature maps of stages C2 through C5 (see Appendix A.1). \begin{table}[t!] \centering \begin{tabular}{m{2.4cm}<{\centering}|m{1.8cm}<{\centering}m{1.6cm}<{\centering}} \hline Method & Inter-Ocular NME(\%)$\downarrow$ & FR$_{0.1}$(\%)$\downarrow$ \\ \hline TCDCN \cite{TCDNN} & 7.66 & 16.17 \\ CFSS \cite{CFSS} & 6.28 & 9.07 \\ ODN \cite{ODN} & 5.30 & - \\ AVS+SAN \cite{AVS} & 4.43 & 2.82 \\ LAB \cite{LAB} & 4.62 & 2.17 \\ SDL$^\star$ \cite{SDL} & 4.22 & {\color{blue} $\bm{0.39}$} \\ SDFL$^\star$ \cite{SCDF} & 4.18 & {\color{red} $\bm{0.00}$} \\ \hline SLPT$^\ddag$ & {\color{blue} $\bm{4.11}$} & 0.59 \\ SLPT$^\dag$ & {\color{red} $\bm{4.10}$} & 0.59 \\ \hline \end{tabular} \caption{Inter-ocular NME and FR$_{0.1}$ comparisons on the 300W-COFW68 \textit{cross}-dataset evaluation. Key: [{\color{red} \textbf{Best}}, {\color{blue} \textbf{Second Best}}, $^\star$=HRNetW18C, $^\dag$=HRNetW18C-lite, $^\ddag$=ResNet34]} \label{Tabal4} \end{table} \begin{table*}[t!]
\centering \begin{tabular}{c|m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}m{0.7cm}<{\centering}} \hline \multicolumn{1}{c|}{\multirow{3}{*}{Model}} & \multicolumn{12}{c}{Intermediate Stage} \\ \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{1st stage} & \multicolumn{3}{c|}{2nd stage} & \multicolumn{3}{c|}{3rd stage} & \multicolumn{3}{c}{4th stage} \\ \cline{2-13} \multicolumn{1}{c|}{} & NME & FR & \multicolumn{1}{c|}{AUC} & NME & FR & \multicolumn{1}{c|}{AUC} & NME & FR & \multicolumn{1}{c|}{AUC} & NME & FR & AUC \\ \hline Model$^\dag$ with 1 stage & 4.79\% & 5.08\% & \multicolumn{1}{c|}{0.583} & - & - & \multicolumn{1}{c|}{-} & - & - & \multicolumn{1}{c|}{-} & - & - & - \\ \hline Model$^\dag$ with 2 stages & 4.52\% & 4.24\% & \multicolumn{1}{c|}{0.563} & 4.27\% & 3.40\% & \multicolumn{1}{c|}{0.585} & - & - & \multicolumn{1}{c|}{-} & - & - & - \\ \hline Model$^\dag$ with 3 stages & 4.38\% & 3.60\% & \multicolumn{1}{c|}{0.574} & 4.16\% & 2.80\% & \multicolumn{1}{c|}{0.594} & {\color{red} $\bm{4.14\%}$} & {\color{red} $\bm{2.76\%}$} & \multicolumn{1}{c|}{{\color{red} $\bm{0.595}$}} & - & - & - \\ \hline Model$^\dag$ with 4 stages & 4.47\% & 4.00\% & \multicolumn{1}{c|}{0.567} & 4.26\% & 3.40\% & \multicolumn{1}{c|}{0.586} & 4.24\% & 3.36\% & \multicolumn{1}{c|}{0.588} & 4.24\% & 3.32\% & 0.587 \\ \hline \end{tabular} \caption{Performance comparison of the SLPT with different numbers of coarse-to-fine stages on WFLW. The normalization factor for NME is inter-ocular and the threshold for FR and AUC is set to 0.1.
Key: [{\color{red} \textbf{Best}}, $^\dag$=HRNetW18C-lite]} \label{Tabel5} \end{table*} \subsection{Comparison with State-of-the-Art Methods} \textbf{WFLW}: As tabulated in Table~\ref{Tabal1} (more detailed results on the WFLW subsets are given in Appendix A.2), the SLPT demonstrates impressive performance. As the number of inherent relation layers increases, the performance of the SLPT improves further and surpasses ADNet (see Appendix A.5). Following DETR, we also implement a transformer-based method that employs the full feature map for face alignment; the number of input tokens is $16 \times 16$. With the same backbone (HRNetW18C-lite), we observe an improvement of 12.10\% in NME, and the number of training epochs is $8\times$ smaller than that of DETR (see Appendix A.3). Moreover, the SLPT also outperforms the coordinate regression and heatmap regression methods significantly. Some qualitative results are shown in Fig.~\ref{fig4}. It is evident that our method can localize the landmarks accurately, in particular for face images with blur (2nd row of Fig.~\ref{fig4}), profile view (1st row) and heavy occlusion (3rd and 4th rows). \textbf{300W}: The comparison results are shown in Table~\ref{Tabal2}. Compared to the coordinate and heatmap regression methods (HRNetW18C \cite{HRnet}), the SLPT still achieves an impressive improvement of 9.69\% and 4.52\%, respectively, in NME on the fullset. However, the improvement on 300W is not as significant as on WFLW, since learning an adaptive inherent relation requires a large number of annotated samples. With limited training samples, methods with prior knowledge, such as facial boundaries (AWing and ADNet) and an affined mean shape (SDL), achieve better performance. \begin{table}[t!]
\centering \begin{tabular}{|m{1.4cm}<{\centering}|m{0.85cm}<{\centering}|m{0.85cm}<{\centering}|m{0.85cm}<{\centering}|m{0.85cm}<{\centering}|m{0.85cm}<{\centering}|} \hline Method & MSA & MCA & NME & FR & AUC \\ \hline Model$^\dag$ 1 & w/o & w/o & 4.48\% & 4.32\% & 0.566 \\ \hline Model$^\dag$ 2 & w/ & w/o & 4.20\% & 3.08\% & 0.590 \\ \hline Model$^\dag$ 3 & w/o & w/ & 4.17\% & 2.84\% & 0.593 \\ \hline Model$^\dag$ 4 & w/ & w/ & {\color{red} $\bm{4.14}$}\% & {\color{red} $\bm{2.76}$}\% & {\color{red} $\bm{0.595}$} \\ \hline \end{tabular} \caption{NME($\downarrow$), FR$_{0.1}$($\downarrow$) and AUC$_{0.1}$($\uparrow$) with/without the MSA and MCA blocks. Key: [{\color{red} \textbf{Best}}, $^\dag$=HRNetW18C-lite]} \label{Tabal6} \end{table} \begin{table}[t!] \centering \begin{tabular}{|m{3.4cm}<{\centering}|m{1.0cm}<{\centering}|m{1.0cm}<{\centering}|m{0.85cm}<{\centering}|} \hline Method & NME & FR & AUC \\ \hline w/o structure encoding$^\dag$ & 4.16\% & 2.84\% & 0.593 \\ \hline w/ structure encoding$^\dag$ & {\color{red} $\bm{4.14}$}\% & {\color{red} $\bm{2.76}$}\% & {\color{red} $\bm{0.595}$} \\ \hline \end{tabular} \caption{NME($\downarrow$), FR$_{0.1}$($\downarrow$) and AUC$_{0.1}$($\uparrow$) with/without the structure encoding. Key: [{\color{red} \textbf{Best}}, $^\dag$=HRNetW18C-lite]} \label{Tabal7} \end{table} \textbf{COFW}: We conduct two experiments on COFW for comparison: \textit{within}-dataset validation and \textit{cross}-dataset validation. For the \textit{within}-dataset validation, the model is trained with 1,345 images and validated with 507 images on COFW. The inter-ocular and inter-pupil NME of the SLPT and the state-of-the-art methods are reported in Table~\ref{Tabal3}. In this experiment, the number of training samples is quite small, which leads to significant degradation of coordinate regression methods such as SDFL and LAB. Nevertheless, the SLPT still maintains excellent performance and yields the second best results.
It improves the metric by 3.77\% and 11.00\% in NME over the heatmap regression and coordinate regression methods, respectively. For the \textit{cross}-dataset validation, the training set includes the complete 300W dataset (3,837 images) and the test set is COFW68 (507 images with 68 landmark annotations). Most samples of COFW68 are under heavy occlusion. The inter-ocular NME and FR of the SLPT and the state-of-the-art methods are reported in Table~\ref{Tabal4}. Compared to the GCN-based methods (SDL and SDFL), the SLPT (HRNet) achieves an impressive result, as low as 4.10\% in NME. The result illustrates that the adaptive inherent relation of the SLPT works better than the fixed adjacency matrix of a GCN for robust face alignment, especially under heavy occlusion. \subsection{Ablation Study} \begin{figure*}[t!] \label{verifycluster} \centering \subfloat[MCA-layer 1]{\includegraphics[width=2.7cm]{layer1.pdf}}\hspace{0.1cm} \subfloat[MCA-layer 2]{\includegraphics[width=2.7cm]{layer2.pdf}}\hspace{0.1cm} \subfloat[MCA-layer 3]{\includegraphics[width=2.7cm]{layer3.pdf}}\hspace{0.1cm} \subfloat[MCA-layer 4]{\includegraphics[width=2.7cm]{layer4.pdf}}\hspace{0.1cm} \subfloat[MCA-layer 5]{\includegraphics[width=2.7cm]{layer5.pdf}}\hspace{0.1cm} \subfloat[MCA-layer 6]{\includegraphics[width=2.7cm]{layer6.pdf}}\hspace{0.1cm} \subfloat[MSA-layer 1]{\includegraphics[width=2.7cm]{layer1_S.pdf}}\hspace{0.1cm} \subfloat[MSA-layer 2]{\includegraphics[width=2.7cm]{layer2_S.pdf}}\hspace{0.1cm} \subfloat[MSA-layer 3]{\includegraphics[width=2.7cm]{layer3_S.pdf}}\hspace{0.1cm} \subfloat[MSA-layer 4]{\includegraphics[width=2.7cm]{layer4_S.pdf}}\hspace{0.1cm} \subfloat[MSA-layer 5]{\includegraphics[width=2.7cm]{layer5_S.pdf}}\hspace{0.1cm} \subfloat[MSA-layer 6]{\includegraphics[width=2.7cm]{layer6_S.pdf}}\hspace{0.1cm} \caption{The statistical attention interactions of the MCA and MSA blocks in the final stage on the WFLW test set.
Each row indicates the attention weights of one landmark.} \label{fig5} \end{figure*} \textbf{Evaluation on different coarse-to-fine stages}: To explore the contribution of the coarse-to-fine framework, we train the SLPT with different numbers of coarse-to-fine stages on the WFLW dataset. The NME, AUC$_{0.1}$ and FR$_{0.1}$ of each intermediate stage and the final stage are shown in Table~\ref{Tabel5}. Compared to the model with only one stage, the local patches in the multi-stage models evolve into a pyramidal form, which improves the performance of the intermediate and final stages significantly. When the number of stages increases from 1 to 3, the NME of the first stage decreases dramatically from 4.79\% to 4.38\%. When the number of stages is more than 3, the performance converges and additional stages do not bring any further improvement. \begin{table}[t!] \centering \begin{tabular}{m{3.8cm}<{\centering}|m{1.5cm}<{\centering}m{1.5cm}<{\centering}} \hline Method & FLOPs(G) & Params(M) \\ \hline HRNet$^\star$ \cite{HRnet} & 4.75 & 9.66 \\ LAB \cite{LAB} & 18.85 & 12.29 \\ AVS + SAN \cite{AVS} & 33.87 & 35.02 \\ AWing \cite{Awing} & 26.8 & 24.15 \\ \hline DETR$^\dag$ (98 landmarks) \cite{DETR} & 4.26 & 11.00 \\ DETR$^\dag$ (68 landmarks) \cite{DETR} & 4.06 & 11.00 \\ DETR$^\dag$ (29 landmarks) \cite{DETR} & 3.80 & 10.99 \\ \hline SLPT$^\dag$ (98 landmarks) & 6.12 & 13.19 \\ SLPT$^\dag$ (68 landmarks) & 5.17 & 13.18 \\ SLPT$^\dag$ (29 landmarks) & 3.99 & 13.16 \\ \hline \end{tabular} \caption{Computational complexity and parameters of the SLPT and SOTA methods. Key: [$^\star$=HRNetW18C, $^\dag$=HRNetW18C-lite]} \label{Tabal8} \end{table} \textbf{Evaluation on the MSA and MCA blocks}: To explore the influence of the \textit{query-query} inherent relation (Eq. 1) and the \textit{representation-query} inherent relation (Eq. 3) created by the MSA and MCA blocks, we implement four models with/without MSA and MCA, numbered from 1 to 4.
For the models without the MCA block, we utilize the landmark representations as the queries input. The performance of the four models is tabulated in Table 6. Without MSA and MCA, each landmark in model 1 is regressed based solely on the features of its supporting patches. Nevertheless, it still outperforms other coordinate regression methods because of the coarse-to-fine framework. When self-attention or cross-attention is introduced into the model, the performance is boosted significantly, reaching 4.20\% and 4.17\% NME, respectively. Moreover, self-attention and cross-attention can be combined to further improve the performance of the model. \textbf{Evaluation on structure encoding}: we implement two models with/without structure encoding to explore the influence of structural information. With structural information, the performance of SLPT is improved, as shown in Table 7. \textbf{Evaluation on computational complexity}: the computational complexity and parameters of SLPT and other SOTA methods are shown in Table 8. The computational complexity of SLPT is only $1/8$ to $1/5$ of the FLOPs of the previous SOTA methods (AVS and AWing), demonstrating that learning the inherent relation is more efficient than other approaches. Although SLPT runs three times for the coarse-to-fine localization, patch embedding and linear interpolation procedures, we do not observe a significant increase in computational complexity, especially for 29 landmarks, because the sparse local patches lead to fewer tokens. Besides, the influence of patch size and the number of inherent relation layers is shown in Appendix A.4 and A.5. \subsection{Visualization} We calculate the mean attention weight of each MCA and MSA block on the WFLW test set, as shown in Fig.5. We find that the MCA block tends to aggregate the representations of the supporting and neighboring patches to generate the local feature, while the MSA block tends to attend to distant landmarks to create the global feature.
This is why the MCA block can complement the MSA block for better performance. \section{Conclusion} In this paper, we find that the inherent relation between landmarks is significant for the performance of face alignment, yet it is ignored by most state-of-the-art methods. To address the problem, we propose a sparse local patch transformer for learning a \textit{query-query} and a \textit{representation-query} relation. Moreover, a coarse-to-fine framework that enables the local patches to evolve into a pyramidal form is proposed to further improve the performance of SLPT. With the adaptive inherent relation learned by SLPT, our method achieves robust face alignment, especially for faces with blur, heavy occlusion and profile view, and outperforms the state-of-the-art methods significantly with much lower computational complexity. Ablation studies verify the effectiveness of the proposed method. In future work, the inherent relation learning will be studied further and extended to other tasks. \newpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Quantum Hall (QH) phases are two-dimensional topological phases featuring gapless, chiral modes localized at the boundary of the sample~\cite{Halperin1982}. Since the bulk of a QH phase is gapped, its transport properties are controlled by the edge modes~\cite{Wen1990,Wen1992}. The topological properties of the bulk QH phase guide the nature of these boundary modes~\cite{WenBook}. For instance, conventional models suggest that integer and particle-like fractional phases support only {\it downstream} edge modes, while hole-conjugate phases may support modes with both (upstream and downstream) chiralities~\cite{Wen91A,Wen91B,MacDonald_PRL_90,Johnson_PRL_91}. Disorder-induced tunneling and intermode interactions further renormalize such counter-propagating edge modes and, in certain situations, may lead to the emergence of upstream neutral modes~\cite{KFP1994,KaneFisher_PRB_95,Yuval_AP_2017}. Experimental signatures of these upstream neutral modes have been observed in the $\nu = 2/3$~\cite{Bid2009,Bid2010, Gurman2012,Yaron2012} and $\nu=5/2$ phases~\cite{Heiblum2021}, as well as in engineered geometries~\cite{Design1,Design2}. However, recent transport experiments~\cite{Yacoby2012,Inoue2014,Heiblum2019} suggest that, even for the `simple' QH phases (such as the $\nu = 1$ or $1/3$), the edge structure is much more intricate and may not be described by orthodox models. Exciting the $\nu = 1$ edge at a quantum point contact (QPC), Ref.~\cite{Yacoby2012} observed an upstream flow of energy (but not charge). A similar experiment was performed for fractional QH phases in the lowest Landau level in Ref.~\cite{Inoue2014}. They observed that partial transmission of charge current through a QPC is accompanied by upstream electric noise (with no net current) in several fractional QH phases (including Laughlin states).
In a complementary study, Ref.~\cite{Heiblum2019} observed that the visibility of the interference pattern in an electronic Mach-Zehnder interferometer decreases as the filling factor ($\nu$) is reduced from $2$ to $1$. Moreover, interference is fully suppressed for $\nu \leq 1$. These experiments suggest that the standard picture of particle-like phases, involving one or more downstream modes, is incomplete. Instead, these results point to the presence of additional counter-propagating modes, some of which may be charge-neutral. \begin{figure*}[t] \centering \includegraphics[scale=0.25]{Figs/1A} \includegraphics[scale=0.25]{Figs/1B} \includegraphics[scale=0.25]{Figs/1C} \includegraphics[scale=0.25]{Figs/1D} \caption{ A priori possible edge configurations for a bulk (a,b) $\nu = 1$ and (c,d) $\nu = 1/3$ phase. For a sharp confining potential, a single QH droplet (with $\nu = 1$ or $1/3$) composed of $N_{B} + N_{S}$ electrons is expected. As the edge potential becomes smoother, an additional side strip (separated from the bulk by $L_{S}$ guiding centers) composed of $N_{S}$ electrons is nucleated along the edge. The filling factor of the side strip may be identical to [as shown in (a,c)] or different from [as shown in (b,d)] the bulk filling factor.} \end{figure*} In the early 90s, it was realized that in the presence of a smooth confining potential at the boundary, electronic interactions may induce quantum phase transitions at the edge (which leave the bulk unperturbed). Such edge transitions (or edge reconstructions) may occur in both integer~\cite{CSG1992,Dempsey1993,ChamonWen,Sondhi_PRL_96,FrancoBrey97,KunYangIQHS,Switching2017, Ganpathy21,Karmakar2020,IQHS2020} and fractional~\cite{Meir93,MacDonald_JP_93,KunYang_2002,KunYang_2003,KunYang03,KunYang_2008,KunYang_2009, Ganpathy_PRB_03,WMG_PRL_2013,Jain2014,Yang2021,FQHS2021,Liangdong21} QH phases, as well as in time-reversal-invariant topological insulators~\cite{Yuval2017,Rosenow2021}.
The reconstructed edge structure may differ in terms of the number, order, or even the nature of the edge modes. Such phase transitions are driven by the competition between the electrostatic effects of a smooth confining potential and the exchange/correlation energy of an incompressible QH state. For sufficiently smooth potentials, this competition leads to nucleation of additional electronic strips (in QH phases) along the edge~\cite{Beltram2012,Thomas2014}. The nucleated side strips define additional pairs of counter-propagating chiral edge modes at their boundaries. Similarly to the edge of hole-conjugate states, intermode interactions and disorder-induced tunneling among these additional and the original (topological) edge modes may lead to a subsequent renormalization, modifying their nature qualitatively. Such renormalization may even give rise to additional (non-topological) upstream neutral modes~\cite{WMG_PRL_2013}. Here, we describe our recent attempts~\cite{IQHS2020,FQHS2021} to theoretically account for the experimental surprises described above, in terms of reconstruction and the subsequent renormalization of the edge for $\nu = 1$ and $1/3$ QH phases. Additionally, we present a new analysis of edge reconstruction of the $\nu = 2/5$ QH phase. The main challenge here is to determine the precise filling factor of the additional side strip nucleated at the edge for smooth confining potentials. An additional side strip of filling factor $\nu_{\text{strip}}$ defines counter-propagating modes of charge $\nu_{\text{strip}}$. Therefore, for $\nu_{\text{strip}} = \nu_{\text{bulk}}$, subsequent renormalization of the modes (due to disorder-induced tunneling) would lead to localization of a pair of counter-propagating modes and render transport experiments blind to the presence of reconstruction.
On the other hand, for $\nu_{\text{strip}} < \nu_{\text{bulk}}$, subsequent renormalization would not induce localization, and may instead lead to the emergence of upstream neutral modes. Therefore, the experimental consequences of reconstruction crucially depend on the strip filling factor. Figures~1 and 2 depict some of the a priori possible configurations of the reconstructed edge at $\nu = 1,1/3$ and $2/5$. Here, we find the lowest energy state among these structures as a function of the slope of the confining potential through a variational analysis~\cite{Meir93,IQHS2020,FQHS2021}, which allows us to include a large number of electrons while accounting for quantum correlations inherently present in QH states. Specifically, we treat the strip-size ($N_{S}$) and separation ($L_{S}$) as variational parameters. When the confining potential is sharp, we find that the lowest energy state comprises a single QH droplet, i.e., no edge reconstruction. On the other hand, for sufficiently smooth potentials, we find that edge reconstruction leads to the emergence of a pair of additional counter-propagating gapless modes. Our results indicate that, in all three phases, the gapless modes of the reconstructed edge, and their subsequent renormalization leading to the emergence of neutral modes, may account for the experimental results reported in Refs.~\cite{Yacoby2012,Inoue2014,Heiblum2019}. We also analyze additional experimental consequences of edge reconstruction, such as in two-terminal transport. \section{Model Hamiltonian and Variational Analysis} {\it Here, we provide details of the theoretical model used to study the QH edge.
We also describe the variational analysis employed to find the lowest energy edge configuration as a function of the slope of the confining potential.} \begin{figure*}[t] \centering \includegraphics[scale=0.25]{Figs/2A} \includegraphics[scale=0.25]{Figs/2B} \includegraphics[scale=0.25]{Figs/2C} \includegraphics[scale=0.25]{Figs/2D} \caption{A priori possible structures at the reconstructed edge for a (spin-singlet) $\nu = 2/5$ phase. The blue and yellow colors correspond to the two spin-polarizations of electrons in the lowest Landau level. Panel (a) depicts a spin-unpolarized edge configuration, while panels (b-d) depict structures with finite magnetization at the edge. Such spontaneous magnetization may arise without an additional stripe at the edge [as depicted in panel (b)] or due to the formation of such a stripe [panels (c,d)].} \end{figure*} \subsection{Basic Setup} We analyze the QH edge in the disk geometry, which is convenient due to the presence of a single boundary even for finite systems. We employ a rotationally symmetric gauge, $e \vec{A} /\hbar = (-y/2\ell^{2}, x/2\ell^{2})$, where $\ell = \sqrt{\hbar/eB}$ is the magnetic length. The rotational invariance allows the single-particle states to be labelled by eigenvalues of the angular momentum ($\hat{L}$). We denote the states in the lowest Landau level (LLL) as $\phi_{m}$ with $m = 0, 1, 2, \dots$. The corresponding wavefunction is $\phi_{m} (\vec{r}\,) = \left( r / \ell \right)^{m} e^{-im\theta_{\text{r}}} e^{-\left(\frac{r}{2\ell}\right)^2} / \sqrt{2^{m+1} \pi m! \ell^2} $, where $re^{-i \theta_{\text{r}}} = x - iy$ is the electronic position. The state $\phi_{m}$ is strongly localized around $r = \sqrt{2 m} \ell$ and has angular momentum $\hbar m$. In the LLL, the dynamics may be described by the Hamiltonian $H = H_{\text{ee}} + H_{\text{c}}$, where $H_{\text{ee}}$ is the two-body electronic repulsion and $H_{\text{c}}$ is the one-body confining potential (also assumed to be circularly symmetric). 
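As a numerical aside (ours, not part of the original analysis), the orbital density $|\phi_{m}|^{2}$ defined above can be checked on a radial grid: it integrates to unity over the plane and peaks near $r = \sqrt{2m}\,\ell$, confirming the localization statement:

```python
import numpy as np
from math import factorial, pi, sqrt

def lll_density(r, m, ell=1.0):
    """|phi_m(r)|^2 for the symmetric-gauge lowest-Landau-level orbital."""
    return ((r / ell) ** (2 * m) * np.exp(-((r / ell) ** 2) / 2)
            / (2 ** (m + 1) * pi * factorial(m) * ell ** 2))

m = 6
r = np.linspace(0.0, 12.0, 20001)        # radii in units of ell
dens = lll_density(r, m)

# Total probability: integrate 2*pi*r*|phi_m|^2 dr (simple Riemann sum).
norm = float(np.sum(2 * pi * r * dens) * (r[1] - r[0]))
# |phi_m|^2 is maximal near r = sqrt(2 m) ell.
r_peak = float(r[np.argmax(dens)])
```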
Note that $H$ commutes with $\hat{L}$. Therefore, the many-body states may be labelled by the total angular momentum. Defining $E_{c} = e^2/\epsilon_0 \ell$ as the Coulomb energy scale and $c_{m \sigma}$ as the annihilation operator corresponding to $\phi_{m}$ with spin state $\sigma={\uparrow}/{\downarrow}$ (along $s_{z}$), we have, \begin{align} H_{\text{ee}} &= \frac{E_{\text{c}}}{2} \sum_{i \neq j} \frac{\ell}{|\vec{r}_{i} - \vec{r}_{j}|} \\ \nonumber &\equiv \frac{E_{\text{c}}}{2} \sum_{\{m,\sigma\},n} V_{m_1 m_2 ; n}^{ee} c_{m_1 + n \sigma_{1}}^{\dagger} c_{m_2 \sigma_{2}}^{\dagger} c_{m_2 + n \sigma_{2}} c_{m_1 \sigma_{1}}, \\ H_{\text{c}} &= \sum_{m, \sigma} V_{m}^{\text{c}} \, \, c_{m \sigma}^{\dagger} c_{m \sigma}. \label{eq:H2} \end{align} The confining potential is modelled using a positively charged background disk (with radius $R$, charge density $\rho_{\text{bg}}$) separated from the electron gas by a distance $d$ along the magnetic field~\cite{KunYangIQHS,KunYang_2002,KunYang_2003}. The parameters $R$ and $\rho_{\text{bg}}$ are chosen such that overall charge neutrality is maintained. The electrostatic potential of this disk (in the plane of the electrons) is, \begin{align} V_{c} (r) = \int_{0}^{R} r^{\prime} \, dr^{\prime} \int_{0}^{2\pi} d \theta \frac{E_{c} \rho_{\text{bg}}}{\sqrt{d^2 + r^2 + {r^{\prime}}^2 - 2 r^{\prime} r \cos \theta}}. \end{align} Then $V_{m}^{\text{c}}$ in Eq.~(\ref{eq:H2}) are the matrix elements of $V_{c}(r)$. Note that the smoothness of this confining potential is controlled by the distance $d$ (the tuning parameter in our analysis). Specifically, the edge potential is quite sharp for $d \sim 0$, and becomes smoother as $d$ increases. We note that edge reconstruction of both integer and fractional QH phases has been considered in previous works.
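The double integral defining $V_{c}(r)$ is easily tabulated numerically. A crude midpoint grid sum (our sketch, with $E_{c} = \rho_{\text{bg}} = 1$, illustrative parameters, and the area element $r^{\prime}$ of the background disk included in the integrand) illustrates that the potential decreases away from the center and that the edge profile is sharper for smaller $d$:

```python
import numpy as np

def v_conf(r, R=10.0, d=1.0, n=400):
    """Potential of a uniformly charged disk (radius R) a distance d above
    the plane, evaluated at in-plane radius r; midpoint grid sum with
    E_c = rho_bg = 1."""
    rp = (np.arange(n) + 0.5) * R / n           # radial grid on the disk
    th = (np.arange(n) + 0.5) * 2 * np.pi / n   # angular grid
    RP, TH = np.meshgrid(rp, th, indexing="ij")
    integrand = RP / np.sqrt(d ** 2 + r ** 2 + RP ** 2 - 2 * RP * r * np.cos(TH))
    return float(integrand.sum() * (R / n) * (2 * np.pi / n))

# Sharp (small d) versus smooth (large d) edges, sampled at the center,
# at the disk edge, and just outside it.
sharp = [v_conf(r, d=0.5) for r in (0.0, 10.0, 11.0)]
smooth = [v_conf(r, d=3.0) for r in (0.0, 10.0, 11.0)]
```

The drop of the potential across the rim, `sharp[1] - sharp[2]` versus `smooth[1] - smooth[2]`, quantifies how $d$ controls the edge smoothness.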
These studies employed unbiased methods, such as exact diagonalization (ED)~\cite{KunYangIQHS,KunYang_2002,KunYang_2003, KunYang_2008,KunYang_2009} and density matrix renormalization group (DMRG)~\cite{Liangdong21,DMRG2021}. However, the precise filling factor at the edge cannot be obtained in ED due to its inherent limitation to small system sizes. By contrast, DMRG overcomes the size limitations of ED but works best for one-dimensional systems and its applicability to the problem of edge reconstruction is not clear. For these reasons, here we employ a variational method to study the edge~\cite{Meir93}. Our method is capable of predicting the precise filling factor of the edge, and is not limited to a small system size. Moreover, such methods have been used extensively to study various aspects of QH phases~\cite{JainCF} and their applicability is well established. \subsection{Variational Analysis} We consider several variational classes describing a priori possible edge structures for the $\nu = 1, 1/3$ and $2/5$ QH phases. All the classes considered here represent product states of a bulk QH droplet composed of $N_{B}$ electrons, and a single edge strip composed of $N_{S}$ electrons. The edge strip is separated from the bulk by $L_{S}$ guiding centers. In our analysis, the total number of electrons ($N_{B} + N_{S}$) is kept fixed. Therefore, the states in any of the classes may be parameterized by $N_{S}$ and $L_{S}$. For a given bulk QH phase, each variational class corresponds to fixed filling factors for the bulk and the edge strip. Figures 1 and 2 depict the various classes of variational states considered in this work. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{Figs/3} \caption{ Results of the variational analysis using $100$ electrons with bulk filling factor $\nu = 1$. 
The blue (red) dots show the total energy of the variational states with a $\nu = 1$ integer ($\nu = \frac{1}{3}$ fractional) side-strip as a function of the total angular momentum for a (a) sharp ($d = \ell$) and (b) smooth ($d = 1.5 \ell$) confining potential. Each curve corresponds to states with the same separation between the bulk and side-strip ($L_{S}$) but with different number of electrons in the side-strip ($N_{S}$). The curves shown here correspond to $L_{S}$ varying from $0$ to $30$ guiding centers. The energy of the unreconstructed state has been subtracted to make comparison easier. (a) For sharp edges $(d < 1.3 \ell)$ the ground state is the one with minimum angular momentum, implying no edge reconstruction. (b) For smooth edges $(d > 1.3 \ell)$ the ground state shifts to a higher angular momentum sector, implying that the electronic disk expands and the edge undergoes reconstruction. The minimum energy state lies on the curve corresponding to $L_{S} = 0$. Panel (b) shows that a fractional reconstruction is energetically favorable to an integer reconstruction. } \end{figure} The $\nu = 1$ and $1/3$ phases are assumed to be fully spin-polarized (cf. Fig.~1). Therefore we consider spinless electrons in these cases. Then the bulk $\nu = 1$ phase represents a Slater determinant of $N_{B}$ electrons, occupying all the guiding centers from $m = 0$ to $m = N_{B} - 1$. The bulk $\nu = 1/3$ phase is represented by the $\nu = 1/3$ Laughlin state. The Laughlin wavefunction corresponding to $\nu = 1/m_{B}$ is~\cite{Laughlin83,JainCF}, \begin{align} \Psi_{\frac{1}{m_{B}}, N_{B}} = \prod_{j > i} \bigg[ \big( z_i - z_j \big)^{m_{B}} \bigg] e^{-\frac{1}{4} \sum_{i} |z_{i}|^2}. \end{align} Here $z_{j} = (x_{j} - iy_{j})/\ell$ is the position of the $j^{th}$ electron. Next, the edge strip comprising the $\nu = 1$ phase [Fig.~1(a)] may also be represented as a Slater determinant of $N_{S}$ electrons. 
On the other hand, the edge strips comprising $\nu = 1/3$ or $1/5$ phases [Figs.~1(b-d)] are described through a $\nu = 1/m_{S}$ Laughlin state ($m_{S} = 3, 5$) with $M_{S}$ quasiholes at the origin. The separation of the bulk and edge strip ($L_{S}$) is given by, $L_{S} = (M_{S} - 1) - (N_{B} - 1)/\nu_{\text{bulk}} $. The corresponding (unnormalized) wavefunction is, \begin{align} \Psi_{\frac{1}{m_{S}}, N_{S}, M_{S}} = \prod_{i=1}^{N_{S}} \bigg[ z_{i}^{M_{S}} \, \prod_{j > i} \big( z_i - z_j \big)^{m_{S}} \bigg] e^{-\frac{1}{4} \sum_{i} |z_{i}|^2}. \end{align} In this work, we focus on the spin-unpolarized $\nu = 2/5$ phase, which may be described as the product state of two copies (one for each spin) of the $\nu = 1/5$ Laughlin phase, i.e. $\Psi_{\frac{2}{5}, N_{B}} = \Psi_{\frac{1}{5}, N_{B}/2,{\uparrow}} \otimes \Psi_{\frac{1}{5}, N_{B}/2,{\downarrow}}$. The reconstructed edge in this phase could be `simple' and identical to the bulk [as shown in Fig.~2(a)], or due to the additional spin degree of freedom, may be spontaneously spin-polarized [see Figs.~2(b-d)]. Interestingly, the latter possibility may occur even without the nucleation of an additional edge stripe [see Fig.~2(b)] (this is analogous to the edge structure described for the $\nu = 2$ phase in Ref.~\cite{Dempsey1993}). All these configurations may be described through product states of appropriate Laughlin states, as mentioned above. For Slater determinants, the energy ($\langle H \rangle$) of the variational states may be evaluated trivially given the matrix elements of the Coulomb interaction and the confining potential~\cite{IQHS2020}. On the other hand, for Laughlin states these may be evaluated using standard classical Monte-Carlo techniques~\cite{JainCF,Metropolis53,Laughlin83,MacDonald93,FQHS2021}. In our analysis, we evaluate the energy of the states in each variational class as a function of $d$, which controls the slope of the confining potential.
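The Monte-Carlo evaluation mentioned above can be sketched with a simple Metropolis sampler of $|\Psi|^{2}$. The toy code below (our illustration, with step sizes and sweep counts far smaller than a production run) samples the $\nu = 1/3$ Laughlin state for four electrons and checks the exact lowest-Landau-level identity $\langle \sum_{i} |z_{i}|^{2} \rangle = 2(M + N)$, where the total angular momentum is $M = m_{B} N (N-1)/2$:

```python
import numpy as np

def log_prob(z, m):
    """log|Psi_{1/m}|^2 for complex LLL coordinates z (in units of ell)."""
    iu = np.triu_indices(len(z), k=1)
    pair = np.abs(z[:, None] - z[None, :])[iu]
    return 2 * m * np.log(pair).sum() - 0.5 * (np.abs(z) ** 2).sum()

def sample_mean_r2(m=3, n_elec=4, n_sweep=15000, n_burn=3000, step=1.0, seed=1):
    """Metropolis estimate of <sum_i |z_i|^2> in the 1/m Laughlin state."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_elec) + 1j * rng.normal(size=n_elec)
    lp = log_prob(z, m)
    acc = []
    for sweep in range(n_sweep):
        for i in range(n_elec):          # one single-particle move per electron
            trial = z.copy()
            trial[i] += step * (rng.normal() + 1j * rng.normal())
            lp_trial = log_prob(trial, m)
            if np.log(rng.uniform()) < lp_trial - lp:
                z, lp = trial, lp_trial
        if sweep >= n_burn:
            acc.append((np.abs(z) ** 2).sum())
    return float(np.mean(acc))

# Exact value for m = 3, N = 4: M = 18, so <sum |z|^2> = 2*(18 + 4) = 44.
mean_r2 = sample_mean_r2()
```

An analogous sampler, accumulating the Coulomb and confinement energies instead of $\sum_{i}|z_{i}|^{2}$, is what the variational comparison requires.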
The ground state for each QH phase, and the precise structure of the edge, is then found by comparing the energies of the states in the different classes. Note that the unreconstructed state (without an additional edge strip) is included in all classes (corresponding to $N_{S} = 0 = L_{P}$). It is the lowest energy state for sharp confining potentials ($d \sim 0$). By contrast, the ground state supports an additional edge strip (finite $N_{S}$, $L_{S}$ or $L_{P}$) for smoother potentials. Finally, the structure of the edge in the ground state uniquely determines the number and nature of the low-energy chiral modes. \section{Variational Results} {\it This section presents the results of our analysis of the edge structure for the $\nu = 1, 1/3$ and $2/5$ QH phases. In all cases, we find that edge reconstruction may lead to the emergence of side stripes with filling factor different from that of the bulk QH phase. } \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{Figs/4} \caption{ Results of the variational calculations for 50 electrons with bulk filling factor $\nu = 1/3$. (a)-(c) The energy ($\langle H \rangle$) of the states in the two variational classes as a function of the total angular momentum at (a) sharp ($d = 0.01 \ell$), (b) moderately smooth ($d = 2.25 \ell$), and (c) very smooth ($d = 2.50 \ell$) confining potentials. In all cases, the energy of the unreconstructed state ($\langle H \rangle_{\text{ur}}$) has been subtracted. The blue (red) circles show the energy of states with a side strip of $\nu = 1/3$ ($\nu = 1/5$). The black square marks the state with the lowest energy. (d) The variation of the lowest possible energy in the two variational classes with the smoothness of the confining potential (parameterized by $d/\ell$). The blue (red) line corresponds to states with a side strip of $\nu = 1/3$ ($\nu = 1/5$).
As expected, for sharp edges the ground state is the one with $N_{S} = 0$, corresponding to the unreconstructed $\nu = 1/3$ state with angular momentum $3675 \hbar$. This state supports a single chiral $e/3$ mode. For moderately smooth potentials ($2.17 < d/\ell < 2.42$), an additional strip of $\nu = 1/3$ is generated at the edge, which gives rise to an extra pair of counter-propagating $e/3$ modes. For very smooth potentials ($d > 2.42 \ell$) the additional strip has the filling factor $1/5$. This second reconstructed state supports a counter-propagating pair of $e/5$ modes in addition to the chiral $e/3$ mode arising from the bulk. } \end{figure*} Figure~3 shows the total energies for the two classes of variational states corresponding to $\nu = 1$ as a function of the total angular momentum at different confining potentials (controlled by $d$). A total of 100 particles were used for these results. The blue dots correspond to integer edges [Fig.~1(a)] while the red dots correspond to the fractional edges [Fig.~1(b)]. For a sharp confining potential [$d < 1.2\ell$, Fig.~3(a)] the lowest energy state is the one with the minimal angular momentum (in this case $4950\hbar$). This corresponds to the unreconstructed $\nu = 1$ state with a single chiral edge mode. For smoother potentials [$d > 1.3\ell$, Fig.~3(b)], the lowest energy state has a much larger angular momentum ($5256 \hbar$ for $d=1.5\ell$ with $N_{S} = 18$ and $L_{S} = 0$) than the compact state. Note that the states with a fractional edge are found to have a lower energy than the states with an integer edge \textit{whenever reconstruction is favored}. We have verified that our results do not depend on the detailed form of the confining potential~\cite{IQHS2020}. The fractionally reconstructed edge [Fig.~1(b)] supports a downstream $e^{*} = 1$ mode (originating from the bulk) in addition to a counter-propagating pair of $e^{*} = 1/3$ modes arising from the side strip.
Therefore, our results imply that {\it fractionally} charged chiral edge modes may exist even at the edge of bulk integer QH phases. \begin{figure}[b] \centering \includegraphics[scale=0.25]{Figs/5} \caption{ Results of the variational calculations for 40 electrons with bulk filling factor $\nu = 2/5$. The curves show the variation of the lowest possible energy in the different variational classes as a function of the smoothness of the confining potential. The energy of the unreconstructed state has been subtracted for ease of comparison. The black curve corresponds to the spin-unpolarized edge, while the blue (red) curves correspond to spin-polarized edge structures without (with) an additional edge stripe. The red curve corresponds to an edge stripe in the $\nu = 1/3$ phase. } \end{figure} Figure~4 depicts the total energies of the states in the two classes corresponding to $\nu = 1/3$, classified by their angular momentum, for several values of $d$. These results correspond to a total of 50 particles. The blue (red) dots in Figs.~4(a-c) correspond to edges with a side strip of filling factor $1/3$ ($1/5$). The black square marks the lowest energy state. In each case, we have subtracted the energy of the unreconstructed state ($N_{S} = 0$) to make the comparison easier. For a sharp confining potential [$d \lesssim 2.1\ell$, Fig.~4(a)] the standard Laughlin state, with no additional side strip, has the lowest energy (as expected). Such a state clearly has a single chiral $e/3$ mode at the edge. For smoother potentials [$d \gtrsim 2.1\ell$, Figs.~4(b-c)], the lowest energy state comprises an additional side strip ($N_{S} > 0$). This side strip may have filling factor $1/3$ [Fig.~4(b)] for moderate slope of the confining potential ($N_{S} = 15$, $L_{S} = 11$ for $d = 2.25 \ell$) or $1/5$ [Fig.~4(c)] for very shallow slope of the potential ($N_{S} = 14$, $L_{S} = 3$ for $d = 2.5 \ell$).
Figure~4(d) shows the variation of the lowest possible energy in the two classes with the slope of the confining potential. Evidently, the filling factor of the side strip is $1/3$ in the range $2.17 \ell < d < 2.42 \ell$, and switches to $1/5$ for larger values of $d$. Hence, our analysis of the $\nu = 1/3$ edge suggests that upon reconstruction, it may support, in addition to the single $e^{*} = 1/3$ mode arising from the bulk, a pair of counter-propagating $e^{*} = 1/3$ or (notably) $1/5$ modes. Figure~5 presents the lowest possible energy in the classes corresponding to $\nu = 2/5$ as a function of the slope of the confining potential. These are results for a total (including both spins) of 40 particles. The black line corresponds to the structure shown in Fig.~2(a) with a finite $N_{S}$ (note that $N_{S} = 0$ corresponds to the unreconstructed state). Our analysis suggests that such a reconstruction is not energetically favorable for any slope of the confining potential. The blue line corresponds to reconstruction without an additional strip [Fig.~2(b)]. Clearly, this edge configuration is favorable (compared to the unreconstructed state) for $d > 2.0 \ell$. The emergence of a spontaneous spin-polarization at the edge through a redistribution of the particles within the bulk (as opposed to the formation of a separate stripe) is analogous to the results of Ref.~\cite{Dempsey1993} for the bulk $\nu = 2$ state. Such a reconstruction does {\it not} lead to the emergence of new chiral modes. Rather, it only increases the spatial separation between the two bare (spin-polarized) $e^{*} = 1/5$ modes supported by the bulk state. However, our analysis suggests that for even smoother confining potentials ($d > 3.1 \ell$), a separate edge stripe with $\nu = 1/3$ [Fig.~2(d)] is more favorable energetically (the red curve in Fig.~5 shows the lowest possible energy of this class).
Such an edge structure has finite edge magnetization and supports an (additional) pair of counter-propagating $e^{*} = 1/3$ modes. Our results indicate that the structure shown in Fig.~2(c) is not energetically favorable in any range of $d$. For this reason, we do not show the energy of this class in Fig.~5. We thus conclude that for sufficiently smooth confining potentials, the spin-unpolarized $\nu = 2/5$ state may support at its edge a pair of (spin-polarized) counter-propagating $e^{*} = 1/3$ modes in addition to the pair of downstream $1/5$ modes of both spins. \section{Experimental Manifestations of Edge Reconstruction} {\it The various configurations of the reconstructed edge found in our analysis may be uniquely identified in carefully designed transport experiments. Here, we focus on the behavior of the two-terminal conductance as a function of the sample length, and the manifestations of upstream neutral modes, which may emerge due to further renormalization of the edge modes. } \subsection{Two-Terminal Conductance} Edge reconstruction is expected to have very clear consequences for the (electric) two-terminal conductance ($g_{\text{2-ter}}$) as a function of the length of the edge ($L$). In a two-terminal setup, in the absence of edge equilibration, the chiral channels exiting the source contact are biased with respect to the modes entering it. The presence of impurities and potential disorder generates random tunneling between the chiral modes at the edge, which may facilitate intermode equilibration over a characteristic length $\ell_{\text{eq}}$~\cite{Nosiglia2018,Gornyi21}. Therefore, we may expect that $g_{\text{2-ter}}$ varies as a function of $L$ over the equilibration length scale $\ell_{\text{eq}}$.
For $L \gg \ell_{\text{eq}}$, assuming full intermode equilibration, the two-terminal conductance is $g_{\text{2-ter}} = \nu_{\text{bulk}} \times e^{2}/h$ irrespective of the slope of the confining potential, reflecting the topological order of the bulk. \begin{figure}[t] \centering \includegraphics[width=0.98\columnwidth]{Figs/6} \caption{ For the edge structures in Figs.~1(b,d), the bare edge modes ($\phi_{1,2,3}$) are renormalized by intermode interactions (represented by the red wavy line) and disorder-induced electron tunneling (represented by the blue dashed line). Such renormalization may lead to the emergence of a downstream charge mode ($\phi_{c}$) and an upstream neutral mode ($\phi_{n}$). In both cases, the outermost mode is assumed to be decoupled from the inner two modes, since our variational analysis indicates that, for smooth confining potentials, the distance between the outer mode and the two inner modes is much larger than the distance between the two inner modes.} \end{figure} The $L \ll \ell_{\text{eq}}$ regime is more interesting, since the two-terminal conductance is sensitive to the detailed structure of the edge in the absence of intermode equilibration. For the unreconstructed edge (in the case of a sharp confining potential), $g_{\text{2-ter}} = \nu_{\text{bulk}} \times e^{2}/h$ for all values of $L$. This is because the unreconstructed edge supports only downstream modes (for the three phases considered here), rendering the notion of equilibration irrelevant. For reconstructed edges, the additional pair of counter-propagating modes may also contribute to $g_{\text{2-ter}}$. For the bulk $\nu = 1$ phase, the side stripe has filling factor $\nu = 1/3$. Then $g_{\text{2-ter}} = 5/3 \times e^{2}/h$ in very short samples. Note that the coefficient $5/3$ uniquely determines the filling factor of the edge stripe. Hence, this is a {\it smoking gun} signature of our predicted edge structure.
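The short-sample conductance values in this section follow from simple mode counting: the bulk contributes $\nu_{\text{bulk}}$, and a reconstructed side strip of filling $\nu_{\text{strip}}$ adds a counter-propagating pair contributing $2\nu_{\text{strip}}$, all in units of $e^{2}/h$. A quick check of this bookkeeping with exact rational arithmetic (our sketch):

```python
from fractions import Fraction as F

def g_two_terminal(nu_bulk, nu_strip=None):
    """Unequilibrated (L << l_eq) two-terminal conductance in units of e^2/h:
    the bulk filling plus 2*nu_strip for the strip's counter-propagating pair."""
    g = F(nu_bulk)
    if nu_strip is not None:
        g += 2 * F(nu_strip)
    return g

examples = {
    "nu=1, strip 1/3":   g_two_terminal(1, F(1, 3)),        # 5/3
    "nu=1/3, strip 1/3": g_two_terminal(F(1, 3), F(1, 3)),  # 1
    "nu=1/3, strip 1/5": g_two_terminal(F(1, 3), F(1, 5)),  # 11/15
    "nu=2/5, strip 1/3": g_two_terminal(F(2, 5), F(1, 3)),  # 16/15
}
```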
The reconstructed edge of the $\nu = 1/3$ phase may support additional modes with $e^{*} = 1/3$ or $1/5$ depending on the slope of the confining potential. Evidently, $g_{\text{2-ter}} = 1 \times e^{2}/h$ ($11/15 \times e^{2}/h$) in the former (latter) case. Finally, for the $\nu = 2/5$ phase, $g_{\text{2-ter}}$ is sensitive only to the `second' reconstruction involving the formation of an additional $\nu = 1/3$ stripe. In this case, $g_{\text{2-ter}}$ increases to $16/15 \times e^{2}/h$. We note that the length dependence of the two-terminal conductance has been reported for other filling factors~\cite{Lafont2019}. \subsection{Emergent Non-Topological Neutral Modes} In the previous section, we relied on our variational analysis of the ground state in order to discern the nature of the chiral modes at the reconstructed edge. However, intermode interactions and disorder-induced tunneling among these chiral modes may lead to a subsequent renormalization of the bare edge modes. Such renormalization would lead to localization for identical (same $e^{*}$) counter-propagating modes. By contrast, counter-propagating modes of unequal charges (arising from QH regions of different filling factor) would be renormalized to two new effective modes of (in general, non-universal) charges $e^{*}_{{\uparrow}}$ and $e^{*}_{{\downarrow}}$ (here, ${\uparrow}/{\downarrow}$ denotes the direction of propagation: upstream/downstream)~\cite{KFP1994,Yuval_AP_2017}. Interestingly, in some cases $e^{*}_{{\uparrow}}$ may be zero, leading to the emergence of gapless upstream neutral modes. As explained previously, our variational analysis suggests that (for sufficiently smooth potentials) the filling factor of the additional side strip is not equal to the bulk filling factor, implying that chiral modes with differing $e^{*}$ may be supported at the reconstructed edge.
Our results also indicate that as the confining potential becomes shallower ($d$ increases), the width of the edge stripe ($N_{S}$) increases much faster than its separation from the bulk ($L_{S}$). Hence, for very smooth confining potentials the outermost chiral mode couples very weakly to the inner pair of counter-propagating modes. Over sufficiently short length scales, we may assume that the outermost mode is completely decoupled from the other two. In this case, mode renormalization of the inner pair of counter-propagating chirals could lead to upstream neutral modes ($\phi_{n}$ in Fig.~6). Note that for simplicity, we only focus on the fully spin-polarized cases of $\nu = 1$ and $\nu = 1/3$ phases in this section. The emergent neutral mode $\phi_{n}$ supports chiral flow of heat without an accompanying charge current, and hence has several unique manifestations in transport experiments. Such an upstream heat current was reported in Ref.~\cite{Yacoby2012} for the $\nu = 1$ phase. A biased neutral mode may also lead to generation of shot noise, despite the absence of a net charge current, due to the formation of quasiparticle-quasihole pairs~\cite{Park2019,Spanslatt2019,Spanslatt2020}. Such observations were reported in Refs.~\cite{Inoue2014,Sabo2017,Heiblum2019} for various QH phases (including particle-like fractions). Additionally, the presence of upstream neutral modes may lead to the generation of shot noise on the (intermediate) conductance plateaus in the transmission of a quantum point contact. Interestingly, under certain situations, the Fano factor of this noise may be quantized and equal to the bulk filling factor $\nu_{\text{bulk}}$ instead of the quasiparticle charge~\cite{Heiblum2019,Cohen2019,Jinhong2020,Biswas2021}. A complementary signature of upstream neutrals is the suppression of visibility of anyonic interference in electronic Mach-Zehnder setups~\cite{Moshe_PRL_2016}. 
This is in accordance with the observations of Ref.~\cite{Heiblum2019} for QH phases with $\nu \leq 1$. \section{Conclusions} We have employed variational analysis to study edge reconstruction at the boundary of prototypical particle-like QH phases ($\nu = 1, 1/3$ and $2/5$). We have found that, in each case, edge reconstruction leads to the formation of an additional side stripe, and that for sufficiently smooth potentials, the filling fraction of this side stripe may be different from the bulk filling factor. Such a reconstruction has clear signatures in transport. We have pointed out some of these consequences related to the two-terminal conductance and the emergence of upstream neutral modes. \\ \section*{Acknowledgments} We acknowledge illuminating discussions with Jinhong Park and Moty Heiblum. M.G. was supported by the Israel Science Foundation (ISF) and the Directorate for Defense Research and Development (DDR\&D) grant No. 3427/21 and by the US-Israel Binational Science Foundation (BSF) Grants No. 2016224 and 2020072. Y.G. was supported by CRC 183 (project C01), the Minerva Foundation, DFG Grant No. RO 2247/11-1, MI 658/10-2, the German Israeli Foundation (Grant No. I-118-303.1-2018), the National Science Foundation through award DMR-2037654 and the US-Israel Binational Science Foundation (BSF), and the Helmholtz International Fellow Award. U.K. was supported by the Raymond and Beverly Sackler Faculty of Exact Sciences at Tel Aviv University and by the Raymond and Beverly Sackler Center for Computational Molecular and Material Science.
\section{Introduction} The present paper, devoted to the scalar wave equation in a non-commutative Schwarzschild space-time, is strongly motivated by three branches of modern gravitational physics: \vskip 0.3cm \noindent (i) In their investigation of quantum amplitudes in black-hole evaporation \cite{Hawk05}, the authors of \cite{Farl04,Farl05} have considered emission of scalar radiation in a black-hole collapse problem, assuming non-spherical perturbations of the scalar field $\phi$ on the final surface $\Sigma_{F}$, and that the intrinsic three-metric describes an exactly spherically-symmetric spatial gravitational field. \vskip 0.3cm \noindent (ii) In general relativity, unexpected features of the asymptotic structure are already found to occur: massless scalar fields which have a Bondi-type expansion in powers of $r^{-1}$ near past null infinity do not have such an expansion near future null infinity; solutions which have physically reasonable Cauchy data may fail to have Bondi-type expansions near null infinity \cite{Stew79}. \vskip 0.3cm \noindent (iii) According to the models studied in \cite{Nico06}, \cite{Smai04}, \cite{Spal06}, the non-commutativity of spacetime can be encoded in the commutator of operators corresponding to spacetime coordinates, i.e. (the integer $D$ below being even) \begin{equation} [x^{\mu},x^{\nu}]={\rm i} \; \theta^{\mu \nu}, \; \mu,\nu=1,2,...,D \label{(1.1)} \end{equation} where the antisymmetric matrix $\theta^{\mu \nu}$ is taken to have a block-diagonal form $$ \theta^{\mu \nu}={\rm diag}\Bigl(\theta_{1},...,\theta_{D/2}\Bigr) $$ with \begin{equation} \theta_{i}=\theta \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right) \; \; \forall i=1,2,...,D/2, \label{(1.2)} \end{equation} the parameter $\theta$ having dimension of length squared and being constant. As shown in \cite{Smai04}, the constancy of $\theta$ is very important to obtain a consistent treatment of Lorentz invariance and unitarity.
The authors of \cite{Nico06} solve the Einstein equations with mass density of a static, spherically symmetric, smeared particle-like gravitational source as (hereafter, in agreement with our earlier work \cite{Digr06}, we use $G=c={\hbar}=1$ units) \begin{equation} \rho_{\theta}(r)={M \over (4\pi \theta)^{3\over 2}} {\rm e}^{-{r^{2}\over 4\theta}}, \label{(1.3)} \end{equation} which therefore plays the role of matter source. Their resulting spherically symmetric metric is \begin{equation} ds^{2}=-\left(1-{2m(r,\theta)\over r}\right)dt^{2} +\left(1-{2m(r,\theta)\over r}\right)^{-1}dr^{2} +r^{2}(d\Theta^{2}+\sin^{2}\Theta d\varphi^{2}), \label{(1.4)} \end{equation} where, in terms of the lower incomplete gamma function \begin{equation} \gamma \left({3\over 2},{r^{2}\over 4\theta}\right) \equiv \int_{0}^{{r^{2}\over 4\theta}}\sqrt{t} {\rm e}^{-t} \; dt, \label{(1.5)} \end{equation} we define the mass function \cite{Nico06, Digr06} \begin{equation} m(r,\theta) \equiv {2M \over \sqrt{\pi}} \gamma \left({3\over 2},{r^{2}\over 4\theta}\right). \label{(1.6)} \end{equation} Thus, if one tries to study emission of scalar radiation as in \cite{Farl04, Farl05} but in the presence of a non-vanishing $\theta$ parameter (cf \cite{Digr06} for the pure gravity case), one is naturally led to study a scalar wave equation in a spherically symmetric spacetime whose metric is affected by $\theta$, which is the goal of the present paper. Section 2 builds conformal infinity for the space-time with metric (1.4). Section 3, following \cite{Stew79}, turns the scalar wave equation into an inhomogeneous Euler--Poisson--Darboux equation. Section 4 solves such an equation and shows under which conditions there is full qualitative agreement with general relativity. Concluding remarks and open problems are presented in section 5, while the appendices describe relevant mathematical details. 
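Since $\gamma\left({3\over 2},x\right)\rightarrow \Gamma\left({3\over 2}\right)={\sqrt{\pi}\over 2}$ as $x \rightarrow \infty$, the mass function (1.6) interpolates between $m \rightarrow 0$ at the origin and $m \rightarrow M$ for $r \gg \sqrt{\theta}$, so that the metric (1.4) reduces to Schwarzschild far from the smeared source. The following numerical sketch checks this; it relies on the elementary identity $\gamma\left({3\over 2},x\right)={\sqrt{\pi}\over 2}\,{\rm erf}(\sqrt{x})-\sqrt{x}\,{\rm e}^{-x}$ (one integration by parts), stated here as an auxiliary assumption rather than a result of the paper:

```python
import math

def mass_function(r, theta, M=1.0):
    # m(r, theta) = (2M/sqrt(pi)) * gamma(3/2, r^2/(4 theta)), Eq. (1.6);
    # the lower incomplete gamma function is evaluated in closed form via erf.
    x = r * r / (4.0 * theta)
    lower_gamma = (0.5 * math.sqrt(math.pi) * math.erf(math.sqrt(x))
                   - math.sqrt(x) * math.exp(-x))
    return 2.0 * M / math.sqrt(math.pi) * lower_gamma

theta = 1e-2
# Far from the source the Schwarzschild mass M is recovered ...
assert abs(mass_function(10.0, theta) - 1.0) < 1e-12
# ... while near r = 0 the smeared matter source makes the mass vanish.
assert mass_function(1e-3, theta) < 1e-6
# Moreover m(r, theta) < M everywhere, since gamma(3/2, x) < Gamma(3/2);
# this is what keeps 1 - 2 x m positive for x < 1/(2M) later on.
assert all(mass_function(0.05 * k, theta) < 1.0 + 1e-15 for k in range(1, 200))
```

The last bound is also what underlies the monotonicity argument for the tortoise-type coordinate introduced in section 3.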
\section{Conformal infinity} Inspired by general relativity, we define a new radial coordinate $r^{*}$ in such a way that \begin{equation} dr^{*}={dr \over 1-{2m(r,\theta)\over r}}. \label{(2.1)} \end{equation} This equation is solved by \begin{equation} r^{*}=r+2 \int {m(r,\theta)\over r-2m(r,\theta)}dr , \label{(2.2)} \end{equation} and the metric (1.4) can be re-expressed in the form \begin{equation} ds^{2}=-\left(1-{2m(r,\theta)\over r}\right)du dv +r^{2}(d\Theta^{2}+\sin^{2}\Theta d\varphi^{2}), \label{(2.3)} \end{equation} where $\Theta \in [0,\pi], \varphi \in [0,2\pi]$, and we have defined the `retarded' coordinate \begin{equation} u \equiv t-r^{*} \; \; \in ]-\infty,\infty[ , \label{(2.4)} \end{equation} and the `advanced' coordinate \begin{equation} v \equiv t+r^{*} \; \; \in ]-\infty,\infty[. \label{(2.5)} \end{equation} The equations (2.2), (2.4) and (2.5) yield \begin{equation} {2\over (v-u)}={1\over r \left[1+{2\over r} \int {m(r,\theta)\over r-2m(r,\theta)}dr \right]}. \label{(2.6)} \end{equation} To lowest order, Eq. (2.6) is solved by ${1\over r} \approx {2\over (v-u)}$, and on defining \begin{equation} F(u,v,\theta) \equiv \left . \int {m(r,\theta) \over r-2m (r,\theta)}dr \right |_{r={(v-u)\over 2}} , \label{(2.7)} \end{equation} one finds, by iterated approximations, the asymptotic expansion \begin{equation} {1\over r} \sim {2\over (v-u)}+8{F(u,v,\theta)\over (v-u)^{2}} +{\rm O} \left({F^{2}(u,v,\theta)\over (v-u)^{3}}\right). \label{(2.8)} \end{equation} The limit $v \rightarrow + \infty$ with $u,\Theta,\varphi$ fixed defines future null infinity ${\cal I}^{+}$; the limit $u \rightarrow - \infty$ with $v,\Theta,\varphi$ fixed defines past null infinity ${\cal I}^{-}$, while the limit $u \rightarrow -\infty, v \rightarrow + \infty$ with $(u+v),\Theta,\varphi$ fixed defines spacelike infinity, i.e. the point $I^{0}$. The figures below show the behaviour of the denominator $y \equiv 1-{2m(r,\theta)\over r}$ in Eq.
(2.1) for various values of $\theta$. Interestingly, the occurrence of $\theta$ does not introduce new singularities with respect to general relativity. It is actually simpler to introduce the coordinates $u$ and $v$ separately, which yields the conformally rescaled, ``unphysical'' metrics (here $f \equiv r^{-1}$, and $d\Sigma^{2}$ is the metric on a unit two-sphere) \begin{equation} d{\widetilde s}^{2}= f^{2}\left[-(1-2mf)du^{2}-2du \; dr +r^{2} d\Sigma^{2}\right] =-(f^{2}-2mf^{3})du^{2}+2du \; df +d\Sigma^{2}, \label{(2.9)} \end{equation} and \begin{equation} d{\widetilde S}^{2}=-(f^{2}-2mf^{3})dv^{2}-2dv \; df +d\Sigma^{2}. \label{(2.10)} \end{equation} These metrics are manifestly regular and analytic on their respective hypersurfaces $f=0$, since their determinants are equal to $- \sin^{2}\Theta$ for all $f$, including $f=0$. The physical space-time corresponds to $f>0$ in (2.9), and we can extend the manifold to include ${\cal I}^{+}$, given when $f=0$. Similarly, in (2.10), the physical space-time corresponds to $f>0$ and can be extended to include ${\cal I}^{-}$, given when $f=0$. Only the boundary ${\cal I} \equiv {\cal I}^{+} \cup {\cal I}^{-}$ is adjoined to the space-time. In common with general relativity, we note here a difficulty that is encountered if we try to identify ${\cal I}^{-}$ with ${\cal I}^{+}$. If we do extend the region of definition of (2.9) to include negative values of $f$, and then make the replacement $f \rightarrow -f$, we see that the metric has the form (2.10) (with $u$ in place of $v$) but with a mass function $-m$ in place of $m$. Thus, the extension across ${\cal I}$ involves a reversal of the sign of the mass function, which is incompatible with Eq. (1.6) unless we advocate a discontinuity in the derivative of the curvature \cite{Penr86} across $\cal I$. It is therefore not reasonable to identify ${\cal I}^{+}$ with ${\cal I}^{-}$. 
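Returning to the expansion (2.8): treating $F(u,v,\theta)$ as approximately constant, Eq. (2.6) becomes $(v-u)/2=r+2F$, whose exact solution $r=(v-u)/2-2F$ reproduces (2.8) upon expanding in $F/(v-u)$. A toy numerical check (illustrative values only; in the text $F$ is a slowly varying function):

```python
# Toy check of the asymptotic expansion (2.8) with F held constant
# (illustrative values; F = F(u, v, theta) varies slowly in the text).
F = 1e-2
s = 10.0  # s = v - u
exact = 1.0 / (s / 2.0 - 2.0 * F)      # from (v - u)/2 = r + 2F
expansion = 2.0 / s + 8.0 * F / s**2   # first two terms of (2.8)
# the remainder is O(F^2 / s^3), with coefficient 32 in this constant-F model
assert abs(exact - expansion) < 50.0 * F**2 / s**3
assert abs(exact - expansion) > 1e-12  # the O(F^2) remainder is genuinely present
```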
To sum up, we have two disjoint boundary hypersurfaces ${\cal I}^{+}$ and ${\cal I}^{-}$, each of which is a cylinder with topology $S^{2} \times {\bf R}$. It is clear from (2.9) and (2.10) that each of ${\cal I}^{\pm}$ is a null hypersurface (the induced metric at $f=0$ being degenerate). These null hypersurfaces are generated by rays (given by $\Theta,\phi ={\rm constant}$, $f=0$) whose tangents are normals to the hypersurfaces. These rays may be taken to be the ${\bf R}$'s of the topological product $S^{2} \times {\bf R}$. \begin{figure}[!h] \centerline{\hbox{\psfig{figure=th1.eps,width=0.6\textwidth}}} \caption{Plot of the denominator $y \equiv 1-{2m(r,\theta)\over r}$ in Eq. (2.1) when $\theta=10^{-7}$.} \end{figure} \begin{figure}[!h] \centerline{\hbox{\psfig{figure=th2.eps,width=0.6\textwidth}}} \caption{Plot of the denominator $y \equiv 1-{2m(r,\theta)\over r}$ in Eq. (2.1) when $\theta=10^{-4}$.} \end{figure} \begin{figure}[!h] \centerline{\hbox{\psfig{figure=th3.eps,width=0.6\textwidth}}} \caption{Plot of the denominator $y \equiv 1-{2m(r,\theta)\over r}$ in Eq. (2.1) when $\theta=10$.} \end{figure} \begin{figure}[!h] \centerline{\hbox{\psfig{figure=th4.eps,width=0.6\textwidth}}} \caption{Plot of the denominator $y \equiv 1-{2m(r,\theta)\over r}$ in Eq. (2.1) when $\theta=10^{4}$.} \end{figure} \clearpage \section{Inhomogeneous Euler--Poisson--Darboux equation} The coordinates $(u,v)$ defined in (2.4) and (2.5) are not the most convenient for discussing the limits which define conformal infinity \cite{Stew79}. We therefore define (cf. \cite{Stew79}) a function $w_{\theta}(x)$ by requiring that $w_{\theta}(x=r^{-1})$ should be equal to $r^{*}$ in (2.2), i.e. \begin{equation} w_{\theta}(x) \equiv \int {dx \over x^{2}(2xm(x^{-1},\theta)-1)} , \label{(3.1)} \end{equation} which implies \begin{equation} g_{\theta}(x) \equiv -w_{\theta}'(x) ={1\over x^{2}(1-2xm)}. 
\label{(3.2)} \end{equation} Equation (3.1) defines a one-parameter family of monotone decreasing $C^{\infty}$ functions taking values over the whole real line. The monotone decreasing character of $w_{\theta}$ is proved by imposing that $1-2xm >0$. This is indeed satisfied for sufficiently small values of $\theta$, so that $1-2xm \approx 1-2xM$, which is positive provided $x < {1\over 2M}$. A $C^{\infty}$ inverse function therefore exists, which makes it possible to define new coordinates $a,b$ according to (cf \cite{Stew79}) \begin{equation} w_{\theta}(x=a) \equiv \left . \int {dx \over x^{2}(2xm-1)} \right |_{x=a} ={v \over 2}={t\over 2}+{r^{*}\over 2}, \label{(3.3)} \end{equation} \begin{equation} w_{\theta}(x=b) \equiv \left . \int {dx \over x^{2}(2xm-1)} \right |_{x=b} =-{u \over 2}=-{t\over 2}+{r^{*}\over 2}, \label{(3.4)} \end{equation} where the integrals (3.3) and (3.4) involve the mass function $m=m(r=x^{-1})$. On defining $f \equiv r^{-1}$ as in section 2, one finds from (2.2), (3.1), (3.3) and (3.4) that \begin{equation} w_{\theta}(f(a,b))=r^{*}(a,b)=w_{\theta}(x=a)+w_{\theta}(x=b). \label{(3.5)} \end{equation} Moreover, from (3.2)--(3.4), the metric (2.3) in the $(u,v,\Theta,\varphi)$ coordinates takes the following form in the $(a,b,\Theta,\varphi)$ coordinates: \begin{equation} ds^{2}=4(1-2mf)g_{1}(a)g_{2}(b)da \; db +f^{-2}d\Sigma^{2}, \label{(3.6)} \end{equation} having defined \begin{equation} M_{1}(a) \equiv m(a^{-1},\theta), \; g_{1}(a) \equiv a^{-2}(1-2aM_{1}(a))^{-1}=g_{\theta}(a), \label{(3.7)} \end{equation} \begin{equation} M_{2}(b) \equiv m(b^{-1},\theta), \; g_{2}(b) \equiv b^{-2}(1-2bM_{2}(b))^{-1}=g_{\theta}(b). 
\label{(3.8)} \end{equation} In the analysis of the scalar wave equation $\cstok{\ } \psi=0$, we now rescale the scalar field $\psi$ according to \begin{equation} {\widetilde \psi}=\Omega^{-1}\psi, \label{(3.9)} \end{equation} where $\Omega$ is a real positive function such that \begin{equation} \Omega=0, \; \Omega_{,k} \not = 0, \; g^{ik}\Omega_{,i} \Omega_{,k}=0 \; {\rm on} \; {\cal I}^{\pm}. \label{(3.10)} \end{equation} The `unphysical' scalar field $\widetilde \psi$ satisfies the conformally invariant wave equation in $4$ spacetime dimensions, i.e. \begin{equation} \left(\cstok{\ } +{R\over 6} \right){\widetilde \psi}=0, \label{(3.11)} \end{equation} where $\cstok{\ }$ is the D'Alembert wave operator \cite{Stew79}, and $R$ is the scalar curvature in the `unphysical' space-time with line element \begin{equation} d{\widetilde s}^{2}=\Omega^{2}ds^{2}=4 \Omega^{2}(1-2mf) g_{1}(a)g_{2}(b)da \; db +\Omega^{2}f^{-2}d\Sigma^{2}. \label{(3.12)} \end{equation} On choosing the conformal factor in the form $\Omega=(a+b)f$, we therefore obtain the metric tensor \begin{equation} g_{\mu \nu}= \begin{pmatrix} 0 \hfill & G(a,b) \hfill & 0 \hfill & 0 \hfill \\ G(a,b) \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & (a+b)^{2} \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & (a+b)^{2}\sin^{2} \Theta \end{pmatrix} \label{(3.13)} \end{equation} having introduced \begin{equation} m(a,b) \equiv {2M\over \sqrt{\pi}} \gamma \left({3\over 2},{1\over 4 \theta f^{2}(a,b)}\right), \label{(3.14)} \end{equation} \begin{equation} F(a,b) \equiv f^{2}(a,b)-2m(a,b)f^{3}(a,b), \label{(3.15)} \end{equation} \begin{equation} G(a,b) \equiv 2(a+b)^{2} g_{1}(a)g_{2}(b)F(a,b). \label{(3.16)} \end{equation} By virtue of spherical symmetry, we look for solutions of Eq. 
(3.11) as a linear combination of factorized terms as \begin{equation} {\widetilde \psi}_{\theta}(a,b,\Theta,\varphi) ={\chi_{\theta}(a,b)\over (a+b)}Y_{lm}(\Theta,\varphi), \label{(3.17)} \end{equation} where $Y_{lm}(\Theta,\varphi)$ are the spherical harmonics on $S^{2}$. Substitution of the ansatz (3.17) into Eq. (3.11) gives \begin{equation} L[\chi]=S_{\theta}(a,b)\chi, \label{(3.18)} \end{equation} where $L$ is the Euler--Poisson--Darboux operator \cite{Stew79} \begin{equation} L \equiv {\partial^{2}\over \partial a \partial b} -{l(l+1)\over (a+b)^{2}}, \label{(3.19)} \end{equation} which depends implicitly on $\theta$ through $a,b$ defined in (3.3), (3.4), while $S_{\theta}$ is the $\theta$-dependent source term \begin{eqnarray} S_{\theta}(a,b) & \equiv & l(l+1){\left({G\over 2}-1 \right) \over (a+b)^{2}} +{1\over 12}G R(\theta) \nonumber \\ &=& l(l+1)\left[g_{1}(a)g_{2}(b)F(a,b)-{1\over (a+b)^{2}} \right] \nonumber \\ &+& {1\over 6}(a+b)^{2} g_{1}(a)g_{2}(b)F(a,b)R(\theta), \label{(3.20)} \end{eqnarray} having denoted by $R(\theta)$ the scalar curvature in Eq. (B24). Inspired by \cite{Stew79}, we now write the solution of Eq. (3.18) as the sum $\chi^{0}+L^{-1}S$, where $\chi^{0}$ is the general solution of the homogeneous Euler--Poisson--Darboux equation $L[\chi]=0$, while $L^{-1}$ is an integral operator with kernel given by the Riemann--Green function (see appendix) of $L$ \cite{Cour61}: \begin{equation} \chi_{\theta}(a,b)=\chi^{0}(a,b)- \int \int_{D(a,b)} R(a,b;a',b') S_{\theta}(a',b')\chi_{\theta}(a',b')da' \; db', \label{(3.21)} \end{equation} having defined \begin{equation} D(a,b) \equiv \{ a',b': 0 \leq a \leq a' \leq b' \leq b \}. 
\label{(3.22)} \end{equation} As is described in \cite{Stew79}, $\chi^{0}(a,b)$ has the general form \begin{equation} \chi^{0}(a,b)=(a+b)^{l+1}\left[\left({\partial \over \partial a}\right)^{l} {A(a)\over (a+b)^{l+1}}+\left({\partial \over \partial b}\right)^{l} {B(b)\over (a+b)^{l+1}}\right], \label{(3.23)} \end{equation} with $A$ and $B$ arbitrary $C^{l+1}$ functions. Moreover, the Riemann--Green function of the operator $L$ defined in (3.19) is obtained from the Legendre polynomial of degree $l$ according to \cite{Cops58} \begin{equation} R(a,b;a',b')=P_{l}(z(a,b;a',b')), \label{(3.24)} \end{equation} having defined \cite{Stew79} \begin{equation} z(a,b;a',b') \equiv {(a-a')(b-b')+(a+b')(a'+b) \over (a+b)(a'+b')}. \label{(3.25)} \end{equation} \section{Qualitative analysis of the $l=0$ solution} Hereafter we consider for simplicity the case $l=0$; the comparison with Ref. \cite{Stew79} is then easier, and all main features are already displayed. Strictly, we consider an asymptotic characteristic initial-value problem where data are specified on past null infinity for $a \in [0,a_{0}]$ and on the outgoing null hypersurface $a=a_{0}={\rm constant}$. If $l=0$, it is clear from (3.23) that the characteristic data can be set to $1$: $\chi^{0}(a,b)=1$, while the Riemann--Green function in (3.24) reduces to $1$ \cite{Stew79}: \begin{equation} R_{l=0}(a,b;a',b')=P_{0}(z(a,b;a',b'))=1. \label{(4.1)} \end{equation} The inhomogeneous wave equation (3.18) with $l=0$ can be solved with the help of a contraction mapping, i.e. \cite{Stew79} \begin{equation} \chi(a,b)=\chi^{0}(a,b)+\sum_{n=1}^{\infty}\chi^{n}(a,b) =1+\sum_{n=1}^{\infty}\chi^{n}(a,b), \label{(4.2)} \end{equation} where \begin{equation} \chi^{n}(a,b)=\int \int_{D(a,b)}S_{\theta}(a',b')\chi^{n-1}(a',b') da' \; db' = {\rm O}((a+b)^{n}), \label{(4.3)} \end{equation} and the series in (4.2) is known to be uniformly convergent near spacelike infinity in general relativity \cite{Stew79}.
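A defining property of the Riemann--Green function is that it equals unity on the characteristics through $(a,b)$: from (3.25) one has $z=1$ whenever $a'=a$ or $b'=b$, and $P_{l}(1)=1$ for every $l$, so $R=1$ there; for $l=0$ one has $P_{0}\equiv 1$ and (4.1) holds everywhere. A brief numerical confirmation (Legendre polynomials generated by the standard three-term recurrence; the sample points are illustrative):

```python
def legendre(l, x):
    # Standard recurrence (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def z(a, b, ap, bp):
    # Eq. (3.25)
    return ((a - ap) * (b - bp) + (a + bp) * (ap + b)) / ((a + b) * (ap + bp))

a, b = 0.2, 0.7
for l in range(6):
    # R = P_l(z) equals 1 on the characteristics a' = a and b' = b
    assert abs(legendre(l, z(a, b, a, 0.5)) - 1.0) < 1e-12
    assert abs(legendre(l, z(a, b, 0.1, b)) - 1.0) < 1e-12
# for l = 0 the Riemann--Green function is identically 1, Eq. (4.1)
assert legendre(0, z(a, b, 0.1, 0.5)) == 1.0
```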
Moreover, in general relativity the partial derivative $\chi_{,a}$ remains bounded as $a \rightarrow 0$ with $b$ fixed, as does the partial derivative $\chi_{,b}$ as $b \rightarrow 0$ with $a$ fixed. The second derivative $\chi_{,aa}$, however, diverges near future null infinity, which implies that the presence of a conformal singularity at spacelike infinity affects the behaviour of scalar fields on future null infinity, reducing their differentiability class by a considerable amount. Such a property is proved by exploiting the integral representation of $\chi_{,aa}$, i.e. \begin{equation} \chi_{,aa}=- \int_{a}^{b}\Bigl[S_{,a}(a,b')\chi(a,b') +S(a,b')\chi_{,a}(a,b')\Bigr]db'. \label{(4.4)} \end{equation} By insertion of (4.2) into (4.4), and bearing in mind (4.3), one finds that the possible singularities of $\chi_{,aa}$ are ruled by the integrals \cite{Stew79} \begin{equation} I_{0}(a,b) \equiv \int_{0}^{b}S_{,a}(a,b')db', \label{(4.5)} \end{equation} \begin{equation} I_{1}(a,b) \equiv \int_{0}^{b}(a+b')S_{,a}(a,b')db', \label{(4.6)} \end{equation} where $I_{0}$ remains finite as $a \rightarrow 0$, whereas $I_{1}$ displays a logarithmic singularity as $a \rightarrow 0$. If $l=0$ in Sec.
3 we find, for finite values of $\theta$, the counterpart of (4.5) given by the integral \begin{equation} {\widetilde I}_{0}(a,b) \equiv \int_{0}^{b}S_{\theta,a}(a,b')db', \label{(4.7)} \end{equation} where, from the asymptotic formulae as $a \rightarrow 0$ and $b$ is fixed, we find, for all finite values of $\theta$, \begin{equation} f(a,b) \sim {ab \over (a+b)}, \label{(4.8)} \end{equation} \begin{equation} m(a,b) \sim {2M \over \sqrt{\pi}} \int_{0}^{{(a+b)^{2}\over 4 \theta a^{2}b^{2}}} \sqrt{t} {\rm e}^{-t} \; dt \sim M, \label{(4.9)} \end{equation} \begin{equation} F(a,b) \equiv (f^{2}-2m f^{3})(a,b) \sim {a^{2}b^{2} \over (a+b)^{2}}(1+{\rm O}(a)), \label{(4.10)} \end{equation} \begin{equation} S_{\theta} \sim {2Mab \over (a+b)^{3}}, \label{(4.11)} \end{equation} \begin{equation} S_{\theta,a} \sim 2Mb \left[-{2\over (a+b)^{3}}+{3b \over (a+b)^{4}} \right], \label{(4.12)} \end{equation} and hence \begin{equation} {\widetilde I}_{0} \sim -{2Mb^{2}\over (a+b)^{3}} \; {\rm as} \; a \rightarrow 0, \label{(4.13)} \end{equation} in agreement with the analysis in \cite{Stew79} for general relativity. These approximations should be abandoned only if $\theta$ is so large that (cf. (4.9)) \begin{equation} \lim_{a \to 0} \theta a^{2}={\rm constant}. \label{(4.14)} \end{equation} Furthermore, the counterpart of (4.6) is given by the integral \begin{equation} {\widetilde I}_{1}(a,b) \equiv \int_{0}^{b} (a+b')S_{\theta,a}(a,b')db' \sim -2M \log(a) \; {\rm as} \; a \rightarrow 0, \label{(4.15)} \end{equation} again in full agreement with \cite{Stew79}. Note that a more accurate asymptotic expansion of the source term would be \begin{equation} S_{\theta} \sim {2Mab \over (a+b)^{3}}(1-2b M_{2}(b))^{-1}, \label{(4.16)} \end{equation} but this does not modify the leading terms as $a \rightarrow 0$ in (4.13) and (4.15). The figures below show the behaviour of $\chi, \chi_{,a}, \chi_{,b}$ and $\chi_{,aa}$. 
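Before turning to the figures, the logarithmic divergence (4.15) can be made quantitative. Integrating the asymptotic integrand $(a+b')S_{\theta,a}(a,b')$ from (4.12) in closed form (an elementary computation, stated here as a derived assumption) gives ${\widetilde I}_{1}(a,b)=2M\left[\log{(a+b)\over a}+{4a\over (a+b)}-{3a^{2}\over 2(a+b)^{2}}-{5\over 2}\right]$, so that lowering $a$ by one decade increases ${\widetilde I}_{1}$ by $2M\log 10$, consistent with ${\widetilde I}_{1} \sim -2M\log(a)$:

```python
import math

def I1_asymptotic(a, b, M=1.0):
    # Closed-form primitive of (a + b') * S_{theta,a}(a, b'), with S_{theta,a}
    # taken from the asymptotic formula (4.12); an elementary integration.
    return 2.0 * M * (math.log((a + b) / a) + 4.0 * a / (a + b)
                      - 1.5 * a * a / (a + b) ** 2 - 2.5)

b, M = 0.3, 1.0
# I1 ~ -2M log(a): each decade of a adds 2M ln(10)
step = I1_asymptotic(1e-6, b, M) - I1_asymptotic(1e-5, b, M)
assert abs(step - 2.0 * M * math.log(10.0)) < 1e-3
# and I1 indeed diverges as a -> 0, matching (4.15)
assert I1_asymptotic(1e-12, b, M) > I1_asymptotic(1e-6, b, M)
```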
\begin{figure}[!h] \centerline{\hbox{\psfig{figure=chi.eps,width=0.6\textwidth}}} \caption{Plot of the solution $\chi(a,b)$ in (4.2) and (4.3) with $n=1$ at small $a$, when $\theta$ takes finite values and $b=0.3$.} \end{figure} \begin{figure}[!h] \centerline{\hbox{\psfig{figure=chia.eps,width=0.6\textwidth}}} \caption{Plot of the partial derivative $\chi_{,a}$ when $\theta=10^{-3}$ and $b=0.3$. Such a derivative is clearly bounded, according to the theoretical expectations \cite{Stew79}. The identical behaviour is displayed by $\chi_{,b}$ when $\theta=10^{-3}$ and $a=0.3$.} \end{figure} \begin{figure}[!h] \centerline{\hbox{\psfig{figure=chiaa.eps,width=0.6\textwidth}}} \caption{Plot of the partial derivative $\chi_{,aa}$ when $\theta=10^{-3}$ and $b=0.3$. The logarithmic singularity as $a \rightarrow 0^{+}$ is clearly displayed, and it occurs at all finite values of $\theta$.} \end{figure} \clearpage \section{Concluding remarks} Ever since Penrose \cite{Penr64} developed a geometrical picture of an isolated system in general relativity as a space-time admitting future and past null infinity (with the associated fall-off of the metric along null geodesics going off to infinity), there has always been great interest in this coordinate-free way of bringing infinity to a `finite distance' and discussing the asymptotic structure of space-time. Moreover, the conceptual revolution brought about by non-commutative geometry \cite{Land97, Conn06, Lizz07} has led to a reassessment of the very concept of space-time manifold \cite{Asch06}, with `corrections' to it evaluated, for example, along the lines of the work in Refs. \cite{Nico06, Smai04, Spal06}. Within this broad framework, the contributions of our paper are as follows. \vskip 0.3cm \noindent (i) Construction of conformal infinity for the spherically symmetric space-time which incorporates noncommutative-geometry corrections to Schwarzschild space-time.
\vskip 0.3cm \noindent (ii) Evaluation of the source term (3.20) in the inhomogeneous Euler--Poisson--Darboux equation which describes the scalar wave equation in the unphysical space-time obtained after conformal rescaling of the original metric (1.4). \vskip 0.3cm \noindent (iii) Qualitative analysis of the asymptotic characteristic initial-value problem in the $l=0$ case, finding again the logarithmic singularity as shown in Eq. (4.15). In the original, `physical' space-time with metric (1.4), such a singularity implies that the large-$r$ behaviour of the scalar field is described by the asymptotic expansion \cite{Stew79} \begin{equation} \psi \sim {c_{1}\over r}+{c_{2}\over r^{2}} +{d_{1}\log(r)\over r^{3}}+{\rm O}(r^{-3}), \label{(5.1)} \end{equation} and therefore the field falls off at large $r$ rather more slowly than in flat space-time \cite{Stew79}. \vskip 0.3cm \noindent (iv) Numerical support for all results in sections 3 and 4, as shown by figures 5--7 at the end of section 4. Our results thus represent encouraging progress towards a rigorous theory of wavelike phenomena in noncommutative geometry, along the lines of the conformal-infinity program of Penrose for general relativity \cite{Penr86}. Hopefully, the physical applications to isolated gravitating systems in a noncommutative framework, and possibly to black-hole evaporation, will also become clear in the years to come. \acknowledgments The authors are grateful to the INFN for financial support. The work of G. Miele has been partially supported by PRIN {\it FISICA ASTROPARTICELLARE}.
\section{Introduction} Generically, 5d gauge theories do not exist as microscopic theories since they are non-renormalizable and thus require a UV completion beyond the scale set by the inverse Yang-Mills coupling. However, under certain circumstances, it is possible to remove the UV cutoff while having a theory well defined everywhere on its moduli space \cite{Seiberg:1996bd, Morrison:1996xf,Intriligator:1997pq}. The crucial point is that minimally supersymmetric theories in 5d contain 8 supercharges and therefore a non-abelian $SU(2)_R$ R-symmetry. The effective action on the Coulomb branch then follows from a pre-potential, which in the 5d case is severely restricted by gauge-invariance and anomaly considerations.\footnote{In 5d, upon integrating out massive fermions, a Chern-Simons term is produced \cite{Witten:1996qb}. This is very similar to the 3d parity anomaly.} Inspection of the exact effective gauge coupling shows that, upon appropriately choosing the gauge group and matter content, the bare Yang-Mills coupling can be removed. The resulting theory is expected to be at an isolated fixed point. A particularly interesting theory is that of a $USp(2\,N)$ gauge group with an antisymmetric hyper-multiplet and $N_f$ fundamental hyper-multiplets. According to the analysis in \cite{Seiberg:1996bd, Morrison:1996xf,Intriligator:1997pq} this theory is at a fixed point as long as $N_f<8$. Moreover it can be naturally embedded into string theory as the world-volume theory of $N$ D4 branes probing an $O8^-$ plane with $N_f$ D8 branes on top of it \cite{Seiberg:1996bd}. From the string theory perspective, the inverse bare YM coupling corresponds to the value of the dilaton at the orientifold plane. The fixed point theory corresponds to the case where the dilaton is tuned to diverge on top of the O8/D8.
The $SO(2\,N_f)$ global flavor symmetry, corresponding to the D8-brane gauge symmetry, is then enhanced to $E_{N_f+1}$ via massless D0-brane states (dual to instanton particles in the 5d gauge theory) localized at the position of the orientifold \cite{Polchinski:1995df, Matalliotakis:1997qe, Bergman:1997py}. This has been recently demonstrated in \cite{Kim:2012gu} from a purely field theoretical perspective. The near-horizon limit of this brane construction gives a warped $AdS_6\times S^4$ background in massive Type IIA supergravity \cite{Brandhuber:1999np}, reinforcing the claim that the 5d field theory under consideration is indeed at a fixed point. Starting with this basic theory, three infinite families of daughter theories were constructed in \cite{Bergman:2012kr} by replacing the flat $\mathbb{R}^4$ transverse to the D4's inside the $O8/D8$ by an orbifold $\mathbb{C}^2/\mathbb{Z}_n$. This produces quiver gauge theories involving products of $USp(2\,N)$ and $SU(2\,N)$ gauge groups, with dual massive Type IIA supergravity backgrounds given by warped $AdS_6\times S^4/\mathbb{Z}_n$. The $S^5$ free energy of the quiver theories was recently computed using localization in \cite{Jafferis:2012iv}, and shown to agree precisely with the entanglement entropy for an $S^4$ in supergravity, thus providing further support for the existence of the quiver fixed points and for the $AdS_6$ duals. Note that supersymmetric $AdS_6$ solutions are remarkably hard to find \cite{Passias:2012vp}, thus rendering this series of examples quite noteworthy. Interestingly, upon allowing for more exotic ansatze one can find other $AdS_6$ solutions \cite{AdS6}. We expect that the quiver theories also exhibit an enhanced $E_{N_f+1}$ global symmetry on the Higgs branch. Note that, on general grounds (see \textit{e.g.} \cite{Tong:2005un}), the Higgs branch of these theories coincides with the moduli space of $E_{N_f+1}$ instantons on $\mathbb{C}^2/\mathbb{Z}_n$.
This enhanced $E_{N_f+1}$ symmetry is not visible in the gravity dual. In fact, the latter becomes singular at the location of the O8/D8 as both the dilaton and the curvature diverge. This is not surprising since supergravity certainly fails to capture the D0-branes which become massless and provide the necessary extra states for the symmetry enhancement. Nevertheless, the full Higgs branch of the theory contains operators which are both flavor- and instanton-blind, and thus are insensitive to this symmetry enhancement. In this paper we concentrate on such operators, which can then be thought of as (partial) probes of the Higgs branch. In the gravity dual they correspond to dual giant gravitons sitting on top of the O8/D8. As we will see, although the curvature and dilaton diverge at that point, the world-volume theory on the dual giants is perfectly well behaved, and in fact matches the expected field theory results upon geometric quantization of their phase space as in \cite{Mandal:2006tk,Martelli:2006vh} (see also \cite{Kinney:2005ej, minwalla}). The plan for the rest of the paper is as follows. In section \ref{review} we provide a lightning review of the quiver theories under consideration and their gravity duals. In section \ref{geodesics} we study a family of massless geodesics in the geometry. These massless geodesics are followed by the dual giant gravitons, which we introduce in section \ref{duals}. We then perform the geometric quantization of their phase space in section \ref{symplecticquantization} and find that it is in one-to-one correspondence with that of a $\mathbb{C}^2/\mathbb{Z}_n$, which can actually be thought of as the pre-near-horizon ALE space. This result is matched with the field theory expectations in section \ref{fieldtheory}. Finally, we end in section \ref{conclusions} with some comments and future prospects.
\section{5d quiver theories and their $AdS_6$ duals} \label{review} Following \cite{Bergman:2012kr}, the class of 5d theories of interest can be engineered by considering in type $I'$ string theory $N$ D4-branes probing an O8-plane with $N_f$ coincident D8-branes wrapping an ALE space as follows (the boxed coordinates denote the ALE directions), \begin{equation} \begin{array}{l c | c c c c c c c c c} & 0 & 1 & 2 & 3 & 4 &\boxed{5}& \boxed{6} &\boxed{7} &\boxed{8} & 9 \\ \hline D8/O8^- & \times & \times & \times & \times & \times & \times & \times & \times & \times &\\ D4 & \times & \times & \times & \times & \times & & & & &\\ \end{array} \,. \label{syst} \end{equation} We can construct the corresponding theories by starting with Type IIA string theory on $\mathbb{C}^2/\mathbb{Z}_n$ and then performing the orientifold projection $\Omega\,I_9$. Prior to the orientifold we find an $\mathcal{N}=(1,\,1)$ 6d SUGRA multiplet together with $(n-1)$ 6d vector multiplets coming from the $(n-1)$ twisted sectors of the orbifold. Upon orientifolding this theory, since the orientifold involves an inversion, the resulting theory lives in 5d. Furthermore, due to the combined action of the inversion and the $\Omega$, the $i$-th twisted sector is identified with the $(n-i)$-th one, so that out of the original $n-1$, only half of them survive the orientifold projection, each giving rise to a 5d vector multiplet and a 5d hyper-multiplet. Obviously, for the case of an even orbifold the middle twisted sector is left unpaired, and hence it must be treated with special care. It turns out that there are two ways of implementing the orientifold projection on it \cite{Polchinski:1996ry}: in one, which goes under the name of no vector structure (NVS), one keeps a 5d hyper-multiplet; while in the other, which goes under the name of vector structure (VS), one keeps the vector multiplet. 
In addition, in the NVS case there is trapped $B_2$ flux on the 2-cycle corresponding to the middle twisted sector. The corresponding open string sectors must also be adjusted accordingly. The world-volume theories on the D4-branes depend crucially on the type of orbifold. Let us set $N_f=0$. For odd orbifolds $\mathbb{C}^2/\mathbb{Z}_{2\,k+1}$ we find a $USp(2\,N)\times SU(2\,N)^k$ gauge theory with bi-fundamentals and an antisymmetric hyper-multiplet for the last $SU$ group as shown in fig.~\ref{Z2kplus1}. Note that this theory has a $[U(1)^k]_B\times [U(1)^{k+1}]_I\times U(1)_M$ global non-R symmetry, where the subscripts $B$, $I$ and $M$ denote respectively baryonic, instantonic and mesonic symmetries.\footnote{In 5d gauge theories there is a topological current for each gauge group constructed out of its field strength as $j_I=\star(F\wedge F)$. Instantons, which in 5d are particle-like excitations, are electrically charged under these symmetries.} For even orbifolds $\mathbb{C}^2/\mathbb{Z}_{2\,k}$ without vector structure the gauge group is $SU(2\,N)^k$ and the matter content includes $k-1$ bi-fundamentals and two antisymmetric hyper-multiplets, as shown in fig.~\ref{Z2knovectorstructure}. The global symmetry group is in this case $[U(1)^k]_B\times [U(1)^k]_I\times U(1)_M$. For even orbifolds $\mathbb{C}^2/\mathbb{Z}_{2\,k}$ with vector structure we have a $USp(2\,N)\times SU(2\,N)^{k-1}\times USp(2\,N)$ gauge theory with bi-fundamental matter, fig.~\ref{Z2kvectorstructure2}. In this case, the global symmetry group is $[U(1)^{k-1}]_B\times [U(1)^{k+1}]_I\times U(1)_M$. \begin{figure}[h!] \centering \includegraphics[scale=.7]{Z2kplus1} \caption{Quiver diagram for the $\mathbb{Z}_{2\,k+1}$ case.} \label{Z2kplus1} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=.7]{Z2knovectorstructure} \caption{Quiver diagram for the $\mathbb{Z}_{2\,k}$ no vector structure case.} \label{Z2knovectorstructure} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[scale=.7]{Z2kvectorstructure2} \caption{Quiver diagram for the $\mathbb{Z}_{2\,k}$ vector structure case.} \label{Z2kvectorstructure2} \end{figure} The dual massive Type IIA supergravity backgrounds are warped $AdS_6\times S^4/\mathbb{Z}_n$ with a metric and dilaton given by \begin{equation} ds^2=\hat{\Omega}^2\,\Big\{ds^2_{AdS_6}+\frac{4}{9}\,L^2\big(d\alpha^2+\cos^2\alpha\,d\tilde{\Omega}_3^2\big) \Big\} \; , \;\; e^{\Phi}=\frac{3}{2\,L}\,\Big(\frac{3}{2}\,m\,\sin\alpha \Big)^{-\frac{5}{6}}\,, \end{equation} where \begin{equation} \hat{\Omega}=\Big(\frac{3}{2}\,m\,\sin\alpha \Big)^{-\frac{1}{6}}\; , \;\; L^4=\frac{3^{8/3}\,\pi\,n\,N}{2^{2/3}\,m^{1/3}} \;, \;\; m=\frac{8-N_f}{2\,\pi} \,, \end{equation} and $d\tilde{\Omega}_3^2$ stands for the metric of the lens space $S^3/\mathbb{Z}_n$, \begin{equation} d\tilde{\Omega}_3^2=\frac{1}{4}\,(d\psi-\cos\theta\,d\phi)^2+\frac{1}{4}\,(d\theta^2+\sin^2\theta\,d\phi^2) \,, \end{equation} where $\psi\,\in\,[0,\,\frac{4\,\pi}{n}]$. The background also includes an RR 4-form and 0-form, \begin{equation} F_0=m \;, \;\; \tilde{F}_4=\frac{10}{81}\,\Big(\frac{2}{3}\Big)^{\frac{2}{3}}\, m^{\frac{1}{3}}\,L^4\,\sin^{\frac{1}{3}}\alpha\,\cos^3\alpha\,d\alpha\wedge d\psi\wedge \omega_2 \,, \end{equation} where $\omega_2=\sin\theta\,d\theta\wedge d\phi$. Note that $\alpha\,\in\,[0,\,\frac{\pi}{2}]$, so the compact space is really a hemisphere with a boundary at $\alpha=0$. We can interpret this as the result of the orientifold action which takes $\alpha\rightarrow -\alpha$. Due to the $\alpha$-dependence of the warp factor, the background only exhibits the symmetry of the lens space, which is generically $SU(2)\times U(1)$. These symmetries correspond respectively to the $SU(2)_R$ and $U(1)_M$ in the field theory. The background is singular at $\alpha = 0$, where both the curvature and the dilaton diverge. 
This makes some of the properties of this solution, like the on-shell Euclidean action, ill-defined at the supergravity level. This presumably requires a stringy resolution. However, many properties remain well-defined, and are indeed consistent with the dual 5d gauge theories \cite{Bergman:2012kr}. The dual giant gravitons that we will analyze below are also completely well-defined in this background. \subsection{Global coordinates} To analyze the dual giant gravitons it is convenient to work in global coordinates for $AdS_6$, dual to radial quantization of the 5d CFT's. The $AdS$ metric is then given by \begin{equation} ds_{AdS_6}^2=-(1+\frac{r^2}{L^2})\,dt^2+\frac{dr^2}{(1+\frac{r^2}{L^2})}+r^2\,d\Omega_4^2 \,. \end{equation} Dualizing the 4-form, we get \begin{equation} *\tilde{F}_4 = \tilde{F}_6=\frac{10}{3}\,r^4\,dt\wedge dr\wedge \omega_4 \,, \end{equation} where $\omega_4$ is the volume 4-form of the $S^4$ in the global $AdS_6$. Since there is no $H_3$ flux, and the possible $B_2$ flux can only be along internal directions, we can integrate the 6-form to get the 5-form potential: \begin{equation} C_5=-\frac{2}{3}\,r^5\,dt\wedge \omega_4 \,. \end{equation} \section{A family of massless BPS geodesics}\label{geodesics} There are two circles in the internal space $S^4/\mathbb{Z}_n$ on which we could naturally imagine particles orbiting, namely those parametrized by $\psi$ and $\phi$. Let us then consider a massless particle at fixed $\alpha,\,\theta$ moving along those coordinates. Note that, since we are interested in massless particles, we need to use a Polyakov-like action obtained by introducing a world-line metric, so that the zero mass limit is well-defined. More explicitly, denoting the world-line time by $\tau$, we consider $\{t(\tau),\,\psi(\tau),\,\phi(\tau)\}$. 
Upon gauge-fixing the world-line metric to one, the action reads \begin{equation} S=-\int \,d\tau\, \hat{\Omega}^2\,\Big[ (1+\frac{r^2}{L^2})\,\dot{t}^2-\frac{4\,L^2}{9\,n^2}\,\cos^2\alpha\,\Big( (\dot{\psi}+\frac{n}{2}\,\cos\theta\,\dot{\phi})^2+\frac{n^2}{4}\,\sin^2\theta\,\dot{\phi}^2\Big) \Big] \,, \end{equation} where the dot indicates a derivative with respect to the world-line coordinate $\tau$. We have rescaled $\psi$ so that it takes values in $[0,\,2\,\pi]$. The world-line hamiltonian is \begin{equation} H_{WL}=\frac{P_t^2}{4\,\hat{\Omega}^2\,(1+\frac{r^2}{L^2})}-\frac{9\,(4\,P_{\phi}^2+n^2\,P_{\psi}^2-4\,n\,P_{\phi}\,P_{\psi}\,\cos\theta)}{16\,L^2\,\hat{\Omega}^2\,\sin^2\theta\,\cos^2\alpha} \,. \end{equation} The constraint imposed by the world-line metric sets this to zero, which gives \begin{equation} L\,\mathcal{H}=\frac{3}{2}\,\sqrt{1+\frac{r^2}{L^2}}\,\sqrt{4\,P_{\phi}^2+n^2\,P_{\psi}^2-4\,n\,P_{\phi}\,P_{\psi}\,\cos\theta}\,\frac{1}{\cos\alpha\,\sin\theta}\,, \end{equation} where $\mathcal{H} = P_t$ is the energy of the particle. Clearly, the energy is minimized at $\alpha=0$. For $\theta$ there are two possible solutions: \begin{equation} \begin{array}{l} {\rm a)}\,\cos\theta= \frac{n}{2}\,\frac{P_{\psi}}{P_{\phi}}\,\,\leadsto\,\, L\,\mathcal{H}=3\,P_{\phi}\,\sqrt{g^{AdS_6}_{tt}} \\ \\ {\rm b})\,\cos\theta=\frac{2}{n}\,\frac{P_{\phi}}{P_{\psi}} \,\,\leadsto\,\, L\,\mathcal{H}=\frac{3\,n}{2}\,P_{\psi}\,\sqrt{g^{AdS_6}_{tt}} \,. \end{array} \end{equation} Since $|\cos\theta|\leq 1$, it is clear that if $\frac{P_{\phi}}{P_{\psi}}\,>\,\frac{n}{2}$ the appropriate solution will be a), while if $\frac{P_{\phi}}{P_{\psi}}\,<\,\frac{n}{2}$ the appropriate solution will be b). \section{Dual giant gravitons}\label{duals} Now consider a D4-brane wrapping $\{t,\,\Omega_4\}$, and assume that $\psi=\psi(t)$, $\phi=-\phi(t)$. 
The induced metric is given by (we again rescale $\psi$ so that it takes values in $[0,\,2\,\pi]$) \begin{equation} ds^2=\hat{\Omega}^2\,\Big\{-\Big((1+\frac{r^2}{L^2})-\frac{4\,L^2}{9\,n^2}\,\cos^2\alpha\,\Big[ (\dot{\psi}+\frac{n}{2}\,\cos\theta\,\dot{\phi})^2+\frac{n^2}{4}\,\sin^2\theta\,\dot{\phi}^2\Big]\Big)\,dt^2+r^2\,d\Omega_4^2 \Big\} \,. \end{equation} The D4-brane action is then given by \begin{eqnarray} \label{D4_action} S&=&-\mu_4\,V_4\,\int \frac{2}{3}\,L\,r^4\,\sqrt{1+\frac{r^2}{L^2}}\sqrt{1-\frac{4\,L^2}{9\,n^2\,(1+\frac{r^2}{L^2})}\,\cos^2\alpha\,\Big[ (\dot{\psi}+\frac{n}{2}\,\cos\theta\,\dot{\phi})^2+\frac{n^2}{4}\,\sin^2\theta\,\dot{\phi}^2\Big]}\nonumber \\ && -\mu_4\,V_4\,\int\,\frac{2}{3}\,r^5 \,. \end{eqnarray} The equation of motion for $\alpha$ is again solved by $\alpha=0$. Although this is a singular locus in the geometry, where both the curvature and the dilaton diverge, the behavior of BPS geodesics there is well-defined. Legendre-transforming to the hamiltonian $\mathcal{H}=\mathcal{H}(P_{\psi},\,P_{\phi},\,\theta,\,r)$, we get \begin{equation} \mathcal{H}=\frac{3}{L}\,\sqrt{1+\frac{r^2}{L^2}}\,\sqrt{\frac{1}{\sin^2\theta}\,\Big( P_{\phi}^2+\frac{n^2}{4}\,P_{\psi}^2-n\,P_{\phi}\,P_{\psi}\,\cos\theta\Big)+\frac{4\,L^4\,\mu_4^2\,V_4^2}{81}\,r^8}-\frac{2}{3}\,\mu_4\,V_4\,r^5 \,. \end{equation} We again find two solutions for $\theta$ depending on the value of $P_\phi/P_\psi$: \begin{equation} \label{choices} {\rm a})\,\cos\theta = \frac{n}{2}\,\frac{P_{\psi}}{P_{\phi}} \;\; \mbox{if} \;\; \frac{P_\phi}{P_\psi} > \frac{n}{2} \qquad {\rm b})\,\cos\theta=\frac{2}{n}\,\frac{P_{\phi}}{P_{\psi}} \;\; \mbox{if} \;\; \frac{P_\phi}{P_\psi} < \frac{n}{2} \,. 
\end{equation} Plugging these solutions back into $\mathcal{H}$ we find a function of $r$, whose minima lie either at $r=0$ for both solutions, or at \begin{equation} {\rm a})\, r^3=\frac{9}{2\, L^3\,\mu_4\,V_4}\,P_{\phi}\qquad {\rm b})\,r^3=\frac{9\,n}{4\, L^3\,\mu_4\,V_4}\,P_{\psi} \end{equation} respectively. Finally, the on-shell Hamiltonian at these points, for both the $r=0$ and the corresponding $r\ne 0$ solution, is \begin{equation} \label{dual_giant_energy} {\rm a})\, L\,\mathcal{H}=3\,P_{\phi}\qquad {\rm b})\, L\,\mathcal{H}=\frac{3}{2}\,n\,P_{\psi} \,. \end{equation} The $r=0$ solutions correspond to a collapsed brane, which looks like a point-like object. Consequently we recover the results from the previous section. The $r\neq 0$ solutions are the expanded ``dual giant graviton" branes. They are degenerate both in energy and charges with the point-like solutions, and hence they correspond to the same state in the dual field theory (as we will see below, this is a mesonic operator with no insertion of vector multiplet scalars). As usual \cite{Lin:2004nb} we expect the point-like and expanded configurations to have different regimes of validity in terms of their back-reaction. For a given choice of charges only one type of configuration will lead to a non-singular background. We would like to stress again that the branes live at $\alpha=0$, which is a singular point in the background. Nevertheless their world-volume theory (\ref{D4_action}) is perfectly well-defined. Finally let us note that we could go back and consider the most generic configuration where we assume $r=r(t)$, $\alpha=\alpha(t)$ and $\theta=\theta(t)$. However one can see that the minimal energy configuration is attained when the corresponding momenta and velocities vanish, thereby recovering our original ansatz. 
\section{Symplectic quantization} \label{symplecticquantization} In the previous section we found a dynamical system with a phase space $X$ parametrized by a set of coordinates $Q^A=\{r,\,\alpha,\,\psi,\,\theta,\,\phi\}$ and canonically conjugate momenta $P_A=\{P_r,\,P_{\alpha},\,P_{\psi},\,P_{\theta},\,P_{\phi}\}$. On general grounds, a classical system is specified once we give the symplectic space $(X,\,\omega)$, made out of the phase space $X$ and a symplectic structure $\omega$. The quantization of such a system amounts to assigning to $X$ a Hilbert space $\mathscr{H}(X,\,\omega)$, where the quantum wave-functions live. Following the $AdS_5/CFT_4$ example \cite{Mandal:2006tk, Martelli:2006vh} (see also \cite{Kinney:2005ej, minwalla}), by quantizing the phase space of the giant gravitons we should recover the field theory space of dual operators. The canonical Poisson brackets are \begin{equation} \{Q^A,\,Q^B\}_{PB}=0\qquad \{P_A,\,P_B\}_{PB}=0\qquad \{Q^A,\,P_B\}_{PB}=\delta^A_{B} \,. \end{equation} Let us denote the constraints of the dynamical system for the two types of solutions by $\{f^{(a)}_A,\,f^{(b)}_A\}$. These are given by $f_r^{(a,b)} = P_r $, $ f_\alpha^{(a,b)} = P_\alpha$, $f_\theta^{(a,b)} = P_\theta$, and \begin{equation} \begin{array}{ll} f_\psi^{(a)} = P_{\psi}-\frac{2}{n}\,\frac{2\,L^3\,V_4\,\mu_4}{9}\,r^3\,\cos\theta & f_\phi^{(a)} = P_{\phi}-\frac{2\,L^3\,V_4\,\mu_4}{9}\,r^3 \\[5pt] f_\psi^{(b)} = P_{\psi}-\frac{4\,L^3\,\mu_4\,V_4}{9\,n}\,r^3 & f_\phi^{(b)} = P_{\phi}-\frac{n}{2}\,\frac{4\,L^3\,\mu_4\,V_4}{9\,n}\,r^3\,\cos\theta \,. \end{array} \end{equation} The equations of motion impose the constraints $f_A^{(a,b)}=0$ on the phase space. Define the matrices $M^{(a,b)}_{AB}\equiv \{f^{(a,b)}_A,f^{(a,b)}_B\}$. Since the constraints involving $\alpha$ are trivial, we can eliminate the corresponding row and column from $M^{(a,b)}$, and reduce $X$ to an eight-dimensional space. 
The symplectic structure on the reduced phase space is obtained by computing the Dirac bracket, which in this case is \begin{equation} \{Q^A,\,Q^B\}_{DB}=(M^{-1})^{AB} \,. \end{equation} The symplectic structure for the two solutions is then given by \begin{equation} \begin{array}{l} \omega_{{a}}=\frac{4\,L^3\,\mu_4\,V_4}{3\,n}\,r^2\,\cos\theta\,dr\wedge d\psi+\frac{2}{3}\,L^3\,\mu_4\,V_4\,r^2\,dr\wedge d\phi+\frac{4\,L^3\,\mu_4\,V_4}{9\,n}\,r^3\,\sin\theta\,d\psi\wedge d\theta\\ \\ \omega_{{b}}=\frac{4\,L^3\,\mu_4\,V_4}{3\,n}\,r^2\,dr\wedge d\psi+\frac{2}{3}\,L^3\,\mu_4\,V_4\,r^2\,\cos\theta\,dr\wedge d\phi-\frac{2}{9}\,L^3\,\mu_4\,V_4\,r^3\,\sin\theta\,d\theta\wedge d\phi \,. \end{array} \end{equation} Integrating, we get the one-forms \begin{equation} \begin{array}{l} \nu_{{a}}=\frac{2\,L^3\,\mu_4\,V_4}{9}\,r^3\,(d\phi+\frac{2}{n}\,\cos\theta\,d\psi) \qquad \nu_{{b}}= \frac{2\,L^3\,\mu_4\,V_4}{9}\,r^3\,(\frac{2}{n}\,d\psi+\cos\theta\,d\phi) \,. \end{array} \end{equation} Recall that we have rescaled the $\psi$ coordinate in the original metric so as to have period $2\pi$, while at the same time the giant moves along the $-\phi$ direction. Let us go back to the original coordinates. In addition, let us introduce $\rho^2 \equiv (4/9)\mu_4 V_4 L^3 r^3$, so that \begin{equation} \label{symplectic_one_forms} \hat{\nu}_{{a}}=\frac{\rho^2}{2}\,(d\phi-\cos\theta\,d\psi) \qquad \hat{\nu}_{{b}}=\frac{\rho^2}{2}\,(d\psi-\cos\theta\,d\phi) \,. \end{equation} Having determined the symplectic form we now have a symplectic manifold $(X,\,\omega)$. We would now like to quantize this system. This amounts to associating to this classical phase space the Hilbert space $\mathscr{H}(X,\,\omega)$ of wave-functions for the quantized system. One would naturally be tempted to simply define $\mathscr{H}(X,\,\omega)$ as the space of functions on $(X,\,\omega)$. However, in that case wave-functions would generically depend on all coordinates on $(X,\,\omega)$, that is, on both momenta and position. 
As reviewed in \cite{Martelli:2006vh}, the correct quantization prescription is to identify $\mathscr{H}(X,\,\omega)$ with the space of holomorphic functions, in the complex structure defined by $\omega$, on $(X,\,\omega)$. In this way, wave-functions naturally depend only on half of the coordinates of the phase space. Thus, the upshot is that the Hilbert space associated to the classical system of giant gravitons consists of holomorphic functions on the classical space $(X,\,\omega)$. In order to understand the classical space $(X,\,\omega)$, in particular with the above $\omega_{{a},\,{b}}$, consider an auxiliary $\mathbb{C}^2$ parametrized by $(z_1,\,z_2)$. The metric, $ds^2=dz_i\,d\bar{z}_i$, can be rewritten in two equivalent ways: \begin{equation} ds^2=\left(\begin{array}{c c} d\bar{z}_1 & d\bar{z}_2 \end{array}\right)\,\left( \begin{array}{c c} 1 & 0 \\ & \\ 0 & 1\end{array}\right)\,\left(\begin{array}{c} dz_1 \\ \\dz_2\end{array}\right) \quad {\rm or}\quad ds^2=\left(\begin{array}{c c} d\bar{z}_1 & dz_2 \end{array}\right)\,\left( \begin{array}{c c} 1 & 0 \\ & \\ 0 & 1\end{array}\right)\,\left(\begin{array}{c} dz_1 \\ \\d\bar{z}_2\end{array}\right) \,.\nonumber \end{equation} This shows that $\mathbb{C}^2$ is invariant under $SU(2)_a\times SU(2)_b$, where $SU(2)_a$ rotates $(z_1,z_2)$ and $SU(2)_b$ rotates $(z_1,\bar{z}_2)$. We can define two complex structures on $\mathbb{C}^2$, \begin{equation} J_{{a}}=i\,\,(dz_1\,\wedge d\bar{z}_1+dz_2\,\wedge d\bar{z}_2) \qquad J_{{b}}=i\,(dz_1\,\wedge d\bar{z}_1-dz_2\,\wedge d\bar{z}_2) \,. \end{equation} The first is invariant under $SU(2)_a\times U(1)_b$, where $U(1)_b$ is the Cartan subgroup of $SU(2)_b$, and the second is invariant under $SU(2)_b\times U(1)_a$. 
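These invariance statements can be illustrated numerically. The sketch below is ours: we encode a $(1,1)$-form $i\,dz^\dagger H\,dz$ by its Hermitian matrix $H$, so that the action $z\rightarrow U\,z$ of $U(2)\supset SU(2)_a$ on $(z_1,\,z_2)$ becomes $H\rightarrow U^\dagger H\,U$; then $J_a$ corresponds to $H=\mathbb{1}$ and $J_b$ to $H={\rm diag}(1,-1)$:

```python
# Numerical illustration (ours) of which rotations preserve J_a and J_b.
import numpy as np

rng = np.random.default_rng(0)

def random_su2(rng):
    # Random SU(2) matrix: QR of a generic complex 2x2, phases fixed, det = 1.
    q, r = np.linalg.qr(rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2)))
    q = q * (np.conj(r.diagonal()) / np.abs(r.diagonal()))
    return q / np.sqrt(np.linalg.det(q))

H_a = np.eye(2)                # J_a = i (dz1 dz1bar + dz2 dz2bar)
H_b = np.diag([1.0, -1.0])     # J_b = i (dz1 dz1bar - dz2 dz2bar)

U = random_su2(rng)            # generic element of SU(2)_a acting on (z1, z2)
assert np.allclose(U.conj().T @ H_a @ U, H_a)       # J_a is SU(2)_a invariant
assert not np.allclose(U.conj().T @ H_b @ U, H_b)   # J_b is not

# Only the diagonal torus U(1)_a preserves J_b in the (z1, z2) variables; in
# the variables (w1, w2) = (z1, z2bar), on which SU(2)_b acts, J_b becomes
# i (dw1 dw1bar + dw2 dw2bar), i.e. H = identity, hence SU(2)_b invariant.
beta = rng.uniform(0, 2*np.pi)
U_cartan = np.diag([np.exp(1j*beta), np.exp(-1j*beta)])
assert np.allclose(U_cartan.conj().T @ H_b @ U_cartan, H_b)
```

A generic $SU(2)$ rotation of $(z_1,\,z_2)$ thus preserves $J_a$ but not $J_b$, while the Cartan torus preserves both, in line with the stated invariance groups.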
Let us express these in polar coordinates: \begin{equation} z_1=\rho\,e^{i\frac{\psi+\phi}{2}}\,\sin\frac{\theta}{2}\qquad z_2=\rho\,e^{i\frac{-\psi+\phi}{2}}\,\cos\frac{\theta}{2} \,, \end{equation} where $\psi\sim \psi + 4\pi$, $\phi\sim \phi + 2\pi$ and $0\leq\theta\leq \pi$. The periodic coordinates $\psi$, $\phi$ are shifted by the Cartan subgroups $U(1)_a$ and $U(1)_b$, respectively. In these coordinates \begin{equation} \begin{array}{l} J_{{a}}=\rho\,d\rho\wedge d\phi-\rho\,\cos\theta\,d\rho\wedge d\psi+\frac{1}{2}\,\rho^2\,\sin\theta\,d\theta\wedge d\psi=d\Big[\frac{\rho^2}{2}\,(d\phi-\cos\theta\,d\psi)\Big] \\ \\ J_{{b}}=-\rho\,\cos\theta\,d\rho\wedge d\phi+\rho\,d\rho\wedge d\psi+\frac{1}{2}\,\rho^2\,\sin\theta\,d\theta\wedge d\phi =d\Big[\frac{\rho^2}{2}\,(d\psi-\cos\theta\,d\phi)\Big] \,. \end{array} \end{equation} Now consider the orbifold $\mathbb{C}^2/\mathbb{Z}_n$ where $\mathbb{Z}_n$ acts as \begin{equation} (z_1,\,z_2)\, \rightarrow \, (\omega\,z_1,\,\omega^{-1}\,z_2)\,\qquad \omega^n=1 \,. \end{equation} This breaks $SU(2)_a \rightarrow U(1)_a$ (for $n>2$) and preserves $SU(2)_b$. In polar coordinates it simply changes the periodicity of $\psi$ to $\psi\sim \psi + 4\pi/n$. On the orbifold, the first complex structure $J_a$ preserves $U(1)_a\times U(1)_b$, whereas the second complex structure $J_b$ preserves $U(1)_a\times SU(2)_b$. Comparing with the symplectic one-forms (\ref{symplectic_one_forms}) we see that $J_{{a},\,{b}}=d\,\hat{\nu}_{{a},\,{b}}$. The geometric quantization of the phase space of dual giant gravitons is therefore mapped to that of $\mathbb{C}^2/\mathbb{Z}_n$. The wave-functions correspond to holomorphic functions on $\mathbb{C}^2/\mathbb{Z}_n$ with a given complex structure, $J_a$ or $J_b$, depending on whether $P_\phi/P_\psi$ is larger or smaller than $n/2$, and are classified according to the corresponding symmetry, $U(1)_a\times U(1)_b$ or $U(1)_a\times SU(2)_b$, respectively. 
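The identification $J_{{a},\,{b}}=d\,\hat{\nu}_{{a},\,{b}}$ can be verified mechanically; the following sympy sketch (ours) computes the exterior derivative of a one-form componentwise in the coordinates $(\rho,\,\theta,\,\psi,\,\phi)$:

```python
# Componentwise check (ours) that J_{a,b} = d(nu_hat_{a,b}).
import sympy as sp

rho, theta, psi, phi = sp.symbols('rho theta psi phi')
coords = [rho, theta, psi, phi]

def exterior_derivative(one_form):
    """one_form: {coordinate: coefficient}; returns {(x_i, x_j): coefficient
    of dx_i ^ dx_j} for i < j in the fixed coordinate ordering."""
    result = {}
    for i, xi in enumerate(coords):
        for j in range(i + 1, len(coords)):
            xj = coords[j]
            cij = sp.simplify(sp.diff(one_form.get(xj, 0), xi)
                              - sp.diff(one_form.get(xi, 0), xj))
            if cij != 0:
                result[(xi, xj)] = cij
    return result

nu_a = {phi: rho**2/2, psi: -sp.cos(theta)*rho**2/2}
nu_b = {psi: rho**2/2, phi: -sp.cos(theta)*rho**2/2}

# d(nu_a): rho drho^dphi - rho cos(theta) drho^dpsi + (rho^2/2) sin(theta) dtheta^dpsi
assert exterior_derivative(nu_a) == {(rho, phi): rho,
                                     (rho, psi): -rho*sp.cos(theta),
                                     (theta, psi): rho**2*sp.sin(theta)/2}
# d(nu_b): rho drho^dpsi - rho cos(theta) drho^dphi + (rho^2/2) sin(theta) dtheta^dphi
assert exterior_derivative(nu_b) == {(rho, psi): rho,
                                     (rho, phi): -rho*sp.cos(theta),
                                     (theta, phi): rho**2*sp.sin(theta)/2}
```

Up to the overall normalization absorbed in the definition of $\rho$, these are exactly the components of $J_a$ and $J_b$ written above.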
Therefore there is a one-to-one map between wave-functions on $\mathbb{C}^2/\mathbb{Z}_n$, the geometrically quantized phase space of dual giants, and mesonic operators in the field theory. In fact, the translation to the field theory language is now obvious: $SU(2)_{{b}}$ corresponds to the $SU(2)_R$ R-symmetry, and $U(1)_{{a}}\in SU(2)_a$ corresponds to the $U(1)_M\in SU(2)_M$ mesonic symmetry. Thus, although they live in the near-horizon $AdS_6\times S^4/\mathbb{Z}_n$ space, the dual giant gravitons located at $\alpha=0$ actually probe the $\mathbb{C}^2/\mathbb{Z}_n$ space transverse to the D4-branes inside the O8-plane, which is the Higgs branch of the theory. \section{Field theory operators}\label{fieldtheory} The dual giant gravitons should correspond to a sub-sector of operators on the Higgs branch that are flavor-, baryon- and instanton-neutral. These operators involve only the bi-fundamental and antisymmetric hyper-multiplets, and are classified by their quantum numbers under $SU(2)_R\times U(1)_M$. Since the sub-sector we are interested in only involves hyper-multiplets, it turns out to be technically easier to consider the field theory on $\mathbb{R}^{1,\,3}\times S^1$. Upon sending the radius of the $S^1$ to zero we find a 4d theory whose quiver diagram and interactions are precisely those of the original theory. From the 4d point of view, it is natural to choose an $\mathcal{N}=1$ sub-algebra and express the theory in terms of $\mathcal{N}=1$ super-fields. The natural object to consider then is the chiral ring, composed of chiral operators upon imposing the equivalence relations dictated by the F-terms. Note that in 4d our theories really have $\mathcal{N}=2$ supersymmetry, where the R-symmetry is $SU(2)_R\times U(1)'_R$. The $SU(2)_R$ part is inherited from the 5d R-symmetry, and the $U(1)'_R$ part arises from the compactification. However, in the ${\cal N}=1$ chiral ring only the Cartan $U(1)_R\in SU(2)_R$ is manifest. 
For example, a hyper-multiplet corresponds to a pair of chiral super-fields $(Q,\,\tilde{Q})$ in conjugate representations of the gauge group, whereas $SU(2)_R$ acts on the doublet $(Q,\,\tilde{Q}^{\dagger})$, \textit{i.e.} in a non-holomorphic way. The chiral ring therefore automatically chooses the complex structure $J_a$, and will only include the subset of operators that are dual to the dual giant graviton states corresponding to this choice of complex structure. Note that the other subset of giants just corresponds to non-holomorphic operators in this language. For this reason, in the following we will concentrate on those operators/giants which are holomorphic in the chosen $\mathcal{N}=1$ language. Let us start with the $n=1$ case. This is a $USp(2\,N)$ theory with one antisymmetric hyper-multiplet $A$. The fundamental hyper-multiplets do not play a role in the sector in question, so we set $N_f=0$. In the 4d ${\cal N}=1$ language $A$ corresponds to a pair of antisymmetric chiral superfields $(A_1,A_2)$, transforming as a doublet under $SU(2)_M$. The interactions are captured by the super-potential \cite{Benvenuti:2010pq} \begin{equation} W=\epsilon^{\alpha\beta}{\rm Tr}\, (A_{\alpha}\Phi A_{\beta}) \,, \end{equation} where $\Phi$ is the adjoint chiral super-field of the ${\cal N}=2$ vector multiplet. The components $(A_1,A_2)$ carry charges $(1/2, -1/2)$, respectively, under $U(1)_M\in SU(2)_M$, and charges $(1/2,1/2)$ under $U(1)_R\in SU(2)_R$. The F-term is given by \begin{equation} \epsilon^{\alpha\beta}\,A_{\beta}\,A_{\alpha}=0 \,. \end{equation} The effect of this F-term is to symmetrize products of $A_{\alpha}$. All of the operators in question can therefore be expressed as \begin{equation} \mathcal{O}_{m,\,n}={\rm Tr}\,(A_1^m\,A_2^n) \,. \end{equation} Having constructed the operators by reduction to 4d, we need to come back to 5d. 
In 5d these operators have $\Delta=\frac{3}{2}\,(n+m)$ and $Q_{{R}}=\frac{1}{2}(n+m)$, so that they satisfy \begin{equation} \Delta=3\,Q_{{R}} \,. \end{equation} Upon identifying $Q_R$ with $P_\phi$ this agrees with the energy of the corresponding dual giant graviton (\ref{dual_giant_energy}), as it should in global $AdS$. The $U(1)_M$ charge of these operators is $Q_M=\frac{1}{2}(n-m)$. We see that $|Q_M| \leq Q_R$. Identifying $Q_M$ with $\frac{n}{2}P_\psi$, this agrees with the condition for the dual giant graviton (\ref{choices}) (recall that the $\psi$ coordinate in (\ref{choices}) was rescaled so that $\psi\sim \psi + 2\pi$). For the first few operators we find \begin{equation} \begin{array}{l | l | l} \frac{2}{3}\,\Delta & {\rm operators} & \# \\ \hline 1 & A_1,\,A_2 & t\,(z+z^{-1}) \\ 2 & A_1^2,\,A_1\,A_2,\,A_2^2 & t^2\,(z^2+1+z^{-2}) \\ 3 & A_1^3,\,A_1^2\,A_2,\,A_1\,A_2^2,\,A_2^3 & t^3\,(z^3+z+z^{-1}+z^{-3}) \\ \end{array} \end{equation} where we have introduced the fugacity $t$, which keeps track of the dimension of the operator, and the fugacity $z$, which counts the $Q_M$ charge. Note that $z$ appears through the character of the $SU(2)$ representation of highest weight $m$, which we will denote as $[m]_z$. It is straightforward to see that the generating function is given by \begin{equation} \sum_{m} [m]_z\,t^m=\frac{1}{(1-t\,z)\,(1-\frac{t}{z})} \,. \end{equation} This is precisely the Hilbert series of $\mathbb{C}^2$, meaning that these operators are in one-to-one correspondence with holomorphic functions on $\mathbb{C}^2$. This is indeed the expected result for the case $n=1$. Let us now consider the case of $n=2$, focusing first on the NVS case with the gauge group $SU(2N)$, and two antisymmetric hyper-multiplets $A,\, A'$. In terms of the pairs of chiral super-fields $(A_1,A_2)$ and $(A'_1,A'_2)$, the 4d super-potential is given by \begin{equation} W=\epsilon^{\alpha\beta} {\rm Tr}\, (A_\alpha\,\Phi\, A_\beta +A'_\alpha\,\Phi\, A'_\beta) \,. 
\end{equation} In this case the global $SU(2)_M$ acts on the doublets $(A_1,A'_1)$ and $(A'_2,A_2)$. In particular, $A_1$ and $A'_2$ carry a $U(1)_M$ charge of $+1/2$, and $A'_1$ and $A_2$ carry a $U(1)_M$ charge of $-1/2$. The $SU(2)_R$ symmetry acts on $(A_1,A_2^\dagger)$ and $(A_1',A_2^{'\dagger})$, so the $U(1)_R$ charge assignment is $+1/2$ for all $A_\alpha,\, A'_\alpha$.\footnote{Note that there is one more symmetry assigning charge $1/2$ to the $A_\alpha$ and $-1/2$ to the $A'_\alpha$. However, no mesonic operator is charged under this symmetry, which is thus a baryonic $U(1)$. In this paper we are not concerned with baryonic operators.} Taking into account the F-term, which imposes $\epsilon^{\alpha\beta}(A_\beta A_\alpha + A'_\beta A'_\alpha)=0$, the first few operators are given as follows \begin{equation} \begin{array}{l | l | l} \frac{2}{3}\,\Delta & {\rm operators} & \# \\ \hline 2 & \,A_1 A'_2,\, A_1 A_2,\, A'_1 A_2 & t^2\,(z^2+1+z^{-2}) \\[2pt] 4 & A_1 A'_2 A_1 A'_2,\, A_1 A_2 A_1 A'_2,\, A_1 A_2 A_1 A_2, & t^4\,(z^4+z^2+1+z^{-2}+z^{-4}) \\[2pt] & \,A_1 A_2 A'_1 A_2,\, A'_1 A_2 A'_1 A_2 & \end{array} \,. \end{equation} These satisfy $\Delta=3\,Q_{R}$ and $|Q_M|\leq Q_{{R}}$, which are again the expected relationships for the dual giant gravitons. We also recognize here the first few terms in the expansion of \begin{equation} \frac{(1-t^4)}{(1-t^2)\,(1-t^2\,z^2)\,(1-\frac{t^2}{z^2})} \,, \end{equation} which is the Hilbert series for $\mathbb{C}^2/\mathbb{Z}_2$, thus precisely recovering the dual giant graviton result. In the VS case we have a $USp(2\,N)\times USp(2\,N)$ theory with one bi-fundamental hyper-multiplet, which we express in terms of 4d chiral super-fields as $(Q,\tilde{Q})$. The $U(1)_M$ charge assignment is $(1/2,-1/2)$, and the $U(1)_R$ charge assignment is $(1/2,1/2)$. The F-term imposes $Q\,\tilde{Q}=\tilde{Q}\,Q$. 
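The two Hilbert series quoted above, for $\mathbb{C}^2$ and for $\mathbb{C}^2/\mathbb{Z}_2$, can be checked order by order against the operator tables; a short sympy sketch (ours, writing the $\mathbb{C}^2/\mathbb{Z}_2$ fugacity as $z$):

```python
# Order-by-order check (ours) of the two Hilbert series against the tables.
import sympy as sp

t, z = sp.symbols('t z')

def t_coefficient(expr, degree, order):
    """Coefficient of t**degree in the Taylor expansion of expr around t = 0."""
    return sp.expand(sp.series(expr, t, 0, order).removeO()).coeff(t, degree)

# n = 1: Hilbert series of C^2, compared with the character expansion
HS_C2 = 1/((1 - t*z)*(1 - t/z))
assert sp.simplify(t_coefficient(HS_C2, 1, 5) - (z + 1/z)) == 0
assert sp.simplify(t_coefficient(HS_C2, 3, 5)
                   - (z**3 + z + 1/z + z**-3)) == 0

# n = 2 (NVS and VS alike): Hilbert series of C^2/Z_2
HS_C2Z2 = (1 - t**4)/((1 - t**2)*(1 - t**2*z**2)*(1 - t**2/z**2))
assert sp.simplify(t_coefficient(HS_C2Z2, 2, 6) - (z**2 + 1 + z**-2)) == 0
assert sp.simplify(t_coefficient(HS_C2Z2, 4, 6)
                   - (z**4 + z**2 + 1 + z**-2 + z**-4)) == 0
```

The coefficients reproduce the $\#$ columns of the tables at levels $\frac{2}{3}\Delta=1,3$ and $\frac{2}{3}\Delta=2,4$, respectively.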
The first few operators are given by \begin{equation} \begin{array}{l | l | l} \frac{2}{3}\,\Delta & {\rm operators} & \# \\ \hline 2 & \,Q^2,\,Q\,\tilde{Q},\,\tilde{Q}^2 & t^2\,(z^2+1+z^{-2}) \\ 4 & Q^4,\,Q^3\,\tilde{Q},\,Q^2\,\tilde{Q}^2,\,Q\,\tilde{Q}^3,\,\tilde{Q}^4 & t^4\,(z^4+z^2+1+z^{-2}+z^{-4}) \,. \end{array} \end{equation} We again see that $\Delta=3\,Q_{{R}}$ and $|Q_M|\leq Q_R$, as expected, and we again find the Hilbert series for $\mathbb{C}^2/\mathbb{Z}_2$, as expected from the giant graviton analysis. Note that the analysis above is strictly speaking valid only at large $N$, as we have neglected possible relations among traces. However, a complete analysis at finite $N$ can be performed explicitly for some small values of $N$ and $n=1,2$ by computing the exact Hilbert series of the Higgs branch as arising from the field theory with the help of the algebraic-geometry symbolic computation program \verb+Macaulay2+ \cite{Macaulay2}. We find that the Hilbert series reproduces the expected $\mathbb{C}^2/\mathbb{Z}_n$ for $n=1,2$. We leave a more thorough analysis of the general case for future work; all in all, based on these examples, we expect that the counting of operators in the zero baryonic charge, zero instanton charge and zero flavor charge sector matches exactly the quantization of the phase space of giant gravitons. More explicitly, the operators in this zero-charges sector of the Higgs branch are expected to be in one-to-one correspondence with holomorphic functions on $\mathbb{C}^2/\mathbb{Z}_n$.\footnote{Indeed, the same situation is found in the more familiar $AdS_5/CFT_4$ case for $A_n$ quivers \cite{DHRG}.} It is now interesting to revisit the status of the gravity computation, where we found two degenerate solutions for each choice of quantum numbers, namely the expanded and singular configurations. As we have just argued, the dual operator is a meson composed of hyper-multiplets without vector multiplet scalars. 
As usual, a short meson whose dimension is $\mathcal{O}(1)$ corresponds to a SUGRA fluctuation, \textit{i.e.} a point-like particle following a BPS geodesic. On the other hand, as the dimension increases, by the time we consider a long meson whose dimension is $\mathcal{O}(N)$, the dual configuration is best described as the expanded brane configuration. Indeed, we expect that if we were to consider the fully back-reacted geometry, the non-singular geometry corresponding to dimension $\mathcal{O}(N)$ is that arising from the back-reaction of the expanded brane configuration, pretty much as in the LLM case \cite{Lin:2004nb}. For the purpose of counting operators, however, we can simply consider the expanded configurations. \section{Conclusions}\label{conclusions} In this paper we have studied the sub-sector of the Higgs branch which is both flavor- and instanton-blind. In terms of the fields in the corresponding quiver theories, it consists of the operators made only out of bi-fundamental and/or antisymmetric hyper-multiplets, with strictly zero baryonic and instantonic charges. In the gravity dual such operators can be put in correspondence with dual giant gravitons, namely D4-branes in global $AdS_6$, which follow massless geodesics in the internal space. The geometric quantization of the phase space associated to such branes shows that the corresponding operators are in one-to-one correspondence with holomorphic functions on $\mathbb{C}^2/\mathbb{Z}_n$. In fact, we can think of this space as that transverse to the D4-branes inside the O8-plane in the pre-near-horizon background. Conversely, at least for the simplest examples, we recover the same results from the field theory perspective. It would be interesting to check this result more thoroughly for all three families. We took a somewhat lengthier route in that we reduced the theory down to 4d, in order to use the more familiar $\mathcal{N}=1$ superspace. 
It would be interesting to overcome this technicality by working directly in 5d. The partition function which counts the operators in question corresponds to the Hilbert series of the orbifold. It is natural to expect that this corresponds to the Hilbert series of the entire Higgs branch upon setting to zero the flavor and instanton fugacities. It would certainly be very interesting to go beyond this flavor and instanton blind sector. Besides, it would be interesting to clarify whether this Hilbert series can be thought of as a limit of the super-conformal index \cite{Kim:2012gu}, in the spirit of the corresponding relation for 4d theories found in \cite{Gadde:2011uv}. Having identified the dual giant gravitons, it is natural to wonder whether genuine giant gravitons, namely those expanding in the internal part of the geometry, exist. We expect these to correspond to anti-symmetrized products of fields. In particular, there is an upper limit on the number of fields corresponding to the maximal giant graviton, which is a manifestation of the so-called string exclusion principle (see \cite{Balasubramanian:2001nh, Corley:2001zk} for the description of this phenomenon in the $AdS_5/CFT_4$ case). Taking for definiteness the $n=1$ case, the natural candidate for the maximal giant would be the Pfaffian operator ${\rm Pf}(A)$. However, as discussed in \cite{Bergman:2012kr}, this operator is related to the $N$-th power of the meson. While this naively suggests that in this case giant gravitons will be absent, a more thorough analysis should certainly be performed. Furthermore, it is natural to ask whether a microscopic description along the lines of \cite{Janssen:2002cf,Janssen:2003ri,Janssen:2004jz,Janssen:2004cd} is possible. We leave such questions open for future investigations. \section*{Acknowledgements} D.R-G. thanks Rak-Kyeong Seong for useful conversations and for help with \verb+Macaulay2+ computations.
He also thanks the Korea Institute for Advanced Study for warm hospitality while this work was in progress. D.R-G. is supported by the Aly Kaufman fellowship. He also acknowledges partial support from the Israel Science Foundation under grant no.~392/09 and from the Spanish Ministry of Science through the research grant FPA2009-07122 and the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042). O.B. is supported in part by the Israel Science Foundation under grant no.~392/09, and the US-Israel Binational Science Foundation under grant no.~2008-072.
\section{Introduction} The Hohenberg--Kohn theorem is commonly regarded as the theoretical foundation of density-functional theory (DFT). Omitting technical points~\cite{Lieb1983,Lammert2018,Garrigue2018}, it asserts that the electron density determines the external potential (up to a constant) and therefore the Hamiltonian and all system properties~\cite{Hohenberg1964}. To include arbitrary magnetic fields in the formalism, DFT needs to be supplemented by an additional basic variable. In current-density-functional theory (CDFT) the paramagnetic current density takes that role~\cite{Vignale1987}. It is also possible to forego any attempt to find a universal functional independent of the external potentials and instead have a formalism that is parametrically dependent on the magnetic field~\cite{GRAYCE}. A peculiar feature of CDFT is that it is the paramagnetic current density, and not the gauge-invariant total current density, that enters as a basic variable. This leaves a disconnect between ground-state CDFT and the time-dependent version of the theory, which is naturally formulated using the total current density~\cite{VIGNALE_PRB70_201102,VIGNALE_PRL77_2037}. Additionally, the total current density avoids practical issues arising from having to extract the gauge-invariant part of the paramagnetic current density in approximate density functionals~\cite{Trickey,TELLGREN_JCP140_034101,TellgrenSolo2018}. As far as a CDFT for ground states formulated with the total current density is concerned, the question of whether a Hohenberg--Kohn theorem holds is still open and has attracted some recent attention \cite{Tellgren2012,LaestadiusBenedicks2014,Ruggenthaler2015,Tellgren2018,Garrigue2019,Garrigue2019b}.
Several authors have realized that the total current density at best fits awkwardly into standard density-functional approaches and that, in fact, it is incompatible with the \emph{standard} energy minimization principle~\cite{VIGNALE_IJQC113_1422,Tellgren2012,LaestadiusBenedicks2015,TellgrenSolo2018}. However, it has been remarked that energy maximization with respect to the current density is not excluded by any known result~\cite{Tellgren2012} and recent work has shown that the Maxwell--Schr\"odinger energy minimization principle naturally leads to a density-functional theory that features the total current density~\cite{TellgrenSolo2018}. To date, a work by Diener~\cite{Diener} is the only candidate for a density-functional theory of the total current that does not modify the underlying Schr\"odinger equation. Logical gaps in his formulation have been identified before~\cite{Tellgren2012,LaestadiusBenedicks2014}, although one specific criticism was mistaken (Proposition~8 in Ref.~\onlinecite{LaestadiusBenedicks2014}, which we correct below at the end of Sec.~\ref{sec:OrthodoxDiener}). Nonetheless, despite the gaps, Diener's unique approach is interesting as it comes tantalizingly close to succeeding and it has so far been unclear whether the approach can be rigorously completed. In this work, we first clarify the underlying assumptions in Diener's approach by reinterpreting it as based on a maximin variational principle. Based on simple facts about convexity of the resulting energy functional, it can be concluded that Diener's approach is neither capable of reproducing the ground-state energy nor the correct total current density. We also establish that Diener's construction of a Hohenberg--Kohn map suffers from an irreparable error: the selection of a vector potential via a stationary search over current densities is not correct. 
Our analysis is very general and applies even if previously identified issues~\cite{Tellgren2012,LaestadiusBenedicks2014} could somehow be resolved. \section{Preliminaries} \label{sec:Prel} Our point of departure is the time-independent magnetic Schr\"odinger equation for electrons with the Hamiltonian (in SI-based atomic units; compared to Diener~\cite{Diener} we use the convention $e\mathbf{A} \to \mathbf{A}$ and $-e\Phi \to v$ for the potentials) \begin{equation} H(v,\mathbf{A}) = \frac{1}{2} \sum_j \left( -{\mathrm{i}}\nabla_j + \mathbf{A}(\mathbf{r}_j) \right)^2 + \sum_j v(\mathbf{r}_j) + W. \end{equation} Here $(v,\mathbf{A})$ are the external electromagnetic potentials and $W = \sum_{i<j} r_{ij}^{-1}$ is the electron--electron repulsion operator. We use the short-hand notation $H_0 = H(0,\mathbf{0})$ for the universal part of the Hamiltonian. Spin has no bearing on the present work and we therefore leave out all spin degrees of freedom from the notation. For pure states $\pureqmstate(\mathbf{r}_1,\ldots,\mathbf{r}_N)$, where $\Gamma = \vert \pureqmstate\rangle\langle \pureqmstate\vert $ is the density matrix, the particle density and paramagnetic current density are given by, respectively, \begin{equation} \begin{split} \label{eq:dens-para} \rho_{\pureqmstate}(\mathbf{r}_1) & = N \int |\pureqmstate|^2 \,\mathrm{d}\mathbf{r}_2 \cdots \,\mathrm{d}\mathbf{r}_N, \\ \mathbf{j}^{\mathrm{p}}_{\pureqmstate}(\mathbf{r}_1) & = N \, \mathrm{Im} \int \bar{\pureqmstate} \nabla_1 \pureqmstate \,\mathrm{d}\mathbf{r}_2 \cdots \,\mathrm{d}\mathbf{r}_N, \end{split} \end{equation} and with well-known extensions to mixed states. Under a gauge transformation $\mathbf{A} \mapsto \mathbf{A} + \nabla f$, the paramagnetic current density transforms as $\mathbf{j}^{\mathrm{p}} \mapsto \mathbf{j}^{\mathrm{p}} - \rho \nabla f$. The gauge-invariant total current density is thus given by $\mathbf{j} = \mathbf{j}^{\mathrm{p}}+\rho\mathbf{A}$.
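The transformation law just stated can be verified symbolically in a one-dimensional, one-particle toy setting; the amplitude, phase, and gauge function below are arbitrary illustrative choices, not part of the formal development:

```python
import sympy as sp

x = sp.symbols('x', real=True)
# illustrative one-particle data (all choices are assumptions):
phi = sp.exp(-x**2 / 2)   # real amplitude, so rho = phi^2
g = x**3                  # phase of the wavefunction
f = sp.sin(x)             # gauge function

psi = phi * sp.exp(sp.I * g)
rho = phi**2

def jp(wf):
    # paramagnetic current density Im(conj(wf) * wf') in one dimension
    return sp.im(sp.conjugate(wf) * sp.diff(wf, x))

# under A -> A + f' the wavefunction acquires a factor exp(-i f),
# and j^p should transform into j^p - rho * f'
residual = sp.simplify(jp(psi * sp.exp(-sp.I * f)) - (jp(psi) - rho * sp.diff(f, x)))
print(residual)  # 0
```

The vanishing residual confirms $\mathbf{j}^{\mathrm{p}} \mapsto \mathbf{j}^{\mathrm{p}} - \rho \nabla f$, and hence the gauge invariance of $\mathbf{j}^{\mathrm{p}} + \rho\mathbf{A}$, for this toy choice.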
From a direct calculation, using the densities defined in Eq.~\eqref{eq:dens-para}, \begin{equation}\label{eq:known-to-CDFTist} \begin{split} \expval{H(v,\mathbf{A})}{\qmstate} &= \expval{H_0}{\qmstate} + \int \mathbf{j}^{\mathrm{p}}_\qmstate \cdot \mathbf{A} \,\mathrm{d} {\mathbf{r}} \\ &\quad +\int \rho_\qmstate (v+\tfrac{1}{2}\vert \mathbf{A}\vert^2) \,\mathrm{d} {\mathbf{r}}. \end{split} \end{equation} Using Eq.~\eqref{eq:known-to-CDFTist} the ground-state energy can be obtained from the expression \begin{equation} \begin{split} & E(v,\mathbf{A}) = \inf_{\qmstate} \expval{H(v,\mathbf{A})}{\qmstate} \\ & = \inf_{\rho,\mathbf{j}^{\mathrm{p}}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) + \int (\rho (v+\tfrac{1}{2} \vert \mathbf{A} \vert^2) + \mathbf{j}^{\mathrm{p}}\cdot\mathbf{A}) \,\mathrm{d}\mathbf{r} \right\}, \end{split} \end{equation} where we have introduced the Vignale--Rasolt universal functional \cite{Vignale1987}, \begin{equation}\label{eq:FVR} \FVR(\rho,\mathbf{j}^{\mathrm{p}}) = \inf_{\qmstate \mapsto (\rho,\mathbf{j}^{\mathrm{p}})} \expval{H_0}{\qmstate}. \end{equation} A recent result establishes that the infimum in Eq.~\eqref{eq:FVR} can be replaced by a minimum for any physically reasonable densities~\cite{Kvaal2020}. It is known that the paramagnetic current density (together with $\rho$) does not determine the external potentials~\cite{Capelle2002}, although the original proof idea~\cite{Vignale1987} can be used to establish a mapping from $(\rho,\mathbf{j}^{\mathrm{p}})$ to nondegenerate ground states~\cite{Tellgren2012}. This was termed a \emph{weak} Hohenberg--Kohn result in Ref.~\onlinecite{LaestadiusTellgren2018}, where the degenerate case was further analysed. 
Another formulation is obtained by introducing the Grayce--Harris semiuniversal density functional \cite{GRAYCE}, \begin{equation}\label{eqGHfun} \begin{split} &\FGH(\rho,\mathbf{A}) = \inf_{\qmstate \mapsto \rho} \expval{H(0,\mathbf{A})}{\qmstate} \\ &\quad = \int \tfrac{1}{2} \rho \vert \mathbf{A} \vert^2 \,\mathrm{d}\mathbf{r}+ \inf_{\mathbf{j}^{\mathrm{p}}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) +\int \mathbf{j}^{\mathrm{p}}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} \right\}, \end{split} \end{equation} which enables the ground-state energy to be written as the (magnetic-field) B-DFT variational principle, \begin{equation} E(v,\mathbf{A}) = \inf_{\rho} \left\{ \FGH(\rho,\mathbf{A}) + \int \rho v \,\mathrm{d}\mathbf{r} \right\}. \end{equation} The semiuniversal nature of $\FGH(\rho,\mathbf{A})$ directly leads to a type of Hohenberg--Kohn result: For every fixed $\mathbf{A}$, a positive ground-state density $\rho(\mathbf{r}) > 0$ determines $v$ up to a constant~\cite{GRAYCE}. The relationship between the above two frameworks has recently been highlighted and analyzed~\cite{TellgrenSolo2018,REIMANN_JCTC13_4089}, with particular focus on convexity properties and variational principles connecting the formalisms. [See Appendix~\ref{appConvexStuff} for basic definitions of convexity and related notions.] At least for small vector potentials, the physical interpretation is that convexity of the energy in $\mathbf{A}$ is associated with diamagnetism, while concavity in $\mathbf{A}$ is associated with paramagnetism. Here, we note that the mixed-state version of $\FVR$ defined in Eq.~\eqref{eq:FVR} is jointly convex in $(\rho,\mathbf{j}^{\mathrm{p}})$ (but the pure-state version is not, see Proposition~8 in Ref.~\onlinecite{Laestadius2014}). The mixed-state version of $\FGH$ is likewise convex in $\rho$; however, it is neither convex nor concave in $\mathbf{A}$.
As discussed in Ref.~\onlinecite{TellgrenSolo2018}, the Grayce--Harris functional is paraconcave (``concave up to a square'') in $\mathbf{A}$, i.e., the difference $\bar{\FGH}(\rho,\mathbf{A}) = \FGH(\rho,\mathbf{A}) - \int \tfrac{1}{2} \rho \vert\mathbf{A}\vert^2 \,\mathrm{d}\mathbf{r}$ is concave. Loosely interpreted in physical terms this means that all systems appear paramagnetic when the diamagnetic term is removed. The corresponding transformation of the ground-state energy $E(v,\mathbf{A})$ is a change of variables $\bar{E}(u,\mathbf{A}) = E(u-\tfrac{1}{2} \vert\mathbf{A}\vert^2, \mathbf{A})$, which makes $\bar{E}(u,\mathbf{A})$ jointly concave in $(u,\mathbf{A})$, unlike the original $E(v,\mathbf{A})$. That $\FGH(\rho,\mathbf{A})$ cannot be convex in $\mathbf{A}$ is fairly obvious from the physical interpretation. However, since this property will be important in the further results below, we give a full proof. \begin{proposition} For some $\rho$, the Grayce--Harris functional $\FGH(\rho,\mathbf{A})$ is not convex in $\mathbf{A}$. \end{proposition} \begin{proof} Consider a $\rho$ such that for $\mathbf{A}=0$ one has a ground-state degeneracy that allows for a current $\pm \mathbf{j}^{\mathrm{p}}_{\mathrm{gs}} \neq \mathbf{0}$. Both signs are possible for $\mathbf{j}^{\mathrm{p}}_{\mathrm{gs}}$ due to time-reversal symmetry. Now, take $\mathbf{A} \neq 0$ such that for one of the ground states one has $\int \mathbf{j}^{\mathrm{p}}_{\mathrm{gs}} \cdot\mathbf{A} \,\mathrm{d} {\mathbf{r}} = -\left| \int \mathbf{j}^{\mathrm{p}}_{\mathrm{gs}} \cdot \mathbf{A} \,\mathrm{d}{\mathbf{r}} \right|<0$. 
Then for sufficiently small but nonzero $\mathbf{A}$, \begin{equation} \label{eq:PenzHawkEye} \begin{split} &\FGH(\rho,\mathbf{A}) = \int \tfrac{1}{2} \rho |\mathbf{A}|^2 \,\mathrm{d}\mathbf{r} + \inf_{\qmstate\mapsto\rho} \left\{ \expval{H_0}{\qmstate} + \int \mathbf{j}^{\mathrm{p}}_{\qmstate}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} \right\} \\ & \quad\leq \FGH(\rho,\mathbf 0) + \int \tfrac{1}{2} \rho \vert \mathbf{A} \vert^2 \,\mathrm{d} {\mathbf{r}} - \left| \int \mathbf{j}^{\mathrm{p}}_{\mathrm{gs}}\cdot\mathbf{A} \,\mathrm{d} {\mathbf{r}} \right| < \FGH(\rho,\mathbf 0). \end{split} \end{equation} On the other hand, invoking time-reversal symmetry, namely $\FGH(\rho,+\mathbf{A}) = \FGH(\rho,-\mathbf{A})$, the assumption of convexity of $\FGH(\rho,\mathbf{A})$ in $\mathbf{A}$ would have entailed $\FGH(\rho,\mathbf{A}) = \tfrac{1}{2} (\FGH(\rho,\mathbf{A}) + \FGH(\rho,-\mathbf{A}) ) \geq \FGH(\rho,\mathbf 0)$, in contradiction with Eq.~\eqref{eq:PenzHawkEye}. \end{proof} Note that the above result substantially understates the extent of the non-convexity---it is not restricted at all to very special densities $\rho$. For example, some $\rho$ correspond to paramagnetic systems that have concave $\FGH(\rho,\mathbf{A})$ in $\mathbf{A}$. Moreover, most $\rho$ are such that increasing the magnetic-field strength will reorder the energy spectrum so that states with permanent paramagnetic currents eventually become the ground state. These level crossings introduce non-convexity as well.
Taking the B-DFT variational principle as the point of departure, it is indeed sufficient to rewrite the Grayce--Harris functional. Letting $\mathbf{k}$ denote an arbitrary current density, we begin by adding an energy term that clearly gives a vanishing net contribution: \begin{widetext} \begin{equation} \label{eqDinminimax} \begin{split} G(\rho,\mathbf{A}) & = \int \tfrac{1}{2} \rho \vert\mathbf{A} \vert^2 \,\mathrm{d}\mathbf{r} + \inf_{\mathbf{j}^{\mathrm{p}}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) +\int \mathbf{j}^{\mathrm{p}}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \inf_{\mathbf{k}} \int \frac{|\mathbf{j}^{\mathrm{p}} + \rho \mathbf{A} - \mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} \\ & = \inf_{\mathbf{j}^{\mathrm{p}}} \sup_{\mathbf{k}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} . \end{split} \end{equation} \end{widetext} While $\mathbf{k}$ is a dummy variable that is being optimized over, its value at the solution to the above minimax problem will satisfy $\mathbf{k} = \mathbf{j}^{\mathrm{p}} + \rho \mathbf{A}$ and hence exactly reproduce the total current density. This way, the issue that the correct energy cannot be obtained from a \emph{standard} minimization principle for the total current density is avoided. Using the general fact that $\inf_x \sup_y f(x,y) \geq \sup_y \inf_x f(x,y)$, we next obtain \begin{equation} \label{eqDinmaximin} \begin{split} & G(\rho,\mathbf{A}) \\ &\geq \sup_{\mathbf{k}} \inf_{\mathbf{j}^{\mathrm{p}}} \left\{\FVR(\rho,\mathbf{j}^{\mathrm{p}}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} \\ & = \sup_{\mathbf{k}} \left\{ \FD(\rho,\mathbf{k}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} \right\} =: \GD(\rho,\mathbf{A}). 
\end{split} \end{equation} We have above introduced $\GD(\rho,\mathbf{A})$ and identified Diener's proposed total current-density functional \begin{equation} \label{eqDienerFun} \begin{split} \FD(\rho,\mathbf{k}) & = \inf_{\mathbf{j}^{\mathrm{p}}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} \\ & = \inf_{\qmstate\mapsto \rho} \left\{ \expval{H_0}{\qmstate} - \int \frac{|\mathbf{j}^{\mathrm{p}}_{\qmstate}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\}. \end{split} \end{equation} The issue now arises as to whether the above maximin principle always achieves equality in Eq.~\eqref{eqDinmaximin}. If this were true, we would have succeeded in expressing the ground-state energy in terms of a universal functional $\FD$ of the total current density. Unfortunately, this can immediately be disproven on the basis of convexity properties: The right-hand side of Eq.~\eqref{eqDinmaximin}, i.e., $\GD$, is manifestly convex in $\mathbf{A}$, and hence can only describe diamagnetic systems, whereas the Grayce--Harris functional $\FGH(\rho,\mathbf{A})$ is nonconvex in $\mathbf{A}$. This establishes the following result: \begin{proposition} \label{PropGDneqFGH} For some $(\rho,\mathbf{A})$, we have a strict inequality $\FGH(\rho,\mathbf{A}) > \GD(\rho,\mathbf{A})$. \end{proposition} A remaining issue is whether Diener's functional $\FD(\rho,\mathbf{k})$ or the variational principle for $\GD(\rho,\mathbf{A})$ is useful for other purposes, such as reconstructing the correct external vector potential from an input pair $(\rho,\mathbf{j} = \mathbf{j}^{\mathrm{p}} + \rho\mathbf{A})$ or delivering the correct total current density from a pair $(\rho,\mathbf{A})$. The former would establish a Hohenberg--Kohn-type mapping, since then $(\rho,\mathbf{j})$ determines $(\rho,\mathbf{A})$ up to a gauge.
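As an aside, the general inequality $\inf_x \sup_y f(x,y) \geq \sup_y \inf_x f(x,y)$ used to pass to Eq.~\eqref{eqDinmaximin} can indeed be strict. A minimal numerical illustration with a toy function, chosen purely for illustration and unrelated to the CDFT functionals:

```python
import numpy as np

# toy function f(x, y) = (x - y)^2 on a coarse grid over [0, 1]^2
xs = np.linspace(0.0, 1.0, 5)
F = (xs[:, None] - xs[None, :]) ** 2   # F[i, j] = f(x_i, y_j)

minimax = F.max(axis=1).min()          # inf_x sup_y f, attained at x = 1/2
maximin = F.min(axis=0).max()          # sup_y inf_x f, since x = y is always possible
print(minimax, maximin)                # 0.25 0.0 -> the inequality is strict
```

A gap of this kind is exactly what Proposition~\ref{PropGDneqFGH} establishes for the functionals at hand.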
As a next step one could use the B-DFT extension of the Hohenberg--Kohn theorem to determine $v$~\cite{GRAYCE,LaestadiusBenedicksPenz,Garrigue2019}. In Diener's work, this is in fact the primary intended use of the minimization principle that defines $\FD$. Moreover, he relies heavily on the fact that a state $\qmstate$ and an arbitrary vector field $\mathbf{k}$ can be ``related'' through the effective vector potential \begin{equation} \label{eq:aD} \aD(\qmstate,\mathbf{k}) := \frac{\mathbf{k} - \mathbf{j}^{\mathrm{p}}_{\qmstate}}{\rho_{\qmstate}}. \end{equation} By definition we have $\mathbf{k} = \mathbf{j}^{\mathrm{p}}_{\qmstate} + \rho_{\qmstate} \aD(\qmstate,\mathbf{k})$, mimicking the relationship between the total current density, the paramagnetic current, and the actual external vector potential. If supplying the true total current density $\mathbf{j} = \mathbf{j}^{\mathrm{p}} + \rho \mathbf{A}$ to $\FD(\rho,\mathbf{j})$ always yields a minimizer $\qmstate_{\mathrm{m}}$ in Eq.~\eqref{eqDienerFun} such that $\aD(\qmstate_{\mathrm{m}},\mathbf{j}) = \mathbf{A}$, a Hohenberg--Kohn-type mapping would be established. More precisely, since the input to $\FD$ is gauge invariant, the external vector potential can at best be determined up to a gauge. Hence, we have to allow for $\aD(\qmstate_{\mathrm{m}},\mathbf{j}) = \mathbf{A} + \nabla f$ and multiple gauge-dependent minimizers $\jparaDm$ in Eq.~\eqref{eqDienerFun}, one of which corresponds to a gauge in which $\aD(\qmstate_{\mathrm{m}},\mathbf{j}) = \mathbf{A}$. This weaker statement would be sufficient to establish the Hohenberg--Kohn-type mapping. Unfortunately, the next proposition shows that such an $\FD$-based mapping does not exist. \begin{proposition} \label{prop:EITmasterwork} For some $(\rho,\mathbf{A})$, Diener's current-density functional $\FD$ fails to reconstruct the external vector potential.
That is, for any minimizer $\jparaDm$ in Eq.~\eqref{eqDienerFun} we have \begin{equation} \frac{\mathbf{j} - \jparaDm}{\rho} \neq \mathbf{A}. \end{equation} \end{proposition} \begin{proof} Fix an arbitrary pair $(\rho,\mathbf{A})$ for which there exist current densities $(\mathbf{j}^{\mathrm{p}}_0,\mathbf{j}_0=\mathbf{j}^{\mathrm{p}}_0 + \rho\mathbf{A})$ that solve the minimax problem Eq.~\eqref{eqDinminimax}. Inserting $\mathbf{j}_0$ into Diener's functional yields \begin{equation} \begin{split} \FD(\rho,\mathbf{j}_0) & = \inf_{\mathbf{j}^{\mathrm{p}}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{j}_0|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} \\ & =\FVR(\rho,\jparaDm) - \int \frac{|\jparaDm-\mathbf{j}_0|^2}{2\rho} \,\mathrm{d}\mathbf{r}, \end{split} \end{equation} where $\jparaDm$ is a minimizer. Now assume, arguendo, that this minimizer can always be chosen to satisfy \begin{equation} \label{eqDienerAlwaysA} \frac{\mathbf{j}_0 - \jparaDm}{\rho} = \mathbf{A}. \end{equation} But this is equivalent to $\jparaDm = \mathbf{j}_0 - \rho \mathbf{A} = \mathbf{j}^{\mathrm{p}}_0$. As a direct consequence, we have the lower bound \begin{equation} \begin{split} &\GD(\rho,\mathbf{A}) = \sup_{\mathbf{k}} \left\{ \FD(\rho,\mathbf{k}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} \right\} \\ & \geq \FD(\rho,\mathbf{j}_0) + \int \mathbf{j}_0\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} \\ & = \FVR(\rho,\mathbf{j}^{\mathrm{p}}_0) + \int \mathbf{j}_0\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\mathbf{j}^{\mathrm{p}}_0-\mathbf{j}_0|^2}{2\rho} \,\mathrm{d}\mathbf{r} \\ & = \FVR(\rho,\mathbf{j}^{\mathrm{p}}_0) + \int (\mathbf{j}^{\mathrm{p}}_0+\rho\mathbf{A})\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\rho\mathbf{A}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \\ & = \FGH(\rho,\mathbf{A}). 
\end{split} \end{equation} Combining the above bound with the fact that $\GD(\rho,\mathbf{A}) \leq \FGH(\rho,\mathbf{A})$ from Eq.~\eqref{eqDinmaximin}, we have established that $\GD(\rho,\mathbf{A}) = \FGH(\rho,\mathbf{A})$ for arbitrary $(\rho,\mathbf{A})$. This, however, is impossible in light of Proposition~\ref{PropGDneqFGH}. Hence, we conclude that the assumption that the minimizer $\jparaDm$ can always be chosen to satisfy Eq.~\eqref{eqDienerAlwaysA} is false, which completes the proof. \end{proof} It should be noted that we do not need to explicitly impose that the total current density arising from an eigenstate is divergence-free. This condition, $\nabla\cdot\mathbf{k}=0$, is not needed in the minimax principle for $\FGH$. It may, however, make a difference in the maximin principle for $\GD$; in any case, adding it does not circumvent the problems noted above. Finally, it must be remarked that in his original work, Diener actually relies on a stationarity principle for a quantity $G_{\mathrm{stat}}$, rather than on the above maximin principle for $\GD$. However, this difference is inessential and, in fact, only adds to the problems identified above.
The following bounds are immediate: \begin{widetext} \begin{equation} \begin{split} \FGH(\rho,\mathbf{A}) & = \inf_{\mathbf{j}^{\mathrm{p}}} \sup_{\mathbf{k}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} \\ & \geq \GD(\rho,\mathbf{A}) = \sup_{\mathbf{k}} \inf_{\mathbf{j}^{\mathrm{p}}} \left\{\FVR(\rho,\mathbf{j}^{\mathrm{p}}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\} \\ & \geq G_{\mathrm{stat}}(\rho,\mathbf{A}) = \stat_{\mathbf{k}} \inf_{\mathbf{j}^{\mathrm{p}}} \left\{ \FVR(\rho,\mathbf{j}^{\mathrm{p}}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}\mathbf{r} - \int \frac{|\mathbf{j}^{\mathrm{p}}-\mathbf{k}|^2}{2\rho} \,\mathrm{d}\mathbf{r} \right\}. \end{split} \end{equation} \end{widetext} Hence, by Proposition~\ref{PropGDneqFGH} it follows that $\FGH(\rho,\mathbf{A}) > \GD(\rho,\mathbf{A}) \geq G_{\mathrm{stat}}(\rho,\mathbf{A})$, for some $(\rho,\mathbf{A})$. The problems with the maximin principle for $\GD$ thus directly carry over to the stat-min principle for $G_{\mathrm{stat}}$. Naturally, a pure minimization principle, obtained by replacing the maximization over $\mathbf{k}$ by a minimization, can only make problems worse. \section{Diener's original formulation} \label{sec:OrthodoxDiener} The previous section reinterpreted Diener's formulation in terms of a maximin principle. In the present section, we provide a direct disproof in terms of Diener's original concepts. As was already clear from Proposition~\ref{prop:EITmasterwork}, Diener's proof is unfortunately in error and $\FD$ cannot be used for a Hohenberg--Kohn result in CDFT. 
Recall Eq.~\eqref{eq:aD}, where for given $\qmstate$ and $\mathbf{k}$ we have the vector potential $\aD(\qmstate,\mathbf{k}) = (\mathbf{k} - \mathbf{j}^{\mathrm{p}}_{\qmstate})/\rho_{\qmstate}$. The following proposition is a direct consequence of Eq.~\eqref{eq:known-to-CDFTist}. \begin{proposition}[Eq.~(6) in Diener~\cite{Diener}] \label{prop:Eq6D} Let $H(v,\mathbf{A})$ be fixed. Then for any $\qmstate$ and any current density $\mathbf{k}$ \begin{equation} \label{AL:eq1} \begin{split} \expval{H(v,\mathbf{A})}{\qmstate} = \ED(\qmstate,\mathbf{k}) + \int ( \mathbf{k} \cdot \mathbf{A} + \rho_{\qmstate} v )\,\mathrm{d}{\mathbf{r}} & \\ + \int \tfrac{1}{2} \rho_{\qmstate} \vert \mathbf{A}- \aD(\qmstate,\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}} &, \end{split} \end{equation} with \begin{align} \ED(\qmstate,\mathbf{k}) &= \expval{H_0}{\qmstate} - \int \frac{ \vert \mathbf{k} - \mathbf{j}^{\mathrm{p}}_{\qmstate} \vert^2}{2\rho_{\qmstate}}\,\mathrm{d}{\mathbf{r}} . \end{align} \label{prop:D6} \end{proposition} Note that $\ED(\qmstate,\mathbf{k}) = \expval{H_{\aD(\qmstate,\mathbf{k})}}{\qmstate}$ with $H_{\aD(\qmstate,\mathbf{k})} = H_0 - \tfrac{1}{2} \sum_j \vert \aD(\qmstate,\mathbf{k}; \mathbf r_j) \vert^2$, i.e., $\ED$ can be viewed as an expectation value over a state-dependent Hamiltonian $H_{\aD(\qmstate,\mathbf{k})}$. Equation~\eqref{AL:eq1} can also be stated as \begin{equation} \label{eq:G1} \begin{split} \ED(\qmstate,\mathbf{k}) + \int( \mathbf{k}\cdot \mathbf{A} + \rho_{\qmstate} v) \,\mathrm{d} {\mathbf{r}} = \expval{H(v,\mathbf{A})}{\qmstate}& \\ - \int \tfrac{1}{2} \rho_{\qmstate} \vert \mathbf{A}- \mathbf a(\qmstate,\mathbf{k}) \vert^2 \,\mathrm{d} {\mathbf{r}} &. \end{split} \end{equation} On the left-hand side of Eq.~\eqref{eq:G1} we have $\ED$ (albeit not a functional of the densities) and a \emph{linear} coupling between $(v,\mathbf{A})$ and the variables $(\rho,\mathbf{k})$. 
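The algebraic identity in Proposition~\ref{prop:Eq6D} can be checked numerically by discretizing the integrals on a grid with unit quadrature weights and drawing the density data at random; the sketch below treats $\expval{H_0}{\qmstate}$ as an arbitrary scalar, and all numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # grid points for the 1d toy "integrals"
rho = rng.uniform(0.1, 1.0, n)           # positive toy density
jp, k, A = rng.normal(size=(3, n))       # toy j^p, k, and vector potential
v = rng.normal(size=n)                   # toy scalar potential
h0 = 1.7                                 # stand-in for <H_0>
aD = (k - jp) / rho                      # Diener's effective vector potential

lhs = h0 + np.sum(jp * A + rho * (v + 0.5 * A**2))          # <H(v,A)>
ED = h0 - np.sum((k - jp)**2 / (2 * rho))
rhs = ED + np.sum(k * A + rho * v) + np.sum(0.5 * rho * (A - aD)**2)
print(np.isclose(lhs, rhs))  # True
```

Since the identity holds pointwise, it is insensitive to the quadrature and to the particular random draw.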
This mimics the situation in density-only DFT, with, however, one important difference: the expression on the left-hand side of Eq.~\eqref{eq:G1} does not equal the expectation value $\expval{H(v,\mathbf{A})}{\qmstate}$. To obtain a density-functional setting, Diener minimized the left-hand side of Eq.~\eqref{eq:G1} over all $\qmstate \mapsto \rho$ and transitioned from $\ED(\qmstate,\mathbf{k})$ to a density functional by defining \begin{align} \FD(\rho,\mathbf{k}) := \inf_{\qmstate \mapsto \rho} \ED(\qmstate,\mathbf{k}), \end{align} which is equivalent to Eq.~\eqref{eqDienerFun}. The existence and uniqueness of minimizers of $\FD$ were never investigated by Diener---a possible minimum was simply tacitly assumed. As far as the attempt to obtain a Hohenberg--Kohn theorem is concerned, Diener's proof cannot be completed, as will be demonstrated here based on Proposition~\ref{prop:EITmasterwork}. However, we will first make an attempt at providing the best possible presentation of Diener's argument. A word on notation: If a minimizer of $\FD(\rho,\mathbf{k})$ exists, we denote it by $\qmstate_{\mathrm{m}}$ and call it a ``Diener minimizer''. For such a $\qmstate_\mathrm{m}$ we have $\qmstate_{\mathrm{m}} \mapsto \rho$ and \begin{equation} \FD(\rho,\mathbf{k}) = \ED( \qmstate_{\mathrm{m}}, \mathbf{k} ). \end{equation} That such a minimizer can indeed be guaranteed to exist under certain assumptions is proven in Appendix~\ref{app:proof}. Diener has formulated an unorthodox variational principle, Eqs.~(10) and (15) in Ref.~\onlinecite{Diener}, which we restate in the following proposition. \begin{proposition}[Diener's generalized variational principle] \label{prop:Dvarp} Let $v,\mathbf{A}$ be fixed.
Diener's functional $\FD$ satisfies for any $\rho$, for any $\qmstate\mapsto \rho$, and any current density $\mathbf{k}$, the inequality \begin{equation}\label{in1} \begin{split} \FD(\rho,\mathbf{k}) + \int ( \mathbf{k} \cdot \mathbf{A} + \rho v)\,\mathrm{d} {\mathbf{r}} \leq \expval{H(v,\mathbf{A})}{\qmstate} & \\ - \int \tfrac{1}{2} \rho \vert \mathbf{A}- \mathbf{a}(\qmstate,\mathbf{k})\vert^2 \,\mathrm{d} {\mathbf{r}} &. \end{split} \end{equation} Moreover, if a minimizer $\qmstate_{\mathrm{m}}$ of $\FD(\rho,\mathbf{k})$ exists, then \begin{equation} \label{in1-1} \begin{split} & \expval{H(v,\mathbf{A})}{\qmstate_{\mathrm{m}}} - \int \tfrac{1}{2} \rho \vert \mathbf{A}- \aD(\qmstate_{\mathrm{m}},\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}} \\ & \quad \leq \expval{H(v,\mathbf{A})}{\qmstate} - \int \tfrac{1}{2} \rho \vert \mathbf{A}- \aD(\qmstate,\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}}. \end{split} \end{equation} \end{proposition} \begin{proof} The first inequality, Eq.~\eqref{in1}, follows from minimizing the left-hand side of Eq.~\eqref{eq:G1} over $\qmstate \mapsto \rho$, \begin{equation} \label{eq:AL-add-on1} \begin{split} &\FD(\rho,\mathbf{k}) + \int ( \mathbf{k} \cdot \mathbf{A} + \rho v) \,\mathrm{d}{\mathbf{r}} \\ &= \inf_{\qmstate\mapsto \rho} \left\{ \ED(\qmstate,\mathbf{k}) + \int ( \mathbf{k} \cdot \mathbf{A} + \rho v) \,\mathrm{d} {\mathbf{r}} \right\} \\ &\leq \trace{ H(v,\mathbf{A})\qmstate } - \int \tfrac 1 2 \rho_\qmstate \vert \mathbf{A} - \mathbf a(\qmstate,\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}} .
\end{split} \end{equation} To obtain Eq.~\eqref{in1-1}, we note that the left-hand side of Eq.~\eqref{eq:AL-add-on1} can be rearranged into \begin{equation} \begin{split} &\ED(\qmstate_\mathrm{m},\mathbf{k}) + \int ( \mathbf{k} \cdot \mathbf{A} + \rho v ) \,\mathrm{d} {\mathbf{r}} \\ &= \trace{ H_0 \qmstate_\mathrm{m} } + \int ( \mathbf{k} \cdot \mathbf{A} + \rho v) \,\mathrm{d} {\mathbf{r}} - \int \tfrac 1 2 \rho \vert \aD(\qmstate_{\mathrm{m}},\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}} \\ &= \trace{ H(v, \mathbf{A}) \qmstate_\mathrm{m} } - \int \tfrac{1}{2} \rho \vert \mathbf{A}- \aD(\qmstate_{\mathrm{m}},\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}} . \end{split} \end{equation} Here we used $\FD(\rho,\mathbf{k}) =\ED(\qmstate_\mathrm{m},\mathbf{k})$ and Eq.~\eqref{eq:known-to-CDFTist} with $\mathbf{k} = \mathbf{j}^{\mathrm{p}}_{\qmstate_\mathrm{m}} + \rho\aD(\qmstate_{\mathrm{m}},\mathbf{k})$. \end{proof} In light of Proposition~\ref{prop:Dvarp}, a natural question to ask is the relation between Diener minimizers $\qmstate_\mathrm{m}$ and ground states $\qmstate_0$. We can offer the following answer: For a Hamiltonian $H(v,\mathbf{A})$ where $\mathbf{A}=\aD(\qmstate_{\mathrm{m}},\mathbf{k})$, the Diener minimizer $\qmstate_\mathrm{m}$ is a ground state. However, a ground state $\qmstate_0\mapsto (\rho_0,\mathbf{j}_0)$ generally does not need to be a minimizer of $\FD(\rho_0,\mathbf{j}_0)$ (the proof is given below in Proposition~\ref{prop:amazinEIT}). \begin{corollary} \label{cor:gs-thing} Suppose $\qmstate_\mathrm{m} \mapsto \rho_0$ to be a Diener minimizer for $\FD(\rho_0,\mathbf{k})$ and that $H(v,\mathbf{A})$, with $\mathbf{A} = \aD(\qmstate_{\mathrm{m}},\mathbf{k})$, has $\rho_0$ as a ground-state density. Then $v$ is unique up to a constant. 
Moreover, for any $\qmstate \mapsto \rho_0$ it holds (Eq.~(15) in Ref.~\onlinecite{Diener}) that \begin{equation} \label{eq:D-var-prin} \begin{split} \expval{H(v,\mathbf{A})}{\qmstate_{\mathrm{m}}} \leq \expval{H(v,\mathbf{A})}{\qmstate} - \int \tfrac{1}{2} \rho \vert \mathbf{A}- \aD(\qmstate,\mathbf{k}) \vert^2 \,\mathrm{d}{\mathbf{r}}, \end{split} \end{equation} and as a consequence $\qmstate_\mathrm{m}$ is a ground state for $H(v,\mathbf{A})$ and $\mathbf{k} = \mathbf{j}^{\mathrm{p}}_{\qmstate_\mathrm{m}} + \rho_0 \mathbf{A} =:\mathbf{j}_0$ is the ground-state current density. Further, $\aD(\qmstate_0,\mathbf{j}_0) = \aD(\qmstate_{\mathrm{m}},\mathbf{j}_0)$ for any other ground state of $H(v,\mathbf{A})$ with $\qmstate_0\mapsto \rho_0$. \end{corollary} Note that Corollary~\ref{cor:gs-thing} implies that all ground states with $\qmstate_0\mapsto \rho_0$ have the same paramagnetic current densities, since the diamagnetic part is always $\rho_0 \mathbf{A}$. [See also the joint-degeneracy theorem in Ref.~\onlinecite{Capelle2007}.] \begin{proof}[Proof of Corollary~\ref{cor:gs-thing}] Firstly, since the vector potential $\mathbf{A}=\aD(\qmstate_{\mathrm{m}},\mathbf{k})$ in $H(v,\mathbf{A})$ is fixed and $\rho_0$ is by assumption a ground-state density, the Hohenberg--Kohn result of B-DFT~\cite{GRAYCE} gives that $v$ is determined up to a constant. The inequality in Eq.~\eqref{eq:D-var-prin} is a direct consequence of Eq.~\eqref{in1-1} with $\aD(\qmstate_{\mathrm{m}},\mathbf{k}) = \mathbf{A}$. Since Eq.~\eqref{eq:D-var-prin} implies the weaker bound $\expval{H(v,\mathbf{A})}{\qmstate_{\mathrm{m}}} \leq \expval{H(v,\mathbf{A})}{\qmstate}$ for any $\qmstate\mapsto \rho_0$, it follows that $\qmstate_{\mathrm{m}}$ is a ground state of $H(v,\mathbf{A})$. 
In particular, Eq.~\eqref{eq:D-var-prin} gives for any ground state of $H(v,\mathbf{A})$ with $\qmstate_0\mapsto \rho_0$, \begin{equation} \int \rho_0 \vert \mathbf{A}-\mathbf a(\qmstate_0,\mathbf{k})\vert^2 \,\mathrm{d} {\mathbf{r}} = 0 \end{equation} and thus $\rho_0\vert \mathbf{A}- \mathbf a(\qmstate_0,\mathbf{k}) \vert^2 =0$ almost everywhere (a.e.). By the unique-continuation property from sets of positive measure~\cite{Garrigue2019,LaestadiusBenedicksPenz}, we have $\vert \{\rho_0=0 \} \vert=0$, so $\mathbf a(\qmstate_0,\mathbf{k}) = \mathbf{A} = \aD(\qmstate_{\mathrm{m}},\mathbf{k})$ (a.e.). \end{proof} The main question at this point is how to guarantee the required $\mathbf{A}=\aD(\qmstate_{\mathrm{m}},\mathbf{k})$. To this end, Diener suggested in Ref.~\onlinecite{Diener} to choose $\mathbf{A}$ and, for arbitrary $\rho$, find the stationary point, \begin{equation} \mathbf{j}_\mathrm{stat}(\rho,\mathbf{A}) = \mathrm{arg} \, \stat_{\mathbf{k}} \left\{ F_\mathrm{D}(\rho,\mathbf{k}) + \int \mathbf{k}\cdot \mathbf{A} \,\mathrm{d}{\mathbf{r}} \right\}, \end{equation} where it was implicitly assumed that (i) there is a Diener minimizer $\qmstate_{\mathrm{m}}$ and (ii) $\FD(\rho,\mathbf{k})$ is differentiable with respect to $\mathbf{k}$. Under these assumptions, Diener claimed that this $\mathbf{j}_{\mathrm{stat}}$ has the desired property \begin{equation} \aD(\qmstate_\mathrm{m}, \mathbf{j}_{\mathrm{stat}}) =\mathbf{A} \quad \text{(up to a gauge)}. \end{equation} For a ground state $\qmstate_0 \mapsto \rho_0$ of $H(v,\mathbf{A})$ one then has by Eq.~\eqref{eq:G1} \begin{equation} \label{eq:AL91} \begin{split} \FD(\rho_0,\mathbf{j}_\mathrm{stat}(\rho_0,\mathbf{A})) + \int ( \mathbf{j}_{\mathrm{stat}}(\rho_0,\mathbf{A}) \cdot \mathbf{A}+ \rho_0 v)\,\mathrm{d} {\mathbf{r}} &\\ \leq \expval{H(v,\mathbf{A})}{\qmstate_0}, \end{split} \end{equation} where the left-hand side equals $\expval{H(v,\mathbf{A})}{\qmstate_{\mathrm{m}}}$. 
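Heuristically, the desired property can be motivated by a formal envelope argument (we add this computation for orientation; it presupposes (i) and (ii) and, crucially, neglects the implicit $\mathbf{k}$-dependence of the minimizer $\qmstate_{\mathrm{m}}$). Since $\FD(\rho,\mathbf{k})=\ED(\qmstate_{\mathrm{m}},\mathbf{k})$ with $\ED(\qmstate_{\mathrm{m}},\mathbf{k}) = \trace{ H_0 \qmstate_{\mathrm{m}} } - \int \tfrac{1}{2}\rho \vert \aD(\qmstate_{\mathrm{m}},\mathbf{k})\vert^2 \,\mathrm{d}{\mathbf{r}}$ and $\rho\,\aD(\qmstate_{\mathrm{m}},\mathbf{k}) = \mathbf{k} - \mathbf{j}^{\mathrm{p}}_{\qmstate_{\mathrm{m}}}$, varying only the explicit $\mathbf{k}$-dependence gives

```latex
% Formal stationarity condition for Diener's search; the implicit
% k-dependence of the minimizer is neglected (envelope argument).
\begin{equation*}
  \frac{\delta}{\delta \mathbf{k}}
  \left\{ \FD(\rho,\mathbf{k})
    + \int \mathbf{k}\cdot \mathbf{A} \,\mathrm{d}{\mathbf{r}} \right\}
  = - \aD(\qmstate_{\mathrm{m}},\mathbf{k}) + \mathbf{A} ,
\end{equation*}
```

so that a stationary point $\mathbf{j}_{\mathrm{stat}}$ would formally satisfy $\aD(\qmstate_{\mathrm{m}},\mathbf{j}_{\mathrm{stat}})=\mathbf{A}$. It is precisely the neglected implicit dependence that makes this argument break down in general.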
To sum up, a stationary variation over $\mathbf{k}$ is thought to select the correct $\aD(\qmstate_\mathrm{m}, \mathbf{j}_{\mathrm{stat}}) =\mathbf{A}$, while in the next step minimizing over densities gives the ground-state energy because of Eq.~\eqref{eq:AL91}, \begin{equation} \begin{split} E(v,\mathbf{A}) &= \inf_\rho \stat_\mathbf{k} \left\{ \FD(\rho,\mathbf{k}) + \int ( \mathbf{k}\cdot\mathbf{A} + \rho v ) \,\mathrm{d}{\mathbf{r}} \right\}. \end{split} \end{equation} The attempted proof of Diener for a Hohenberg--Kohn result then relies on the augmented variational principle in Eq.~\eqref{eq:D-var-prin}, which moreover has to be a strict inequality whenever $\qmstate$ is not a Diener minimizer $\qmstate_{\mathrm{m}}$. But since Corollary~\ref{cor:gs-thing} shows that under certain assumptions such Diener minimizers are ground states, an \emph{additional} condition of uniqueness of ground states gives a strict inequality. The usual Hohenberg--Kohn argument by contradiction could then be completed by means of Eq.~\eqref{eq:D-var-prin}. Furthermore, under the assumption that $\aD(\qmstate_\mathrm{m}, \mathbf{j}_{\mathrm{stat}})=\mathbf{A}$ is selected, there is also a more direct argument available. Suppose that $(\rho_0,\mathbf{j}_0)$ is the ground-state density pair of two different Hamiltonians with vector potentials $\mathbf{A}$ and $\mathbf{A}'$, respectively. Then, if for $\FD(\rho_0,\mathbf{k}) + \int \mathbf{k}\cdot \mathbf{A} \,\mathrm{d} {\mathbf{r}}$ and $\FD(\rho_0,\mathbf{k}) + \int \mathbf{k}\cdot \mathbf{A}' \,\mathrm{d}{\mathbf{r}}$ Diener's stationary search selects $\mathbf{j}_0 = \mathbf{j}_\mathrm{stat}(\rho_0,\mathbf{A}) = \mathbf{j}_\mathrm{stat}(\rho_0,\mathbf{A}')$, such that $\aD(\qmstate_\mathrm{m}, \mathbf{j}_0)$ equals both $\mathbf{A}$ and $\mathbf{A}'$ up to a gauge, then the magnetic field is the same for both systems. 
The Hohenberg--Kohn result, i.e., that the scalar potentials also are equal (up to an additive constant), would then follow by the B-DFT result of Grayce and Harris~\cite{GRAYCE}. Alas, as a corollary to our main Proposition~\ref{prop:EITmasterwork}, the next proposition shows that Diener's stationary search (as suggested and erroneously proved in Ref.~\onlinecite{Diener}) does \emph{not} select $\aD(\qmstate_\mathrm{m}, \mathbf{j}_{\mathrm{stat}})=\mathbf{A}$ up to a gauge. Furthermore, we also have, as a corollary to Proposition~\ref{prop:EITmasterwork}, that ground states are \emph{not} in general minimizers of the Diener functional $\FD$. \begin{proposition} \label{prop:amazinEIT} (i) Let $\rho$ and $\mathbf{A}$ be fixed. The Diener optimization \begin{equation} \Gstat(\rho,\mathbf{A}) = \stat_\mathbf{k} \left\{ F_\mathrm{D}(\rho,\mathbf{k}) + \int \mathbf{k}\cdot\mathbf{A} \,\mathrm{d}{\mathbf{r}} \right\} \end{equation} does not in general select $\mathbf{j}_\mathrm{stat}$ such that $\mathbf{a}(\qmstate_\mathrm{m}, \mathbf{j}_{\mathrm{stat}}) = \mathbf{A}$ (up to a gauge). (ii) A ground state with the density pair $(\rho,\mathbf{j})$ is not in general a Diener minimizer of $\FD(\rho,\mathbf{j})$. \end{proposition} \begin{proof} For (i), we shall establish $\GD (\rho,\mathbf{A}) \geq \FGH(\rho,\mathbf{A})$ for arbitrary $(\rho,\mathbf{A})$, which by Proposition~\ref{PropGDneqFGH} is a contradiction. If no stationary point exists, there is nothing to prove. Therefore, assume that the value of $\Gstat(\rho,\mathbf{A})$ is realized by some $\mathbf{j}_{\mathrm{stat}}$, whose contribution $\FD(\rho,\mathbf{j}_{\mathrm{stat}})$ is, in turn, realized by a state $\qmstate_{\mathrm{m}}$ with paramagnetic current density $\jparaDm$. Because the arguments of $\FD$ are gauge invariant, any gauge-transformed state $\qmstate'_{\mathrm{m}}$, with $\jparaDm' = \jparaDm + \rho \nabla\chi$ and gauge function $\chi$, is an equally valid minimizer. 
By stipulation, we have $\rho \, \aD(\qmstate_{\mathrm{m}},\mathbf{j}_{\mathrm{stat}}) = \mathbf{j}_{\mathrm{stat}} - \jparaDm = \rho (\mathbf{A} + \nabla f)$, where $f$ is a gauge function for $\mathbf{A}$. It follows that choosing $\chi = f$ reproduces the external vector potential exactly, i.e., $\aD(\qmstate'_{\mathrm{m}},\mathbf{j}_{\mathrm{stat}}) = \mathbf{A}$, and also $\mathbf{j}_{\mathrm{stat}} = \jparaDm' + \rho\mathbf{A}$. Hence, \begin{equation} \begin{split} \GD & (\rho,\mathbf{A}) \geq \Gstat(\rho,\mathbf{A}) = \stat_{\mathbf{k}} \left\{ F_\mathrm{D}(\rho,\mathbf{k}) + \int \mathbf{k}\cdot \mathbf{A} \,\mathrm{d}\mathbf{r} \right\} \\ & = \expval{H_0}{\qmstate'_{\mathrm{m}}} + \int \mathbf{j}_{\mathrm{stat}}\cdot \mathbf{A} \,\mathrm{d}{\mathbf{r}} - \int \tfrac{1}{2} \rho \vert \mathbf{A} \vert^2 \,\mathrm{d}{\mathbf{r}} \\ & = \expval{H_0}{\qmstate'_{\mathrm{m}}} + \int \left( \jparaDm'\cdot\mathbf{A} + \tfrac{1}{2} \rho |\mathbf{A}|^2 \right) \,\mathrm{d}\mathbf{r} \geq \FGH(\rho,\mathbf{A}). \end{split} \end{equation} For part (ii), we demonstrate that the assumption that a ground state also is a Diener minimizer leads to a contradiction. Let $\mathbf{A}(\mathbf{r}) = \frac{1}{2} B \mathbf{e}_z\times\mathbf{r}$ be a vector potential representing a uniform magnetic field along the $z$-axis. Let $v_B(\mathbf{r}) = -Z/|\mathbf{r}| + \frac{1}{2} (\omega_0^2 - \tfrac{1}{4} B^2)(x^2+y^2)$, with $\omega_0\neq 0$ and note that the effective scalar potential $u = v_B + \tfrac{1}{2} |\mathbf{A}|^2 = -Z/|\mathbf{r}| + \frac{1}{2} \omega_0^2 (x^2+y^2)$ is independent of $B$. The Hamiltonian $H(v_B,\mathbf{A}) = H(v_0,\mathbf{0}) + \frac{1}{2} B L_z$ has cylindrical symmetry and the eigenstates therefore have quantized angular momentum component $L_z = -{\mathrm{i}}\sum_j [\mathbf{r}_j\times\nabla_j]_z$. 
Due to the quantization, the paramagnetic term becomes a trivial shift and the ground state is piecewise constant as a function of $B$, with jumps corresponding to level crossings. Consequently, the ground-state density $\rho$ is also piecewise constant in $B$. For values of $Z$ that correspond to an open-shell atom (e.g., a carbon atom with $Z=6$ and six electrons), there is a ground state $\qmstate_{-M}$ for $B\geq 0$ with $\expval{L_z}{\qmstate_{-M}} = -M$. For $B\leq 0$, the ground state is $\qmstate_{+M} = \qmstate_{-M}^*$ with the same density, $\qmstate_{\pm M} \mapsto \rho$, but $\expval{L_z}{\qmstate_{+M}} = +M$. For sufficiently small $|B|$, the Grayce--Harris functional is given by \begin{equation} \FGH(\rho,\mathbf{A}) = \FGH(\rho,\mathbf{0}) - \frac{1}{2} |MB| + \int \tfrac{1}{2} \rho \vert \mathbf{A} \vert^2 \,\mathrm{d}\mathbf{r}, \end{equation} which is nonconvex in $B$ because of the term $-\tfrac{1}{2}|MB|$ and is independent of the sign of $B$. For $B>0$, the total current density is given by $\mathbf{j}_{+} = \mathbf{j}^{\mathrm{p}}_{\qmstate_{-M}} + \rho \mathbf{A}$ and for $B<0$ it is $\mathbf{j}_{-} = \mathbf{j}^{\mathrm{p}}_{\qmstate_{+M}} + \rho \mathbf{A} = -\mathbf{j}_{+}$. By stipulation, the ground states $\qmstate_{\mp M}$ for all sufficiently small $|B|$ are also minimizers of $\FD(\rho,\mathbf{j}_{\pm})$. Then \begin{equation} \begin{split} \GD(\rho,\mathbf{A}) & \geq \FD(\rho, \mathbf{j}_{\pm}) + \int \mathbf{j}_{\pm}\cdot\mathbf{A} \,\mathrm{d} \mathbf{r} \\ & = \expval{H_0}{\qmstate_{\mp M}} + \int ( \mathbf{j}_{\pm}\cdot\mathbf{A} - \tfrac{1}{2} \rho \vert \mathbf{A} \vert^2 ) \,\mathrm{d} \mathbf{r} \\ & = \FGH(\rho,\mathbf{A}). \end{split} \end{equation} Combined with the generic fact $\FGH(\rho,\mathbf{A}) \geq \GD(\rho,\mathbf{A})$, we now have $\FGH(\rho,\mathbf{A}) = \GD(\rho,\mathbf{A})$ for a whole interval of small $|B|$. 
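The nonconvexity invoked here can be made explicit by an elementary expansion (added for clarity): with $\mathbf{A}=\tfrac{1}{2}B\,\mathbf{e}_z\times\mathbf{r}$ one has $\vert\mathbf{A}\vert^2 = \tfrac{1}{4}B^2(x^2+y^2)$, so that

```latex
% FGH as a function of B: a concave kink at B = 0 plus a convex
% quadratic diamagnetic term.
\begin{equation*}
  \FGH(\rho,\mathbf{A})
  = \FGH(\rho,\mathbf{0}) - \tfrac{1}{2}\vert MB \vert
    + \frac{B^2}{8} \int \rho\,(x^2+y^2) \,\mathrm{d}{\mathbf{r}} ,
\end{equation*}
```

and for $M\neq 0$ the kink $-\tfrac{1}{2}\vert MB\vert$ at $B=0$ rules out convexity on any interval of small $\vert B\vert$ that contains the origin.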
However, $\GD(\rho,\mathbf{A})$ is convex in $\mathbf{A}$ (and hence also in $B$) and therefore cannot equal the nonconvex $\FGH(\rho,\mathbf{A})$ on an interval of small $|B|$. This contradiction completes the proof. \end{proof} In the previous section, Propositions~\ref{PropGDneqFGH} and~\ref{prop:EITmasterwork} established, via a reinterpretation as a minimax principle, that Diener's approach cannot work, since the claims it attempts to derive are false. Proposition~\ref{prop:amazinEIT} above further shows, in the terminology of Ref.~\onlinecite{Diener}, that the central steps in Diener's reasoning towards a Hohenberg--Kohn-like result fail. Our results here are thus definitive and go further than previous critiques, which identified an unfounded strict inequality, a self-consistency condition that would require further analysis, and a variational collapse for specific types of ground-state densities~\cite{Tellgren2012,Laestadius2014}. It may also be instructive to note a case where Diener's approach does go through, albeit under extreme restrictions. When only $\qmstate$ with vanishing $\mathbf{j}^{\mathrm{p}}_\qmstate$ are allowed (or when only real-valued states are allowed), we can choose a $H(v,\mathbf{A})$ with ground state $\qmstate_0$ and densities $\rho_0$ and $\mathbf{j}_0 = \mathbf{0}+\rho_0 \mathbf{A}$. In this case, the densities trivially determine the external vector potential $\mathbf{A}=\mathbf{j}_0/\rho_0$. 
However, the correct $\mathbf{k}=\mathbf{j}_0=\rho_0\mathbf{A}$ is also recovered from the variational principle \begin{equation} \label{eq:37} \begin{split} &\Gstat(\rho_0,\mathbf{A})= \stat_{\mathbf{k}} \left\{ \FD(\rho_0,\mathbf{k}) + \int \mathbf{k}\cdot \mathbf{A} \,\mathrm{d}{\mathbf{r}} \right\} \\ & = \stat_{\mathbf{k}} \left\{ \inf_{\qmstate\mapsto \rho_0,\mathbf{j}^{\mathrm{p}}_\qmstate=\mathbf{0}} \expval{H_0}{\qmstate} - \int \frac{\vert \mathbf{k} \vert^2}{2 \rho_0} \,\mathrm{d} {\mathbf{r}} + \int \mathbf{k}\cdot \mathbf{A} \,\mathrm{d}{\mathbf{r}} \right\}. \end{split} \end{equation} This restrictive case works because the coupling term $\mathbf{k}\cdot\mathbf{j}^{\mathrm{p}}_\qmstate/\rho_0$ is absent and $\inf_{\qmstate\mapsto \rho_0,\mathbf{j}^{\mathrm{p}}_\qmstate=\mathbf{0}} \expval{H_0}{\qmstate}$ is independent of $\mathbf{k}$, such that Eq.~\eqref{eq:37} leads to (at $\mathbf{k} = \mathbf{j}_{\mathrm{stat}}$) \begin{equation} - \frac{\mathbf{j}_{\mathrm{stat}}}{\rho_0} + \mathbf{A} = \mathbf 0 \iff \mathbf{a}(\qmstate_{\mathrm{m}},\mathbf{j}_{\mathrm{stat}}) = \frac{\mathbf{j}_{\mathrm{stat}}}{\rho_0} = \mathbf{A} . \end{equation} Before concluding, we also take this opportunity to correct an incorrect claim in a previous publication of one of the authors, namely Proposition~8 in Ref.~\onlinecite{Laestadius2014}, which essentially misconstrued the contradiction reached in a reductio ad absurdum proof as a problem with the proof itself. Specifically, Proposition~8 in Ref.~\onlinecite{Laestadius2014} considers two ground state energies $E = E(v,\mathbf{A})$ and $E' = E(v',\mathbf{A}')$. Leaving aside for the sake of the argument all other problems with Diener's proof idea, one then reaches the contradiction $E+E'<E+E'$, but contrary to Proposition~8 in Ref.~\onlinecite{Laestadius2014} this is not an additional flaw of the attempted proof. 
\section{Conclusions} We have revisited Diener's attempted construction of a density-functional theory featuring the gauge-invariant total current density. The underlying crucial assumptions have been clarified by a reformulation in terms of a maximin principle. As Diener's construction employs a nonstandard variational principle, it avoids some of the usual difficulties with the total current density as a variational parameter. Nonetheless, we have shown here that his attempted construction fails to establish a current-density-functional theory, since the correct ground-state energy cannot be obtained within this framework. Moreover, the attempt to establish a Hohenberg--Kohn mapping for total current densities suffers from irreparable gaps in the reasoning. We have shown that there must be counterexamples for which the procedure does not retrieve the correct external vector potential from a given current density. On the other hand, in broad outline, Diener's formulation shares notable features with the recently proposed Maxwell--Schr\"odinger DFT (MDFT)~\cite{TellgrenSolo2018}, though the details differ on crucial points. Diener introduces an effective vector potential, which is equivalent to a total current density, while MDFT takes the induced magnetic field into account, which is equivalent to a vector potential or a current density; the total current density then arises naturally as a basic variable. In both cases, the total current density is a variational parameter that is varied independently of the wave function and the external potentials. Moreover, in Diener's approach this variational parameter originates from a nonstandard, and unfortunately mistaken, re-expression of the Schr\"odinger variational principle. In MDFT, it comes from a modified energy minimization principle that simply adds the energy of the induced magnetic field. 
One can thus view MDFT as a proof of concept for deriving density-functional theories of the total current from modified variational principles. The same considerations, incorporating a fully quantized electromagnetic field, lead to quantum-electrodynamical DFT (QEDFT)~\cite{Ruggenthaler2015}. Such extended density-functional theories provide a physically better motivated and theoretically more sound route to a density-functional framework that includes the total current density. \section*{Acknowledgements} A.~L.\ acknowledges support from the Research Council of Norway (RCN) under CoE Grant Nos. 287906 and 262695 (Hylleraas Centre for Quantum Molecular Sciences). E.~I.~T.\ acknowledges support from RCN under Grant No.~287950 and ERC-STG-2014 Grant No.~639508. M.~P.\ acknowledges support by the Erwin Schrödinger Fellowship J 4107-N27 of the FWF (Austrian Science Fund). The authors are thankful to L.~Garrigue and M.~A.~Csirik for comments and suggestions that greatly improved the manuscript and moreover acknowledge the support of the Centre for Advanced Study (CAS) in Oslo, Norway, which funded and hosted the workshop ``Do Electron Current Densities Determine All There Is to Know?'' during 2018.
\section{Introduction} \input{intro} \input{scheme_pasj} \section{Accuracy and Performance} \input{performance_pasj} \section{Summary and Discussion} \input{summary} \bigskip The authors thank Piet Hut for useful comments and for the name of the hybrid scheme, Keigo Nitadori and Ataru Tanikawa for fruitful discussions, and the referee, Simon F. Portegies Zwart, for useful comments on the manuscript. M. F. is financially supported by Research Fellowships of the Japan Society for the Promotion of Science (JSPS) for Young Scientists. This research is partially supported by the Special Coordination Fund for Promoting Science and Technology (GRAPE-DR project), Ministry of Education, Culture, Sports, Science and Technology, Japan. Part of the calculations were done using the GRAPE system at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan. \subsection{Comparison with Direct Scheme for Small-$N$ Model} As a test of the Bridge scheme, we performed a fully self-consistent $N$-body simulation of a star cluster within a galaxy and compared the results with those obtained with the fourth-order Hermite scheme. In this section, we describe the results and the performance of the Bridge scheme. We adopted a King model with the non-dimensional central potential $W_0=9$ as the model of the parent galaxy and one with $W_0=7$ as that of the star cluster. The system of units is the Heggie unit (Heggie \& Mathieu 1986), in which the gravitational constant $G$ is 1 and the mass and the binding energy of the parent galaxy are 1 and 0.25, respectively. The initial position of the star cluster is at distance 2.5 from the center of the parent galaxy and the initial velocity is 0.65. Both the galaxy and the star cluster are expressed as self-consistent $N$-body models. The number of particles of the parent galaxy, $N_{\rm G}$, is $10^5$ and that of the star cluster, $N_{\rm SC}$, is $2\times 10^3$. 
In table \ref{tb:testrun}, we summarize the model parameters and initial conditions. If we assume the total mass of the star cluster $M_{\rm SC}=10^5 \MO$ and the unit length is 10 pc, the total mass of the Galaxy is $M_{\rm G}=10^7 \MO$ and the unit time and velocity are 0.15 Myr and 66 km s$^{-1}$, respectively. These values would correspond to the central region of a galaxy somewhat smaller than our Galaxy. The potential is softened using the usual Plummer softening. The softening length for the gravitational interactions between star cluster particles is $\epsilon _{\rm SC} = 2.0 \times 10^{-4}=0.1 \times 4/N_{\rm SC}$, and that for the others (i.e., between galaxy particles, and between galaxy particles and star cluster particles) is $\epsilon _{\rm G} = 6.25 \times 10^{-3}$. We used the opening angle $\theta = 0.75$ with the center-of-mass (dipole-accurate) approximation. The maximum group size for GRAPE calculation (Makino 1991) is 8192. For the leapfrog integrator, we adopted the stepsizes of $\Delta t=1/128$ and 1/256. The maximum timestep for the Hermite scheme with individual timesteps is equal to the timestep of the tree. All particles synchronize at each tree timestep. Within these steps, star cluster particles are integrated with the Hermite scheme with individual timesteps \citep{MA92}. For the timestep criterion, we adopted the standard formula given in \citet{MA92}. These parameters are summarized in table \ref{tb:param}. The simulations are performed on GRAPE-6 \citep{M03} for runs Direct 1 and 2, and on GRAPE-6A \citep{Fk05} for runs Bridge 1 and 2. We summarize them in table \ref{tb:runs}. Total energy was conserved to better than 0.06\% when $\Delta t=1/256$ and 0.2\% when $\Delta t=1/128$ with the Bridge scheme, and to 0.006\% with the Hermite scheme. To compare the results, we followed the time evolution of the position, bound mass, core radius, and core density of the star cluster. The bound mass and orbit are calculated in the same way as in \citet{Fj06}. 
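The time stepping described above, with shared leapfrog steps for the tree forces and the star cluster particles sub-cycled in between, can be summarized by the following kick-drift-kick skeleton. This is only an illustrative sketch, not the actual Bridge code: the function names are ours, and a plain drift stands in for the individual-timestep Hermite integrator.

```python
import numpy as np

def bridge_step(x, v, dt, acc_ext, evolve_internal):
    """One kick-drift-kick step of a Bridge-like split scheme:
    half kick from the slowly varying (tree) forces, internal
    evolution over dt, and another half kick from the tree forces."""
    v = v + 0.5 * dt * acc_ext(x)     # half kick (tree forces)
    x, v = evolve_internal(x, v, dt)  # internal sub-steps / drift
    v = v + 0.5 * dt * acc_ext(x)     # half kick (tree forces)
    return x, v

# Toy check: with a pure drift as the internal evolution and a
# harmonic "galactic" acceleration, the scheme reduces to the
# ordinary second-order leapfrog, so the energy error stays bounded.
def drift(x, v, dt):
    return x + dt * v, v

x, v, dt = np.array([1.0]), np.array([0.0]), 1.0 / 128
e0 = 0.5 * v[0]**2 + 0.5 * x[0]**2
for _ in range(1280):                 # integrate to T = 10
    x, v = bridge_step(x, v, dt, lambda q: -q, drift)
e1 = 0.5 * v[0]**2 + 0.5 * x[0]**2
energy_err = abs(e1 - e0) / e0
```

Replacing `evolve_internal` by a direct-summation integrator for the cluster particles, while the galaxy particles simply drift, recovers the structure of the hybrid scheme.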
We defined the core radius and the core density using the formula proposed by \citet{CH85}. \begin{table}[htbp] \begin{center} \caption{Model Parameters of the Test Model \label{tb:testrun}} \begin{tabular}{lcc} \hline \hline Parameters & Galaxy & Star cluster \\ \hline Galactic halo & King 9 & King 7 \\ Total mass & 1.0 & 0.01 \\ Binding energy & 0.25 & $2.5 \times 10^{-4}$ \\ Half-mass radius & $9.8\times 10^{-1}$ & $8.1\times 10^{-2}$\\ $N$ & $10^5$& $2\times 10^3$ \\ \hline Initial position & & ( 2.5, 0, 0) \\ Initial velocity & &( 0, 0.65, 0) \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \begin{center} \caption{Parameters for $N$-body Simulation\label{tb:param}} \begin{tabular}{lcc} \hline \hline Parameters & Value \\ \hline $\epsilon _{\rm G}$ & $6.25 \times 10^{-3}$ \\ $\epsilon _{\rm SC}$ & $2.0 \times 10^{-4}$ \\ \hline $\theta$ & 0.75\\ $n_{\rm crit}$ & 8192\\ $\Delta t$ & 1/128, 1/256\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Runs\label{tb:runs}} \begin{tabular}{lcccc} \hline \hline Runs & methods & seed & stepsize & run time (h)\\ \hline Direct 1 & direct & 1 & 1/256 & 34\\ Direct 2 & direct & 2 & 1/256 & 34\\ Bridge 1 & hybrid & 1 & 1/128 & 10\\ Bridge 2 & hybrid & 2 & 1/256 & 19\\ \hline \end{tabular} \end{center} \end{table} Figures \ref{fig:radius_test} and \ref{fig:mass_test} show the evolution of the distance from the galactic center and the bound mass of the star cluster. Figures \ref{fig:core_test} and \ref{fig:density_test} show the core radius and the core density of the star cluster. These results show that the Bridge scheme works very well. The difference between the results of the Hermite scheme and those of the Bridge scheme is smaller than the run-to-run variations in each method. Figures \ref{fig:core_test} and \ref{fig:density_test} show that core collapse occurs at $T=150$--$180$. 
Core collapse occurs at \begin{eqnarray} t_{\rm cc} \simeq ct_{\rm rh}, \label{eq:t_cc} \end{eqnarray} where $t_{\rm rh}$ is the star cluster's half-mass relaxation time \citep{SH71}, \begin{eqnarray} t_{\rm rh} = 0.14 \frac{r_{\rm h}^{3/2}\ N_{\rm SC}} {(GM_{\rm SC})^{1/2}\ln \Lambda}. \label{eq:t_rh} \end{eqnarray} Here $r_{\rm h}$, $N_{\rm SC}$, and $M_{\rm SC}$ are the half-mass radius, the number of particles, and the mass of the star cluster. We adopted the Coulomb logarithm $\ln \Lambda \simeq \ln(0.1 N_{\rm SC})$. In an isolated star cluster in which all stars have the same mass, $c\simeq 15$ \citep{Cohn80}. From these equations, the core collapse time of our cluster is calculated as $t_{\rm cc} \simeq 180$. This value is consistent with the results of our simulation. Note that in our model we used the scaling of $M_{\rm G}=4E_{\rm G}=1$. \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure2.eps} \end{center} \caption{Distance of the star cluster from the GC plotted as a function of time. Solid and dashed curves show the results of the runs in which all particles were integrated with the direct (Hermite) scheme. Dash-dotted and dotted curves show those with the Bridge scheme.} \label{fig:radius_test} \end{figure} \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure3.eps} \end{center} \caption{Bound mass of the star cluster plotted as a function of time. Curves have the same meanings as in figure \ref{fig:radius_test}.} \label{fig:mass_test} \end{figure} \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure4.eps} \end{center} \caption{Core radius, $r_{\rm c}$, plotted as a function of time. Curves have the same meanings as in figure \ref{fig:radius_test}.} \label{fig:core_test} \end{figure} \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure5.eps} \end{center} \caption{Core density, $\rho _{\rm c}$, as a function of time. 
Curves have the same meanings as in figure \ref{fig:radius_test}.} \label{fig:density_test} \end{figure} The total energy error of the system in the Bridge 2 run is shown in Figure \ref{fig:err_test}. The total energy is conserved very well. The total energy error depends on the parameters for the tree, $\Delta t$ and $\theta$. This is because most of the energy error is generated in the parent galaxy, which is much larger than the star cluster and has much larger energy. To see whether the internal energy of the star cluster is conserved or not, we measured the energy error of the internal motion of the star cluster within each step, $\Delta t$. The cumulative error of each step is shown in Figure \ref{fig:err_test}. Note that we plot the energy error of the star cluster relative to the internal energy of the star cluster, which is 0.1 \% of the total energy of the system. Although the error becomes larger after core collapse occurs, the internal energy remains well conserved. \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure6.eps} \end{center} \caption{The energy error of the system, for the hybrid run with $\Delta t =1/256$ (Run Bridge 2). The full curve shows the total energy error of the system and the dashed curve shows the cumulative energy error of the star cluster.} \label{fig:err_test} \end{figure} The distributions of the CPU time in the runs with $\Delta t=1/128$ and $\Delta t = 1/256$ are shown in table \ref{tb:performance}. In these simulations, the cost of the tree part was much larger than that of the direct part. The CPU time of the tree part is almost constant throughout the simulation. In contrast, the cost of the direct part increased after $T \simeq 140$, because the core density became higher. 
\begin{table}[htbp] \begin{center} \caption{Distribution of the CPU Time of our Simulations\label{tb:performance}} \begin{tabular}{lcc} \hline \hline Section of Code & \multicolumn{2}{c}{Percentage of CPU Time (\%)}\\ & $\Delta t = 1/128$ & $\Delta t = 1/256$\\ \hline Tree & 87.2 & 91.2\\ Direct & 10.7 & 6.6\\ The others & 2.1 & 2.2\\ \hline \end{tabular} \end{center} \end{table} \subsection{Large-$N$ Models} We performed simulations of large-$N$ models. The number of particles is sufficiently large for simulations of star clusters near the GC ($N_{\rm G}=2\times 10^6$, $N_{\rm SC}=65,536$). The model of the galaxy represents the central region of the Galaxy from $\sim 1$ to several pc from the GC. For the star cluster, we followed the model used in Portegies Zwart et al. (2003). They modeled the Arches and Quintuplet star clusters. The core radius of our model, $r_{\rm core}$, is 0.087 pc. Using this model, we performed two simulations, in which the star cluster has circular and eccentric orbits, respectively. In table \ref{tb:models}, we summarize the model parameters. \begin{table*}[htbp] \begin{center} \caption{Models\label{tb:models}} \begin{tabular}{lccccc} \hline \hline & King $W_0$& $N$ & $M({\rm M_{\odot}})$ & $r_{\rm c}$ (pc) & $r_{\rm t}$ (pc)\\ \hline The Galaxy & 10 & $2 \times 10^6 $ & $8.0 \times 10^7$ & 0.66 & 120 \\ Star cluster & 3 & 65536 & $7.9 \times 10^4$ & 0.087 & 0.47 \\ \hline \end{tabular} \end{center} \end{table*} We performed $N$-body simulations using the Bridge code. For the tree part, we used the opening angle $\theta = 0.75$ with the center-of-mass (dipole-accurate) approximation. The maximum group size for a GRAPE calculation (Makino 1991) is 8192. The stepsize of the leapfrog integrator is $\Delta t = 1/512$ (Heggie unit). The potential is softened using Plummer softening. 
The softening length for gravitational interactions between star cluster particles, $\epsilon _{\rm SC}$, is $1.0\times 10^{-5}$ pc and that for the others, $\epsilon _{\rm G}$, is $3.9\times 10^{-2}$ pc. We stopped the simulations at $T =0.75 ({\rm Myr}) = 5 ({\rm unit\ time})$. These parameters are summarized in table \ref{tb:param2}. After core collapse, the structures of the star clusters are not expressed correctly in our simulations because we use a softened potential for the stars. We used GRAPE-6 (Makino et al. 2003) for the force calculation. The total energy was conserved to better than $5\times 10^{-5}$ for the circular orbit (figure \ref{fig:err_cir}) and $8\times 10^{-5}$ for the eccentric orbit (figure \ref{fig:err_ecc}) throughout the simulations. \begin{table}[htbp] \begin{center} \caption{Parameters for $N$-body Simulation\label{tb:param2}} \begin{tabular}{lcc} \hline \hline Parameters & Value \\ \hline $\epsilon _{\rm G}$ & $3.9 \times 10^{-2}$ (pc)\\ $\epsilon _{\rm SC}$ & $1.0 \times 10^{-5}$ (pc)\\ \hline $\Delta t$ & $2.9 \times 10^{-4}$ (Myr)\\ (Heggie unit) &1/512 \\ \hline $\theta$ & 0.75\\ $n_{\rm crit}$ & 8192\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \begin{center} \caption{Initial Conditions} \begin{tabular}{ccccc} \hline \hline Simulation & Initial position (pc) & Initial velocity (km s$^{-1}$)\\ \hline Circular & 2 & 130\\ Eccentric & 5 & 72 \\ \hline \end{tabular} \end{center} \end{table} Figure \ref{fig:snapshots} shows snapshots from the run in which the orbit of the star cluster is eccentric. Figure \ref{fig:results} shows the time evolution of the distance from the GC, bound mass, and core radius of the star clusters. In both simulations, core collapse occurs at 0.5--0.6 Myr. We obtained the core collapse time, $t_{\rm cc}=0.51$ Myr, from equations (\ref{eq:t_cc}) and (\ref{eq:t_rh}), where the half-mass radius of the star cluster is $r_{\rm h}=0.13$ pc. We adopted $c=0.20$, which is suggested by \citet{PM02}. 
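Both collapse-time estimates quoted in this paper follow directly from equations (\ref{eq:t_cc}) and (\ref{eq:t_rh}). The following sketch reproduces them; the variable names and the numerical value of $G$ in pc, $\MO$, Myr units are ours.

```python
import math

def t_cc(r_h, n, m, c, grav=1.0):
    """Core-collapse time t_cc = c * t_rh, with the half-mass
    relaxation time t_rh of Spitzer & Hart (1971) and the
    Coulomb logarithm ln(Lambda) = ln(0.1 N)."""
    t_rh = 0.14 * r_h**1.5 * n / (math.sqrt(grav * m) * math.log(0.1 * n))
    return c * t_rh

# Small-N test model in Heggie units (G = 1, table 1): t_cc ~ 180,
# matching the observed collapse at T = 150-180.
t_small = t_cc(r_h=8.1e-2, n=2000, m=0.01, c=15.0)

# Large-N model in physical units (G in pc^3 Msun^-1 Myr^-2):
# t_cc ~ 0.51 Myr, matching the observed collapse at 0.5-0.6 Myr.
t_large = t_cc(r_h=0.13, n=65536, m=7.9e4, c=0.20, grav=4.498e-3)
```

The two calls differ only in the unit system and in the adopted value of $c$ ($c\simeq 15$ for an isolated equal-mass cluster, $c=0.20$ for the tidally limited cluster models).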
The core collapse time of our simulations is consistent with the results of previous studies. \begin{figure*}[htbp] \begin{center} \FigureFile(158mm,238mm){figure7.eps} \end{center} \caption{Snapshots of the star clusters projected onto the $x-y$ plane. The orbit of the star cluster is eccentric.} \label{fig:snapshots} \end{figure*} \begin{figure*}[htbp] \begin{center} \FigureFile(160mm,160mm){figure8.eps} \end{center} \caption{The distance from the GC (top), bound mass (middle), and core radius (bottom) of the star clusters plotted as a function of time. The orbits of the star clusters are circular in the left panels and eccentric in the right panels.} \label{fig:results} \end{figure*} Figures \ref{fig:err_cir} and \ref{fig:err_ecc} show the total energy error of the system and the internal energy error of the star clusters. The internal energy errors are cumulative, as in the small-$N$ model. The energies are conserved very well. \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure9.eps} \end{center} \caption{Same as figure \ref{fig:err_test}, but for the large-$N$ model with the circular orbit.} \label{fig:err_cir} \end{figure} \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure10.eps} \end{center} \caption{Same as figure \ref{fig:err_test}, but for the large-$N$ model with the eccentric orbit.} \label{fig:err_ecc} \end{figure} The total CPU times and the distributions of the CPU time are shown in table \ref{tb:time}. The total CPU time was about 40 hours. The direct part consumed about half of the CPU time. 
\begin{table*} \begin{center} \caption{CPU percentage for the test models\label{tb:time}} \begin{tabular}{|l|cc|cc|} \hline Section of Code & \multicolumn{2}{|c|}{CPU Time (sec)} & \multicolumn{2}{|c|}{Percentage of CPU Time (\%)}\\ & Circular & Eccentric & Circular & Eccentric\\ \hline \hline Tree & $5.6\times 10^4$ & $6.3\times 10^4$ & 42.3 & 45.3\\ Direct & $7.5\times 10^4$ & $7.5\times 10^4$ & 56.9 & 53.6\\ The others & $1.1\times 10^3$ & $1.4\times 10^3$ & 0.8 & 1.0\\ \hline Total & $1.3\times 10^5 \sim 37\ ({\rm h})$ & $1.4\times 10^5 \sim 39\ ({\rm h})$ & 100.0 & 99.9 \\ \hline \end{tabular} \end{center} \end{table*}
\subsection{Performance Model of the Hybrid Scheme}
We analyzed the CPU time of each part in detail. Figure \ref{fig:CPU_time_cir} shows the CPU time per 4 steps for each part for the run with the circular orbit. Simulation time is represented using the Heggie unit. We used the parent galaxy to define the Heggie unit, i.e., $M_{\rm G}=4E_{\rm G}=1$. The unit time in the Heggie unit corresponds to 0.15 Myr. Hereafter we use the Heggie unit for time to discuss the performance of the hybrid scheme. The CPU time of the tree part is almost constant throughout the simulation. In contrast, the cost of the direct part gradually decreases and suddenly increases after $T \simeq 3.5$. As shown in figure \ref{fig:step}, the CPU time of the direct part is proportional to the number of steps of the Hermite scheme, $n_{\rm step}$. Figure \ref{fig:step_cir} shows the time evolution of the average number of timesteps per particle, $n_{\rm step}$. It gradually decreases until $T=3$, and then stays nearly constant. In figure \ref{fig:step}, the CPU time suddenly increases starting at $T=3.6$. This time corresponds to the time of core collapse, and after that the internal dynamics of the star cluster is not correctly followed because of the finite softening. In the discussions below, we consider the behavior of the CPU time before core collapse.
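The cost model constructed below, equation (\ref{eq:CPU_time}), is simple enough to evaluate directly with the constants fitted later in this subsection. The following sketch is our own illustration (the function is not part of the Bridge code); it reproduces the full-scale CPU-time estimate quoted in the text.

```python
# Sketch of the cost model of equation (eq:CPU_time),
#   T_CPU = alpha*N_tree + (beta*N_direct + gamma*N_direct**2) * n_step,
# with the constants fitted in this subsection (GRAPE-6, theta = 0.75).
# The function itself is our illustration, not part of the Bridge code.

def bridge_cpu_time(n_tree, n_direct, n_step,
                    alpha=1.2e-5, beta=6.9e-6, gamma=5.8e-11):
    """CPU time in seconds for one tree step of length Delta t."""
    return alpha * n_tree + (beta * n_direct + gamma * n_direct**2) * n_step

# Full-scale run quoted in the text: N_G = 2e6 galaxy particles,
# N_SC = 65536 cluster particles, Delta t = 1/512, 5 time units
# (2560 tree steps), and n_step = 33 Hermite steps per tree step.
per_step = bridge_cpu_time(2.0e6 + 65536, 65536, 33)
print(per_step * 2560 / 3600.0)  # total in hours; close to the ~34 h quoted
```

Plugging in $n_{\rm step}=770$ per unit time and dropping the tree term reproduces the direct-scheme estimate in the same way.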
\begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure11.eps} \end{center} \caption{CPU time of the direct and tree parts per 4 steps (1/128 unit time) for the circular orbit. The solid, dashed, and dotted curves show the CPU time of the tree, direct, and other parts, respectively.} \label{fig:CPU_time_cir} \end{figure}
\begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure12.eps} \end{center} \caption{The number of steps of the direct part per particle per 4 steps (1/128 unit time) for the circular orbit.} \label{fig:step_cir} \end{figure}
\begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure13.eps} \end{center} \caption{The CPU time of the direct part per 4 steps (1/128 unit time) before $T=3.6$. Filled and open circles show the results for the circular and eccentric orbits, respectively. The solid line shows the model of equation (\ref{eq:CPU_time}).\label{fig:step}} \end{figure}
From these results, we can construct the performance model of the Bridge code. The total CPU time for a step, $\Delta t$, can be written in terms of the number of tree particles, $N_{\rm tree}$, the number of direct particles, $N_{\rm direct}$, and the number of steps of the Hermite scheme per tree step, $n_{\rm step}$. The cost of the tree part is proportional to $N_{\rm tree}\log N_{\rm tree}\sim N_{\rm tree}$. The CPU time of the direct part depends on $N_{\rm direct}$ and $n_{\rm step}$. The cost of the force calculation is proportional to $N_{\rm direct}^2$ and the other costs are proportional to $N_{\rm direct}$. Therefore, the total CPU time per $\Delta t$ is given by \begin{eqnarray} T_{\rm CPU} = \alpha N_{\rm tree} + (\beta N_{\rm direct} + \gamma N_{\rm direct}^2 ) n_{\rm step}, \label{eq:CPU_time} \end{eqnarray} where $\alpha$, $\beta$, and $\gamma$ are constants. Here, $\alpha$ is almost constant through a simulation, but depends on $\theta$. The value of $\gamma$ is determined by the performance of GRAPE. Makino et al.
(2003) show that the calculation time on GRAPE per interaction per particle is expressed as \begin{eqnarray} T_{\rm GRAPE} = \frac{1}{9\times 10^7 n_{\rm pipes}}\ ({\rm sec}), \end{eqnarray} where $n_{\rm pipes}$ is the total number of pipelines. With GRAPE-6A, $n_{\rm pipes}=24$; with GRAPE-6, $n_{\rm pipes}=192$. Hence, we estimated $\gamma=4.6\times 10^{-10}$ sec for GRAPE-6A and $\gamma=5.8\times 10^{-11}$ sec for GRAPE-6. From the results of our runs, we estimated the values of the constants as $\alpha = 1.2\times 10^{-5}$ sec for $\theta = 0.75$ and $\beta = 6.9 \times 10^{-6}$ sec. The number of particles that we need for a fully self-consistent $N$-body star cluster simulation is $N_{\rm G} \sim 2\times 10^6$ for a galaxy and $N_{\rm SC} \sim 6.5\times 10^4$ for a star cluster. In this case, the total CPU time for the Bridge scheme is estimated as $1.2\times 10^5$ sec $\sim 34$ hours for a simulation with $\Delta t = 1/512$ and 5 unit times of integration on GRAPE-6. Here we used $n_{\rm step}=33$ from the results of our simulations. The actual time for such a simulation was 37--39 hours (see table \ref{tb:time}). So our model predicts the CPU time with $\sim$ 20 \% accuracy. We can also make a similar estimate for the Hermite scheme. The CPU time of the Hermite scheme, or the direct scheme, can be estimated using the second and third terms of equation (\ref{eq:CPU_time}), where we used $n_{\rm step}=770$ per unit time. Therefore, the total CPU time is estimated as about $260$ hours per 5 unit times on GRAPE-6 for $N=2\times 10^6$. This is about seven times longer than that for the Bridge scheme. \section{The Mixed Variable Symplectic Method} The MVS integrator was introduced by \citet{WH91} and by Kinoshita, Yoshida, \& Nakai (1991). It is now widely used for long-term integrations of planetary systems. In the case of planetary systems, the Hamiltonian can be divided into Kepler motions and interactions between planets.
The MVS integrator suppresses the error of the motion due to the numerical integration of the solar potential, since it integrates the Kepler motions analytically. Let us first briefly describe a symplectic integrator, the leapfrog scheme. The Hamilton equation is rewritten in terms of a Poisson bracket operator as \begin{eqnarray} \frac{df}{dt} = \{f,H\}, \label{eq:df} \end{eqnarray} where $f$ is a function of $t$. If we introduce a differential operator $D_H$ defined as $D_H f$:$=\{f,H\}$, the formal solution of equation (\ref{eq:df}) is written as \begin{eqnarray} f(t) = e^{tD_H} f(0). \end{eqnarray} An integration algorithm can be thought of as an approximate expression of this operator. As an example, we describe a second-order leapfrog integrator. The Hamiltonian for an $N$-body system is written as \begin{eqnarray} H &=& \sum^{N}_{i}\frac{p_i^2}{2m_i} - \sum^{N}_{i<j}\frac{Gm_i m_j}{r_{ij}}. \end{eqnarray} If we define \begin{eqnarray} H_A &=& - \sum^{N}_{i<j}\frac{Gm_i m_j}{r_{ij}} \\ H_B &=& \sum^{N}_{i}\frac{p_i^2}{2m_i}, \end{eqnarray} then we can express the formal solution, the time evolution from $t$ to $t+\Delta t$, as \begin{eqnarray} f(t+\Delta t) = e^{\Delta t(A+B)} f(t), \end{eqnarray} where $A$:$=D_{H_A}$ and $B$:$=D_{H_B}$. This operator can be approximated by a product of exponential operators as \begin{eqnarray} e^{\Delta t(A+B)} = \prod^{k}_{i=1} e^{a_i \Delta t A}e^{b_i \Delta t B} + O(\Delta t ^{n+1}), \label{eq:Taylor} \end{eqnarray} where $(a_i, b_i)$ $(i=1, 2, \dots ,k)$ is a set of real numbers and $n$ is a given integer, which corresponds to the order of the integrator. By neglecting the $O(\Delta t^{n+1})$ term, we obtain a mapping from $f(t)$ to $f'(t+\Delta t)$ as \begin{eqnarray} f'(t+\Delta t) = \prod^{k}_{i=1} e^{a_i \Delta t A}e^{b_i \Delta t B} f(t). \end{eqnarray} This mapping is symplectic because it is just a product of symplectic mappings. This is an $n$-th order symplectic integrator.
We can achieve $n=2$ with $k=2$, with the choice of the coefficients $a_1=a_2=1/2$ and $b_1=1, b_2=0$. Now, equation (\ref{eq:Taylor}) is reduced to \begin{eqnarray} e^{\Delta t (A+B)}=e^{\frac{1}{2}\Delta tA}e^{\Delta tB} e^{\frac{1}{2}\Delta tA} + O(\Delta t ^3). \end{eqnarray} Therefore the time evolution is expressed as \begin{eqnarray} f'(t+\Delta t) = e^{\frac{1}{2}\Delta tA}e^{\Delta tB} e^{\frac{1}{2}\Delta tA} f(t). \label{eq:time_evolution} \end{eqnarray} This is the second-order leapfrog scheme, which is rewritten as \begin{eqnarray} \boldsymbol{v}_{\frac{1}{2}} &=& \boldsymbol{v}_{0} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{0}, \label{eq:lpv}\\ \boldsymbol{x}_{1} &=& \boldsymbol{x}_{0} + \Delta t\ \boldsymbol{v}_{\frac{1}{2}}, \label{eq:lcx}\\ \boldsymbol{v}_{1} &=& \boldsymbol{v}_{\frac{1}{2}} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{1}, \label{eq:lcv} \end{eqnarray} where the subscripts 0, $\frac{1}{2}$, and 1 correspond to values at $t, t+\frac{1}{2}\Delta t, t+\Delta t$, respectively. The procedure of the leapfrog scheme is as follows. \begin{enumerate} \item Calculate the acceleration at a time, $t$, and update the velocity [eq. (\ref{eq:lpv})]. \item Update the positions using the new velocity $\boldsymbol{v}_{\frac{1}{2}}$ [eq. (\ref{eq:lcx})]. \item Calculate the acceleration at $t+\Delta t$ using the new positions, $\boldsymbol{x}_1$, and update the velocity [eq. (\ref{eq:lcv})]. \item Repeat 1-3. \end{enumerate} Now, we explain an MVS integrator. The Hamiltonian for a planetary system can be expressed as \begin{eqnarray} H = H_{\rm Kep} + H_{\rm Int}, \end{eqnarray} where $H_{\rm Kep}$ is the kinetic energy plus the solar potential and $H_{\rm Int}$ is the interaction energy between planets. If we define \begin{eqnarray} H_A = H_{\rm Int},\ H_B = H_{\rm Kep}, \end{eqnarray} equation (\ref{eq:time_evolution}) becomes \begin{eqnarray} f'(t+\Delta t)=e^{\frac{1}{2}\Delta tI}e^{\Delta tK} e^{\frac{1}{2}\Delta tI} f(t).
\end{eqnarray} Here $I$:=$D_{H_{\rm Int}}$ and $K$:$=D_{H_{\rm Kep}}$. Note that $e^{\Delta t K}$ generates motions of planets along unperturbed Kepler orbits, while $e^{\Delta t I}$ generates changes of momenta due to planet-planet interactions. These changes of momenta are called ``velocity kicks.'' The difference from the usual leapfrog integrator is that $e^{\Delta t K}$ is given analytically by Kepler motion. Therefore the MVS method is expressed as \begin{eqnarray} \boldsymbol{v}_{\frac{1}{2}} &=& \boldsymbol{v}_{0} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{\rm Int, 0}, \label{eq:MVS1}\\ \boldsymbol{x}_{0} &\rightarrow& ({\rm Kepler\ motion}) \rightarrow \boldsymbol{x}_{1}, \label{eq:MVS2-1} \\ \boldsymbol{v}_{\frac{1}{2}} &\rightarrow& ({\rm Kepler\ motion}) \rightarrow \boldsymbol{v}'_{\frac{1}{2}},\label{eq:MVS2-2} \\ \boldsymbol{v}_{1} &=& \boldsymbol{v}'_{\frac{1}{2}} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{\rm Int, 1}. \label{eq:MVS3} \end{eqnarray} The integration proceeds as follows: \begin{enumerate} \item Calculate the accelerations of planets due to gravitational interactions between planets, $\boldsymbol{a}_{\rm Int, 0}$, at time $t$ and change the velocities by giving the velocity kicks [eq. (\ref{eq:MVS1})]. \item Update the positions and the velocities by $\Delta t$ along their osculating Kepler orbits analytically [eqs. (\ref{eq:MVS2-1}) and (\ref{eq:MVS2-2})]. \item Calculate $\boldsymbol{a}_{\rm Int, 1}$ at $t+\Delta t$ and change the velocities by giving the velocity kicks [eq. (\ref{eq:MVS3})]. \item Repeat 1-3. \end{enumerate} MVS is a very powerful algorithm for long-term integration of planetary systems. In general, the integration errors are $O(\Delta t ^n)$, where $n$ is the order of the integrator. With an MVS integrator, if $H_{\rm Int}$ is $O(\epsilon)$ of $H_{\rm Kep}$, the integration errors are only $O(\epsilon \Delta t^n)$. This $\epsilon$ is of the order of the planetary mass in units of the solar mass and is usually very small.
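The kick--drift--kick structure of equations (\ref{eq:MVS1})--(\ref{eq:MVS3}) is easy to demonstrate on a toy problem. The sketch below is our own illustration, not code from any planetary-integration package: a harmonic oscillator stands in for the analytically solvable Kepler motion, and a weak quartic perturbation supplies the velocity kicks.

```python
import math

# Toy analogue of the MVS kick-drift-kick step, eqs. (MVS1)-(MVS3)
# (our illustration only): H = H_B + H_A, where H_B = (p^2 + x^2)/2 is
# solved analytically (standing in for the Kepler motion) and
# H_A = EPS*x^4/4 is a weak perturbation applied as velocity kicks.

EPS = 1e-3

def kick(x, v, dt):
    # velocity kick from H_A: a = -dH_A/dx = -EPS*x^3
    return v - EPS * x ** 3 * dt

def drift(x, v, dt):
    # exact solution of H_B (a rotation in phase space)
    c, s = math.cos(dt), math.sin(dt)
    return c * x + s * v, -s * x + c * v

def mvs_step(x, v, dt):
    v = kick(x, v, 0.5 * dt)      # eq. (MVS1)
    x, v = drift(x, v, dt)        # eqs. (MVS2-1) and (MVS2-2)
    v = kick(x, v, 0.5 * dt)      # eq. (MVS3)
    return x, v

def energy(x, v):
    return 0.5 * (v * v + x * x) + 0.25 * EPS * x ** 4

x, v, dt = 1.0, 0.0, 0.01
e0 = energy(x, v)
for _ in range(100_000):          # 1000 time units
    x, v = mvs_step(x, v, dt)
print(abs(energy(x, v) - e0) / e0)  # stays small and bounded, O(EPS*dt^2)
```

Replacing the analytic `drift` with a numerical one is exactly the generalization made by the hybrid scheme of the next section.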
As a result, the error becomes much smaller than that of usual symplectic methods. \section{The New Hybrid Scheme} Now we consider simulations of systems consisting of a star cluster and its parent galaxy. For such a simulation, our new scheme should provide: \begin{enumerate} \item high accuracy for star clusters, \item a fast integrator for galaxies, \item fully self-consistent treatment of the total system. \end{enumerate} To achieve these goals, we constructed a new scheme which is a combination of the direct and tree schemes. In our new scheme, the internal interactions of star clusters are calculated with high accuracy using direct summation and the Hermite scheme, while all other interactions (galaxy-galaxy, galaxy-star cluster) are calculated with the tree algorithm. We combine these two methods by extending the idea of the MVS. In the MVS scheme, the Hamiltonian is divided into the kinetic energy plus solar potential and the interaction energy between planets. In our hybrid scheme, we separate the Hamiltonian as \begin{eqnarray} H &=& H_{\rm \alpha} + H_{\rm \beta},\\ H_{\rm \alpha} &=& -\sum ^{N_{\rm G}}_{i<j} \frac{Gm_{{\rm G},i}m_{{\rm G},j}}{r_{ij}} - \sum ^{N_{\rm G}}_{i=1} \sum^{N_{\rm SC}}_{j=1} \frac{Gm_{{\rm G},i} m_{{\rm SC},j}}{r_{ij}},\\ H_{\rm \beta} &=& \sum^{N_{\rm G}}_{i=1} \frac{p_{{\rm G},i}^2}{2m_{{\rm G},i}} + \sum^{N_{\rm SC}}_{i=1} \frac{p_{{\rm SC},i}^2}{2m_{{\rm SC},i}} - \sum ^{N_{\rm SC}}_{i<j} \frac{Gm_{{\rm SC},i} m_{{\rm SC},j}}{r_{ij}}, \label{eq:H_beta} \end{eqnarray} where $N_{\rm G}$ and $N_{\rm SC}$ are the numbers of galaxy particles and star cluster particles, respectively, and $H_{\rm \alpha}$ is the potential energy of the gravitational interactions between galaxy particles and between galaxy particles and star cluster particles, while $H_{\rm \beta}$ is the kinetic energy of all particles plus the potential energy of the star cluster particles.
Using the replacement $H_A = H_{\rm \alpha}$ and $H_B = H_{\rm \beta}$, we obtain the time evolution as \begin{eqnarray} f'(t+\Delta t)=e^{\frac{1}{2}\Delta t\alpha}e^{\Delta t\beta} e^{\frac{1}{2}\Delta t\alpha} f(t), \end{eqnarray} where $\alpha$:=$D_{H_{\rm \alpha}}$, $\beta$:$=D_{H_{\rm \beta}}$. In our scheme, we integrate star cluster particles and galaxy particles in different ways. Let us first discuss star cluster particles. They are integrated in a way similar to the MVS. The Kepler part of the MVS corresponds to the second and third terms in equation (\ref{eq:H_beta}). Unlike the MVS, however, we cannot solve this Hamiltonian analytically. Hence, we replace the analytical solution (Kepler motion) in the MVS by a solution calculated with a higher-order integrator (e.g. the fourth-order Hermite integrator with individual timesteps). Thus, the integrator for star clusters is written as \begin{eqnarray} \boldsymbol{v}'_{\rm SC,0} &=& \boldsymbol{v}_{\rm SC,0} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{\rm \{ G\rightarrow SC,0\}},\label{eq:hybrid1}\\ \boldsymbol{x}_{\rm SC, 0} &\rightarrow& ({\rm Hermite\ scheme})\rightarrow \boldsymbol{x}_{\rm SC, 1},\label{eq:hybrid2}\\ \boldsymbol{v}'_{\rm SC,0} &\rightarrow& ({\rm Hermite\ scheme})\rightarrow \boldsymbol{v}'_{\rm SC,1},\label{eq:hybrid3}\\ \boldsymbol{v}_{\rm SC,1} &=& \boldsymbol{v}'_{\rm SC,1} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{\rm \{ G\rightarrow SC,1\}},\label{eq:hybrid4} \end{eqnarray} where the subscripts SC and G stand for the star cluster and the galaxy, the subscripts 0, $\frac{1}{2}$, and 1 indicate times $t_0$, $t_{\frac{1}{2}}=t_0+\frac{1}{2}\Delta t$, and $t_1=t_0+\Delta t$, respectively, and $\boldsymbol{v}'_{\rm SC,1}$ represents the new velocity at $t_1$, which has been integrated using the Hermite scheme.
For galaxies, we use the leapfrog integrator expressed as \begin{eqnarray} \boldsymbol{v}_{\rm G, \frac{1}{2}} &=& \boldsymbol{v}_{\rm G,0} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{\{ \rm All\rightarrow G,0\}}, \label{eq:hybrid5}\\ \boldsymbol{x}_{\rm G,1} &=& \boldsymbol{x}_{\rm G,0} + \Delta t\ \boldsymbol{v}_{\rm G, \frac{1}{2}},\label{eq:hybrid6} \\ \boldsymbol{v}_{\rm G,1} &=& \boldsymbol{v}_{\rm G, \frac{1}{2}} + \frac{1}{2}\ \Delta t\ \boldsymbol{a}_{\rm \{All\rightarrow G,1\}},\label{eq:hybrid7} \end{eqnarray} where $\boldsymbol{a}_{\rm \{All\rightarrow G\}}$ denotes the acceleration due to gravitational forces from all particles (including star cluster particles) on a galaxy particle. The galaxy particles have longer timescales than the particles in the star cluster. Therefore, we adopt a second-order leapfrog integrator with a shared timestep and the tree algorithm. This scheme is less accurate than the fourth-order Hermite scheme, but is much faster and is symplectic. We call our new scheme ``the Bridge scheme'' (Bridge stands for Realistic Interactions in Dense Galactic Environment). The procedure of the Bridge scheme is summarized in figure \ref{fig:scheme} and is as follows: \begin{enumerate} \item Make a tree at $t_0$ and calculate the accelerations from all particles on galaxy particles, $\boldsymbol{a}_{\rm \{All \rightarrow G, 0\}}$, and from galaxy particles on star cluster particles, $\boldsymbol{a}_{\rm \{G \rightarrow SC, 0\}}$, using the tree. \item {\bf Star cluster}: give a velocity kick [eq. (\ref{eq:hybrid1})].\\ {\bf Galaxy}: update the velocity [eq. (\ref{eq:hybrid5})]. \item {\bf Star cluster}: integrate the positions and the velocities from $t_0$ to $t_1$ using the Hermite scheme with individual timesteps [eqs. (\ref{eq:hybrid2}) and (\ref{eq:hybrid3})].\\ {\bf Galaxy}: update the position with the leapfrog scheme [eq. (\ref{eq:hybrid6})].
\item Make a new tree at $t_1$ and calculate the accelerations from all particles on galaxy particles, $\boldsymbol{a}_{\rm \{All \rightarrow G, 1\}}$, and from galaxy particles on star cluster particles, $\boldsymbol{a}_{\rm \{G \rightarrow SC, 1\}}$. \item {\bf Star cluster}: give a velocity kick [eq. (\ref{eq:hybrid4})].\\ {\bf Galaxy}: update the velocity [eq. (\ref{eq:hybrid7})]. \end{enumerate} As shown in equations (\ref{eq:hybrid1}) and (\ref{eq:hybrid5}), the forces on the particles of galaxies and those on the particles of star clusters are calculated differently. The former are from all particles, while the latter are from particles in the galaxy only. Therefore, we assigned two values of mass to each tree node. (We used the center-of-mass approximation for forces from tree nodes.) One is the total mass of all particles under the node, and the other is the mass of the galaxy particles. To calculate forces on galaxy particles, we use the mass of all particles, and for star cluster particles we use the mass of galaxy particles. \begin{figure}[htbp] \begin{center} \FigureFile(80mm,50mm){figure1.eps} \end{center} \caption{Procedure of the Bridge scheme.} \label{fig:scheme} \end{figure} \subsection{Summary} We have developed a fast and accurate algorithm, ``the Bridge scheme,'' for fully self-consistent $N$-body simulations of a star cluster moving in its parent galaxy, where both are modeled as $N$-body systems. The Bridge scheme is a hybrid of the tree and direct schemes and is based on an extension of the MVS. We performed self-consistent $N$-body simulations of a star cluster in a galaxy and compared the results obtained with the Bridge scheme with those obtained with the direct scheme (the Hermite scheme). They agreed with each other very well, and the energy error was sufficiently small.
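In outline, one Bridge step interleaves the two integrators as sketched below. This is our own schematic, not the production code: so that the example runs as-is, the tree force is replaced by a softened direct sum and the Hermite integration of the cluster internals by leapfrog substeps, but the kick-drift-kick structure of steps 1-5 above is preserved.

```python
import numpy as np

# Schematic of one Bridge step (our sketch, not the production code).
# The "tree" force is a softened direct sum and the inner "Hermite"
# integration is replaced by leapfrog substeps; G = 1.

G = 1.0

def accel(targets, sources, m_src, eps):
    # Softened accelerations on `targets` from `sources`
    # (stand-in for the tree force evaluation).
    d = sources[None, :, :] - targets[:, None, :]
    r2 = (d * d).sum(-1) + eps * eps
    return (m_src[None, :, None] * d * (G / r2[..., None] ** 1.5)).sum(axis=1)

def inner_integrate(x, v, m, dt, eps, nsub=16):
    # Stand-in for the Hermite integration of the cluster internals.
    h = dt / nsub
    a = accel(x, x, m, eps)
    for _ in range(nsub):
        v = v + 0.5 * h * a
        x = x + h * v
        a = accel(x, x, m, eps)
        v = v + 0.5 * h * a
    return x, v

def bridge_step(xg, vg, mg, xs, vs, ms, dt, eps_g=0.04, eps_sc=1e-5):
    xall, mall = np.vstack([xg, xs]), np.hstack([mg, ms])
    a_g = accel(xg, xall, mall, eps_g)    # step 1: forces All -> G ...
    a_sc = accel(xs, xg, mg, eps_g)       # ... and G -> SC from the "tree"
    vs = vs + 0.5 * dt * a_sc             # step 2: SC kick  [eq. (hybrid1)]
    vg = vg + 0.5 * dt * a_g              #         G kick   [eq. (hybrid5)]
    xs, vs = inner_integrate(xs, vs, ms, dt, eps_sc)  # step 3 [(hybrid2)-(hybrid3)]
    xg = xg + dt * vg                                 #        [eq. (hybrid6)]
    xall = np.vstack([xg, xs])
    a_g = accel(xg, xall, mall, eps_g)    # step 4: new "tree" forces at t1
    a_sc = accel(xs, xg, mg, eps_g)
    vs = vs + 0.5 * dt * a_sc             # step 5: SC kick  [eq. (hybrid4)]
    vg = vg + 0.5 * dt * a_g              #         G kick   [eq. (hybrid7)]
    return xg, vg, xs, vs
```

Because the galaxy-cluster kicks use the same softened pairwise force in both directions, each step conserves the total momentum to rounding error, which makes a convenient correctness check.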
We also showed that we can perform a full $N$-body simulation of a star cluster and galaxy system with $N_{\rm SC}=65536$ and $N_{\rm G}=2\times 10^6$ using our new scheme more than seven times faster than with the direct scheme. \subsection{Comparison with Tree-based Algorithms} In previous studies, several tree-based algorithms with block timesteps were developed. \citet{HK89} adopted block timesteps in a tree code. In their scheme, the tree is reconstructed at each step. When the timesteps of the particles do not vary too widely, the cost of the tree reconstruction is not so expensive. However, the cost is very high for star clusters, because star clusters have a wide range of timesteps. \citet{MA93} developed a tree-based high-order integration scheme for collisional systems using block timesteps and multipole (up to octupole) expansions. In this scheme, the tree is reconstructed at appropriate cell timesteps determined by the motions of the particles in the cells. Instead of reconstructing the tree at each step, the moment of each cell is predicted. However, the accuracy is limited by the time interval of tree construction. If longer timesteps are permitted, the tree evolves during the step and the errors increase. If the time interval is short, the cost of tree construction becomes large. In addition, their algorithms are difficult to use with GRAPE. \subsection{Applications to Other Problems} Our initial motivation for developing the Bridge scheme was to use it for the problem of a star cluster orbiting in its parent galaxy. However, it may have a much wider range of application. For example, if the parent galaxy has a central massive black hole, it is natural to handle it and nearby stars with the direct scheme, and the rest of the system with the tree. In this case, some particles must move between the ``tree'' and ``direct'' treatments, but in principle such a code can be developed.
Our method can be applied to any large-$N$ system in which a small part of the system shows collisional behavior.
\section{Introduction} \label{sec:intro} \subsection{Direct and inverse problems} Consider a physical system whose behavior depends on some parameters. Here are some examples: \begin{enumerate} \item X-ray images depend on how the object attenuates X-rays (described by an attenuation coefficient depending on position). \item The way in which boundary current (current flux density) depends on boundary voltage of an electrically conducting object depends on the (position-dependent) conductivity. \item The spectrum of oscillations of a drum depends on the shape of the drum. \end{enumerate} \noindent The direct problem asks to determine the behavior, given the parameters: \begin{enumerate} \item Given the attenuation coefficient, find the attenuation of any X-ray. \item Given the conductivity, find how the boundary current depends on boundary voltage. \item Given the drum shape, find the spectrum. \end{enumerate} \noindent The inverse problem asks the opposite: \begin{enumerate} \item Given the attenuation data for all lines, find the attenuation coefficient everywhere. \item Given how the boundary current depends on boundary voltage, find the conductivity everywhere inside. \item Given the spectrum, find the shape. \end{enumerate} \noindent These inverse problems are theoretical problems in physics. We are interested in the mathematical formulations of these problems, particularly the first one. Solving the mathematical problem is a necessary step in solving the physical problem, but there are many more steps to take. We will ignore numerical implementation, data acquisition, and other practical considerations, and focus on the underlying mathematics. It is not at all unusual that a physical problem becomes a mathematical problem after some analysis. This is done in a number of courses in physics, and physicists are well acquainted with solving mathematical problems arising from physics. 
The issue with these three inverse problems is that the underlying mathematical problems are hard: \begin{enumerate} \item Given the integral of a continuous (or other) function $\mathbb{R}^n\to\mathbb{R}$ over each line, reconstruct the function. \item Let $\Omega\subset\mathbb{R}^n$ be a nice domain and $\gamma\colon\Omega\to(0,\infty)$ with $\log(\gamma)\in L^\infty$. Given $\{(u|_{\partial\Omega},\gamma\nu\cdot\nabla u|_{\partial\Omega}) ;u\in H^1(\Omega),\nabla\cdot(\gamma\nabla u)=0\}$, find~$\gamma$. \item Given the Dirichlet spectrum of the Laplace operator on a domain $\Omega\subset\mathbb{R}^n$, find the domain. \end{enumerate} \noindent Here and henceforth ``nice'' is not a precise term, but is used when we want to avoid a precise technical definition for the sake of clarity. The first one is the simplest, also because it is linear. The second one is harder and there are some big open problems related to it, but it is still relatively well understood. Our understanding of the third problem is very limited. To give specific examples, the first problem has been solved for compactly supported distributions, the second one for $n=2$ (and $n\geq3$ if~$\gamma$ is Lipschitz), and the third one very partially (there are some counterexamples and rigidity results and very few full uniqueness results). \begin{ex} Let us then see how the mathematical and physical versions of the first problem are related. Feel free to make any regularity assumptions on~$f$. Consider a ray of light traveling on the real axis in the positive direction. Let the intensity at $x\in\mathbb{R}$ be~$I(x)$. If the attenuation function is $f\colon\mathbb{R}\to\mathbb{R}$ (a sufficiently regular positive function), then~$I$ satisfies the Beer--Lambert law \begin{equation} I'(x)=-f(x)I(x). \end{equation} Solve this differential equation.
Show that if $I(0)\neq0$ (if the intensity were zero, there would not be any real measurement), then the knowledge of~$I(0)$ and~$I(L)$ determines $\int_0^Lf(x)\,\der x$. \end{ex} In physics, the attenuation coefficient is often denoted by~$\mu$. Since it is the most important function on this course, it will be most convenient to follow the mathematical convention and call it~$f$. \begin{ex} Consider a bounded domain (some physical object) $\Omega\subset\mathbb{R}^3$. Suppose the attenuation is described by a continuous function $f\colon\mathbb{R}^3\to[0,\infty)$ with $f=0$ in $\mathbb{R}^3\setminus\Omega$. Consider a line segment $\gamma\colon[0,L]\to\mathbb{R}^3$, $\gamma(t)=x_0+tv$, and suppose that~$\gamma(0)$ and~$\gamma(L)$ are both outside~$\Omega$. Suppose that we fire an X-ray beam along~$\gamma$ and measure the initial and final intensity. Argue that such a measurement determines the integral of~$f$ over~$\gamma$. \end{ex} \subsection{Goals} All the mathematical inverse problems above are of the following form: Consider a function $F\colon X\to Y$. Given~$F(x)$, find~$x$. The direct problem is finding the function~$F$ (and proving it is well defined), and this function is called the forward operator. The function~$F$ can be complicated. Let us see what the sets~$X$ and~$Y$ are in the three examples above: \begin{enumerate} \item $\{\text{continuous compactly supported functions supported in }\bar\Omega\subset\mathbb{R}^n\}\\{}\qquad\to\{\text{real-valued functions on the set of lines}\}$ \item $\{\gamma\colon\Omega\to(0,\infty);\log(\gamma)\in L^\infty\}\\{}\qquad\to \mathcal{P}(H^{1/2}(\partial\Omega)\times H^{-1/2}(\partial\Omega)) $ \item $\{\text{smooth bounded domains in }\mathbb{R}^n\}\\{}\qquad\to\{\text{multisets of positive real numbers}\}$ \end{enumerate} \noindent Once one understands the forward operator, one can start studying the corresponding inverse problem. So, what exactly does it mean to find~$x$?
There are several kinds of goals: \begin{itemize} \item Uniqueness: Show that if $F(x)=F(x')$, then $x=x'$. \item Reconstruction: Give a formula or other method to reconstruct~$x$, given~$F(x)$. That is, find a left inverse function $G\colon Y\to X$ so that $G\circ F=\id_X$. \item Stability: Show that if $F(x)\approx F(x')$, then $x\approx x'$. Equip the spaces~$X$ and~$Y$ with suitable norms or topologies, and prove that the left inverse~$G$ is continuous. \end{itemize} \noindent A left inverse is what we use to process the data. We have measured~$F(x)$, and we compute~$G(F(x))$ to find~$x$. There is no need for a two-sided inverse, and there can be several ways~$G$ to analyze the data. Ideally, we want a stable reconstruction, so that the left inverse~$G$ is continuous. This has nothing to do with continuity of the unknown function; the operators~$F$ and~$G$ can map between any kinds of function spaces. The study of any mathematical inverse problem starts with uniqueness, and that is what we shall focus on in this course. That is, our sole goal is to prove that a certain function~$F$ is injective. Some uniqueness proofs immediately give a formula for~$G$. One important aspect we will ignore is range characterizations. This is about finding what kind of data can really arise from real measurements --- finding the set $F(X)\subset Y$. \subsection{The X-ray transform} The forward operator in the X-ray tomography problem is known as the X-ray transform. There are several different notations out there. We will denote it by~$\mathcal{I}$. It maps functions on~$\mathbb{R}^n$ into functions on the set of all lines. In general, it is defined so that if~$f$ is a function in~$\mathbb{R}^n$ and~$\gamma$ is a line in~$\mathbb{R}^n$, then~$\mathcal{I} f(\gamma)$ is the integral of~$f$ over~$\gamma$. This definition can be extended to various classes of functions, or even distributions. 
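A quick numerical sanity check of this definition may be helpful. For the Gaussian $f(x)=e^{-\abs{x}^2}$ one can complete the square in~$t$ to obtain the closed form $\mathcal{I} f(x,v)=\sqrt{\pi}\,e^{-(\abs{x}^2-(v\cdot x)^2)}$ for unit vectors~$v$, and simple quadrature along the line $x+tv$ reproduces it. The code below is our illustration only and is not part of these notes.

```python
import math

# Numerical check of the X-ray transform: for f(x) = exp(-|x|^2),
# completing the square in t gives
#   I f(x, v) = sqrt(pi) * exp(-(|x|^2 - (v.x)^2))   for |v| = 1,
# and trapezoidal quadrature along the line x + t v reproduces this.

def f(p):
    return math.exp(-sum(c * c for c in p))

def xray(x, v, t_max=10.0, n=20001):
    # trapezoidal quadrature of f along the line x + t v
    h = 2 * t_max / (n - 1)
    total = 0.0
    for i in range(n):
        t = -t_max + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * f([xi + t * vi for xi, vi in zip(x, v)])
    return total * h

x0, v0 = (0.3, -0.2, 0.5), (1.0, 0.0, 0.0)   # |v0| = 1
d2 = sum(c * c for c in x0) - sum(a * b for a, b in zip(v0, x0)) ** 2
print(xray(x0, v0), math.sqrt(math.pi) * math.exp(-d2))  # agree closely
```

Here $\abs{x}^2-(v\cdot x)^2$ is the squared distance from the origin to the line, so the transform of a radial function depends on the line only through that distance.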
To emphasize, let us give this as a definition: \begin{definition} \label{def:xrt} Let $f\colon\mathbb{R}^n\to\mathbb{R}$ (or~$\mathbb{C}$) be a sufficiently regular function. Denote by~$\Gamma$ the set of all straight lines in~$\mathbb{R}^n$. The X-ray transform of~$f$ is the function $\mathcal{I} f\colon\Gamma\to\mathbb{R}$ (or~$\mathbb{C}$) defined by letting~$\mathcal{I} f(\gamma)$ be the integral of~$f$ over~$\gamma$. \end{definition} Let us see one example of a precise definition. \begin{ex} Let $B\subset\mathbb{R}^n$ be the unit ball, and let~$C_B$ denote the space of continuous functions $f\colon\mathbb{R}^n\to\mathbb{R}$ with $f(x)=0$ for $x\notin\bar B$. Show that if the space~$C_B$ is equipped with the norm $\aabs{f}=\sup_B\abs{f}$, then it is a Banach space. \end{ex} \begin{ex} Let us parametrize all lines in~$\mathbb{R}^n$ with $x\in\mathbb{R}^n$ and $v\in S^{n-1}$. Explain why $\mathcal{I}\colon C_B\to C_b(\mathbb{R}^n\times S^{n-1})$ given by \begin{equation} \mathcal{I} f(x,v) = \int_{-\infty}^\infty f(x+tv)\,\der t \end{equation} is well defined, linear, and continuous, when $C_b(\mathbb{R}^n\times S^{n-1})$, the space of continuous and bounded functions $\mathbb{R}^n\times S^{n-1}\to\mathbb{R}$, is also equipped with the supremum norm. (It turns out that this~$\mathcal{I}$ is injective but it does not have a continuous left inverse.) \end{ex} We will not pursue optimal regularity in this course. Our interest will be in ideas and tools, not proving theorems with sharp assumptions. A reader with suitable experience in analysis is invited to consider lower regularity versions of the results presented here. In particular, we want to show that the X-ray transform defined in the previous exercise is injective. If we need to make additional assumptions like differentiability, we will. Since the operator is linear, we need to show that $\mathcal{I} f=0$ implies $f=0$.
We will prove this result in a number of different ways and review the necessary tools. This is the whole plan for this course. \begin{ex} Physically, there is a constraint on the attenuation function~$f$. Namely, the attenuation must be non-negative: $f\geq0$. Recall the Beer--Lambert law and explain why this is physically reasonable. \end{ex} \begin{ex} Prove that if $f\in C_B$, $f\geq0$, and $\mathcal{I} f=0$ (the integral is zero over all lines), then $f=0$. This is far easier to prove than injectivity of the X-ray transform~$\mathcal{I}$. Does the desired uniqueness result for non-negative attenuation functions follow from this observation? \end{ex} Non-continuous attenuation functions are physically relevant. We restrict our attention to continuous functions for technical convenience. In some exercises we will consider non-continuous functions, but they will be integrable over each geodesic. In one dimension the problem is hopeless. Therefore we make the standing assumption that the dimension~$n$ is at least~$2$ unless otherwise mentioned. \begin{ex} Show that the X-ray transform $\mathcal{I}\colon C_B\to C_b$ as defined above is not injective if $n=1$. \end{ex} \subsection{The Radon transform and parametrizations} \label{sec:radon-transform} In the X-ray transform a function is integrated over all lines. In the Radon transform a function is integrated over all hyperplanes. In the plane these two transforms coincide, but in higher dimensions they do not. Let us give a more detailed description of the Radon transform. Let~$H$ be the set of all hyperplanes in~$\mathbb{R}^n$. Then the Radon transform of a, say, compactly supported continuous function $f\colon\mathbb{R}^n\to\mathbb{R}$ is a function $Rf\colon H\to\mathbb{R}$ given by $Rf(h)=\int_hf\,\der\mathcal{H}^{n-1}$. The integral over the hyperplane~$h$ is of course taken with respect to the Hausdorff measure of dimension $n-1$. 
This is the same thing as identifying the hyperplane (isometrically) with~$\mathbb{R}^{n-1}$ and using the usual Lebesgue measure. \begin{ex} \label{ex:radon-xrt} Explain how one can calculate the Radon transform of a function, given its X-ray transform. Then explain how injectivity of the Radon transform implies injectivity of the X-ray transform. \end{ex} Whichever transform we study, we need to describe the lines or hyperplanes somehow. There are various options: \begin{itemize} \item Consider the abstract set of all lines in~$\mathbb{R}^n$. \item Parametrize a line with a point $x\in\mathbb{R}^n$ and a direction $v\in S^{n-1}$. The line is $x+v\mathbb{R}$. \item Parametrize a line in~$\mathbb{R}^2$ with the closest point to the origin. (This only fails to parametrize the lines through the origin.) \end{itemize} \noindent This is not all. One can also use fan beam coordinates, parallel beam coordinates, or identify a line with a direction and the boundary point of entrance. When the parametrization of lines is redundant, the X-ray transform should take the same value for different parameters representing the same line. This is a simple example of a (partial) range characterization. We will not try to characterize $\mathcal{I}(C_B)\subset C_b(\mathbb{R}^n\times S^{n-1})$, for example. Hyperplanes in~$\mathbb{R}^n$ can also be parametrized by the closest point to the origin (with difficulties at the origin), just like one can do with lines in the plane. This is common in the analysis of the Radon transform. \begin{ex} Consider the characteristic function of a ball centered at the origin with unit radius. Find the X-ray transform using each of the ways listed above to describe the lines. \end{ex} \begin{ex} Let us use the second parametrization given above, characterizing lines as $x+v\mathbb{R}$.
The X-ray transform of a certain function $f\colon\mathbb{R}^n\to\mathbb{R}$ is \begin{equation} \mathcal{I} f(x,v) = \begin{cases} \sqrt{2+\abs{v\cdot x}^2-\abs{x}^2} & \text{when }2+\abs{v\cdot x}^2-\abs{x}^2\geq0 \\ 0 & \text{otherwise}. \end{cases} \end{equation} What is the function~$f$? \end{ex} \begin{ex} \label{ex:2d-hd} The most typical X-ray imaging method is computerized tomography (CT), where a three-dimensional image (of the attenuation function) is reconstructed slice by slice. If we can show that the X-ray transform is injective in two dimensions, then it follows that it will also be injective in higher dimensions. Explain why this is so. \end{ex} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{The Fourier series} \label{sec:fs} \subsection{Introduction} Consider the function series \begin{equation} \label{eq:fs-real} f(x) = b_0 + \sum_{k=1}^\infty (b_k\cos(kx)+c_k\sin(kx)). \end{equation} Whether or not the series converges and in which sense depends on the sequences of coefficients~$(b_k)_{k=0}^\infty$ and~$(c_k)_{k=1}^\infty$. It is quite obvious that if the series defines a reasonable function, then it will be periodic with period~$2\pi$. The surprise is that every~$2\pi$-periodic function can be written as a series like this, and that the coefficient sequences are unique. The regularity of the function and the mode of convergence depend on how fast (if at all) $b_k,c_k\to0$ as $k\to\infty$. Having two coefficient sequences as above is quite awkward for a number of reasons. It is far more convenient to study the series \begin{equation} \label{eq:fs-complex} f(x) = \sum_{k\in\mathbb{Z}}a_ke^{ikx} \end{equation} with complex coefficients~$a_k$. Even if the function~$f$ is real-valued, complex coefficients are needed, so the whole theory is best built over~$\mathbb{C}$.
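To see why complex coefficients arise even for real-valued functions, observe that \begin{equation} \cos(x) = \tfrac12e^{ix}+\tfrac12e^{-ix} \quad\text{and}\quad \sin(x) = \tfrac1{2i}e^{ix}-\tfrac1{2i}e^{-ix}, \end{equation} so already these real-valued functions have non-real coefficients in the form~\eqref{eq:fs-complex}. A short computation shows that a series of the form~\eqref{eq:fs-complex} defines a real-valued function precisely when $a_{-k}=\overline{a_k}$ for all $k\in\mathbb{Z}$.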
The definition of the X-ray transform can be easily extended from real functions to complex ones; see definition~\ref{def:xrt}. \begin{ex} Compare the series in~\eqref{eq:fs-real} and~\eqref{eq:fs-complex}. Write either $e^{it}=\cos(t)+i\sin(t)$ or $\cos(t)=\frac12(e^{it}+e^{-it})$ and $\sin(t)=\frac1{2i}(e^{it}-e^{-it})$, and compare the two representations term by term. (No need to justify yet why it is enough to compare the terms.) Express each coefficient~$a_k$ in terms of~$b_k$ and~$c_k$, and vice versa. \end{ex} \begin{ex} Define the equivalence relation~$\sim$ on~$\mathbb{R}$ by declaring $x\sim y$ whenever $\frac1{2\pi}(x-y)\in\mathbb{Z}$. Explain briefly why this is an equivalence relation. We define the quotient $\mathbb{R}/2\pi\mathbb{Z}$ as the set of equivalence classes. Explain how functions $\mathbb{R}/2\pi\mathbb{Z}\to\mathbb{C}$ correspond uniquely to $2\pi$-periodic functions $\mathbb{R}\to\mathbb{C}$. \end{ex} In fact, more is true than implied by the previous exercise. The quotient $\mathbb{R}/2\pi\mathbb{Z}$ inherits a lot of structure from~$\mathbb{R}$: topology, the structure of a smooth manifold, measure, various function spaces\dots We will take much of Fourier analysis as a given fact. More details can be found in a book or course focusing on Fourier analysis. We will review the key results needed to apply Fourier tools to X-ray tomography successfully and with understanding. \subsection{Fourier transform and inverse Fourier transform on a circle} Consider the space $L^2(\mathbb{R}/2\pi\mathbb{Z})$ of measurable $2\pi$-periodic functions $f\colon\mathbb{R}\to\mathbb{C}$ that satisfy \begin{equation} \int_0^{2\pi}\abs{f(x)}^2\,\der x < \infty. \end{equation} It is a complex Hilbert space with the inner product \begin{equation} \ip{f}{g} = \int_0^{2\pi}\overline{f(x)}g(x)\,\der x. \end{equation} We defined $L^2(\mathbb{R}/2\pi\mathbb{Z})$ to be a space of functions $\mathbb{R}\to\mathbb{C}$, not $\mathbb{R}/2\pi\mathbb{Z}\to\mathbb{C}$.
However, due to periodicity we can regard the functions in this space as functions on the quotient $\mathbb{R}/2\pi\mathbb{Z}$. \begin{ex} Recall the space~$L^2(0,2\pi)$ of square integrable Lebesgue measurable functions $(0,2\pi)\to\mathbb{C}$. This is a Hilbert space, and it is naturally isomorphic to $L^2(\mathbb{R}/2\pi\mathbb{Z})$. Give the natural isomorphisms in both directions. Are they isometric? (The fact that they are isomorphic follows from the fact that they are both separable infinite-dimensional complex Hilbert spaces, but there is something far simpler and more natural here.) \end{ex} Let us denote by~$\ell^2(\mathbb{Z})$ the space of ``sequences'' (functions) $a\colon\mathbb{Z}\to\mathbb{C}$ with $\sum_{k\in\mathbb{Z}}\abs{a_k}^2<\infty$. This, too, is a Hilbert space. For convenience, we equip it with the norm \begin{equation} \aabs{a}^2 = 2\pi \sum_{k\in\mathbb{Z}}\abs{a_k}^2 \end{equation} and the corresponding inner product. (The inner product is left implicit, and the reader is encouraged to figure out what the inner product should be. One can of course use the polarization identity to find the inner product from the norm, but in a simple case like this one, one can see the correct inner product by eye.) \begin{definition} \label{def:1d-ft} The Fourier transform of a $2\pi$-periodic function or distribution expressed as the Fourier series~\eqref{eq:fs-complex} takes the function~$f$ into the sequence~$(a_k)_{k\in\mathbb{Z}}$ of Fourier coefficients. The inverse Fourier transform takes the sequence back to the function or distribution. In symbols, $\mathcal{F} f=a$ and $\mathcal{F}^{-1}a=f$. \end{definition} The definition above is purposely vague. It describes the overall idea of the Fourier transform and its inverse in the present context. The same definition can be used for a large number of different function spaces. Observe that the Fourier transform and its inverse are linear operators.
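The factor~$2\pi$ in the norm of~$\ell^2(\mathbb{Z})$ is chosen precisely so that the Fourier transform becomes an isometry. As a quick consistency check, consider $f(x)=e^{imx}$ for a fixed $m\in\mathbb{Z}$. By definition~\ref{def:1d-ft} its Fourier transform is the sequence~$a$ with $a_m=1$ and $a_k=0$ for $k\neq m$, and indeed \begin{equation} \int_0^{2\pi}\abs{e^{imx}}^2\,\der x = 2\pi = 2\pi\sum_{k\in\mathbb{Z}}\abs{a_k}^2. \end{equation}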
The Fourier transform of functions on the whole line is a different animal, and we shall greet it later. A central result in Fourier analysis is that the Fourier transform is well-defined and the inverse exists. Even more is true: \begin{theorem} \label{thm:1d-fs} The Fourier transform on $\mathbb{R}/2\pi\mathbb{Z}$ is a unitary isometry $\mathcal{F}\colon L^2(\mathbb{R}/2\pi\mathbb{Z})\to\ell^2(\mathbb{Z})$, given by \begin{equation} (\mathcal{F} f)(k) = \frac1{2\pi}\int_0^{2\pi}f(x)e^{-ikx}\,\der x, \end{equation} which is well defined as a Lebesgue integral. The inverse Fourier transform $\mathcal{F}^{-1}\colon\ell^2(\mathbb{Z})\to L^2(\mathbb{R}/2\pi\mathbb{Z})$ is also unitary and isometric, and is given by \begin{equation} (\mathcal{F}^{-1}a)(x) = \sum_{k\in\mathbb{Z}}a_ke^{ikx}, \end{equation} where the series of functions converges in $L^2(\mathbb{R}/2\pi\mathbb{Z})$. \end{theorem} This theorem will not be proven on this course. The theorem can be rephrased as the functions $x\mapsto\frac1{\sqrt{2\pi}}e^{ikx}$, $k\in\mathbb{Z}$, being an orthonormal Hilbert basis for $L^2(\mathbb{R}/2\pi\mathbb{Z})$. In general, a Hilbert space is isometric to the~$\ell^2$ space over the index set of a Hilbert basis. Every~$L^2$ function $f\colon\mathbb{R}/2\pi\mathbb{Z}\to\mathbb{C}$ can be written uniquely as a series \begin{equation} f(x) = \sum_{k\in\mathbb{Z}}\mathcal{F} f(k)e^{ikx}. \end{equation} This series is the Fourier series. Sometimes~$\mathcal{F} f$ is denoted by~$\hat f$. The elements of a sequence are typically denoted as~$a_k$ instead of~$a(k)$, but in Fourier analysis it is customary and convenient to write~$\mathcal{F} f(k)$ or~$\hat f(k)$ instead of~$\mathcal{F} f_k$ or~$\hat f_k$. \begin{ex} Recall definitions from an earlier course or some other source. What does it mean in formulas (involving sums and integrals) that the Fourier transform is isometric and unitary? 
\end{ex} One may wonder why the Fourier transform of a $2\pi$-periodic function $\mathbb{R}\to\mathbb{C}$ is a function on~$\mathbb{Z}$, not on~$\mathbb{R}$. This has nothing to do with the specific problem; it is a general mathematical fact. A function cannot be $2\pi$-periodic unless all frequencies are integers. To make this statement more rigorous, one can show that the Fourier transform (in the sense of the whole~$\mathbb{R}$, not $\mathbb{T}^1=\mathbb{R}/2\pi\mathbb{Z}$) of a periodic function is a distribution supported on the integer lattice~$\mathbb{Z}$. The same is true in higher dimensions as well. Another way to see this will come in section~\ref{sec:ft} when we discuss the Fourier transform in greater generality. The fact that only discrete frequencies are possible is not obvious at first. It is a key result in Fourier analysis that is seldom stated explicitly. \subsection{Multidimensional Fourier series} In the previous section we considered Fourier series in one dimension. The theory is very, very similar in higher dimensions. In higher dimensions, one studies functions $\mathbb{R}^n\to\mathbb{C}$ which are $2\pi$-periodic in all~$n$ real variables. Notice that the space of such functions is not rotation invariant; the coordinate axes give~$n$ preferred directions. These preferred directions are perhaps more apparent in the lattice \begin{equation} 2\pi\mathbb{Z}^n = \{x\in\mathbb{R}^n;x_i/2\pi\in\mathbb{Z}\text{ for all }i\}. \end{equation} As we did above in one dimension, we may quotient the space~$\mathbb{R}^n$ by the lattice~$2\pi\mathbb{Z}^n$ to form $\mathbb{R}^n/2\pi\mathbb{Z}^n$. \begin{ex} What is the equivalence relation in~$\mathbb{R}^n$ corresponding to the lattice? What are the equivalence classes? \end{ex} The quotient space (not a quotient \emph{vector} space) $\mathbb{R}/2\pi\mathbb{Z}$ is homeomorphic (in fact isometric) to the circle~$S^1$.
The quotient space $\mathbb{R}^n/2\pi\mathbb{Z}^n$ is the same as $(\mathbb{R}/2\pi\mathbb{Z})^n$ or~$(S^1)^n$, but not~$S^n$. We have~$n$ coordinates, each considered modulo~$2\pi$. The topological space $\mathbb{R}^n/2\pi\mathbb{Z}^n$ is the $n$-dimensional torus. The most famous torus is the two-dimensional one, and the one-dimensional torus is often called simply the circle. For the differential geometrically oriented: The lattice acts isometrically on the space~$\mathbb{R}^n$, so the quotient inherits the (Euclidean) Riemannian metric. The torus with this metric is locally isometric to~$\mathbb{R}^n$, and is called the flat torus. \begin{ex} The Euclidean space~$\mathbb{R}^n$ is an additive group and~$2\pi\mathbb{Z}^n$ is a subgroup. Why is the quotient group $\mathbb{R}^n/2\pi\mathbb{Z}^n$ well defined? (That is, why is the subgroup normal?) How does this quotient group correspond to the quotient space $\mathbb{R}^n/2\pi\mathbb{Z}^n$ described above? Describe the group operation. \end{ex} A function $\mathbb{R}^n/2\pi\mathbb{Z}^n\to\mathbb{C}$ --- or, equivalently, a function on~$\mathbb{R}^n$ with period~$2\pi$ in each variable --- is written as a Fourier series as follows: \begin{equation} \label{eq:hd-fs} f(x) = \sum_{k\in\mathbb{Z}^n}a_ke^{ik\cdot x}. \end{equation} Let us define the Fourier transform similarly to what we did in definition~\ref{def:1d-ft} in one dimension: \begin{definition} \label{def:hd-ft} The Fourier transform of a function or distribution on the torus~$\mathbb{T}^n$ expressed as the Fourier series~\eqref{eq:hd-fs} takes the function~$f$ into the sequence~$(a_k)_{k\in\mathbb{Z}^n}$ of Fourier coefficients. The inverse Fourier transform takes the sequence back to the function or distribution. In symbols, $\mathcal{F} f=a$ and $\mathcal{F}^{-1}a=f$. \end{definition} The spaces $L^2(\mathbb{R}^n/2\pi\mathbb{Z}^n)$ and~$\ell^2(\mathbb{Z}^n)$ are defined analogously to the one-dimensional case.
The norm on the latter space is \begin{equation} \aabs{a}^2 = (2\pi)^n\sum_{k\in\mathbb{Z}^n}\abs{a_k}^2. \end{equation} Using these spaces, we have the following generalization of theorem~\ref{thm:1d-fs}: \begin{theorem} \label{thm:hd-fs} The Fourier transform on the torus~$\mathbb{T}^n$ is a unitary isometry $\mathcal{F}\colon L^2(\mathbb{R}^n/2\pi\mathbb{Z}^n)\to\ell^2(\mathbb{Z}^n)$, given by \begin{equation} (\mathcal{F} f)(k) = \frac1{(2\pi)^n}\int_{[0,2\pi]^n}f(x)e^{-ik\cdot x}\,\der x, \end{equation} which is well defined as a Lebesgue integral. The inverse Fourier transform $\mathcal{F}^{-1}\colon\ell^2(\mathbb{Z}^n)\to L^2(\mathbb{R}^n/2\pi\mathbb{Z}^n)$ is also unitary and isometric, and is given by \begin{equation} (\mathcal{F}^{-1}a)(x) = \sum_{k\in\mathbb{Z}^n}a_ke^{ik\cdot x}, \end{equation} where the series of functions converges in $L^2(\mathbb{R}^n/2\pi\mathbb{Z}^n)$. \end{theorem} \begin{ex} Let us denote $e_k(x)=e^{ik\cdot x}$. For any $k\in\mathbb{Z}^n$ we have $e_k\in L^2(\mathbb{R}^n/2\pi\mathbb{Z}^n)$. Using the given inner product, prove that \begin{equation} \ip{e_k}{e_m} = c\delta_{km}, \end{equation} where~$\delta_{km}$ is the Kronecker delta and~$c$ is a constant. What is the constant? Do not appeal to theorem~\ref{thm:hd-fs}, but calculate by hand. \end{ex} \begin{ex} When defining the Fourier transform on~$\mathbb{T}^1$ and~$\mathbb{T}^n$, we made use of the exponential functions~$e^{ik\cdot x}$. Show that if $k\in\mathbb{Z}^n$ and $x\in\mathbb{T}^n=\mathbb{R}^n/2\pi\mathbb{Z}^n$, the value of $e^{ik\cdot x}$ does not depend on the representative of~$x$ in~$\mathbb{R}^n$. This means that the exponential function~$e^{ik\cdot x}$ is indeed well defined. Is the exponent $k\cdot x$ well defined, too? \end{ex} The general idea is to show that $\mathcal{I} f=0\implies\mathcal{F} f=0\implies f=0$. That is, the X-ray transform of~$f$ is easier to connect to the Fourier transform~$\mathcal{F} f$ than~$f$ itself.
The Fourier series can be defined on other spaces, for example non-flat tori, using the eigenfunctions of the Laplace operator. It will be considerably more clumsy and it will not work so nicely together with the X-ray transform. Fourier analysis tends to be most convenient when one has enough symmetry. \begin{ex} One aspect of the basis functions used in the Fourier series is that they are eigenfunctions of the Laplace operator. What is the eigenvalue of the function $x\mapsto e^{ik\cdot x}$? \end{ex} \begin{bex} Show that the number~$7$ cannot be written as the sum of three integer squares. Recall Lagrange's four-square theorem. Using these tools, show that the set of eigenvalues of the Laplace operator $\Delta=\sum_{k=1}^n\partial_k^2$ on the torus~$\mathbb{T}^n$ is $-\mathbb{N}$ if and only if $n\geq4$. \end{bex} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{X-ray tomography on a torus} \label{sec:torus} In this section we will give our first injectivity proof based on Fourier series on~$\mathbb{T}^n$. \subsection{Geodesics on a torus} The analogue of a straight line in differential geometry is a geodesic. Similarly to the problem we set out to study, one can ask whether a function on a manifold is determined by its integrals over geodesics. This is an active field of study, but beyond the scope of this course. However, we will study this problem now on the flat torus $\mathbb{T}^n=\mathbb{R}^n/2\pi\mathbb{Z}^n$. The reason is that this provides one of the simplest proofs of the injectivity of the X-ray transform in a bounded Euclidean domain. Geodesics, like any curves, can be regarded as subsets of the space or as functions from an interval to the space. As a set, a geodesic in~$\mathbb{R}^n$ is simply a straight line. As a function, a geodesic can be described as $\gamma\colon[0,1]\to\mathbb{R}^n$, $\gamma(t)=x+tv$.
The velocity $v\in\mathbb{R}^n$ can be any non-zero vector; it will be convenient not to assume unit speed in this section. A geodesic is a curve with constant velocity. Here we chose to parametrize the geodesic by $[0,1]$, and we have therefore described a geodesic between two points ($x$ and $x+v$). Another option is to replace the interval with~$\mathbb{R}$; this leads to what is called a maximal geodesic. In our Euclidean X-ray tomography problem we consider the integrals of an unknown function over all geodesics through a given domain. It is irrelevant whether the geodesics are maximal or between two points, as long as the two points are outside (or at the boundary of) the domain. Let us then turn to geodesics on a torus. Let $q\colon\mathbb{R}^n\to\mathbb{T}^n$ be the quotient map that takes a point to its equivalence class. One can write it as $q(x)=x+2\pi\mathbb{Z}^n\subset\mathbb{R}^n$. This formula is seldom very useful in practice, but perhaps it helps get a hold of the idea. A geodesic between two points on the torus~$\mathbb{T}^n$ is simple to describe: we may compose a geodesic on~$\mathbb{R}^n$ with the quotient map. We take $x\in\mathbb{R}^n$ and $v\in\mathbb{R}^n\setminus0$ and define $\gamma\colon[0,1]\to\mathbb{T}^n$ by $\gamma(t)=q(x+tv)$. We will be interested in maximal geodesics that do not terminate in either direction, which corresponds to replacing $[0,1]$ above by~$\mathbb{R}$. On a torus, there is an interesting new class of maximal geodesics: closed geodesics, also known as periodic geodesics. The simplest example of a periodic geodesic is \begin{equation} \mathbb{R}\to\mathbb{T}^n, \quad t\mapsto (2\pi t,0,\dots,0), \end{equation} which has period~$1$. Geodesics $\mathbb{R}\to\mathbb{T}^n$ with period~$1$ can be naturally identified with geodesics $\gamma\colon[0,1]\to\mathbb{T}^n$ for which $\gamma(0)=\gamma(1)$. 
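For example, on~$\mathbb{T}^2$ the geodesic $t\mapsto q(2\pi t,4\pi t)$ is periodic with period~$1$; it winds once around the first coordinate and twice around the second. In contrast, the geodesic $t\mapsto q(2\pi t,2\pi\sqrt2\,t)$ never closes up, since the slope~$\sqrt2$ is irrational; a classical fact, which we will not need, is that its image is dense in~$\mathbb{T}^2$.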
The most convenient way to describe geodesics for our purposes is to take two parameters $x\in\mathbb{T}^n$ and $v\in\mathbb{R}^n$, and let the corresponding geodesic $[0,1]\to\mathbb{T}^n$ be \begin{equation} \label{eq:vv1} \gamma(t) = q(x'+tv), \end{equation} where $x'\in\mathbb{R}^n$ is any point so that $q(x')=x$. Equivalently, we may take \begin{equation} \label{eq:vv2} \gamma(t) = x+q(tv), \end{equation} where ``$+$'' is the addition on~$\mathbb{T}^n$ --- which is naturally an abelian group. We shall write this geodesic simply as \begin{equation} \gamma(t) = x+tv\in\mathbb{T}^n, \end{equation} where the quotient is left implicit. All geodesics on a torus are of this form. This is because the quotient map $q\colon\mathbb{R}^n\to\mathbb{T}^n$ is a local isometry and isometries preserve geodesics. It is crucial that the torus is flat. If one uses another metric (such as the donut embedded in~$\mathbb{R}^3$), the geodesics will be different and there will be less symmetry. \begin{ex} Consider the geodesic described in~\eqref{eq:vv1} and~\eqref{eq:vv2} above. Show that the endpoints coincide if and only if $v\in2\pi\mathbb{Z}^n$. \end{ex} \begin{ex} \label{ex:t1-geodesic} Explain how a geodesic with velocity $v\in2\pi\mathbb{Z}^n$ can be regarded as a function $\mathbb{R}/\mathbb{Z}\to\mathbb{T}^n$. \end{ex} In the X-ray transform on a torus we will only integrate over periodic geodesics. The reason for this is two-fold. First, periodic geodesics are convenient and, as it turns out, sufficient. Second, the integrals are ill-defined over a non-periodic geodesic. There is a way to renormalize the integral, but it is rather awkward. By exercise~\ref{ex:t1-geodesic} a periodic geodesic is a function $\mathbb{R}/\mathbb{Z}\to\mathbb{T}^n$, and it is easy to integrate a continuous function over the compact set $\mathbb{R}/\mathbb{Z}$. 
However, when there is no periodicity, one would have to integrate over all of~$\mathbb{R}$, and the resulting integral typically does not exist (as a finite number). \subsection{Injectivity from a torus to a Euclidean space} For any $v\in2\pi\mathbb{Z}^n\setminus0$, $x\in\mathbb{T}^n$ and $f\in C(\mathbb{T}^n)$, we write \begin{equation} \mathcal{I}_vf(x) = \int_0^1f(x+tv)\,\der t. \end{equation} If~$v$ is fixed, this defines an operator \begin{equation} \mathcal{I}_v\colon C(\mathbb{T}^n)\to C(\mathbb{T}^n). \end{equation} For us the key property is that~$\mathcal{I}_v$ is linear, but it does indeed map continuous functions to continuous functions. It has other properties as well: $\aabs{\mathcal{I}_v}=1$ and $\mathcal{I}_v^2=\mathcal{I}_v$. It is also a symmetric operator $L^2(\mathbb{T}^n)\to L^2(\mathbb{T}^n)$. \begin{definition} \label{def:xrt-torus} We call the family of operators~$\mathcal{I}_v$ with $v\in2\pi\mathbb{Z}^n\setminus0$ the X-ray transform on the torus~$\mathbb{T}^n$. \end{definition} This point of view is convenient here, although it would be possible to realize the X-ray transform as a single operator as well. In the usual view~$\mathcal{I}$ is not a symmetric operator and that leads us to consider its normal operator later in this course. Now consider a function $f\in C_B\subset C(\mathbb{R}^n)$. The function~$f$ is supported in the closed unit ball~$\bar B$, so we can extend it periodically to a function~$\tilde f$ on~$\mathbb{R}^n$ so that $\tilde f=f$ on $(-\pi,\pi)^n$. Observe that $\bar B\subset(-\pi,\pi)^n$. \begin{ex} Give a formula for~$\tilde f$ in terms of~$f$. \end{ex} Since it is periodic, the function~$\tilde f$ can be regarded as a function on the torus~$\mathbb{T}^n$. \begin{lemma} \label{lma:torus-to-Rn} The X-ray transform of $\tilde f\in C(\mathbb{T}^n)$ is uniquely determined by the X-ray transform of $f\in C_B$. \end{lemma} \begin{proof} The idea is simple, but writing it down is awkward. We will do it anyway. 
Take any $v\in2\pi\mathbb{Z}^n\setminus0$. We need to show that $\mathcal{I}_v\tilde f$ can be expressed in terms of~$\mathcal{I} f$. Recall that~$\mathcal{I} f$ is a function defined on the set of all lines in~$\mathbb{R}^n$. The restriction $q|_{[-\pi,\pi)^n}\colon[-\pi,\pi)^n\to\mathbb{T}^n$ is a bijection. Let us denote its inverse by~$\iota$. It satisfies $q\circ\iota=\id_{\mathbb{T}^n}$. Since~$f$ is supported in~$\bar B$, we may consider it to be a function defined on $[-\pi,\pi)^n$. Then we have $f=\tilde f\circ q$ and $\tilde f=f\circ\iota$. Let us denote $C=\{x\in[-\pi,\pi)^n;x_i=-\pi\text{ for some }i\}$. The set~$C$ is the part of the boundary of the cube contained in~$[-\pi,\pi)^n$: $C=[-\pi,\pi)^n\cap\partial([-\pi,\pi]^n)$. Consider any $x\in\mathbb{T}^n$. Let $\tilde\gamma\colon[0,1]\to[-\pi,\pi)^n$ be the ``curve'' corresponding to the geodesic $t\mapsto\gamma(t)=x+q(tv)$, defined by $\tilde\gamma(t)=\iota(x+q(tv))$. The image $\tilde\gamma([0,1])$ consists of line segments in~$\mathbb{R}^n$, so it is not strictly a curve. Since $\abs{v}\geq2\pi$, we know that at some point $\tau\in[0,1]$ we have $\tilde\gamma(\tau)\in C$. Since the geodesic~$\gamma$ is periodic, we may shift the variable and replace~$x$ with $x'=x-q(\tau v)$, so that $\tau=0$. Alternatively, this can be seen as $\mathcal{I}_v\tilde f(x)=\mathcal{I}_v\tilde f(x-q(\tau v))$. If $\tilde\gamma([0,1])$ is contained in~$C$, then it does not meet the support of~$\tilde f$. Therefore $\tilde f\circ\gamma$ vanishes identically and so $\mathcal{I}_v\tilde f(x)=0$. If the curve~$\tilde\gamma$ is not contained in this set, then it meets $C$ only finitely many times. By the previous considerations one of these times is at $t=0$, and by periodicity also at $t=1$. Let the other times of hitting~$C$ be $0<t_1<\dots<t_m<1$, and denote $t_0=0$ and $t_{m+1}=1$. It is possible that $m=0$ and there are no other times.
We have \begin{equation} \begin{split} \mathcal{I}_v\tilde f(x) &= \mathcal{I}_v\tilde f(x') \\&= \int_0^1\tilde f(x'+tv)\,\der t \\&= \int_0^1f(\iota(x'+tv))\,\der t \\&= \sum_{j=0}^{m}\int_{t_j}^{t_{j+1}}f(\iota(x'+tv))\,\der t . \end{split} \end{equation} Now, each $\int_{t_j}^{t_{j+1}}f(\iota(x'+tv))\,\der t$ is, up to the constant speed~$\abs{v}$ of the parametrization, an integral of~$f$ over a straight line segment joining two boundary points of the cube~$[-\pi,\pi]^n$. This is, by definition, determined by the X-ray transform $\mathcal{I} f$, since~$f$ is supported inside the cube. Therefore~$\mathcal{I}_v\tilde f$ can be written in terms of~$\mathcal{I} f$ --- although there is no pretty formula --- and the proof is complete. \end{proof} \begin{ex} Describe the function $\iota\circ q\colon\mathbb{R}^n\to[-\pi,\pi)^n$ in words, formulas, pictures, or a combination thereof. \end{ex} \begin{ex} Explain why such a~$\tau$ must exist in the proof above. That is, justify more carefully why the geodesic must hit the ``boundary''~$C$. \end{ex} \begin{ex} Explain why $\mathcal{I}_vg(x)=\mathcal{I}_vg(x+sv)$ for any $s\in\mathbb{R}$, $g\in C(\mathbb{T}^n)$, and $v\in2\pi\mathbb{Z}^n$. \end{ex} The conclusion of the lemma is important: \begin{ex} \label{ex:torus-to-Rn} Suppose we know that $\mathcal{I}_vg=0$ for all $v\in2\pi\mathbb{Z}^n\setminus0$ implies that the function $g\in C(\mathbb{T}^n)$ has to vanish identically. Show that if $\mathcal{I} f=0$ for some $f\in C_B$, then $f=0$. \end{ex} In other words, injectivity of the X-ray transform in the Euclidean space follows from an injectivity result on the torus. This is our first solution of the inverse problem of X-ray tomography. The missing step is proving the desired result on the torus. \subsection{Interplay between the X-ray and Fourier transforms on a torus} For any fixed $v\in2\pi\mathbb{Z}^n\setminus0$ and $f\in C(\mathbb{T}^n)$, the X-ray transform~$\mathcal{I}_vf$ is a continuous function on the torus~$\mathbb{T}^n$.
Therefore it makes sense to calculate its Fourier transform. \begin{lemma} \label{lma:ft-xrt-torus} Let $v\in2\pi\mathbb{Z}^n\setminus0$ and $f\in C(\mathbb{T}^n)$. Then for every $k\in\mathbb{Z}^n$ \begin{equation} \mathcal{F}(\mathcal{I}_vf)(k) = \begin{cases} \mathcal{F} f(k) & \text{when }k\cdot v=0\\ 0 & \text{otherwise}. \end{cases} \end{equation} \end{lemma} \begin{proof} The proof is a mere calculation: \begin{equation} \label{eq:vv3} \begin{split} \mathcal{F}(\mathcal{I}_vf)(k) &= \frac1{(2\pi)^n} \int_{\mathbb{T}^n} e^{-ik\cdot x} \mathcal{I}_vf(x) \,\der x \\&= \frac1{(2\pi)^n} \int_{\mathbb{T}^n} e^{-ik\cdot x} \int_0^1 f(x+tv) \,\der t \,\der x \\&\stackrel{\text{a}}{=} \frac1{(2\pi)^n} \int_0^1\int_{\mathbb{T}^n} e^{-ik\cdot x} f(x+tv) \,\der x \,\der t \\&\stackrel{\text{b}}{=} \frac1{(2\pi)^n} \int_0^1\int_{\mathbb{T}^n} e^{-ik\cdot (y-tv)} f(y) \,\der y \,\der t \\&\stackrel{\text{c}}{=} \frac1{(2\pi)^n} \int_{\mathbb{T}^n} e^{-ik\cdot y} f(y) \,\der y \times \int_0^1e^{i(k\cdot v)t}\,\der t \\&\stackrel{\text{d}}{=} \mathcal{F} f(k) \times \begin{cases} 1 & \text{when }k\cdot v=0\\ 0 & \text{otherwise}. \end{cases} \end{split} \end{equation} It only remains to justify the steps. \end{proof} \begin{ex} Explain the steps a--d in~\eqref{eq:vv3}. \end{ex} We will next show that the X-ray transform is injective. Bear in mind that the X-ray transform is understood as a family of operators. Here injectivity means ``collective injectivity''; the individual operators are not injective. \begin{theorem} \label{thm:xrt-torus} Let $f\in C(\mathbb{T}^n)$. If $\mathcal{I}_vf=0$ for all $v\in2\pi\mathbb{Z}^n\setminus0$, then $f=0$. \end{theorem} \begin{proof} Since the Fourier transform is bijective by theorem~\ref{thm:hd-fs}, it suffices to show that the Fourier series of~$f$ vanishes. To that end, take any $k\in\mathbb{Z}^n$. There is some $v\in2\pi\mathbb{Z}^n\setminus0$ so that $k\cdot v=0$ (exercise). 
By lemma~\ref{lma:ft-xrt-torus} we have $\mathcal{F}\mathcal{I}_vf(k)=\mathcal{F} f(k)$. Since $\mathcal{I}_vf=0$ by assumption, we get $\mathcal{F} f(k)=0$ for all $k\in\mathbb{Z}^n$. \end{proof} \begin{ex} \label{ex:Z-OG} Show that for any $k\in\mathbb{Z}^n$ there exists $w\in\mathbb{Z}^n\setminus0$ so that $k\cdot w=0$. \end{ex} \begin{ex} Show that if $v\in2\pi\mathbb{Z}^n\setminus0$, then~$\mathcal{I}_v$ is not injective. Use lemma~\ref{lma:ft-xrt-torus} or take a function $f\in C^\infty(\mathbb{T}^n)$ and consider the function $v\cdot\nabla f(x)$. \end{ex} \begin{ex} \label{ex:torus-xrt-scaling} Let $v\in2\pi\mathbb{Z}^n\setminus0$ and $m\in\mathbb{Z}\setminus0$. Show that $\mathcal{I}_{mv}=\mathcal{I}_v$. \end{ex} \begin{ex} All of the results in this section are valid for $n=1$ apart from exercise~\ref{ex:Z-OG}. When $n=1$, one can only find an orthogonal $w\in\mathbb{Z}$ for $k=0$. By exercise~\ref{ex:torus-xrt-scaling} all one can measure about $f\in C(\mathbb{T}^1)$ is~$\mathcal{I}_{2\pi}f$. What does this mean for recovering the Fourier coefficients~$\mathcal{F} f(k)$? \end{ex} \begin{ex} We have excluded $v=0$ from our discussion. Why is this reasonable, considering the original problem? What is the operator~$\mathcal{I}_0$? \end{ex} As a corollary, we get the following injectivity result: \begin{theorem} \label{xrtthm:torus} Suppose $f\in C_B$ integrates to zero over all lines through~$B$. Then $f=0$. \end{theorem} \begin{proof} This follows from lemma~\ref{lma:torus-to-Rn}, exercise~\ref{ex:torus-to-Rn}, and theorem~\ref{thm:xrt-torus}. \end{proof} \begin{ex} Summarize in your own words the proof of injectivity of the X-ray transform given in this section. \end{ex} Injectivity in a larger ball and therefore in the whole space~$C_c(\mathbb{R}^n)$ follows by a scaling argument. It is worth noting that in this proof we did not use X-rays in all directions. Only the directions in $2\pi\mathbb{Z}^n\setminus0$ were used.
If one projects this set radially to the unit sphere~$S^{n-1}$, one gets a countable dense set. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{Injectivity via angular Fourier series} \label{sec:ang-fs} In this section and the next we will give our second injectivity proof, based on Fourier series with respect to the angular variable in polar coordinates. \subsection{Angular Fourier series} In this section we will give a new way to prove injectivity of the X-ray transform. This is the approach found by Allan Cormack, who together with the electrical engineer Godfrey Hounsfield was awarded the Nobel Prize in Physiology or Medicine for the development of computer assisted tomography in 1979. However, Cormack was not the first one to solve the mathematical inverse problem; it had been done in 1917 by Johann Radon, but without the idea of applying it to tomography. Radon's inversion method will be covered in section~\ref{sec:radon}. We study the problem in two dimensions. It is most convenient to consider the problem in the punctured closed disc \begin{equation} \label{eq:pud} {\bar D^*} = \{x\in\mathbb{R}^2;0<\abs{x}\leq1\}. \end{equation} Recall exercise~\ref{ex:2d-hd} concerning the two-dimensional case. Our aim is to reconstruct a continuous function $f\colon{\bar D^*}\to\mathbb{C}$ from its integrals over all lines through~${\bar D^*}$. We will not use the lines that pass through the origin. That is, we throw away some data. Avoiding the origin simply makes the use of polar coordinates more convenient and does not make the result any weaker. This is not unusual in inverse problems: it is often best to look at a convenient subset of the data, even though the results are usually stated for the full data for clarity.
We will use polar coordinates $r\in(0,1]$ and $\theta\in\mathbb{R}/2\pi\mathbb{Z}$ on~${\bar D^*}$. For any fixed~$r$, the function $f(r,{\,\cdot\,})$ is a continuous function $\mathbb{R}/2\pi\mathbb{Z}\to\mathbb{C}$. We expand it in a Fourier series. Now the coefficients of the Fourier series depend on the variable~$r$. We have \begin{equation} \label{eq:fs-ang-ak} f(r,\theta) = \sum_{k\in\mathbb{Z}}a_k(r)e^{ik\theta}. \end{equation} The Fourier coefficients may be calculated as \begin{equation} \label{eq:ang-f-component} a_k(r) = \frac1{2\pi} \int_0^{2\pi} e^{-ik\theta}f(r,\theta)\,\der\theta. \end{equation} From this expression one can see that each $a_k\colon(0,1]\to\mathbb{C}$ is continuous. The only difference to the usual Fourier series on the circle is the appearance of the parameter~$r$. We will write $f_k(r,\theta)=a_k(r)e^{ik\theta}$, so that the Fourier series becomes simply \begin{equation} \label{eq:fs-ang-fk} f(r,\theta) = \sum_{k\in\mathbb{Z}} f_k(r,\theta). \end{equation} We will not study the details of this series too deeply, but we remark that the terms are $L^2$-orthogonal and the usual~$L^2$ theory of Fourier series applies with some modifications due to the presence of~$r$. It will suffice for us that $f=0$ if and only if $f_k=0$ for all $k\in\mathbb{Z}$. \begin{ex} \label{ex:ang-fs} Suppose $f\colon{\bar D^*}\to\mathbb{C}$ is continuous. Show that the following are equivalent: \begin{enumerate}[(a)] \item $f=0$ \item $f_k=0$ for all $k\in\mathbb{Z}$ \item $a_k=0$ for all $k\in\mathbb{Z}$ \end{enumerate} Theorem~\ref{thm:1d-fs} will be of use. In fact, the whole angular Fourier series makes sense because of this theorem. \end{ex} In higher dimensions the functions~$e^{ik\theta}$ need to be replaced with spherical harmonics. This is one of the reasons why it is convenient to restrict to dimension two. One can study the angular Fourier series in the whole plane if one wants. 
As long as the function is continuous or~$L^2$ (or whatever space one might be working with), one can apply the one-dimensional Fourier series circle by circle. \subsection{The X-ray transform in polar coordinates} For any point $x\in{\bar D^*}$, let~$L_x$ be the line segment connecting boundary points of the unit disc so that~$x$ is the closest point to the origin on~$L_x$. If $\abs{x}=1$, the line will degenerate into a point. This is a convenient way to parametrize all lines through the closed unit disc that do not meet the origin. For a continuous function $f\colon{\bar D^*}\to\mathbb{C}$, we define~$\mathcal{I} f(x)$ to be the integral of~$f$ over~$L_x$. Again, we use polar coordinates, so that the X-ray transform of~$f$ is a function $\mathcal{I} f(r,\theta)$. It will be useful to write this as a Fourier series in the variable~$\theta$. For $\theta\in\mathbb{R}/2\pi\mathbb{Z}$, denote $v_\theta=(\cos(\theta),\sin(\theta))$. For $r>0$ and $\theta\in\mathbb{R}/2\pi\mathbb{Z}$, the corresponding line can be written as \begin{equation} L_{r,\theta} = \{x\in\mathbb{R}^2;x\cdot v_\theta=r\}. \end{equation} As mentioned above, this covers all the lines that do not meet the origin. If we use ``extended polar coordinates'' where $r\geq0$, then we can indeed parametrize all lines. In some sense, this corresponds to replacing the origin with ``directed origins'', which is a compactification of the punctured disc. In fact, one can even let the radius~$r$ be any real number; this would lead to a global two-fold parametrization of all the lines. \subsection{Rotations and diagonalizability} Fix any $\phi\in\mathbb{R}$. Let us define the rotation operator~$\mathcal{R}_\phi$ on functions defined on~${\bar D^*}$ so that $(\mathcal{R}_\phi f)(r,\theta)=f(r,\theta+\phi)$. It is clear that~$\mathcal{R}_\phi$ maps continuous functions to continuous functions. For a continuous $f\colon{\bar D^*}\to\mathbb{C}$, both~$f$ and~$\mathcal{I} f$ are functions on~${\bar D^*}$.
This allows us to make sense of the function~$\mathcal{R}_\phi\mathcal{I} f$. The interplay between rotations and the X-ray transform is important. \begin{ex} \label{ex:rot-commute} Take any $\phi\in\mathbb{R}$ and a continuous $f\colon{\bar D^*}\to\mathbb{C}$. Explain why $\mathcal{I}\mathcal{R}_\phi f=\mathcal{R}_\phi\mathcal{I} f$. \end{ex} The fact that rotations commute with the X-ray transform will bring additional structure. \begin{lemma} \label{lma:ang-fubini} Let $f(x;\phi)$ be a continuous function defined on ${\bar D^*}\times[0,2\pi]$. Let~$\gamma$ be any line through~${\bar D^*}$ that does not meet the origin. Then \begin{equation} \int_0^{2\pi}(\mathcal{I} f({\,\cdot\,};\phi))(\gamma)\,\der\phi = \mathcal{I} F(\gamma), \end{equation} where $F(x)=\int_0^{2\pi}f(x;\phi)\,\der\phi$. \end{lemma} \begin{ex} Prove the lemma. \end{ex} \begin{lemma} \label{lma:ang-fs-xrt} Let $f\colon{\bar D^*}\to\mathbb{C}$ be a continuous function. Then \begin{equation} \frac1{2\pi} \int_0^{2\pi}e^{-ik\theta}\mathcal{I} f(r,\theta)\,\der\theta = \mathcal{I} f_k(r,0) \end{equation} for all $k\in\mathbb{Z}$. \end{lemma} The angle~$0$ might seem weird at first. It is best to regard the right-hand side as the X-ray transform of the one-dimensional function~$a_k$. Introducing a non-zero angle is possible in the formula above, but it gives no additional information. \begin{proof}[Proof of lemma~\ref{lma:ang-fs-xrt}] In the integrals below limits are occasionally shifted from $(0,2\pi)$ due to changes of variables. Since the relevant functions are $2\pi$-periodic, we do not need to change the interval of integration. Fix any $k\in\mathbb{Z}$. First, we observe that \begin{equation} \label{eq:vv4} \begin{split} \frac1{2\pi} \int_0^{2\pi}e^{-ik\theta}\mathcal{R}_\theta f(r,\phi)\,\der\theta &= \frac1{2\pi} \int_0^{2\pi}e^{-ik\theta}f(r,\phi+\theta)\,\der\theta \\&= \frac1{2\pi} \int_0^{2\pi}e^{-ik(\omega-\phi)}f(r,\omega)\,\der\omega \\&= e^{ik\phi}a_k(r) \\&= f_k(r,\phi). 
\end{split} \end{equation} Using the definitions, exercise~\ref{ex:rot-commute}, lemma~\ref{lma:ang-fubini}, and equation~\eqref{eq:vv4}, we get \begin{equation} \begin{split} \frac1{2\pi} \int_0^{2\pi}e^{-ik\theta}\mathcal{I} f(r,\theta)\,\der\theta &= \frac1{2\pi} \int_0^{2\pi}e^{-ik\theta}\mathcal{R}_\theta\mathcal{I} f(r,0)\,\der\theta \\&= \frac1{2\pi} \int_0^{2\pi}e^{-ik\theta}\mathcal{I}\mathcal{R}_\theta f(r,0)\,\der\theta \\&= \frac1{2\pi} \int_0^{2\pi}\mathcal{I} (e^{-ik\theta}\mathcal{R}_\theta f)(r,0)\,\der\theta \\&= \mathcal{I}\left( \frac1{2\pi}\int_0^{2\pi}(e^{-ik\theta}\mathcal{R}_\theta f)\,\der\theta \right) (r,0) \\&= \mathcal{I} f_k(r,0). \end{split} \end{equation} This concludes the proof. \end{proof} \begin{ex} \label{ex:xrt-fs} The function $f(r,\theta)$ was written as a Fourier series $f=\sum_{k\in\mathbb{Z}}f_k$ in~\eqref{eq:fs-ang-fk}. Similarly, $g(r,\theta)=\mathcal{I} f(r,\theta)$ can be written as a Fourier series $g=\sum_{k\in\mathbb{Z}}g_k$. Give a formula for the function~$g_k$ in terms of~$\mathcal{I} f$. Explain why~$g_k$ depends on~$f_k$ but not on any other~$f_m$ for $m\neq k$. \end{ex} Our goal, as always, is to show that if $\mathcal{I} f=0$, then $f=0$. By exercise~\ref{ex:xrt-fs} it follows from the assumption that $\mathcal{I} f_k=0$ for every $k\in\mathbb{Z}$. We will then fix any~$k$ and show that $\mathcal{I} f_k=0$ implies $f_k=0$. This problem is essentially one-dimensional, since~$f_k$ corresponds to the continuous function $a_k\colon(0,1]\to\mathbb{C}$. The aim of the next section is to solve this family of one-dimensional problems. After that we know that $f_k=0$ for each~$k$, and so $f=0$. \subsection{Remarks on symmetries and functional analysis} The Fourier series of the X-ray transform depends in a rather simple way on the Fourier series of the original function. The~$k$th Fourier component of the X-ray transform only depends on the~$k$th Fourier component of the function. This is not a coincidence. 
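The fact that the~$k$th angular Fourier component of~$\mathcal{I} f$ depends only on the~$k$th component of~$f$ can be checked numerically. The following sketch (Python; the mode $k=3$, the profile $a(r)=r^2$, and the quadrature sizes are our own choices) integrates a single Fourier mode over chords of the disc and verifies that the angular spectrum of the result is concentrated at the same frequency:

```python
import numpy as np

def xray(f, s, phi, n=2000):
    # Integrate f (given in polar coordinates) over the chord of the unit
    # disc at distance s from the origin whose normal has angle phi.
    T = np.sqrt(1.0 - s**2)
    t = -T + 2 * T * (np.arange(n) + 0.5) / n      # midpoint rule on [-T, T]
    x = s * np.cos(phi) - t * np.sin(phi)
    y = s * np.sin(phi) + t * np.cos(phi)
    return np.mean(f(np.hypot(x, y), np.arctan2(y, x))) * 2 * T

k = 3
f_k = lambda r, theta: r**2 * np.exp(1j * k * theta)   # single mode, a_k(r) = r^2

s = 0.5
phis = 2 * np.pi * (np.arange(128) + 0.5) / 128
g = np.array([xray(f_k, s, p) for p in phis])

# Angular Fourier coefficients of phi -> I f_k(s, phi).
coeffs = {m: np.mean(np.exp(-1j * m * phis) * g) for m in range(-5, 6)}
assert abs(coeffs[k]) > 1e-2                           # mode k is present
assert max(abs(coeffs[m]) for m in coeffs if m != k) < 1e-4   # others vanish
```

Up to quadrature error, only the coefficient at frequency~$k$ survives, exactly as the block diagonal structure predicts.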
The X-ray transform is an operator that takes a function on~${\bar D^*}$ to another function on~${\bar D^*}$. It commutes with the rotation operator~$\mathcal{R}_\theta$ for any~$\theta$, so it also commutes with the derivative~$\partial_\theta$ with respect to the angular coordinate. Of course, the derivative operator does not map continuous functions to continuous functions, so it should be defined on a different space or be treated as an unbounded operator, but we ignore this technicality. Now, the derivative~$\partial_\theta$ and the X-ray transform~$\mathcal{I}$ commute. At least physicists are likely to remember that if two symmetric matrices commute, they are simultaneously diagonalizable. Similar results hold for infinite-dimensional spaces. The symmetric situation is an example of a broader phenomenon. The operator~$\partial_\theta$ is anti-Hermitian, but the integral transform does not need any adjointness properties. In our specific case this suggests that if we write the whole function space as a direct sum of eigenspaces of~$\partial_\theta$, then the X-ray transform is block diagonal. This is indeed what happens. For example, in~$L^2(D)$, the eigenspace of~$\partial_\theta$ with eigenvalue~$ik$, $k\in\mathbb{Z}$, is \begin{equation} H_k = \{f\in L^2(D);f(r,\theta)=a(r)e^{ik\theta}\text{ for some function }a\}. \end{equation} One can do similar things over other function spaces. Our result in this section shows (apart from regularity assumptions) that $\mathcal{I}(H_k)\subset H_k$. It is also somewhat easy to see that $H_k\perp H_m$ when $k\neq m$. Moreover, if the operator has suitable continuity properties in~$L^2$ --- as the X-ray transform does --- then one has a very convenient theory in a Hilbert space. In higher dimensions~$H_k$ can be defined similarly, with the exponential functions~$e^{ik\theta}$ replaced by spherical harmonics. \begin{ex} Let the two matrices $A,B\in\mathbb{R}^{n\times n}$ be symmetric.
For simplicity, you may additionally assume that all eigenvalues have multiplicity one. (The result will be true without this assumption.) Prove that if $AB=BA$, then there is an orthogonal matrix~$U$ so that~$UAU^T$ and~$UBU^T$ are both diagonal. You may assume it known that for a single real symmetric matrix such a~$U$ exists. \end{ex} Passing from rotation symmetry ($\mathcal{R}_\phi$) to angular derivatives ($\partial_\theta$) was a useful trick. One may ask how one might find the derivative operator, given the rotations. A formal calculation gives \begin{equation} \label{eq:lie-algebra} \partial_\theta = \left.\frac{\mathrm{d}}{\mathrm{d}\phi}\mathcal{R}_\phi\right|_{\phi=0}. \end{equation} The derivative does not exist as a limit of the difference quotient in~$L^2$, but it does exist in~$C^\infty$, for example. Passing from a full symmetry to a differential symmetry is an example of passing from a Lie group to its Lie algebra. If something commutes with the Lie group, then it commutes with the Lie algebra, and the Lie algebra of a symmetry group can often be realized as differential operators. \begin{ex} Show that if~$f\in C^1({\bar D^*})$, then \begin{equation} \partial_\theta f(r,\theta) = \left.\frac{\mathrm{d}}{\mathrm{d}\phi}(\mathcal{R}_\phi f)(r,\theta)\right|_{\phi=0}. \end{equation} In other words, prove equation~\eqref{eq:lie-algebra}. (Notice that while~$\mathcal{R}_\phi$ maps $C^1({\bar D^*})\to C^1({\bar D^*})$, the derivative~$\partial_\theta$ only maps $C^1({\bar D^*})\to C^0({\bar D^*})$.) \end{ex} Another thing worth pointing out is that rotation symmetry was crucially important, but Euclidean geometry was not. Similar arguments work in other rotation symmetric situations. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? 
Were there mistakes?\end{ex} \section{Abel transforms} \label{sec:abel} \subsection{The block diagonal structure of the X-ray transform in polar coordinates} As discovered in the previous section, the X-ray transform has a peculiar block diagonal structure. Now it remains to find the operators on the diagonal. We shall not use the block diagonal structure in any formal way, but it is an underlying idea the reader should be aware of. Consider the function $f_k(r,\theta)=e^{ik\theta}a_k(r)$, where $a_k\colon(0,1]\to\mathbb{C}$ is continuous. We want to find an explicit formula for the X-ray transform of~$f_k$. To this end, consider the line~$L_{s,\phi}$ whose closest point to the origin is~$(s,\phi)$. We may assume $0<s<1$. Using unit length parametrization with zero parameter at the midpoint, we can write this line as the curve \begin{equation} \gamma\colon[-\sqrt{1-s^2},\sqrt{1-s^2}]\to{\bar D^*} \end{equation} with \begin{equation} \gamma(t) = (\sqrt{s^2+t^2},\phi+\arctan(t/s)). \end{equation} \begin{ex} Justify this formula geometrically. \end{ex} We will split the interval in half and change the variable of integration from arc length $t\in(0,\sqrt{1-s^2})$ to radius $r=\sqrt{s^2+t^2}\in(s,1)$. Now we can simply calculate: \begin{equation} \label{eq:vv5} \begin{split} \mathcal{I} f_k(s,\phi) &= \int_{-\sqrt{1-s^2}}^{\sqrt{1-s^2}}f_k(\sqrt{s^2+t^2},\phi+\arctan(t/s))\,\der t \\&= \sum_{\pm} \int_0^{\sqrt{1-s^2}}f_k(\sqrt{s^2+t^2},\phi\pm\arctan(t/s))\,\der t \\&= \sum_{\pm} \int_s^1f_k(r,\phi\pm\arccos(s/r))\frac{\mathrm{d} t}{\mathrm{d} r}\,\der r \\&= \sum_{\pm} \int_s^1a_k(r)e^{ik\phi\pm ik\arccos(s/r)}\frac{1}{\sqrt{1-(s/r)^2}}\,\der r . \end{split} \end{equation} Two steps need justification, and they are left as exercises. \begin{ex} Explain why $\arctan(t/s)=\arccos(s/r)$. \end{ex} \begin{ex} Why is the Jacobian $\frac{\mathrm{d} t}{\mathrm{d} r}$ equal to $1/\sqrt{1-(s/r)^2}$ as indicated above? \end{ex} Our change of variable was in fact singular. 
But the singularity is integrable and our calculation is still valid; to be pedantic, one may want to consider the integral with $t\in(\varepsilon,\sqrt{1-s^2})$ first and then let $\varepsilon\to0$. To proceed with the calculation, we must do some trigonometric manipulations. \begin{ex} Show that $\sum_{\pm}e^{ik\phi\pm ik\arccos(s/r)}=2e^{ik\phi}\cos(k\arccos(s/r))$. \end{ex} It turns out that for $k\in\mathbb{Z}$ and $x\in[-1,1]$, we have $\cos(k\arccos(x))=T_{\abs{k}}(x)$, where~$T_k$ is the~$k$th Chebyshev polynomial of the first kind. For convenience, we will use the notation~$T_k$ instead of~$T_{\abs{k}}$ even when $k<0$. It follows from this cosine property of the Chebyshev polynomials that $\max_{x\in[0,1]}T_k(x)=1$ for any $k\in\mathbb{Z}$. This family of polynomials can be defined recursively for $k\in\mathbb{N}$ by $T_0(x)=1$, $T_1(x)=x$, and $T_k(x)=2xT_{k-1}(x)-T_{k-2}(x)$. Once one establishes this recursion relation, it follows that the function~$T_k$ is indeed a polynomial. \begin{ex} Justify the formulas for~$T_0$ and~$T_1$ and the recursion relation for~$T_k$ using the property that $\cos(kx)=T_k(\cos(x))$ for all $k\in\mathbb{N}$. \end{ex} Now we can proceed from~\eqref{eq:vv5} to \begin{equation} \mathcal{I} f_k(s,\phi) = 2e^{ik\phi} \int_s^1a_k(r)\frac{T_k(s/r)}{\sqrt{1-(s/r)^2}}\,\der r. \end{equation} Based on the last section (exercise~\ref{ex:xrt-fs} implies that the X-ray transform of the~$k$th Fourier component of~$f$ contains only the~$k$th Fourier component), we expected to pull out the factor~$e^{ik\phi}$, but the exact structure of the rest might be a bit of a surprise. \subsection{Abel transforms} \begin{definition} \label{def:abel} Fix any $k\in\mathbb{Z}$. For a continuous function $h\colon(0,1]\to\mathbb{C}$ we define a new continuous function $\mathcal{A}_kh\colon(0,1]\to\mathbb{C}$ by \begin{equation} \label{eq:Ak-def} (\mathcal{A}_kh)(s) = 2\int_s^1h(r)\frac{T_k(s/r)}{\sqrt{1-(s/r)^2}}\,\der r.
\end{equation} Here~$T_k$ is the $\abs{k}$th Chebyshev polynomial. We call~$\mathcal{A}_k$ the~$k$th generalized Abel transform. \end{definition} \begin{ex} The integral above is actually only defined for $s\in(0,1)$. Show that $\lim_{s\to1}\mathcal{A}_kh(s)=0$, so that it makes sense to let $\mathcal{A}_kh(1)=0$ regardless of the value~$h(1)$. \end{ex} The reason for calling~$\mathcal{A}_k$ a generalized Abel transform is that for $k=0$ we have $T_0\equiv1$ and~$\mathcal{A}_0$ is (one form of) the Abel transform. These are all integral transforms that take one function on the interval and turn it into another function on the interval by means of an integral formula. We have thus found that if \begin{equation} f(r,\theta) = \sum_{k\in\mathbb{Z}}e^{ik\theta}a_k(r), \end{equation} then \begin{equation} \mathcal{I} f(r,\theta) = \sum_{k\in\mathbb{Z}}e^{ik\theta}\mathcal{A}_k a_k(r). \end{equation} This means that the Abel transforms are the operators on our block diagonal. We want to show that if $\mathcal{A}_ka_k=0$, then $a_k=0$, and hence $f_k=0$. That is, we want to show that the generalized Abel transform $\mathcal{A}_k\colon C((0,1])\to C((0,1])$ is an injection. (We will not need or prove that~$\mathcal{A}_k$ maps continuous functions to continuous functions.) \begin{lemma} \label{lma:abel} The generalized Abel transform $\mathcal{A}_k\colon C((0,1])\to C((0,1])$ is an injection. Moreover, $h\in C((0,1])$ can be calculated from~$\mathcal{A}_k h$ via \begin{equation} \label{eq:abel-inv} h(r) = -\frac1\pi\frac{\mathrm{d}}{\mathrm{d} r} \int_r^1\mathcal{A}_kh(s)\frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}}\,\der s. \end{equation} \end{lemma} \begin{proof} It suffices to prove~\eqref{eq:abel-inv}.
We start by examining the integral: \begin{equation} \label{eq:vv6} \begin{split} J(r) &\coloneqq \int_r^1\mathcal{A}_kh(s)\frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}}\,\der s \\&= \int_r^1 \left( 2\int_s^1h(t)\frac{T_k(s/t)}{\sqrt{1-(s/t)^2}}\,\der t \right) \frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}}\,\der s \\&= 2 \int_r^1 \int_s^1 h(t) \frac{T_k(s/t)}{\sqrt{1-(s/t)^2}} \frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}} \,\der t \,\der s \\&= 2 \int_r^1 \int_r^t h(t) \frac{T_k(s/t)}{\sqrt{1-(s/t)^2}} \frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}} \,\der s \,\der t \\&= 2 \int_r^1 h(t) K_k(r,t) \,\der t , \end{split} \end{equation} where \begin{equation} \label{eq:K-def} K_k(r,t) = \int_r^t \frac{T_k(s/t)}{\sqrt{1-(s/t)^2}} \frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}} \,\der s. \end{equation} Now, somewhat magically, \begin{equation} \label{eq:K-fact} K_k(r,t) = \frac\pi2 \quad \text{whenever $0<r<t$ and $k\in\mathbb{Z}$}. \end{equation} Some cases will be treated in the exercises. Therefore \begin{equation} J(r) = \pi \int_r^1 h(t) \,\der t. \end{equation} The desired result now follows by the fundamental theorem of calculus. \end{proof} \begin{ex} Explain why the limits work as they do when we applied Fubini's theorem in~\eqref{eq:vv6}. \end{ex} \begin{ex} Prove that for every $\lambda>0$ we have $K_k(\lambda r,\lambda t)=K_k(r,t)$. You can use this to make simplifying assumptions in subsequent calculations if you want to. \end{ex} \begin{ex} Make the change of variable $s^2=\frac12[(t^2+r^2)+y(t^2-r^2)]$ to~\eqref{eq:K-def} and simplify the resulting expression. It is wise to regard the measure as~$\mathrm{d} s/s$. You can leave the Chebyshev polynomials untouched. \end{ex} \begin{ex} Calculate the integral \begin{equation} \int_{-1}^1\frac{y}{\sqrt{1-y^2}}\,\der y \end{equation} by hand. \end{ex} \begin{ex} Calculate by hand \begin{equation} B = \int_{-1}^1\frac{1}{(a+y)\sqrt{1-y^2}}\,\der y, \end{equation} where $a>1$ is a real parameter. 
It may be convenient to differentiate $\arctan\left(\frac{1+ay}{\sqrt{(a^2-1)(1-y^2)}}\right)$. \end{ex} \begin{ex} Making use of $T_0(x)=1$, $T_1(x)=x$, and the previous exercises, calculate $K_0(r,t)$ and $K_1(r,t)$. \end{ex} \begin{bex} Prove the recurrence relation $K_{k+2}(r,t)=K_k(r,t)$. This together with the previous results shows~\eqref{eq:K-fact}. \end{bex} One can define an operator~$\mathcal{B}_k$ by \begin{equation} \mathcal{B}_kh(r) = -\frac1\pi \int_r^1h(s)\frac{T_k(s/r)}{s\sqrt{(s/r)^2-1}}\,\der s. \end{equation} It seems that~$\mathcal{B}_k$ is very similar in nature to~$\mathcal{A}_k$, so one expects it to have similar mapping properties. Since $\mathcal{B}_k\mathcal{A}_k h(r)=-\int_r^1h(t)\,\der t$, this means that both~$\mathcal{A}_k$ and~$\mathcal{B}_k$ are ``integrals of order~$\frac12$''. In fact, the X-ray transform does integrate by order~$\frac12$, but we will not give this statement a precise meaning in this course. Having Chebyshev polynomials in~$\mathcal{A}_k$ is not important at all for injectivity. It does help with finding an explicit inversion formula, but similar injectivity results are true in far more generality. The important things are the limits of integration and the kind of singularity at the lower limit. \subsection{Injectivity of the X-ray transform} We have now collected the needed tools, and it remains to state the result. \begin{theorem} \label{xrtthm:cormack} A continuous function ${\bar D^*}\to\mathbb{C}$ is uniquely determined by its integrals over all straight lines. \end{theorem} \begin{ex} Summarize the proof of the theorem in your own words. Refer to the key steps (equations, lemmas, exercises, or other). \end{ex} Observe that no regularity assumption was made at the origin. Singularities at the origin do not matter. \begin{bex} One can also go the other way around. Assume that theorem~\ref{xrtthm:cormack} is true (this has been proved with other methods in the previous section).
Use the tools developed in this and the previous section to prove that the generalized Abel transforms~$\mathcal{A}_k$ are injective. \end{bex} \subsection{Helgason's support theorem} In fact, even more is true than theorem~\ref{xrtthm:cormack}. \begin{proposition} \label{prop:helgason} Let $R\in(0,1)$. If a continuous function $f\colon\bar D\to\mathbb{C}$ integrates to zero over all lines with distance${}>R$ to the origin, then $f(x)=0$ when $\abs{x}>R$. \end{proposition} \begin{proof} Since $\mathcal{I} f(r,\theta)$ only depends on $f(s,\phi)$ for $s\geq r$, it follows that $\mathcal{A}_ka_k(s)=0$ for all $s>R$. The inversion formula for the generalized Abel transform~$\mathcal{A}_k$ is also valid for this case: If $h\colon(0,1]\to\mathbb{C}$ is continuous and $\mathcal{A}_kh(r)=0$ for all $r\in(R,1]$, then $h(s)=0$ for $s\in(R,1]$. Therefore $a_k(s)=0$ for all $s>R$ and $k\in\mathbb{Z}$, and so $f(s,\phi)=0$ for all $s>R$ and $\phi\in\mathbb{R}/2\pi\mathbb{Z}$. \end{proof} We may consider the disc~$\bar D(0,R)$ to be an obstacle. A sufficiently nice function is uniquely determined outside the obstacle by its integrals over all lines that avoid the obstacle. Of course, nothing can be said about the function inside the obstacle from this data. Results of this kind are often called support theorems for the X-ray transform. From a more physical point of view, this is a matter of exterior tomography --- there are actual physical obstacles in the real world that one cannot fire X-rays through. One of the most famous support theorems is due to Sigur\dh{}ur Helgason. We present a variant of the two-dimensional version. \begin{theorem}[Helgason's support theorem in the plane] \label{thm:helgason} Let $K\subset\mathbb{R}^2$ be a compact and convex set. Suppose $f\in C_c(\mathbb{R}^2)$ integrates to zero over all lines $L\subset\mathbb{R}^2$ for which $L\cap K=\emptyset$. Then $f|_{\mathbb{R}^2\setminus K}=0$. 
\end{theorem} \begin{ex} \label{ex:convex-intersection} Argue that a compact and convex planar set is the intersection of all closed discs containing it. Then prove theorem~\ref{thm:helgason} using proposition~\ref{prop:helgason}. (You may use the result that states that a compact convex set and a point outside it can be separated by a line which is disjoint from both the point and the set.) \end{ex} \begin{ex} Explain why Helgason's support theorem (often) fails if the compact set~$K$ is not convex. Also, what does the support theorem say if $K=\emptyset$? \end{ex} If~$K$ is not compact, the support theorem can fail. For example, if~$K$ is a closed half plane, then the data only contains integrals parallel to~$\partial K$, which is certainly insufficient. If a set is ``almost convex'', then Helgason's support theorem can still work. For example, if~$K$ is the union of a closed ball and a point, the theorem is still valid as stated. This is because, in some sense, a single point is removable --- the missing lines can be approximated by existing ones. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{Radon's inversion method} \label{sec:radon} In this section we will give our third injectivity proof, using the historically first method, due to Johann Radon in 1917. \subsection{The X-ray transform and circular averages} We will reconstruct a function in~$C_c(\mathbb{R}^2)$ from its line integrals with Radon's method. As we shall see, it is closely related to the previous method based on the angular Fourier series. Our notation follows mainly that of Radon's original work, but we have made some adjustments. The circular average of~$f$ over the circle centered at $x\in\mathbb{R}^2$ with radius $r>0$ is \begin{equation} \label{eq:r-fbar} \bar f_x(r) = \frac1{2\pi} \int_0^{2\pi} f(x_1+r\cos(\theta),x_2+r\sin(\theta))\,\der\theta.
\end{equation} \begin{ex} Fix any $r\in\mathbb{R}$ and $\theta\in\mathbb{R}/2\pi\mathbb{Z}=\mathbb{T}^1$. Consider the curve $\gamma_{r,\theta}\colon\mathbb{R}\to\mathbb{R}^2$ given by \begin{equation} \gamma_{r,\theta}(t) = (r\cos(\theta)-t\sin(\theta),r\sin(\theta)+t\cos(\theta)). \end{equation} Show that its closest point to the origin is at $(r\cos(\theta),r\sin(\theta))$, that $\abs{\dot\gamma(t)}=1$ for all $t\in\mathbb{R}$, and that $\gamma(\mathbb{R})=\{x\in\mathbb{R}^2;x_1\cos(\theta)+x_2\sin(\theta)=r\}$. \end{ex} Using these~$r$ and~$\theta$ we can parametrize all the lines in the plane, including the ones going through the origin. There is a two-fold redundancy as exercise~\ref{ex:r-flip} shows. We define the X-ray transform of $f\in C_c(\mathbb{R}^2)$ as $\mathcal{I} f\colon\mathbb{R}\times\mathbb{T}^1\to\mathbb{R}$ given by the formula \begin{equation} \mathcal{I} f(r,\theta) = \int_\mathbb{R} f(\gamma_{r,\theta}(t))\,\der t. \end{equation} \begin{ex} Explain why~$\mathcal{I} f$ is bounded and continuous when $f\in C_c(\mathbb{R}^2)$. \end{ex} \begin{ex} \label{ex:r-flip} Show that $\mathcal{I} f(r,\theta)=\mathcal{I} f(-r,\theta+\pi)$. How do the curves~$\gamma_{r,\theta}$ and~$\gamma_{-r,\theta+\pi}$ differ? \end{ex} We will also define a circular average of the X-ray transform~$\mathcal{I} f$. The average over the circle with center $x\in\mathbb{R}^2$ and radius $r>0$ is defined to be \begin{equation} \overline{\mathcal{I} f}_x(r) = \frac1{2\pi} \int_0^{2\pi} \mathcal{I} f(x_1\cos(\theta)+x_2\sin(\theta)+r,\theta)\,\der\theta. \end{equation} We will verify in the next exercise that this formula is geometrically correct; this is really the average over all the lines tangent to the said circle. \begin{ex} Consider the circle with center~$x$ and radius~$r$. Take any angle $\theta\in\mathbb{R}/2\pi\mathbb{Z}$. Consider the point~$z$ on the circle where the exterior unit normal vector is $(\cos(\theta),\sin(\theta))$. 
Let~$L$ be the line tangent to the circle at~$z$. Show that the integral of~$f$ over~$L$ is \begin{equation} \mathcal{I} f(x_1\cos(\theta)+x_2\sin(\theta)+r,\theta). \end{equation} Draw a picture to illustrate the situation. \end{ex} We will reconstruct~$f$ from~$\mathcal{I} f$ via~$\overline{\mathcal{I} f}$. \subsection{Reduction to the Abel transform} The key to the proof is the following integral identity. \begin{lemma} \label{lma:radon-identity} If $f\in C_c(\mathbb{R}^2)$, $x\in\mathbb{R}^2$, and $r>0$, then \begin{equation} \overline{\mathcal{I} f}_x(r) = 2\int_r^\infty\frac{\bar f_x(s)s}{\sqrt{s^2-r^2}}\,\der s. \end{equation} \end{lemma} \begin{proof} By translation invariance we may assume that $x=0$ (exercise). Consider $r>0$ fixed. Define $\psi_r\colon[0,\infty)\times\mathbb{T}^1\to\mathbb{R}^2\setminus D(0,r)$ by \begin{equation} \label{eq:r-psi} \psi_r(t,\theta) = (r\cos(\theta)-t\sin(\theta),r\sin(\theta)+t\cos(\theta)) = \gamma_{r,\theta}(t). \end{equation} This is a diffeomorphism and the Jacobian determinant is simply~$t$ (exercise).
A computation gives \begin{equation} \label{eq:r-id-calc} \begin{split} \overline{\mathcal{I} f}_0(r) &\stackrel{\text{a}}{=} \frac1{2\pi} \int_0^{2\pi} \mathcal{I} f(r,\theta)\,\der\theta \\&\stackrel{\text{b}}{=} \frac1{2\pi} \int_0^{2\pi} \int_\mathbb{R} f(r\cos(\theta)-t\sin(\theta),r\sin(\theta)+t\cos(\theta)) \,\der t \,\der\theta \\&\stackrel{\text{c}}{=} \frac1{\pi} \int_{\mathbb{T}^1} \int_0^\infty f(r\cos(\theta)-t\sin(\theta),r\sin(\theta)+t\cos(\theta)) t^{-1} t \,\der t \,\der\theta \\&\stackrel{\text{d}}{=} \frac1{\pi} \int_{[0,\infty)\times\mathbb{T}^1} f(\psi_r(t,\theta)) \left(\abs{\psi_r(t,\theta)}^2-r^2\right)^{-1/2} t \,\der t \,\der\theta \\&\stackrel{\text{e}}{=} \frac1{\pi} \int_{\mathbb{R}^2\setminus D(0,r)} f(x) \left(\abs{x}^2-r^2\right)^{-1/2} \,\der x \\&\stackrel{\text{f}}{=} \frac1{\pi} \int_r^\infty \left( \int_0^{2\pi} \frac{f(s\cos(\theta),s\sin(\theta))}{\sqrt{s^2-r^2}} \,\der\theta \right) s\,\der s \\&\stackrel{\text{g}}{=} 2 \int_r^\infty \frac{\bar f_0(s)\,s}{\sqrt{s^2-r^2}} \,\der s. \end{split} \end{equation} This is the claimed identity for $x=0$. \end{proof} \begin{ex} Fix any $a\in\mathbb{R}^2$ and denote by $T_a\colon C_c(\mathbb{R}^2)\to C_c(\mathbb{R}^2)$ the translation operator defined by $T_af(x)=f(x+a)$. Explain geometrically why \begin{equation} \bar f_x(r) = \overline{T_xf}_0(r) \end{equation} and \begin{equation} \overline{\mathcal{I} f}_x(r) = \overline{\mathcal{I} T_xf}_0(r). \end{equation} This means that the statement of lemma~\ref{lma:radon-identity} can be formulated in terms of shifted functions while keeping all the circles centered at the origin. \end{ex} \begin{ex} Explain why the function~$\psi_r$ defined in~\eqref{eq:r-psi} is a bijection. You can choose algebraic calculation, geometric reasoning, or a combination thereof. \end{ex} \begin{ex} Show that the Jacobian determinant $\det(D\psi_r(t,\theta))$ of~$\psi_r$ is~$t$.
\end{ex} \begin{ex} Explain briefly what happened in the steps a--g of~\eqref{eq:r-id-calc}. \end{ex} We define the Abel transform of a compactly supported continuous function $h\colon[0,\infty)\to\mathbb{R}$ to be $\mathcal{A} h\colon(0,\infty)\to\mathbb{R}$ given by \begin{equation} \mathcal{A} h(r) = 2\int_r^\infty \frac{h(s)s}{\sqrt{s^2-r^2}}\,\der s. \end{equation} With the help of this notation we can rewrite lemma~\ref{lma:radon-identity} as \begin{equation} \overline{\mathcal{I} f}_x(r) = \mathcal{A}\bar f_x(r). \end{equation} Comparing to~\eqref{eq:Ak-def}, we see that in fact $\mathcal{A}=\mathcal{A}_0$ apart from the upper limit of integration. As long as this limit is finite --- as it is due to the compact support of~$h$ --- we may use the same inversion formula~\eqref{eq:abel-inv} to invert~$\mathcal{A}$. We only proved injectivity of~$\mathcal{A}_k$ for $k=0$ and $k=\pm1$ by hand, and here we only need the special case $k=0$. We have \begin{equation} \label{eq:radon-inv-xr} \bar f_x(r) = -\frac1\pi\frac{\mathrm{d}}{\mathrm{d} r} \int_r^\infty\frac{\overline{\mathcal{I} f}_x(s)}{s\sqrt{(s/r)^2-1}}\,\der s \end{equation} for all $x\in\mathbb{R}^2$ and $r>0$. \begin{ex} Prove that $\lim_{r\to0}\bar f_x(r)=f(x)$ when $f\colon\mathbb{R}^2\to\mathbb{R}$ is continuous. \end{ex} This little observation together with the identity~\eqref{eq:radon-inv-xr} shows that~$\mathcal{I} f$ determines~$f(x)$ for all~$x$ and therefore proves the desired injectivity result. However, the formula becomes more useful when we actually calculate the limit, and this we shall do next. \subsection{An explicit inversion formula} To find the explicit inversion formula without requiring too many tools, we make the additional assumption that $f\in C^2_c(\mathbb{R}^2)$. Fix any point $x\in\mathbb{R}^2$. Let us denote $F(r)=\overline{\mathcal{I} f}_x(r)$. It follows from this regularity assumption that $F\in C^2_c(\mathbb{R})$.
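Before carrying out the computation, it may be reassuring to check lemma~\ref{lma:radon-identity} numerically. In the sketch below (Python; the Gaussian test function and quadrature parameters are our own choices) the left-hand side is computed by averaging line integrals and the right-hand side via the substitution $s=\sqrt{r^2+z^2}$, which removes the integrable singularity:

```python
import numpy as np

f = lambda x, y: np.exp(-((x - 0.3)**2 + y**2))     # test function, not radial about 0

def circ_avg(s, n=512):                              # \bar f_0(s)
    th = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.mean(f(s * np.cos(th), s * np.sin(th)))

def line_integral(r, th, T=6.0, n=4000):             # integral over the line x . v_th = r
    t = -T + 2 * T * (np.arange(n) + 0.5) / n        # midpoint rule on [-T, T]
    return np.mean(f(r * np.cos(th) - t * np.sin(th),
                     r * np.sin(th) + t * np.cos(th))) * 2 * T

def avg_xray(r, n=256):                              # left-hand side of the lemma
    th = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.mean([line_integral(r, a) for a in th])

def abel_side(r, Z=6.0, n=4000):                     # right-hand side, s = sqrt(r^2 + z^2)
    z = Z * (np.arange(n) + 0.5) / n
    return 2 * np.mean([circ_avg(np.sqrt(r**2 + zz**2)) for zz in z]) * Z

r = 0.8
assert abs(avg_xray(r) - abel_side(r)) < 1e-4
```

The two sides agree to quadrature accuracy, as the lemma predicts.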
Notice that the same formula can be used to define~$F(r)$ for all $r\in\mathbb{R}$, not only $r>0$. If we denote \begin{equation} G(r) = \int_r^\infty\frac{F(s)}{s\sqrt{(s/r)^2-1}}\,\der s, \end{equation} the problem is to find \begin{equation} f(x)=-\frac1\pi\lim_{r\to0}G'(r). \end{equation} Let us first find a new formula for~$G'(r)$ when $r>0$. After the change of variable from $s\in(r,\infty)$ to $z=\sqrt{s^2-r^2}\in(0,\infty)$, we get \begin{equation} \label{eq:r-vv1} G(r) = \int_0^\infty\frac{rF(\sqrt{z^2+r^2})}{z^2+r^2}\,\der z. \end{equation} Observe that the limits no longer depend on~$r$. \begin{ex} Make this change of variable and verify the formula above for~$G(r)$. \end{ex} When $r>0$, we may easily differentiate under the integral sign, and we obtain \begin{equation} \label{eq:r-vv2} \begin{split} G'(r) &= \int_0^\infty \Bigg( F'(\sqrt{z^2+r^2})\frac{r^2}{(z^2+r^2)^{3/2}} \\&\qquad + F(\sqrt{z^2+r^2})\frac{z^2-r^2}{(z^2+r^2)^{2}} \Bigg)\,\der z. \end{split} \end{equation} Integrating by parts in the second term gives \begin{equation} \label{eq:r-vv3} G'(r) = \int_0^\infty \frac{F'(\sqrt{z^2+r^2})}{\sqrt{z^2+r^2}} \,\der z. \end{equation} \begin{ex} Justify the steps from~\eqref{eq:r-vv1} to~\eqref{eq:r-vv2} and~\eqref{eq:r-vv3}. \end{ex} Due to the symmetry property (see exercise~\ref{ex:r-flip}) of $\mathcal{I} f(r,\theta)$, we have $F(r)=F(-r)$. Since $F\in C^2_c(\mathbb{R})$, it then follows that $F'(0)=0$ and $\abs{F'(r)}\leq C\abs{r}$ for some constant~$C$. This will help us study the integral in~\eqref{eq:r-vv3}. \begin{ex} Show that if $F\colon\mathbb{R}\to\mathbb{R}$ satisfies $F(x)=F(-x)$ for all $x\in\mathbb{R}$ and is differentiable at the origin, then $F'(0)=0$. \end{ex} The natural guess is that the limit $\lim_{r\to0}G'(r)$ would be \begin{equation} L = \int_0^\infty \frac{F'(z)}{z} \,\der z. \end{equation} Notice that since $\abs{F'(z)}\leq C\abs{z}$ and~$F'$ is compactly supported and continuous, the integral~$L$ exists.
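The guess can be tested numerically on a concrete radial example. Take $f(x)=\max(1-\abs{x}^2,0)$ and $x=0$; one can check (via the Abel transform) that then $F(r)=\frac43(1-r^2)^{3/2}$ for $\abs{r}\leq1$, so $F'(z)=-4z\sqrt{1-z^2}$, $F'(0)=0$, $L=-\pi$, and the reconstruction $-L/\pi$ should return $f(0)=1$. A minimal sketch (Python; the example and the helper names are ours):

```python
import math

# Assumed radial example: f(x) = max(1 - |x|^2, 0) and x = 0, for which
# F(r) = (4/3)(1 - r^2)^(3/2) on |r| <= 1, hence F'(z) = -4 z sqrt(1 - z^2).
def Fp(z):
    return -4.0 * z * math.sqrt(max(1.0 - z * z, 0.0))

def midpoint(g, a, b, n=200000):
    dz = (b - a) / n
    return dz * sum(g(a + (i + 0.5) * dz) for i in range(n))

# L = int_0^infty F'(z)/z dz; here F'(z)/z = -4 sqrt(1 - z^2), so L = -pi.
L = midpoint(lambda z: Fp(z) / z, 0.0, 1.0)
assert abs(L + math.pi) < 1e-3

# G'(r) from the formula int_0^infty F'(sqrt(z^2+r^2))/sqrt(z^2+r^2) dz
def Gp(r):
    return midpoint(lambda z: Fp(math.hypot(z, r)) / math.hypot(z, r),
                    0.0, 1.0, 20000)

assert abs(Gp(0.05) - L) < 0.02       # G'(r) -> L as r -> 0
assert abs(-L / math.pi - 1.0) < 1e-3  # reconstruction gives f(0) = 1
```

For this example one can even compute $G'(r)=-\pi(1-r^2)$ in closed form, so $\abs{G'(r)-L}=\pi r^2$, comfortably within the bound $2Cr$ proved below.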
We have \begin{equation} \begin{split} \abs{G'(r)-L} &= \abs{ \int_0^\infty \left( \frac{F'(\sqrt{z^2+r^2})}{\sqrt{z^2+r^2}} - \frac{F'(z)}{z} \right) \,\der z } \\&= \Bigg\lvert \int_0^\infty \Bigg( \frac{F'(\sqrt{z^2+r^2})-F'(z)}{\sqrt{z^2+r^2}} \\&\qquad+ F'(z)\left(\frac1{\sqrt{z^2+r^2}}-\frac1z\right) \Bigg) \,\der z \Bigg\rvert \\&\leq \int_0^\infty \Bigg( \frac{\abs{F'(\sqrt{z^2+r^2})-F'(z)}}{\sqrt{z^2+r^2}} \\&\qquad+ \abs{F'(z)} \abs{\frac{z-\sqrt{z^2+r^2}}{z\sqrt{z^2+r^2}}} \Bigg) \,\der z \\&\leq \int_0^\infty \left( \frac{C\abs{\sqrt{z^2+r^2}-z}}{\sqrt{z^2+r^2}} + Cz \frac{\sqrt{z^2+r^2}-z}{z\sqrt{z^2+r^2}} \right) \,\der z \\&= 2C \int_0^\infty \frac{\sqrt{z^2+r^2}-z}{\sqrt{z^2+r^2}} \,\der z \\&= 2C \left[ z-\sqrt{z^2+r^2} \right]_0^\infty \\&= 2Cr. \end{split} \end{equation} Therefore we have proved that $G'(r)\to L$ as $r\to0$. Let us collect our findings into a theorem: \begin{theorem} \label{xrtthm:radon} A function $f\in C_c(\mathbb{R}^2)$ is uniquely determined by its X-ray transform. Moreover, if $f\in C^2_c(\mathbb{R}^2)$, it can be reconstructed pointwise by \begin{equation} f(x) = -\frac1\pi \int_0^\infty \frac{\frac{\mathrm{d}}{\mathrm{d} r}\overline{\mathcal{I} f}_x(r)}{r} \,\der r. \end{equation} \end{theorem} \begin{ex} Summarize the proof of theorem~\ref{xrtthm:radon}. \end{ex} The reconstruction formula can also be written as a Stieltjes integral like Radon did: \begin{equation} f(x) = -\frac1\pi \int_0^\infty \frac{\mathrm{d}\overline{\mathcal{I} f}_x(r)}{r} . \end{equation} \subsection{Relation to the angular Fourier series} Let us now see how the methods of section~\ref{sec:ang-fs} are related to the idea of this section. By translation invariance it suffices to show that~$\mathcal{I} f$ uniquely determines~$f(0)$. We write the function $f(r,\theta)$ as a Fourier series in~$\theta$: \begin{equation} f(r,\theta) = \sum_{k\in\mathbb{Z}}a_k(r)e^{ik\theta}.
\end{equation} We also write the X-ray transform as a Fourier series: \begin{equation} \mathcal{I} f(r,\theta) = \sum_{k\in\mathbb{Z}}b_k(r)e^{ik\theta}. \end{equation} As discussed in section~\ref{sec:ang-fs} the function~$b_k$ only depends on the function~$a_k$, and we found in section~\ref{sec:abel} that $b_k=\mathcal{A}_ka_k$. Let us look at $k=0$. Now $a_0(r)=\bar f_0(r)$ and $b_0(r)=\overline{\mathcal{I} f}_0(r)$. The two observations $b_0=\mathcal{A}_0a_0$ and $\overline{\mathcal{I} f}_0(r)=\mathcal{A}\bar f_0(r)$ are therefore the same. The function~$a_0$ can be reconstructed from~$b_0$ by inverting the Abel transform. Now $f(0)=\lim_{r\to0}a_0(r)$, which gives a reconstruction formula at the origin. In conclusion, it is enough to look at the zeroth component of the angular Fourier series if one varies the origin of the polar coordinates. The reconstruction works at any chosen origin. The zeroth component of the angular Fourier series is nothing but the circular average. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{The geometry of Euclidean geodesics} \label{sec:geod-geom} In this section and the next two sections we will give our fourth injectivity proof based on analysis on the sphere bundle. \subsection{The sphere bundle} As silly as it sounds, we will now study the geometry of straight lines in the Euclidean space~$\mathbb{R}^n$. These same geometrical ideas will remain valid and applicable on Riemannian manifolds. We will restrict our attention to Euclidean spaces, but some differential geometric ideas will be involved, and we will have to consider a certain non-Euclidean space. In this section we will study straight lines as curves parametrized by arc length. This is most conveniently done on the sphere bundle \begin{equation} S\mathbb{R}^n = \mathbb{R}^n\times S^{n-1} \end{equation} of the Euclidean space~$\mathbb{R}^n$.
This is a bundle over~$\mathbb{R}^n$ and comes with the natural projection $\pi\colon S\mathbb{R}^n\to\mathbb{R}^n$ to the first component. In the Euclidean setting the bundle is simply a product of two spaces, so the bundle is trivial. In a non-Euclidean situation the bundle structure becomes more complicated. A point $(x,v)\in S\mathbb{R}^n$ describes a point and a velocity at that point. The velocity variable~$v$ takes values in the fiber~$S^{n-1}$ of the bundle. Parametrization by unit speed instead of arbitrary speed is very convenient, as it makes the fibers compact. The usefulness becomes apparent when we integrate over a sphere bundle later on. There are two kinds of directions on~$S\mathbb{R}^n$. Directions in the~$\mathbb{R}^n$ component are called horizontal and those in~$S^{n-1}$ are called vertical. This terminology will reappear in the next section when we consider horizontal and vertical derivatives of a function on the sphere bundle. This choice of words corresponds to the canonical way of drawing the base~$\mathbb{R}^n$ of the bundle horizontally and the fibers~$S^{n-1}$ vertically. \subsection{The geodesic flow} Straight lines are geodesics. There is a dynamical system associated with geodesics, and we will examine it next. \begin{definition} \label{def:dyn-sys} A continuous time dynamical system on a set~$Z$ is a function $\phi\colon\mathbb{R}\times Z\to Z$ which satisfies $\phi_0(z)=z$ and $\phi_s(\phi_t(z))=\phi_{s+t}(z)$ for all $z\in Z$ and $s,t\in\mathbb{R}$. \end{definition} A dynamical system describes the time evolution of a point in the phase space~$Z$. Every $z\in Z$ has a unique trajectory or integral curve $t\mapsto\phi_t(z)$. It is also possible to view a dynamical system more algebraically, as the action of the additive group~$\mathbb{R}$ on the set~$Z$. \begin{ex} \label{ex:ds1} Which of the following are dynamical systems on~$\mathbb{R}$? \begin{enumerate}[(a)] \item $\phi_t(z)=z+3t$. \item $\phi_t(z)=4$. \item $\phi_t(z)=z^t$.
\item $\phi_t(z)=z-t^2$. \item $\phi_t(z)=e^{-2t}z$. \item $\phi_t(z)=z+tz$. \end{enumerate} Explain briefly. \end{ex} The geodesic flow is a dynamical system on the sphere bundle. It is simply given by \begin{equation} \phi_t(x,v) = (x+tv,v). \end{equation} The geodesic flow could be equally well defined on any bundle $\mathbb{R}^n\times A$ for $A\subset\mathbb{R}^n$ with the same formula. The most natural choices are $A=\mathbb{R}^n$ (all velocities possible) and $A=S^{n-1}$ (unit speed geodesics). This is what we meant above by saying that unit speed geodesics make fibers compact. \begin{ex} Why cannot the geodesic flow on~$\mathbb{R}^n$ be a dynamical system on~$\mathbb{R}^n$? How does moving to the sphere bundle help? \end{ex} As in the case of geodesics, dynamical systems are often studied on manifolds. Then there is a vector field~$W$ on the manifold~$Z$ so that for any initial point $z\in Z$ the function $f(t)=\phi_t(z)$ solves the differential equation $f'(t)=W(f(t))$. Such a vector field is called the generator of the flow. Whenever~$\phi$ is smooth enough, the generator exists and can be computed by differentiating the flow with respect to~$t$ --- that is, $W(z)=\partial_t\phi_t(z)|_{t=0}$. One can also impose much more structure on a flow but it will not be necessary for us here. We only remark that the geodesic flow can be seen as a contact flow or a Hamiltonian flow. \begin{ex} On the real line~$\mathbb{R}$ a vector field can be considered to be just a function $\mathbb{R}\to\mathbb{R}$. Go back to the dynamical systems of exercise~\ref{ex:ds1}. What are their generators? \end{ex} The generator of the geodesic flow is called the geodesic vector field, and it is denoted by~$X$. It is typical in differential geometry to identify a vector field with the associated differential operator. 
For example, a vector field $w\colon\mathbb{R}^n\to\mathbb{R}^n$ is identified with the differential operator $f\mapsto w\cdot\nabla f$ which maps scalar functions to scalar functions. For us the geodesic vector field is just a differential operator, but we still call it a vector field to follow standard terminology. The differential operator corresponding to the generator of the flow is the derivative along the flow. Consider a function $u\colon S\mathbb{R}^n\to\mathbb{R}$. The geodesic vector field is defined to be \begin{equation} Xu(x,v) = \partial_t u(\phi_t(x,v))|_{t=0}. \end{equation} \begin{ex} Using the definition of the geodesic flow, find a formula for the geodesic vector field. For a function $u\colon\mathbb{R}^n\times S^{n-1}\to\mathbb{R}$, let us denote the gradient with respect to the first component by~$\nabla_x u$. \end{ex} If we were to write~$X$ as a vector field instead of a differential operator, it would be $X(x,v)=(v,0)$. The second component is the zero vector field on~$S^{n-1}$. The fact that the second component vanishes means that~$X$ is horizontal. \begin{ex} Suppose $u(x,v)=x\cdot v$. What is $Xu(x,v)$? \end{ex} A geodesic on~$\mathbb{R}^n$ is the projection of a trajectory of the geodesic flow. Trajectories are of the form $t\mapsto\phi_t(x,v)=(x+tv,v)$, and geodesics are of the form $t\mapsto\pi(\phi_t(x,v))=x+tv$. This process can also be reversed. If $\gamma\colon\mathbb{R}\to\mathbb{R}^n$ is a differentiable unit speed curve, we can define its lift $\tilde\gamma\colon\mathbb{R}\to S\mathbb{R}^n$ by $\tilde\gamma(t)=(\gamma(t),\dot\gamma(t))$. The lift of a geodesic is a trajectory of the geodesic flow. \subsection{The manifold of geodesics} Previously we discussed the set~$\Gamma$ of all straight lines in~$\mathbb{R}^n$. We can describe its structure a little more now. 
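Before doing so, the geodesic flow and its generator admit a quick numerical sanity check. A minimal sketch (Python; the test function $u(x,v)=\sin(x\cdot v)$ is an arbitrary choice of ours):

```python
import math

def flow(t, x, v):
    # geodesic flow on the sphere bundle: phi_t(x, v) = (x + t v, v)
    return tuple(xi + t * vi for xi, vi in zip(x, v)), v

x, v = (0.3, -0.2), (math.cos(0.7), math.sin(0.7))  # |v| = 1

# group law: phi_s(phi_t(x, v)) = phi_{s+t}(x, v)
s, t = 0.4, -1.1
y1, w1 = flow(s, *flow(t, x, v))
y2, w2 = flow(s + t, x, v)
assert all(abs(a - b) < 1e-12 for a, b in zip(y1, y2)) and w1 == w2

# generator: X u(x, v) = d/dt u(phi_t(x, v)) at t = 0.
# For u(x, v) = sin(x . v) one gets X u(x, v) = cos(x . v) |v|^2 = cos(x . v).
def u(x, v):
    return math.sin(sum(a * b for a, b in zip(x, v)))

h = 1e-6
Xu = (u(*flow(h, x, v)) - u(*flow(-h, x, v))) / (2 * h)
assert abs(Xu - math.cos(sum(a * b for a, b in zip(x, v)))) < 1e-8
```

The last check uses only the flow-derivative definition of~$X$; compare it with the formula for~$X$ asked for in the exercise above.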
The geodesic flow gives rise to an equivalence relation on the sphere bundle, where~$(x,v)$ is considered equivalent to~$(x',v')$ if and only if $\phi_t(x,v)=(x',v')$ for some $t\in\mathbb{R}$. We can form the quotient space of the sphere bundle with this relation, and we denote it by $S\mathbb{R}^n/\phi$. This quotient space has the structure of a smooth Riemannian manifold. In general, quotient manifolds are somewhat ill-behaved, but this particular quotient does make sense. For geodesics on a general manifold this is no longer the case: the quotient is always sensible as a topological space, but the resulting space can be wild. This $S\mathbb{R}^n/\phi$ is the space of all oriented lines. We previously denoted it by~$\Gamma$. To get the set of all unoriented lines, one has to take another quotient to identify opposite orientations. This second quotient is well behaved. \begin{bex} Consider the geodesic flow on the sphere bundle of the torus~$\mathbb{T}^2$. The flow is well defined since the manifold is geodesically complete. The sphere bundle $S\mathbb{T}^2=\mathbb{T}^2\times S^1$ is a topological space and the quotient with respect to any equivalence relation can be given the quotient topology. Show that the topological quotient space $S\mathbb{T}^2/\phi$ is not Hausdorff. Does the same apply in~$\mathbb{T}^n$ in any dimension~$n$? \end{bex} \subsection{Bounded domains and integral functions} It will be convenient to consider geodesics on bounded sets. For a bounded set $\Omega\subset\mathbb{R}^n$ we define the unit sphere bundle~$S\Omega$ as $\Omega\times S^{n-1}$. This causes a technical inconvenience for the geodesic flow on~$S\Omega$, since it is not defined for all times. One can define a local dynamical system on a space~$Z$ so that for each $z\in Z$ the flow~$\phi_t(z)$ is defined for times~$t$ in some interval $J_z\subset\mathbb{R}$. For a simple example, consider $Z=[0,1)$ and $\phi_t(z)=z+t$ whenever it is well defined.
In this case $J_z=[-z,1-z)$. For $z\in(0,1)$ the interval~$J_z$ contains a neighborhood of $0\in\mathbb{R}$ and the dynamical system behaves locally just as well as the usual kind of a dynamical system, but one just cannot go arbitrarily far in time. Things are a little more complicated at $z=0$, where one can only follow the flow to the positive direction. The geodesic flow on a bounded domain is essentially of this type, with only one direction available at~$\partial(S\Omega)$. At a tangential boundary point the flow is stuck; it cannot move in either direction. Now, let $\Omega\subset\mathbb{R}^n$ be a smooth and strictly convex domain and~$\bar\Omega$ its closure. For $(x,v)\in S\Omega$, let $\tau(x,v)=\max\{t>0;\phi_t(x,v)\in S\bar\Omega\}$. This is the time it takes for a geodesic starting at $(x,v)$ to escape~$\Omega$. The boundary of the sphere bundle~$S\Omega$ is $\partial(S\Omega)=\partial\Omega\times S^{n-1}$. Observe that $\partial(S\Omega)=S\bar\Omega\setminus S\Omega$. For $x\in\partial\Omega$, let~$\nu(x)$ denote the outer unit normal vector to~$\partial\Omega$. A point $(x,v)\in\partial(S\Omega)$ is called an inward boundary point if $v\cdot\nu(x)<0$. Similarly, the outward part of the boundary consists of the points in~$\partial(S\Omega)$ where the inner product is positive, and the tangential part of the points where it is zero. Let us denote the inward boundary by ${\partial_{\text{in}}(S\Omega)}\subset\partial(S\Omega)$. \begin{ex} The definition of~$\tau(x,v)$ can be naturally extended to all $(x,v)\in S\bar\Omega$. What is the definition at an inward boundary point? What should~$\tau$ be defined to be at other boundary points? \end{ex} \begin{ex} \label{ex:tau-c1} The domain $\Omega\subset\mathbb{R}^n$ is called smooth if there is a smooth boundary defining function $\rho\colon\mathbb{R}^n\to\mathbb{R}$ so that $\Omega=\{x\in\mathbb{R}^n;\rho(x)>0\}$ and $\nabla\rho\neq0$ at~$\partial\Omega$.
By smoothness of~$\partial\Omega$ we refer to the smoothness of the boundary defining function~$\rho$. (This is equivalent with~$\partial\Omega$ being locally a graph of the required smoothness.) In addition, we may assume that $\abs{\nabla\rho(x)}=1$ for all $x\in\partial\Omega$. Then the outer unit normal is given by $\nu(x)=-\nabla\rho(x)$. Consider a point $(x,v)\in S\Omega$ for which $\phi_{\tau(x,v)}(x,v)$ points outward (is not tangential). Use the implicit function theorem to show that the function~$\tau$ is~$C^1$ in a neighborhood of~$(x,v)$. \end{ex} We define the integral function $u^f\colon S\bar\Omega\to\mathbb{R}$ of a function $f\colon S\bar\Omega\to\mathbb{R}$ as \begin{equation} \label{eq:uf-def} u^f(x,v) = \int_0^{\tau(x,v)}f(\phi_t(x,v))\,\der t \end{equation} whenever this integral makes sense. In words, $u^f(x,v)$ is the integral of~$f$ over the lift of the geodesic starting at the point~$x$ in the direction~$v$. This kind of integral function will play a big role in our next proof of injectivity of the X-ray transform. \begin{ex} \label{ex:u-characteristic} Let~$\Omega$ be the unit ball and $f\equiv1$ the constant function on~$\bar\Omega$. Find a formula for $u^f\colon S\bar\Omega\to\mathbb{R}$. As you will notice, the resulting function has differentiability issues at the tangential part of the boundary. \end{ex} This integral function satisfies a fundamental theorem of calculus: \begin{ex} \label{ex:ftc-SM} Prove that $Xu^f=-f$ in~$S\Omega$ for $f\in C(S\bar\Omega)$. \end{ex} Previously we defined the X-ray transform of a scalar function $f\colon\mathbb{R}^n\to\mathbb{R}$. With the help of the previous exercise we can in fact define the X-ray transform of a compactly supported continuous function $f\colon S\mathbb{R}^n\to\mathbb{R}$. This leads to so-called tensor tomography. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear?
Were there mistakes?\end{ex} \section{The sphere bundle in two dimensions} \label{sec:2D-SM} \subsection{Horizontal and vertical vector fields} To simplify matters, we choose $n=2$ for this and the next section. Now~$\Omega$ is a smooth and convex planar domain. The unit sphere bundle $S\Omega=\Omega\times S^1$ could also be called the circle bundle. When convenient, we may consider a function~$f$ on~$\Omega$ to be a function on~$S\Omega$ which just happens to be independent of~$v$. Formally, this amounts to replacing~$f$ with~$\pi^*f$. The pullback is defined as $\pi^*f=f\circ\pi$. The sphere bundle is three-dimensional. One special direction is given by the geodesic vector field. There is a second horizontal direction and one vertical direction, too, and we will study derivatives in these directions next. We will define the horizontal vector field and the vertical vector field. As in the case of the geodesic vector field, these will be differential operators. Points $(x,v)\in S\bar\Omega$ can be written as $(x,v_\theta)$ for $x\in\bar\Omega$ and $\theta\in\mathbb{R}/2\pi\mathbb{Z}$, where $v_\theta=(\cos(\theta),\sin(\theta))$. The vertical vector field is simply differentiation with respect to~$\theta$. That is, for $u\colon S\bar\Omega\to\mathbb{R}$ we define $Vu\colon S\bar\Omega\to\mathbb{R}$ by \begin{equation} Vu(x,v_\theta) = \partial_\theta u(x,v_\theta). \end{equation} \begin{ex} Let~$\Omega$ be the unit disc and write~$x$ in polar coordinates. Write the function~$u^f$ of exercise~\ref{ex:u-characteristic} in these coordinates (one radius and two angles). Calculate~$Vu^f$. (The derivative will blow up at the tangential part of the boundary of the sphere bundle. This is the typical place for regularity issues.) \end{ex} The geodesic vector field may be written as $Xu=v_\theta\cdot\nabla_xu=\cos(\theta)\partial_{x_1}u+\sin(\theta)\partial_{x_2}u$. Let~$v^\perp$ denote the rotation of~$v$ by~$\frac\pi2$ clockwise.
In terms of angles, $v_\theta^\perp=v_{\theta-\pi/2}$. We define the horizontal vector field~$X_\perp$ so that \begin{equation} \begin{split} X_\perp u(x,v_\theta) &= v^\perp_\theta\cdot\nabla_xu(x,v_\theta) \\&= \sin(\theta)\partial_{x_1}u(x,v_\theta) - \cos(\theta)\partial_{x_2}u(x,v_\theta). \end{split} \end{equation} \begin{ex} Earlier we used~$v^\perp$ to denote something else. How are these two~$v^\perp$s related in the two-dimensional setting? \end{ex} \begin{bex} For $f\in C^2(\Omega)$, let~$\pi^*f$ be the pullback over the projection $\pi\colon S\Omega\to\Omega$. Show that~$f$ is harmonic if and only if $(X^2+X_\perp^2)\pi^*f=0$ in~$S\Omega$. \end{bex} The geodesic vector field~$X$ at~$(x,v)$ is the derivative with respect to~$x$ in the direction of~$v$. The horizontal vector field~$X_\perp$ is the derivative with respect to~$x$ in the direction orthogonal to~$v$. The vertical vector field is the derivative with respect to the direction~$v$. As vector fields (as opposed to differential operators), these three vector fields are orthogonal and have unit length. They form an orthonormal basis of the tangent spaces of the sphere bundle. This happens on any two-dimensional Riemannian manifold when the sphere bundle is equipped with the so-called Sasaki metric. In higher dimensions the Sasaki metric is trickier, and its definition is best left to a separate course. In higher dimensions there are still natural horizontal and vertical derivatives, but they are no longer vector fields. To avoid technicalities, we stick to dimension two. \begin{ex} \label{ex:VXu=0} Show that if $f\in C(\bar\Omega)$, then $VXu^f=0$ in~$S\Omega$. Here $f\in C(\bar\Omega)$ is identified with $\pi^*f\in C(S\bar\Omega)$. \end{ex} \subsection{Commutators} \label{sec:commutators} To calculate with differential operators, we need a couple of basic tools. We need to be able to integrate by parts and change the order of differentiation.
Integration by parts comes in the next section, and now we will study what happens when the order of differentiation changes. In our situation the order of differentiation does matter, but it only matters to a lower order, so to speak. The effect of changing the order is captured by commutators. The commutator of two linear operators~$A$ and~$B$ is $[A,B]=AB-BA$. For example, consider the following two operators on functions on the real line: \begin{equation} \begin{split} (Af)(x)&=f'(x),\\ (Bf)(x)&=h(x)f(x), \end{split} \end{equation} where~$h$ is a sufficiently smooth function. Then \begin{equation} \begin{split} [A,B]f(x) &= (ABf)(x) - (BAf)(x) \\&= \frac{\mathrm{d}}{\mathrm{d} x}(h(x)f(x)) - h(x)f'(x) \\&= h'(x)f(x). \end{split} \end{equation} \begin{ex} Let us call~$A$ a first order differential operator on the real line if it is of the form \begin{equation} Af(x) = h(x)f'(x)+g(x)f(x) \end{equation} for some smooth functions~$h$ and~$g$. Show that the commutator of two first order differential operators is a first order differential operator. \end{ex} In general, the product of differential operators of orders~$k$ and~$m$ is a differential operator of order $k+m$, and the commutator has order $k+m-1$. The leading order derivative may vanish, in which case the order is actually lower. \begin{ex} Show that $[X,V]=X_\perp$. \end{ex} \begin{ex} Show that $[V,X_\perp]=X$. \end{ex} \begin{ex} Show that $[X,X_\perp]=0$. \end{ex} The case of a two-dimensional Riemannian manifold is surprisingly similar. The first two commutators above stay intact, and $[X,X_\perp]$ contains~$V$ and the curvature. In higher dimensions the formulas are a little trickier, but still the same in spirit. In fact, the only thing we need to know about commutators in the proof is the following lemma. Observe that~$X_\perp$ does not appear in the claim, but it is useful for the proof. \begin{lemma} \label{lma:XV,VX} Our vector fields satisfy \begin{equation} [XV,VX] = -X^2.
\end{equation} \end{lemma} \begin{proof} We simply use our commutator formulas and calculate: \begin{equation} \begin{split} [XV,VX] &= XVVX-VXXV \\&= (VX+X_\perp)VX-VX(VX+X_\perp) \\&= X_\perp VX-VXX_\perp \\&= [X_\perp,VX] \\&= [X_\perp,V]X+V[X_\perp,X] \\&= -X^2. \end{split} \end{equation} Here we used the commutator property $[A,BC]=[A,B]C+B[A,C]$, which is trivial to verify by hand. There are several ways to go about this calculation; the intermediate steps here are but an example. \end{proof} The commutator of two second order operators is typically of third order. In this particular case it happens to be second order because~$XV$ and~$VX$ only differ by the first order operator~$X_\perp$. \begin{ex} Consider linear operators ($n\times n$ matrices, for example) $A$, $B$, and~$C$. Show that $[A,BC]=[A,B]C+B[A,C]$ and $[AB,C]=A[B,C]+[A,C]B$. \end{ex} \begin{ex} Compute $[XVV,VX_\perp]$ and $[V^2,X_\perp^2]$. \end{ex} \subsection{Integration on the sphere bundle} The sphere bundle is a product space, and we can naturally use the product measure~$\Sigma$. Therefore the integral of $g\in C(S\bar\Omega)$ is \begin{equation} \int_{S\Omega}g(x,v)\,\der\Sigma(x,v) = \int_\Omega\int_{S^1}g(x,v)\,\der S(v)\,\der x. \end{equation} Alternatively, the~$S^1$ integral can be written as $\int_0^{2\pi}g(x,v_\theta)\,\der\theta$. The boundary~$\partial\Omega$ is a closed smooth curve, and we have a natural measure on it. One way to describe it is to write the curve as $\alpha\colon[0,L]\to\mathbb{R}^2$ with arc length parametrization and then integrate on the interval $[0,L]$. This gives rise to a measure~$\tilde\sigma$ on~$\partial(S\Omega)$, given by \begin{equation} \int_{\partial(S\Omega)}g\,\der\tilde\sigma = \int_{S^1} \int_0^L g(\alpha(t),v) \,\der t \,\der S(v) \end{equation} for any $g\in C(S\bar\Omega)$. It turns out that the measure $\sigma=\abs{v\cdot\nu(x)}\tilde\sigma$ is more natural. 
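Before moving on to integration, the commutator identities $[X,V]=X_\perp$ and $[V,X_\perp]=X$ from above can be sanity-checked with nested central differences. A minimal sketch (Python; the sample function~$u$ and the evaluation point are arbitrary choices of ours):

```python
import math

h = 1e-4  # step for central differences

def d(f, i, p):
    # central difference of f(x1, x2, theta) in variable i at point p
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

# the three vector fields as operators on functions of (x1, x2, theta)
X  = lambda f: lambda x1, x2, th: (math.cos(th) * d(f, 0, (x1, x2, th))
                                   + math.sin(th) * d(f, 1, (x1, x2, th)))
V  = lambda f: lambda x1, x2, th: d(f, 2, (x1, x2, th))
Xp = lambda f: lambda x1, x2, th: (math.sin(th) * d(f, 0, (x1, x2, th))
                                   - math.cos(th) * d(f, 1, (x1, x2, th)))

# an arbitrary smooth test function on the sphere bundle
u = lambda x1, x2, th: math.sin(x1) * math.cos(2 * th) + x2 * x2 * math.sin(th)

p = (0.3, -0.7, 0.9)
comm_XV = X(V(u))(*p) - V(X(u))(*p)
assert abs(comm_XV - Xp(u)(*p)) < 1e-4   # [X, V] = X_perp
comm_VXp = V(Xp(u))(*p) - Xp(V(u))(*p)
assert abs(comm_VXp - X(u)(*p)) < 1e-4   # [V, X_perp] = X
```

The same machinery also lets one test $[X,X_\perp]=0$ numerically.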
The measure~$\sigma$ will appear in a change of variables formula for integration over the sphere bundle. \begin{proposition}[Santal\'o's formula] \label{prop:santalo} Let $\Omega\subset\mathbb{R}^2$ be a convex, bounded, and smooth domain. For $g\in C(S\bar\Omega)$ we have \begin{equation} \int_{S\Omega}g\,\der\Sigma = \int_{\partial_{\text{in}}(S\Omega)} \int_0^{\tau(x,v)} g(\phi_t(x,v)) \,\der t \,\der\sigma(x,v). \end{equation} Alternatively, the integral can be taken over the entire~$\partial(S\Omega)$ since~$\tau$ vanishes outside~${\partial_{\text{in}}(S\Omega)}$. \end{proposition} \begin{proof} First, we change the order of integration in the integral over~$S\Omega$: \begin{equation} \label{eq:vv8} \begin{split} \int_{S\Omega}g\,\der\Sigma &= \int_\Omega\int_{S^1}g(x,v)\,\der S(v)\,\der x \\&= \int_{S^1}\int_\Omega g(x,v)\,\der x\,\der S(v). \end{split} \end{equation} Now fix any $v\in S^1$ and consider the inner integral \begin{equation} \label{eq:vv9} I(v) = \int_\Omega g(x,v)\,\der x. \end{equation} We extend~$g$ to~$S\mathbb{R}^2$ by zero for convenience. We write the plane as an orthogonal direct sum $\mathbb{R}^2=v\mathbb{R}\oplus v^\perp\mathbb{R}$. With this decomposition, we have \begin{equation} I(v) = \int_\mathbb{R} \int_\mathbb{R} g(sv^\perp+tv,v) \,\der t \,\der s . \end{equation} The inner integral is an integral along the geodesic flow as desired. We will turn the outer integral into an integral over the boundary. Let us denote \begin{equation} \partial_v\Omega = \{x\in\partial\Omega;v\cdot\nu(x)<0\}. \end{equation} We parametrize this part of the boundary by a (counterclockwise) unit speed curve $\beta\colon[0,L_v]\to\partial_v\Omega$. Observe that $\{(x,v);v\in S^1,x\in\partial_v\Omega\}={\partial_{\text{in}}(S\Omega)}$. Let us denote \begin{equation} a(v) = \min\{s\in\mathbb{R};(sv^\perp+v\mathbb{R})\cap\bar\Omega\neq\emptyset\} \end{equation} and \begin{equation} b(v) = \max\{s\in\mathbb{R};(sv^\perp+v\mathbb{R})\cap\bar\Omega\neq\emptyset\}.
\end{equation} Now, there is a function $w\colon(a(v),b(v))\to(0,L_v)$ so that $\beta(w(s))-sv^\perp\in v\mathbb{R}$. In fact,~$w$ is a~$C^1$ diffeomorphism with $w'(s)=-1/v\cdot\nu(\beta(w(s)))>0$. The details are left as an exercise. We change the variable of integration from~$s$ to $z=w(s)$ and obtain \begin{equation} \begin{split} I(v) &= \int_\mathbb{R} \int_\mathbb{R} g(sv^\perp+tv,v) \,\der t \,\der s \\&= \int_{a(v)}^{b(v)} \int_\mathbb{R} g(sv^\perp+tv,v) \,\der t \,\der s \\&= \int_{0}^{L_v} \int_\mathbb{R} g(w^{-1}(z)v^\perp+tv,v) \,\der t (-v\cdot\nu(\beta(z))) \,\der z \\&= \int_0^{L_v} \int_0^{\tau(\beta(z),v)} g(\beta(z)+tv,v) \,\der t \abs{v\cdot\nu(\beta(z))} \,\der z. \end{split} \end{equation} Since~$\beta$ is a subcurve of~$\alpha$ (restriction to a subinterval, possibly after rechoosing the initial and final point on~$\alpha$) and $\tau=0$ on the part $\alpha\setminus\beta$, we get \begin{equation} \label{eq:vv10} I(v) = \int_0^L \int_0^{\tau(\alpha(z),v)} g(\alpha(z)+tv,v) \,\der t \abs{v\cdot\nu(\alpha(z))} \,\der z. \end{equation} Combining~\eqref{eq:vv8}, \eqref{eq:vv9}, and \eqref{eq:vv10}, we find \begin{equation} \begin{split} \int_{S\Omega}g\,\der\Sigma &= \int_{S^1} \int_0^L \int_0^{\tau(\alpha(z),v)} g(\alpha(z)+tv,v) \,\der t \abs{v\cdot\nu(\alpha(z))} \,\der z \,\der S(v) \\&= \int_{\partial(S\Omega)} \left( \int_0^{\tau(x,v)} g(x+tv,v) \,\der t \right) \abs{v\cdot\nu(x)} \,\der\tilde\sigma(x,v) \\&= \int_{\partial(S\Omega)} \left( \int_0^{\tau(x,v)} g(\phi_t(x,v)) \,\der t \right) \,\der\sigma(x,v) \end{split} \end{equation} as claimed. \end{proof} \begin{ex} Draw a picture or two and explain what the function~$w$ does. \end{ex} \begin{ex} Explain why $w\in C^1$ and $w'(s)=-1/v\cdot\nu(\beta(w(s)))>0$. \end{ex} \begin{ex} Show that the measure of~$\Omega$ is $\frac1{2\pi}\int_{\partial_{\text{in}}(S\Omega)}\tau(x,v)\,\der\sigma(x,v)$. 
\end{ex} Santal\'o's formula states that the integral over the sphere bundle can be calculated by calculating it one geodesic at a time, first integrating over the (lifted) geodesic and then integrating over the initial points and directions of these geodesics at~${\partial_{\text{in}}(S\Omega)}$. The space of all geodesics through~$\Omega$ can be identified with~${\partial_{\text{in}}(S\Omega)}$. The measure spaces $(\Gamma,\mu)$ and $({\partial_{\text{in}}(S\Omega)},\sigma)$ (with Borel $\sigma$-algebras) are two descriptions of the same thing. The formula is a change of variables. It will help us find Green-type formulas for our three vector fields (see exercises below), and those will lead to integration by parts on the sphere bundle. The formula also gives rise to some integral properties which will be convenient. \begin{ex} Show that if $g\in C^1(S\bar\Omega)$, then \begin{equation} \int_{S\Omega}Vg\,\der\Sigma = 0. \end{equation} Santal\'o is not needed. \end{ex} \begin{ex} Show that if $g\in C^1(S\bar\Omega)$, then \begin{equation} \int_{S\Omega}Xg\,\der\Sigma = \int_{\partial_{\text{in}}(S\Omega)} \left( g(\phi_{\tau(x,v)}(x,v)) - g(x,v) \right) \,\der\sigma(x,v). \end{equation} Santal\'o is useful. \end{ex} \begin{ex} Show that if $g\in C^2(S\bar\Omega)$ and $g|_{\partial(S\Omega)}=0$, then \begin{equation} \int_{S\Omega}X_\perp g\,\der\Sigma = 0. \end{equation} Use the previous two exercises and the commutator formulas. \end{ex} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{X-ray tomography and the transport equation} \label{sec:pestov} \subsection{Integration revisited} For $g,h\in L^2(S\Omega,\Sigma)$ we write \begin{equation} \ip{g}{h} = \int_{S\Omega}gh\,\der\Sigma \end{equation} and $\aabs{g}=\sqrt{\ip{g}{g}}$. Our functions in this section are real-valued so no conjugation is needed.
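As an aside, Santal\'o's formula (proposition~\ref{prop:santalo}) is easy to test numerically on the unit disc, where the chord starting at a boundary point~$x$ in an inward direction~$v$ has length $-2x\cdot v$. A minimal sketch (Python; the test function $g(x,v)=\abs{x}^2$, independent of~$v$, is an arbitrary choice of ours):

```python
import math

# Test function on S(Omega) for the unit disc, chosen independent of v:
# g(x, v) = |x|^2.  The left-hand side of Santalo's formula is then
# int_{S Omega} g dSigma = 2*pi * int_Omega |x|^2 dx = pi^2.
g = lambda x1, x2: x1 * x1 + x2 * x2
lhs = math.pi ** 2

# Right-hand side: boundary point x = (cos a, sin a) has nu = x; the
# direction at angle a + b satisfies v . nu = cos b, is inward for
# b in (pi/2, 3pi/2), and the chord length is tau = -2 x . v = -2 cos b.
# The measure is d sigma = |v . nu| db da.  Midpoint rule throughout:
n = 100
da = 2 * math.pi / n
db = math.pi / n
rhs = 0.0
for i in range(n):
    a = (i + 0.5) * da
    x1, x2 = math.cos(a), math.sin(a)
    for j in range(n):
        b = math.pi / 2 + (j + 0.5) * db
        v1, v2 = math.cos(a + b), math.sin(a + b)
        tau = -2.0 * math.cos(b)
        dt = tau / n
        chord = dt * sum(g(x1 + (k + 0.5) * dt * v1,
                           x2 + (k + 0.5) * dt * v2) for k in range(n))
        rhs += chord * abs(math.cos(b)) * da * db

assert abs(rhs - lhs) / lhs < 0.01
```

Both sides equal $\pi^2$ for this choice of~$g$, and the discretized boundary integral reproduces this to within the quadrature error.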
The complex case is not harder, but the real approach is technically convenient. Santal\'o's formula (proposition~\ref{prop:santalo}) makes it easy to find integration by parts formulas for our three vector fields. \begin{lemma} \label{lma:sm-ibp} For any $g,h\in C^\infty_c(S\Omega)$ we have \begin{equation} \begin{split} \ip{g}{Xh} &= -\ip{Xg}{h}, \\ \ip{g}{Vh} &= -\ip{Vg}{h},\quad\text{and} \\ \ip{g}{X_\perp h} &= -\ip{X_\perp g}{h}. \end{split} \end{equation} \end{lemma} \begin{ex} Prove the lemma using results from the previous section. The next exercise is also useful. \end{ex} \begin{ex} An operator $A\colon C^\infty(S\Omega)\to C^\infty(S\Omega)$ is called a derivation if it is linear and satisfies $A(gh)=gAh+hAg$ and $A1=0$, where~$1$ stands for the constant function. Show that the commutator of two derivations is a derivation. Explain why~$X$, $V$, and~$X_\perp$ are derivations. \end{ex} \subsection{A second order PDE} Now we finally begin our analysis of the X-ray transform. We want to show that if $f\in C_c^\infty(\Omega)$ integrates to zero over all lines, then $f=0$. The starting point is the so-called transport equation $Xu^f=-f$. However, it will be more convenient to take a second derivative and pass to a second order homogeneous equation. First, let us define the integral function~$u^f$ as above. Recall from exercise~\ref{ex:VXu=0} that $VXu^f=0$. In addition,~$u^f$ satisfies the boundary condition $u^f|_{\partial(S\Omega)}=0$. For outward pointing and tangential directions, this is because $\tau=0$. For inward pointing directions, this is because $\mathcal{I} f=0$. \begin{ex} Explain how to identify the functions~$\mathcal{I} f$ and~$u^f|_{\partial_{\text{in}}(S\Omega)}$. \end{ex} The function $u=u^f$ solves the boundary value problem \begin{equation} \label{eq:bvp} \begin{cases} VXu=0 & \text{in }S\Omega\\ u=0 & \text{on }\partial(S\Omega). \end{cases} \end{equation} Clearly $u=0$ is a solution.
If the solution to this second order PDE is unique, then it follows that $u^f=0$ and therefore $f=-Xu^f=0$. This leads to injectivity of the X-ray transform. The operator~$VX$ is not elliptic, hyperbolic, or parabolic. Therefore we do not have access to standard uniqueness theorems, and we have to show uniqueness by hand. \begin{ex} Show that \begin{equation} XV=\frac14(X+V)^2-\frac14(X-V)^2+\frac12X_\perp \end{equation} and \begin{equation} VX=\frac14(X+V)^2-\frac14(X-V)^2-\frac12X_\perp . \end{equation} Since the three vector fields~$X$, $V$, and~$X_\perp$ are orthonormal (with respect to the Sasaki metric), so are $\frac1{\sqrt{2}}(X+V)$, $\frac1{\sqrt{2}}(X-V)$, and~$X_\perp$. Therefore our two operators look locally like the operators $\frac12(\partial_x^2-\partial_y^2\pm\partial_z)$ in~$\mathbb{R}^3$. \end{ex} One might expect that the first order term is not that relevant, but it turns out to be very important. In other words, the order of the operators~$X$ and~$V$ is crucial. We will show that assuming zero boundary values the PDE $VXu=0$ has unique solutions, but $XVu=0$ never does. The uniqueness result will be proven in the next section. The non-uniqueness result is easier and is left as exercise~\ref{ex:XV}. If $u(x,v)=h(x)$ for $h\in C^\infty(\bar\Omega)$ with $h|_{\partial\Omega}=0$, then $Vu=0$ and the boundary condition is also satisfied. In fact, functions of this form are precisely the solutions of $XVu=0$ with zero boundary values, as follows from the next exercise. \begin{ex} \label{ex:X=0} Suppose $g\in C^1(S\bar\Omega)$. Show that if $g|_{\partial_{\text{in}}(S\Omega)}=0$ and $Xg=0$, then $g=0$. \end{ex} \begin{ex} \label{ex:V-boundary} Explain why $g|_{\partial_{\text{in}}(S\Omega)}=0$ implies $Vg|_{\partial_{\text{in}}(S\Omega)}=0$.
\end{ex} \begin{ex} \label{ex:XV} Suppose $u\in C^\infty(S\bar\Omega)$ with $u|_{\partial(S\Omega)}=0$. Show that $XVu=0$ in~$S\Omega$ if and only if there is $f\in C^\infty(\bar\Omega)$ with zero boundary values so that $u=\pi^*f$. \end{ex} \subsection{Properties of the integral function} Before studying the boundary value problem~\eqref{eq:bvp} further, it is good to verify that~$u^f$ is sufficiently regular. \begin{lemma} \label{lma:u-smooth} The function~$u^f$ defined above is in~$C_c^\infty(S\Omega)$. \end{lemma} \begin{proof} Recall exercise~\ref{ex:tau-c1}. If the boundary~$\partial\Omega$ is smooth, it follows with the same argument and the smooth version of the implicit function theorem that~$\tau$ is smooth near $(x,v)\in S\bar\Omega$ when $\phi_{\tau(x,v)}(x,v)$ points outward. The function~$u^f$ is defined by \begin{equation} u^f(x,v) = \int_0^{\tau(x,v)}f(x+tv)\,\der t. \end{equation} Since~$f$ and~$\tau$ are smooth, so is~$u^f$. (We omit some technical details here, but the statement is hopefully plausible to the reader. One can also extend~$f$ by zero to~$\mathbb{R}^2$ to get rid of the~$\tau$.) So far we have not used the fact that~$f$ is compactly supported nor tried to prove that so is~$u^f$. Also, smoothness at tangential exits has not been established yet. Assume~$f$ is supported in a compact set $K\subset\Omega$. By exercise~\ref{ex:compact-convex} we may assume that~$K$ is convex. We will show that~$u^f$ is supported in $SK=K\times S^1$. This will also prove smoothness near points $(x,v)\in S\Omega$ where $\phi_{\tau(x,v)}(x,v)$ is tangential to~$\partial\Omega$; see exercise~\ref{ex:convex-transversal}. Therefore it only remains to prove the support condition for~$u^f$. Take any $x\in\bar\Omega\setminus K$ and $v\in S^1$. Let \begin{equation} \gamma(x,v) = \{x+tv;t\in[0,\tau(x,v)]\} \end{equation} be the line from~$x$ to~$\partial\Omega$ in the direction of~$v$.
If $\gamma(x,v)\cap K=\emptyset$, then $u^f(x,v)=0$ since~$u^f(x,v)$ is the integral of~$f$ over~$\gamma(x,v)$. Because~$K$ is convex and $x\notin K$, at most one of the line segments~$\gamma(x,v)$ and~$\gamma(x,-v)$ can meet~$K$. Thus if $\gamma(x,v)\cap K\neq\emptyset$, then $\gamma(x,-v)\cap K=\emptyset$. By the argument given above, $u^f(x,-v)=0$. On the other hand, $u^f(x,v)+u^f(x,-v)=0$ for all $(x,v)\in S\Omega$ since $\mathcal{I} f=0$, so $u^f(x,-v)=0$ implies $u^f(x,v)=0$. We have thus shown that $u^f(x,v)=0$ when $x\notin K$. \end{proof} \begin{ex} \label{ex:compact-convex} Suppose $\Omega\subset\mathbb{R}^n$ is a convex open set and $K\subset\Omega$ compact. Show that the convex hull of~$K$ is compact and contained in~$\Omega$. (Carath\'eodory's theorem can be useful.) \end{ex} \begin{ex} \label{ex:convex-transversal} Suppose $\Omega\subset\mathbb{R}^n$ is a bounded and convex~$C^1$ domain. Suppose $(x,v)\in S\bar\Omega$ is such that $\phi_{\tau(x,v)}(x,v)$ is tangential to~$\partial\Omega$. Show that $x\in\partial\Omega$ and $v\cdot\nu(x)=0$. \end{ex} We have chosen to work with compactly supported smooth functions to avoid technical difficulties. The same method works for $f\in C^2(\bar\Omega)$ as well, with no assumptions on boundary values. This would require more delicate analysis of boundary behaviour, since in general $u^f\notin C^2(S\bar\Omega)$ even if $f\in C^2(\bar\Omega)$. In fact, without assuming $\mathcal{I} f=0$, one only has $u^f\in C^{1/2}(S\bar\Omega)$; see exercise~\ref{ex:u-characteristic}. We will next prove an integral identity. The statement concerns second and first order derivatives, but the proof uses derivatives up to order four. In cases like this the theorem can be shown to hold in~$C^2$ using the density of~$C^\infty$ in~$C^2$. \subsection{The Pestov identity} The key to proving uniqueness of~\eqref{eq:bvp} is an integral identity known as the Pestov identity. 
It was introduced by Mukhometov, and it could well be called the Mukhometov--Pestov identity. \begin{proposition}[Pestov identity] \label{prop:pestov} If $u\in C^\infty_c(S\Omega)$, then \begin{equation} \aabs{VXu}^2 = \aabs{XVu}^2 +\aabs{Xu}^2. \end{equation} \end{proposition} \begin{proof} Using lemmas~\ref{lma:sm-ibp} and~\ref{lma:XV,VX} we find \begin{equation} \begin{split} \aabs{VXu}^2 -\aabs{XVu}^2 &= \ip{VXu}{VXu} -\ip{XVu}{XVu} \\&= -\ip{VVXu}{Xu} +\ip{XXVu}{Vu} \\&= \ip{XVVXu}{u} -\ip{VXXVu}{u} \\&= \ip{(XVVX-VXXV)u}{u} \\&= \ip{[XV,VX]u}{u} \\&= \ip{-X^2u}{u} \\&= \ip{Xu}{Xu} \end{split} \end{equation} as claimed. \end{proof} As mentioned earlier, the assumption of compact support is not necessary. It is enough that $u|_{\partial(S\Omega)}=0$, but the proof would be somewhat more technical. Smoothness is not necessary either, it is just convenient. \begin{ex} Use the Santal\'o formula to rewrite~$\aabs{Xw}^2_{L^2(S\Omega)}$ when $w\in C_c^\infty(S\Omega)$. What is the integral you end up calculating over each geodesic? Two terms of this kind appear on the right-hand side of the Pestov identity. \end{ex} \begin{bex} Stare at the Pestov identity, experience enlightenment, and explain what it means and why it should hold true. \end{bex} The Pestov identity makes proving our injectivity result easy: \begin{theorem} \label{xrtthm:pestov} Let $\Omega\subset\mathbb{R}^2$ be a bounded, smooth, and strictly convex domain. If $f\in C_c^\infty(\Omega)$ satisfies $\mathcal{I} f=0$, then $f=0$. \end{theorem} \begin{ex} Prove theorem~\ref{xrtthm:pestov} by applying proposition~\ref{prop:pestov} to~$u^f$. \end{ex} This argument provides us with yet another uniqueness proof. However, it does not give an inversion formula for the X-ray transform. There are inversion formulas within this framework, but finding one requires considerably more work than proving uniqueness. 
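The flat-case commutator identity $[XV,VX]=-X^2$, which powers the proof of the Pestov identity above, can also be verified symbolically. The following sketch is not part of the notes; it uses sympy together with the Euclidean coordinate expressions $X=\cos\theta\,\partial_x+\sin\theta\,\partial_y$ and $V=\partial_\theta$, and applies both sides of the identity to a generic smooth function.

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')
u = sp.Function('u')(x, y, th)

# Euclidean coordinate expressions on the sphere bundle:
# X is the geodesic vector field and V is the vertical derivative.
def X(w):
    return sp.cos(th) * sp.diff(w, x) + sp.sin(th) * sp.diff(w, y)

def V(w):
    return sp.diff(w, th)

# [XV, VX]u = (XV)(VX)u - (VX)(XV)u, a fourth order expression.
lhs = X(V(V(X(u)))) - V(X(X(V(u)))) 
rhs = -X(X(u))

# All terms cancel identically, confirming [XV, VX] = -X^2 in the flat case.
print(sp.expand(lhs - rhs))  # 0
```

On a curved surface the analogous computation picks up a curvature term, which is how the extra integrand in the curved version of the identity (next subsection) arises.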
\subsection{Remarks about manifolds} This method can also be used to prove injectivity results on many Riemannian manifolds with boundary. The sphere bundle and the derivatives on it are still well defined and useful. The Santal\'o formula still holds true. To be able to use the Pestov identity, the integral function~$u^f$ needs to be regular enough, and the right-hand side of the identity needs to be non-negative. To obtain convenient regularity, the manifold is typically assumed to be compact and have strictly convex boundary. Strict convexity is defined in terms of the curvature of the boundary. The Pestov identity on a two-dimensional manifold~$M$ with boundary reads \begin{equation} \aabs{VXu}^2 = \aabs{XVu}^2 +\aabs{Xu}^2 -\int_{SM}K\abs{Vu}^2\,\der\Sigma , \end{equation} where~$K$ is the Gaussian curvature of the surface. If $K\leq0$, then the desired positivity result follows. Indeed, the X-ray transform is injective on non-positively curved surfaces with strictly convex boundary. In higher dimensions things are somewhat more complicated. Vertical and horizontal derivatives are no longer given by vector fields, and gradient-like operators are needed instead. In addition, curvature can no longer be adequately described with a scalar function. However, there is a Pestov identity and it can be used to prove similar results. Positivity now depends on the sectional curvature. These results can be generalized in various ways. Development and application of the relevant tools in differential geometry require a separate course. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{X-ray tomography of vector fields} \label{sec:vf} \subsection{Definition of the X-ray transform} So far we have only discussed X-ray tomography of scalar functions $f\colon\mathbb{R}^n\to\mathbb{R}$.
We can ask a similar question for other kinds of functions as well, and we will only explore one generalization in this course: X-ray tomography of vector fields. A vector field in the Euclidean space is a function $f\colon\mathbb{R}^n\to\mathbb{R}^n$. The integral of~$f$ over a line $\gamma\colon\mathbb{R}\to\mathbb{R}^n$ is the X-ray transform \begin{equation} \label{eq:vf-xrt} \mathcal{I} f(\gamma) = \int_\mathbb{R} f(\gamma(t))\cdot\dot\gamma(t)\,\der t \end{equation} whenever this integral exists. (Some people call the transform something else in the case of vector fields. We do not.) We will continue to use unit speed parametrization, although in this particular case it does not make a difference. Those familiar with differential forms may identify a vector field with a one-form, and the integral of a $k$-form over an oriented $k$-dimensional manifold is parametrization invariant. \begin{ex} Prove that formula~\eqref{eq:vf-xrt} for the X-ray transform of a vector field is in fact invariant under any orientation-preserving reparametrization. What happens if orientation is flipped? Does the integral of a scalar function change if you reparametrize it or change orientation? \end{ex} We can now ask our main question in this new setting: Does~$\mathcal{I} f$ determine~$f$ uniquely? In other words, if $\mathcal{I} f=\mathcal{I} g$, do we then necessarily have $f=g$? \subsection{An application} \label{sec:doppler-application} Generalizing mathematical questions is commonplace, and a mathematician may not need any further motivation for this variant of the problem. While mathematical interest might be a sufficient reason for this detour, we will also present one physical application. The applications of X-ray tomography of scalar and vector fields are not limited to what is mentioned in this course, and some applications call for further generalizations. 
Consider a stationary flow of a liquid (or other fluid), described by the flow field $u\colon\mathbb{R}^3\to\mathbb{R}^3$. That is, at the point~$x$ the liquid flows with velocity~$u(x)$. The speed of sound can be described by a scalar field $c\colon\mathbb{R}^3\to(0,\infty)$, but we assume that it is constant. (Gravity makes the speed of sound depend on position, and~$c$ can be coupled with~$u$ if the flow is compressible and strong.) If $\abs{u(x)}\ll c$ for all~$x$, then sound waves in the moving liquid travel along roughly straight lines, but their speeds along those lines are changed by~$u$. (We can consider the flow to be a small perturbation to the completely still reference situation. Travel time has first order dependence on~$u$, but the change of trajectories only has a second order effect on it. Therefore in the linearized problem geometry is unchanged but travel times change. We will not attempt to make this linearization procedure precise.) Consider a straight line $\gamma\colon[0,L]\to\mathbb{R}^3$ parametrized by arc length. If $u=0$, then the time to travel from~$\gamma(0)$ to~$\gamma(L)$ is \begin{equation} \int_0^L \frac1{c}\,\der s. \end{equation} The presence of~$u$ changes this to \begin{equation} \int_0^L \frac1{c+u(\gamma(s))\cdot\dot\gamma(s)}\,\der s \approx \frac Lc - c^{-2} \int_0^L u(\gamma(s))\cdot\dot\gamma(s)\,\der s. \end{equation} Therefore the (linearized) travel time measurement determines~$\mathcal{I} u$. Linearized travel time tomography often leads to X-ray tomography, but the unknown objects may or may not be scalar functions. See theorem~\ref{thm:outlook-lin}. The physical problem is then whether such time-of-flight measurements determine the flow field~$u$. Does it help if the liquid is incompressible, which means $\nabla\cdot u=0$? \begin{bex} The X-ray transform of vector fields is also known as the Doppler transform. How is our physical example related to the Doppler effect?
\end{bex} \subsection{Non-uniqueness and potentials} It turns out that the answer to our main question is ``no'': a vector field~$f$ is not uniquely determined by~$\mathcal{I} f$. There are vector fields~$f$ that are not identically zero but for which $\mathcal{I} f=0$. The next best thing to ask for is a characterization of the kernel of the X-ray transform. Can we characterize the set of those~$f$ for which $\mathcal{I} f=0$? There is a special class of vector fields we study first: gradient fields. If $h\colon\mathbb{R}^n\to\mathbb{R}$ is a smooth scalar function, then~$\nabla h$ is a smooth vector field. Let us calculate the X-ray transform of such a vector field. \begin{ex} Let $h\colon\mathbb{R}^n\to\mathbb{R}$ be a smooth scalar function and $\gamma\colon[0,L]\to\mathbb{R}^n$ a line. Show that \begin{equation} \int_0^L \nabla h(\gamma(t))\cdot\dot\gamma(t)\,\der t = h(\gamma(L))-h(\gamma(0)). \end{equation} Explain why, if $h\in C^\infty_c(\mathbb{R}^n)$, then~$\mathcal{I}(\nabla h)$ is well defined and identically zero. \end{ex} This means that there is a freedom to change a vector field~$f$ to $f+\nabla h$ without changing~$\mathcal{I} f$ at all. This is called a gauge freedom. We now ask a refined question: If a sufficiently nice vector field $f\colon\mathbb{R}^n\to\mathbb{R}^n$ satisfies $\mathcal{I} f=0$, then is there a scalar function~$h$ so that $f=\nabla h$? The answer to this refined question is indeed positive, and we will prove it in one special case. For simplicity, we will only prove the result in two dimensions. In exercise~\ref{ex:2d-hd} we saw that for scalar functions the higher dimensional result follows from the one in dimension two. The same argument works here, too: \begin{ex} Suppose this is known: If a compactly supported smooth vector field~$f$ on~$\mathbb{R}^2$ satisfies $\mathcal{I} f=0$, then there is a smooth compactly supported scalar function~$h$ on the plane so that $f=\nabla h$. 
Show this: A smooth compactly supported vector field~$f$ on~$\mathbb{R}^3$ satisfies $\mathcal{I} f=0$ if and only if there is a smooth compactly supported scalar function~$h$ so that $f=\nabla h$. (If you want, you can make use of the theorem that a compactly supported smooth vector field on~$\mathbb{R}^3$ is a gradient field of a compactly supported potential if and only if it has zero curl. Therefore it is enough to show that $\nabla\times f=0$. The same argument works in higher dimensions as well if one uses differential forms and the fact that the first de Rham cohomology group of the Euclidean space is zero.) \end{ex} In three dimensions one can write a vector field as a sum of a gradient field and a solenoidal (divergence-free) vector field in a unique way. This is known as the Helmholtz decomposition. There is an analogous decomposition in higher dimensions and also on manifolds, known as the Hodge decomposition. The X-ray transform of the gradient component is always zero, but the rest is uniquely determined as we shall see. This kind of result is known as solenoidal injectivity. In particular, it follows that a solenoidal vector field is uniquely determined by its X-ray transform. Our physical example problem is indeed uniquely solvable under the additional assumption that the flow is incompressible (solenoidal). \subsection{Solenoidal injectivity} We will now prove solenoidal injectivity in two dimensions by making use of the Pestov identity. Let $\Omega\subset\mathbb{R}^2$ be a bounded, smooth, and strictly convex domain. A vector field $f\colon\Omega\to\mathbb{R}^2$ can be regarded as a function~$\tilde f$ on~$S\Omega$ as $\tilde f(x,v)=f(x)\cdot v$.
We can define the integral function in two ways, by considering~$f$ as a function on~$S\Omega$ (see~\eqref{eq:uf-def} for a definition of~$u^{\tilde f}$ in terms of~$\tilde f$) or by using an integral formula like~\eqref{eq:vf-xrt}: \begin{equation} u^f(x,v) = \int_0^{\tau(x,v)}f(x+tv)\cdot v\,\der t. \end{equation} These two approaches lead to exactly the same function: $u^f=u^{\tilde f}$. (These notes attempt to distinguish the vector field~$f$ on~$\Omega$ and the function~$\tilde f$ on~$S\Omega$ consistently, but be prepared for failures.) We assume that $\mathcal{I} f=0$. An inspection of the proof of lemma~\ref{lma:u-smooth} shows that $u^f\in C_c^\infty(S\Omega)$ also in the case of vector fields. The fundamental theorem of calculus of exercise~\ref{ex:ftc-SM} is still valid when~$f$ is seen as a function on~$S\Omega$. The same proof gives that $Xu^f(x,v)=-\tilde f(x,v)=-f(x)\cdot v$. However, now~$\tilde f$ does depend on direction, and so typically $V\tilde f\neq0$. This causes a major change in our proof and result. \begin{ex} Let~$f$ be a smooth vector field on~$\mathbb{R}^2$, and define a function $\tilde f\colon S\mathbb{R}^2\to\mathbb{R}$ by $\tilde f(x,v)=f(x)\cdot v$. Calculate~$V\tilde f(x,v)$ and interpret the result geometrically. \end{ex} \begin{ex} \label{ex:vf-pestov-cancel} Consider the function~$u^f$ on~$S\Omega$ defined above for a vector field~$f$ with $\mathcal{I} f=0$. Show that $\aabs{VXu^f}=\aabs{Xu^f}$. \end{ex} \begin{theorem} \label{thm:vf} Let $\Omega\subset\mathbb{R}^2$ be a bounded, smooth, and strictly convex domain. If a compactly supported smooth vector field $f\colon\Omega\to\mathbb{R}^2$ integrates to zero over all lines, then there is $h\in C_c^\infty(\Omega)$ so that $f=\nabla h$. \end{theorem} \begin{proof} As discussed above, the integral function~$u^f$ is compactly supported and smooth, so we may apply the Pestov identity: \begin{equation} \aabs{VXu^f}^2 = \aabs{XVu^f}^2 + \aabs{Xu^f}^2.
\end{equation} By exercise~\ref{ex:vf-pestov-cancel}, this leads to \begin{equation} 0 = \aabs{XVu^f}^2. \end{equation} This implies that the function $XVu^f\in C_c^\infty(S\Omega)$ is identically zero. Since~$u^f$ vanishes in a neighborhood of $\partial(S\Omega)$, so does~$Vu^f$. The function~$Vu^f$ has zero boundary values and is annihilated by the geodesic vector field, so it has to vanish identically. These conclusions follow from exercises~\ref{ex:X=0} and~\ref{ex:V-boundary}. See also exercise~\ref{ex:XV}. Because $Vu^f=0$, the function~$u^f(x,v)$ is in fact independent of~$v$. Recall that~$V$ is the derivative with respect to $v\in S^1$ and~$S^1$ is connected. Therefore there is a scalar function $h\in C_c^\infty(\Omega)$ so that $u^f(x,v)=-h(x)$. Using $Xu^f=-\tilde f$, it follows that $f=\nabla h$. The details are left as an exercise. \end{proof} \begin{ex} Complete the proof above by showing that $f=\nabla h$. \end{ex} In the proof above we needed to produce a potential~$h$ for the vector field~$f$. The potential turned out to be essentially the integral function~$u^f$. In fact, we have $u^f=-\pi^*h$. The hardest part was showing that~$u^f$ can be considered as a scalar function on~$\Omega$. \begin{ex} Let~$f_0$ be a scalar function and~$f_1$ a vector field. Their sum is not a very reasonable object at first, and it can be considered just as a formal sum. How can you consider $f_0+f_1$ as a function on~$S\mathbb{R}^n$? How should we define $\mathcal{I}(f_0+f_1)$? What does reversing orientation of~$\gamma$ do to~$\mathcal{I} f(\gamma)$? Assume now that~$f_0$ and~$f_1$ are smooth and compactly supported. Using previously obtained results, argue why $\mathcal{I}(f_0+f_1)=0$ implies that $f_0=0$ and $f_1=\nabla h$ for some $h\in C_c^\infty(\mathbb{R}^n)$. (It is possible to use the Pestov identity to prove results like this, but here it is easier to study orientation reversals and apply theorems~\ref{xrtthm:pestov} and~\ref{thm:vf}.) 
\end{ex} Let us then see what this solenoidal injectivity result means for reconstructing a vector field from data. If~$f$ and~$g$ are vector fields (or sums of scalars and vector fields) and $\mathcal{I} f=\mathcal{I} g$, then there is a scalar potential~$h$ vanishing at the boundary so that $f=g+\nabla h$ (and the scalar parts of~$f$ and~$g$ coincide). We chose to use the Pestov identity, but it is not the only way to prove this statement in a Euclidean space. We remark that the same proof works for non-positively curved Riemannian manifolds of dimension two with strictly convex boundary. Let us see what happens in one dimension. As we have seen before, the X-ray transform is not injective on scalar functions in one dimension. There are non-trivial continuous functions $f\colon[0,1]\to\mathbb{R}$ so that the X-ray transform vanishes. In one dimension scalar functions and vector fields are essentially the same object, but solenoidal injectivity is a weaker property than full injectivity. The conclusion might be surprising: we have no injectivity for scalars, but we do have solenoidal injectivity for vector fields. \begin{ex} Consider the one-dimensional set $(0,1)$ or its closure. What does it mean if the X-ray transform is solenoidally injective on this space? Prove this solenoidal injectivity. \end{ex} \begin{bex} Use the tools of section~\ref{sec:torus} to prove solenoidal injectivity on the torus~$\mathbb{T}^n$, $n\geq2$, for smooth vector fields. You can write a vector field on~$\mathbb{T}^n$ as a function $f\colon\mathbb{T}^n\times\mathbb{R}^n\to\mathbb{C}$ which is linear in the second variable. You will need the lemma that if a linear function $\phi\colon\mathbb{R}^n\to\mathbb{C}$ vanishes in all directions orthogonal to $k\in\mathbb{R}^n$, then there is $a\in\mathbb{C}$ so that $\phi(v)=ak\cdot v$.
\end{bex} \subsection{Higher order tensor fields} We have studied X-ray tomography for symmetric covariant tensor fields of order~$0$ (scalar functions) and~$1$ ((co)vector fields). One can study the same problem for tensor fields of any order $m\in\mathbb{N}$. When $m=0$, the left-hand side of the Pestov identity vanishes. When $m=1$, the term on the left exactly cancels a term on the right. When $m\geq2$, the term on the left is typically larger than the corresponding one on the right, so our idea of proof no longer works as such. An important new ingredient for $m\geq2$ is to write $u^f\in C^\infty(\Omega\times S^1)$ as a Fourier series on $S^1=\mathbb{T}^1$. We will not pursue this here, but some more details will be given in section~\ref{sec:outlook-tt}. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{The Fourier transform} \label{sec:ft} \subsection{A general view to Fourier transforms} Previously we studied the Fourier transform on a torus~$\mathbb{T}^n$. The Fourier transform took a function on the torus~$\mathbb{T}^n$ to a function on the lattice~$\mathbb{Z}^n$, and the inverse Fourier transform did the opposite. One could in fact define the Fourier transform on the lattice, and that would turn out to be essentially the same as the inverse Fourier transform for the torus. In this section we will study the Fourier transform on~$\mathbb{R}^n$. It will take a function $\mathbb{R}^n\to\mathbb{C}$ to another function $\mathbb{R}^n\to\mathbb{C}$, and the inverse transform is very similar to the transform itself. Before going any deeper into this, we will look at the Fourier transform in far greater generality to see that the two Fourier transforms in this course are merely two special cases of a far more general structure. Let~$G$ be a topological group.
It means that it is a topological space and a group so that the group operations $G\to G$, $x\mapsto x^{-1}$ and $G\times G\to G$, $(x,y)\mapsto xy$ are continuous. We assume that~$G$ is abelian and locally compact. Let~$\hat G$ denote the set of all continuous homomorphisms $G\to S^1$. Here $S^1\subset\mathbb{C}$ is considered as the multiplicative group of unit complex numbers. Elements of~$\hat G$ are called characters of~$G$. The set~$\hat G$ can be endowed with a group structure by pointwise multiplication of characters: $(\alpha\beta)(x)=\alpha(x)\beta(x)$. The set~$\hat G$ is a set of functions, and it can be equipped with the topology of locally uniform convergence. These structures make~$\hat G$ into a topological group. What is important is that~$\hat G$ is also a locally compact abelian group and that~$\hat{\hat{G}}$ is naturally isomorphic to~$G$. This result is known as the Pontryagin duality theorem and~$\hat G$ is called the dual group of~$G$. The Fourier transform takes a function on~$G$ into a function on~$\hat G$, and the inverse Fourier transform reverses this. More precisely, the Fourier transform of $f\colon G\to\mathbb{C}$ is the function $\mathcal{F} f\colon\hat G\to\mathbb{C}$ defined by \begin{equation} \mathcal{F} f(\alpha) = \int_G\alpha(x)f(x)\,\der x \end{equation} when this integral exists, possibly with a normalization constant or complex conjugation of the character. The integral is with respect to a Haar measure, a translation invariant Radon measure. The Haar measure is unique up to a multiplicative constant. In light of the duality theorem, it is not surprising that the inverse Fourier transform for~$G$ resembles the Fourier transform for~$\hat G$. The Fourier transform requires a measure on the underlying space, and that is the Haar measure. When~$G$ and~$\hat G$ are equipped with compatible Haar measures, the~$L^2$ theory (and much more) of Fourier transforms can be extended to all locally compact abelian groups.
Let us make the dual groups a little more concrete with examples. The dual group of~$\mathbb{T}^n$ is~$\mathbb{Z}^n$ and vice versa, and this we encountered earlier with Fourier series. The dual group of~$\mathbb{R}^n$ is~$\mathbb{R}^n$ itself, and this we will study now. The measures on~$\mathbb{T}^n$ and~$\mathbb{R}^n$ are the Lebesgue measures, and the one on the lattice is the counting measure. \begin{ex} An element $k\in\mathbb{Z}^n$ can be identified with a character $\chi_k\colon\mathbb{T}^n\to S^1$ by $\chi_k(x)=e^{ik\cdot x}$. Show that for any $k\in\mathbb{Z}^n$ the corresponding character~$\chi_k$ is indeed a well defined and continuous homomorphism. Recall that $\mathbb{T}^n=\mathbb{R}^n/2\pi\mathbb{Z}^n$. How can you identify a point~$x$ on the torus~$\mathbb{T}^n$ with a character $\psi_x\in\widehat{\mathbb{Z}^n}$? No need to prove anything; just give the formula. \end{ex} Note that these characters were used in the formulas for the Fourier transform and its inverse on the torus. This is how Fourier transforms work in general, by integrating a function against a character. If~$G$ is not abelian, then~$\hat G$ should be replaced with (equivalence classes of) irreducible representations of~$G$. This coincides with the dual group in the abelian case since irreducible complex representations of abelian groups are one-dimensional. Moreover, one-dimensional representations coincide with their characters, so the characters introduced here are the same as the representation theoretic characters. Fourier analysis on non-abelian groups is possible via representation theory. Finally, we remark that there are several different conventions for the Fourier transform on a torus or a Euclidean space. The differences concern the placement of factors of~$2\pi$. It is impossible to get completely rid of the factors. 
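The torus characters in the exercise above are easy to sanity-check numerically. The following sketch is not part of the notes; it uses plain Python with ad hoc sample points to verify that $\chi_k(x)=e^{ik\cdot x}$ is well defined modulo $2\pi\mathbb{Z}^n$, multiplicative, and valued in $S^1$.

```python
import cmath

def chi(k, x):
    """Character of T^n determined by k in Z^n: chi_k(x) = e^{i k.x}."""
    return cmath.exp(1j * sum(ki * xi for ki, xi in zip(k, x)))

k = (2, -3)
x = (0.7, 1.9)
y = (-2.4, 0.3)
two_pi = 2 * cmath.pi

# Well defined on T^n = R^n / 2 pi Z^n: shifting x by 2 pi Z^n does nothing.
shifted = tuple(xi + two_pi * m for xi, m in zip(x, (5, -1)))
assert abs(chi(k, x) - chi(k, shifted)) < 1e-9

# Homomorphism property chi_k(x + y) = chi_k(x) chi_k(y), with values in S^1.
xy = tuple(xi + yi for xi, yi in zip(x, y))
assert abs(chi(k, xy) - chi(k, x) * chi(k, y)) < 1e-12
assert abs(abs(chi(k, x)) - 1) < 1e-12
```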
\subsection{The Fourier transform on a Euclidean space} It is typical to call the Fourier transform on a torus the Fourier series and the one on a Euclidean space the Fourier transform. Fourier analysis on other groups is much rarer. The Fourier transform of a function $f\colon\mathbb{R}^n\to\mathbb{C}$ is $\mathcal{F} f\colon\mathbb{R}^n\to\mathbb{C}$ defined by \begin{equation} \mathcal{F} f(\xi) = \int_{\mathbb{R}^n}e^{-i\xi\cdot x}f(x)\,\der x \end{equation} whenever this integral makes sense. Again, we are purposely vague since the definition can be extended to various classes of functions or distributions. \begin{theorem} \label{thm:ft} The Fourier transform is a bijection $\mathcal{F}\colon L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)$, given by \begin{equation} \mathcal{F} f(\xi) = \int_{\mathbb{R}^n}e^{-i\xi\cdot x}f(x)\,\der x \end{equation} for $f\in L^1(\mathbb{R}^n)\cap L^2(\mathbb{R}^n)$ and extended by continuity to the rest of~$L^2(\mathbb{R}^n)$. The inverse Fourier transform $\mathcal{F}^{-1}\colon L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)$ is given by \begin{equation} (\mathcal{F}^{-1}f)(x) = (2\pi)^{-n} \int_{\mathbb{R}^n}e^{i\xi\cdot x}f(\xi)\,\der \xi. \end{equation} The Fourier transform is unitary in the sense that \begin{equation} \int_{\mathbb{R}^n}\overline{g(x)}f(x)\,\der x = (2\pi)^{-n} \int_{\mathbb{R}^n}\overline{\mathcal{F} g(\xi)}\mathcal{F} f(\xi)\,\der \xi. \end{equation} \end{theorem} Again, the proof will be omitted. \begin{ex} What is the relation between~$\aabs{f}_{L^2}$ and~$\aabs{\mathcal{F} f}_{L^2}$? \end{ex} To simplify matters, we will apply the Fourier transform to compactly supported continuous functions. What we need to know is that $\mathcal{F} f=0$ implies $f=0$. Under additional assumptions very little information on~$\mathcal{F} f$ is needed to conclude that $f=0$, and we will study this next. 
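The convention fixed in theorem~\ref{thm:ft} can be checked numerically on a rapidly decaying test function. With this convention the Gaussian $f(x)=e^{-x^2}$ has $\mathcal{F} f(\xi)=\sqrt{\pi}e^{-\xi^2/4}$, and the unitarity identity carries the factor $(2\pi)^{-1}$ in dimension one. The sketch below is not part of the notes; it approximates the integrals by Riemann sums on a truncated grid, with ad hoc grid sizes and tolerances.

```python
import numpy as np

# f(x) = exp(-x^2) on a grid wide enough that the truncation error is tiny.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2)

def fourier(xi):
    """F f(xi) = int e^{-i xi x} f(x) dx, approximated by a Riemann sum."""
    return (np.exp(-1j * xi * x) * f).sum() * dx

# Compare with the closed form F f(xi) = sqrt(pi) exp(-xi^2 / 4).
for xi in (0.0, 1.0, 2.5):
    exact = np.sqrt(np.pi) * np.exp(-xi**2 / 4)
    assert abs(fourier(xi) - exact) < 1e-8

# Unitarity with the (2 pi)^{-1} normalization (the case n = 1).
xi_grid = np.linspace(-10.0, 10.0, 2001)
dxi = xi_grid[1] - xi_grid[0]
Ff = np.array([fourier(xi) for xi in xi_grid])
lhs = (np.abs(f)**2).sum() * dx
rhs = (np.abs(Ff)**2).sum() * dxi / (2 * np.pi)
assert abs(lhs - rhs) < 1e-6
```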
\subsection{A Paley--Wiener theorem} A general and important phenomenon in Fourier analysis is the correspondence between decay and regularity. Fast decay of~$f(x)$ as $\abs{x}\to\infty$ corresponds to high regularity of~$\mathcal{F} f$ and vice versa. For a famous example, the Schwartz space contains by definition functions which have high regularity (infinitely differentiable) and fast decay (all derivatives decay faster than $\abs{x}^{-N}$ for any $N\in\mathbb{N}$), and the Fourier transform maps the Schwartz space bijectively onto itself. We will study the ultimate form of decay at infinity: compact support. This should lead to very high regularity, and that turns out to be the case. Our theorem in this subsection is a version of the Paley--Wiener theorem. \begin{definition} A function $f\colon\mathbb{R}^n\to\mathbb{C}$ is called real analytic if it is smooth and for every point $x\in\mathbb{R}^n$ there is $r>0$ so that the Taylor series of~$f$ around~$x$ converges to~$f$ in $B(x,r)$. \end{definition} In complex analysis one can define analyticity in a similar fashion by demanding that a complex Taylor series converges to the function in a small neighborhood of any point. This turns out to be equivalent to complex differentiability (the existence of the derivative as a limit of a difference quotient). When working over the reals this is no longer the case; real analyticity is far stronger than real differentiability. \begin{ex} \label{ex:analytic-open} Show that if a real analytic function $f\colon\mathbb{R}^n\to\mathbb{C}$ vanishes in a non-empty open set $U\subset\mathbb{R}^n$, then~$f$ is identically zero. \end{ex} \begin{ex} \label{ex:non-analytic} Define the function $f\colon\mathbb{R}\to\mathbb{R}$ by \begin{equation} f(x) = \begin{cases} 0, & x\leq0 \\ \exp(-1/x), & x>0. \end{cases} \end{equation} Consider it known that $f\in C^\infty(\mathbb{R})$.
Explain and justify (or prove): For any $x\in\mathbb{R}$ the Taylor series of~$f$ at~$x$ converges in some open neighborhood of~$x$. However,~$f$ is not real-analytic. \end{ex} The main result of this section is this: \begin{theorem} \label{thm:pw} The Fourier transform of a compactly supported function $f\in L^1(\mathbb{R}^n)$ is real analytic. \end{theorem} In light of exercise~\ref{ex:non-analytic}, it is not enough to estimate the derivatives to establish a positive radius of convergence for the Taylor series. We really need to show that the limit is correct. Let us collect some tools before the proof. First, recall a lemma from measure and integration theory: \begin{lemma} \label{lma:d-int} Fix an integer $1\leq j\leq n$. Consider a function $g\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{C}$. Suppose that for every $y\in\mathbb{R}^n$ we have $g({\,\cdot\,},y)\in L^1(\mathbb{R}^n)$, that for every $x\in\mathbb{R}^n$ and $y\in\mathbb{R}^n$ the partial derivative~$\partial_{y_j}g(x,y)$ exists, and that there is a function $h\in L^1(\mathbb{R}^n)$ so that $\abs{\partial_{y_j}g(x,y)}\leq h(x)$ for all $x\in\mathbb{R}^n$ and $y\in\mathbb{R}^n$. Then the function \begin{equation} G(y) = \int_{\mathbb{R}^n}g(x,y)\,\der x \end{equation} has the partial derivative~$\partial_{y_j}G(y)$ everywhere and \begin{equation} \partial_{y_j}G(y) = \int_{\mathbb{R}^n}\partial_{y_j}g(x,y)\,\der x, \end{equation} where the last integral is a well-defined Lebesgue integral. \end{lemma} \begin{ex} Suppose $f\in L^1(\mathbb{R}^n)$ vanishes outside a compact set~$K$. Denote by~$f_j$ the function $f_j(x)=x_jf(x)$. Show that $\partial_{\xi_j}\mathcal{F} f(\xi)=-i\mathcal{F} f_j(\xi)$ and the partial derivative exists everywhere. \end{ex} Similarly, one can find that for any vector $v\in\mathbb{C}^n$ one has \begin{equation} \label{eq:ft-der} v\cdot\nabla\mathcal{F} f(\xi) = -i\mathcal{F} f_v(\xi), \end{equation} where $f_v(x)=(v\cdot x) f(x)$.
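Equation~\eqref{eq:ft-der} can be tested numerically before it is put to use. The sketch below (assuming NumPy; the indicator test function and the grid are illustrative choices) compares $-i\mathcal{F} f_1$ with the closed-form derivative of $\mathcal{F} f(\xi)=2\sin(\xi)/\xi$ for the indicator function of $[-1,1]$.

```python
import numpy as np

# Numerical check of eq:ft-der for n = 1 and f the indicator of [-1, 1],
# for which F f(xi) = 2 sin(xi)/xi in closed form.
x = np.linspace(-1, 1, 20001)
dx = x[1] - x[0]
f = np.ones_like(x)  # the indicator, sampled on its support

def ft(g, xi):
    # Trapezoidal quadrature of the Fourier integral over [-1, 1].
    out = []
    for w in xi:
        y = g * np.exp(-1j * w * x)
        out.append((y.sum() - 0.5 * (y[0] + y[-1])) * dx)
    return np.array(out)

xi = np.linspace(0.5, 5.0, 10)
assert np.allclose(ft(f, xi), 2 * np.sin(xi) / xi, atol=1e-7)

# eq:ft-der with v = 1: (F f)'(xi) = -i F(f_1)(xi), where f_1(x) = x f(x).
deriv = -1j * ft(x * f, xi)
assert np.allclose(deriv, 2 * (xi * np.cos(xi) - np.sin(xi)) / xi**2, atol=1e-7)
```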
\begin{ex} Suppose $f\in L^1(\mathbb{R})$ vanishes outside a compact set~$K$. Argue that $\mathcal{F} f\in C^\infty$. (The same result holds in~$\mathbb{R}^n$ for any~$n$.) \end{ex} \begin{ex} \label{ex:ft-translate} Suppose $f\in L^1(\mathbb{R}^n)$ vanishes outside a compact set~$K$. Show that $\mathcal{F} f(\xi)=\mathcal{F} g(\zeta)$, where $g(x)=e^{i(\zeta-\xi)\cdot x}f(x)$. \end{ex} \begin{proof}[Proof of theorem~\ref{thm:pw}] Repeated application of equation~\eqref{eq:ft-der} gives \begin{equation} \label{eq:vv11} (v\cdot\nabla)^m\mathcal{F} f(\xi) = (-i)^m\mathcal{F}(\mu_v^mf)(\xi), \end{equation} where~$\mu_v^m$ is the multiplication operator defined by $\mu_v^mf(x)=(v\cdot x)^m f(x)$. Notice that~$v\cdot\nabla$ is a derivative in the direction $v\in\mathbb{R}^n$, and it makes sense to take the~$m$th iterated derivative. Take any $\rho\in\mathbb{C}^n$. We have $e^{\rho\cdot x}=\sum_{k\in\mathbb{N}}\frac1{k!}(\rho\cdot x)^k$. It is clear that each partial sum is dominated by $\sum_{k\in\mathbb{N}}\frac1{k!}\abs{\rho\cdot x}^k=e^{\abs{\rho\cdot x}}$. This majorant is uniformly bounded when $x\in K$ and~$\abs{\rho}$ is uniformly bounded. Therefore, by the dominated convergence theorem and exercise~\ref{ex:ft-translate} with $\zeta=\xi_0$, \begin{equation} \begin{split} \mathcal{F} f(\xi) &= \mathcal{F} g(\xi_0) \\&= \mathcal{F}\left( \sum_{k=0}^\infty\frac{(-i)^k}{k!}\mu_{(\xi-\xi_0)}^kf \right) (\xi_0) \\&= \sum_{k=0}^\infty\frac{(-i)^k}{k!}\mathcal{F}(\mu_{(\xi-\xi_0)}^kf)(\xi_0). \end{split} \end{equation} Applying~\eqref{eq:vv11} to each term gives \begin{equation} \mathcal{F} f(\xi) = \sum_{k=0}^\infty\frac1{k!}((\xi-\xi_0)\cdot\nabla)^k\mathcal{F} f(\xi_0). \end{equation} This is precisely the Taylor series of~$\mathcal{F} f$ about the point~$\xi_0$ evaluated at~$\xi$; see also exercise~\ref{ex:taylor}. We have shown that this series converges to~$\mathcal{F} f(\xi)$ as desired.
\end{proof} \begin{ex} We proved above that the Taylor series of~$\mathcal{F} f$ at~$\xi_0$ converges. What can you deduce about the radius of convergence? \end{ex} \begin{ex} \label{ex:taylor} Let us compare two different representations of higher dimensional Taylor polynomials. Suppose $f\in C^\infty(\mathbb{R}^n)$, $m\in\mathbb{N}$, and $v\in\mathbb{R}^n$. Show that \begin{equation} \sum_{k=0}^{m}\frac1{k!}(v\cdot\nabla)^kf(0) = \sum_{\abs{\alpha}\leq m}\frac1{\alpha!} v^\alpha\partial^\alpha f(0). \end{equation} Here $\alpha\in\mathbb{N}^n$ is a multi-index. (If~$f$ is real analytic and~$v$ is within the radius of convergence at the origin, both sides equal~$f(v)$ in the limit $m\to\infty$. It is worth noting that the Taylor series can be formally written as $f(v)=(e^{v\cdot\nabla}f)(0)$.) \end{ex} For the fun of it, let us see a couple of examples of real analytic functions. \begin{ex} Calculate the Fourier transform of the characteristic function of the cube $[-1,1]^3\subset\mathbb{R}^3$. Make sure the function is defined everywhere. \end{ex} \begin{ex} Calculate the Fourier transform of the characteristic function of the unit ball $B\subset\mathbb{R}^3$. Make sure the function is defined everywhere. \end{ex} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{The normal operator} \label{sec:normal} In this and the next section we will give our fifth injectivity proof based on the normal operator of~$\mathcal{I}$. \subsection{Why care about a normal operator} \begin{definition} \label{def:normal} The normal operator of a bounded linear operator $A\colon E\to F$ between complex or real Hilbert spaces is $A^*A\colon E\to E$, where $A^*\colon F\to E$ is the adjoint of~$A$. \end{definition} Let us discuss this definition and the concepts appearing in it in more detail. We will work over~$\mathbb{C}$, but there is no significant difference to the real version.
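In finite dimensions, where the adjoint of a matrix is its conjugate transpose, the definition can be made completely concrete. The following sketch (assuming NumPy; the random matrix is only an illustration) checks self-adjointness of the normal operator and the shared-kernel property that we will later exploit for the X-ray transform.

```python
import numpy as np

# Finite-dimensional illustration of definition def:normal: for a complex
# matrix A the adjoint is the conjugate transpose and the normal operator
# is A*A. It is self-adjoint, and Nx = 0 forces <x, Nx> = |Ax|^2 = 0, so
# A*A and A have the same kernel; injectivity of A*A gives injectivity of A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))
N = A.conj().T @ A

# Self-adjointness: N* = N.
assert np.allclose(N.conj().T, N)

# <y, Ax> = <A*y, x> for the inner product <u, v> = sum conj(u_i) v_i.
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=5) + 1j * rng.normal(size=5)
assert np.isclose(np.vdot(y, A @ x), np.vdot(A.conj().T @ y, x))

# Same kernel, hence the same rank.
assert np.linalg.matrix_rank(N) == np.linalg.matrix_rank(A)
```

Here `np.vdot` conjugates its first argument, matching the convention used in theorem~\ref{thm:ft}.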
The adjoint is defined to be the operator that satisfies \begin{equation} \ip{y}{Ax}_F = \ip{A^*y}{x}_E \end{equation} for all $x\in E$ and $y\in F$. \begin{ex} Using this definition, show that the adjoint~$A^*$ is unique if it exists. \end{ex} \begin{ex} Using the definition, show carefully that $(A^*)^*=A$. \end{ex} Existence of the adjoint follows from the Riesz representation theorem which characterizes the dual of a Hilbert space. Namely, for any~$y$, the mapping $x\mapsto \ip{y}{Ax}_F$ is in~$E^*$, and by the representation theorem there is $z\in E$ so that $\ip{y}{Ax}_F=\ip{z}{x}_E$. It is easy to check that this~$z$ has to depend linearly on~$y$. This gives rise to a linear operator~$A^*$ which maps~$y$ to~$z$. It also turns out that~$A^*$ is bounded. \begin{ex} Show that $\aabs{A^*}=\aabs{A}$ in the operator norm. \end{ex} It is very convenient to work with self-adjoint operators. An operator~$A$ is called self-adjoint if~$A^*=A$. For the operator~$A$ to be self-adjoint, we must have $E=F$, but this is not always the case. Therefore it is convenient to replace~$A$ with its normal operator. Self-adjointness in itself is convenient, but the normal operator tends to be nicer than the original operator. \begin{ex} Using the definition of an adjoint given above, show that the normal operator of any bounded linear operator between Hilbert spaces is self-adjoint. \end{ex} In our case~$A$ is the X-ray transform. Now~$E$ is a function space over~$\mathbb{R}^n$ and~$F$ is a function space over the set of all lines in~$\mathbb{R}^n$. There is no natural way to identify the two spaces, so we will study the normal operator of the X-ray transform. Our goal is then to show that~$\mathcal{I}^*\mathcal{I}$ is injective, from which it follows that~$\mathcal{I}$ is injective; see exercise~\ref{ex:left-inv}. \subsection{Measures on spheres and sets of lines} \label{sec:measures} The sphere~$S^{n-1}$ has a canonical measure. 
We will not define it, but we will give some descriptions, some of which count as definitions for the reader with suitable knowledge of measure theory or differential geometry. We will give a similar treatment to the set of lines soon. In fact, a regular Borel measure is uniquely determined by the integrals of functions in~$C_c$, so our descriptions do secretly constitute a definition of a measure. The sphere inherits a metric from~$\mathbb{R}^n$. The metric allows us to define Hausdorff measures of any dimension, and the natural one has dimension $n-1$. The sphere~$S^{n-1}$ is also a smooth manifold of dimension $n-1$. It inherits a Riemannian metric from~$\mathbb{R}^n$, and the Riemannian metric induces a Riemannian volume form. This leads to the same measure as the Hausdorff approach. The Riemannian metric gives rise to a metric (distance along great circles in this case), and this intrinsic metric gives the same Hausdorff measure as the Euclidean (chordal) metric. We will denote the measure on the sphere by~$S$. The most important property for us is a spherical Fubini's theorem. For $f\in C_c(\mathbb{R}^n)$, we have \begin{equation} \int_{\mathbb{R}^n}f(x)\,\der x = \int_0^\infty\int_{S^{n-1}}f(r\omega)r^{n-1}\,\der S(\omega)\,\der r. \end{equation} This property could also be used as a definition of the measure~$S$. Let us denote the set of all straight lines in~$\mathbb{R}^n$ by~$\Gamma$. The lines themselves are easy to visualize, but the set of lines is a somewhat less intuitive geometrical object. There is a natural structure of a Riemannian manifold on~$\Gamma$, and that gives rise to other structures as well: topology, metric, measure, \dots The manifold structure is a bit tricky and unnecessary for us, but we will need to understand the structure and measure of~$\Gamma$. We will write lines as $x+v\mathbb{R}=\{x+vt;t\in\mathbb{R}\}\subset\mathbb{R}^n$ for $x\in\mathbb{R}^n$ and $v\in S^{n-1}$.
This parametrization is redundant --- each line is counted several times --- but in the set~$\Gamma$ every line is only included once. \begin{ex} Let $x_1,x_2\in\mathbb{R}^n$ and $v_1,v_2\in S^{n-1}$. When is $x_1+v_1\mathbb{R}=x_2+v_2\mathbb{R}$? \end{ex} We will give some more details on the structure of~$\Gamma$ later in connection with sphere bundles. For now we rely on intuition and acknowledge that the space~$C_c(\Gamma)$ of continuous and compactly supported functions $\Gamma\to\mathbb{C}$ is not rigorously defined. We point out that although the lines themselves are not compact, there are non-trivial compact sets in the space of lines. Let us then describe the measure~$\mu$ on~$\Gamma$. For any $v\in S^{n-1}$, we denote by $v^\perp\coloneqq\{x\in\mathbb{R}^n;x\cdot v=0\}$ the orthogonal complement of the space spanned by~$v$. The space~$v^\perp$ can be identified with~$\mathbb{R}^{n-1}$, and we denote the measure there by~$\mathcal{H}^{n-1}$ (the $(n-1)$-dimensional Hausdorff measure). The measure~$\mu$ is defined so that the integral of $g\in C_c(\Gamma)$ is \begin{equation} \int_\Gamma g(\gamma)\,\der\mu(\gamma) = \int_{S^{n-1}}\int_{v^\perp}g(x+v\mathbb{R})\,\der\mathcal{H}^{n-1}(x)\,\der S(v). \end{equation} In this representation the same line appears twice in the integral --- in both orientations. The same formula can be used for the space of oriented lines as well. The double counting could be removed by replacing~$S^{n-1}$ with its antipodal quotient (the real projective space of dimension $n-1$), but multiple counting of finite order is not an issue for our purposes. More precisely, one can define an equivalence relation~$\sim$ on~$S^{n-1}$ identifying antipodal points; the antipodal quotient is $S^{n-1}/{\sim}$. \begin{ex} For a vector $a\in\mathbb{R}^n$, define the translation operator $\phi_a\colon\Gamma\to\Gamma$ by $\phi_a(\gamma)=a+\gamma$.
Show that for $g\in C_c(\Gamma)$ we have $\int_\Gamma g\,\der\mu=\int_\Gamma g\circ\phi_a\,\der\mu$ for any~$a$. \end{ex} \begin{ex} The previous exercise shows that the measure~$\mu$ is translation invariant. It is also rotation invariant. What does this property mean? Write the statement in terms of the integral of an arbitrary function like above. Then prove the statement. \end{ex} In the definition above we chose to take base points of lines in direction~$v$ in the hyperplane~$v^\perp$. If the hyperplanes were chosen differently, the measure would still be translation invariant, but rotation invariance requires a good choice. If we were to use a single fixed hyperplane, it would fail to parametrize all the required lines when~$v$ is contained in it. However, this is a zero measure error. If the fixed hyperplane is given as~$w^\perp$ for some fixed~$w$, one would need to multiply the Hausdorff measure with $\abs{w\cdot v}$. Using~$v^\perp$ induces the inconvenience of changing the space of integration depending on~$v$, but the geometrical picture is far clearer. \subsection{The formal adjoint of the X-ray transform} Now we are ready to find the formal adjoint of the X-ray transform. The adjoint~$\mathcal{I}^*$ is a convenient operator turning (by composition) the X-ray transform into an operator from a function space to itself. To find a convenient operator, it suffices to find the formal normal operator. Our Hilbert spaces are~$L^2(\mathbb{R}^n)$ and~$L^2(\Gamma)$. In the definition of the adjoint, we will not use all~$L^2$ functions --- in fact, it is not important whether the X-ray transform is continuous or well defined $L^2(\mathbb{R}^n)\to L^2(\Gamma)$. Instead, we will only use functions in~$C_c(\mathbb{R}^n)$ and~$C_c(\Gamma)$ to find the adjoint. The whole point is to find an operator that ends up behaving nicely, and it does not matter how fishy the method to find the operator is. 
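A finite-dimensional toy model shows why the fishiness is harmless: once the transform is discretized, it is a matrix, the formal adjoint is simply the transpose, and the adjoint identity is exact. The sketch below (assuming NumPy; taking the pixel rows and columns of an image as the "lines" is a crude stand-in for~$\mathcal{I}$, not a faithful discretization) verifies the identity and shows that the transpose sums the data over the lines through a given pixel.

```python
import numpy as np

# Toy discretization: on an m-by-m pixel grid, the "lines" are the m rows
# and m columns. The discrete X-ray transform is the matrix A with one row
# per line, summing the pixels on that line.
m = 8
A = np.zeros((2 * m, m * m))
for i in range(m):
    for j in range(m):
        A[i, i * m + j] = 1.0      # line "row i" passes through pixel (i, j)
        A[m + j, i * m + j] = 1.0  # line "column j" passes through pixel (i, j)

rng = np.random.default_rng(1)
f = rng.normal(size=m * m)   # an "image"
g = rng.normal(size=2 * m)   # "data", one value per line

# The adjoint identity <Af, g> = <f, A^T g> is exact for matrices.
assert np.isclose((A @ f) @ g, f @ (A.T @ g))

# The transpose is a back projection: (A^T g) at pixel (i, j) is the sum of
# g over the two lines through that pixel.
ATg = A.T @ g
i, j = 2, 5
assert np.isclose(ATg[i * m + j], g[i] + g[m + j])
```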
We want to use the~$L^2$ inner product, but the adjoint as we defined it does not make sense since the X-ray transform is not continuous $L^2(\mathbb{R}^n)\to L^2(\Gamma)$. This is why we need a formal adjoint, found by using only the nicer subset~$C_c$ of~$L^2$ over both~$\mathbb{R}^n$ and~$\Gamma$. \begin{ex} Let $p\in[1,\infty)$. Let~$f$ be the characteristic function of the ball $B(0,R)\subset\mathbb{R}^n$. Show that $\aabs{f}_{L^p(\mathbb{R}^n)}=aR^{n/p}$ and $\aabs{\mathcal{I} f}_{L^p(\Gamma)}=bR^{(n+p-1)/p}$ for some constants~$a$ and~$b$ depending on the exponent~$p$ and the dimension~$n$. Therefore the X-ray transform is not continuous $L^p(\mathbb{R}^n)\to L^p(\Gamma)$ for any $p\in(1,\infty)$. (It is quite easy to see that the X-ray transform is also discontinuous for $p=\infty$ but is continuous for $p=1$.) \end{ex} Let $f\in C_c(\mathbb{R}^n)$ and $g\in C_c(\Gamma)$. Then \begin{equation} \label{eq:vv7} \begin{split} \ip{f}{\mathcal{I}^*g} &= \ip{\mathcal{I} f}{g} \\&= \int_{S^{n-1}}\int_{v^\perp} \overline{\mathcal{I} f(x+v\mathbb{R})} g(x+v\mathbb{R}) \,\der\mathcal{H}^{n-1}(x) \,\der S(v) \\&= \int_{S^{n-1}}\int_{v^\perp} \int_\mathbb{R} \overline{f(x+tv)} g(x+v\mathbb{R}) \,\der t \,\der\mathcal{H}^{n-1}(x) \,\der S(v) \\&\stackrel{\text{a}}{=} \int_{S^{n-1}} \int_{v^\perp} \int_\mathbb{R} \overline{f(x+tv)} g(x+tv+v\mathbb{R}) \,\der t \,\der\mathcal{H}^{n-1}(x) \,\der S(v) \\&\stackrel{\text{b}}{=} \int_{S^{n-1}} \int_{\mathbb{R}^n} \overline{f(y)} g(y+v\mathbb{R}) \,\der y \,\der S(v) \\&= \int_{\mathbb{R}^n} \overline{f(y)} \left( \int_{S^{n-1}} g(y+v\mathbb{R}) \,\der S(v) \right) \,\der y. \end{split} \end{equation} \begin{ex} Explain the steps a and b in~\eqref{eq:vv7}. \end{ex} Here we used Fubini's theorem on the direct sum (product) $v\mathbb{R}\oplus v^\perp=\mathbb{R}^n$. This calculation indicates that the formal adjoint is \begin{equation} \mathcal{I}^*g(x) = \int_{S^{n-1}} g(x+v\mathbb{R}) \,\der S(v). 
\end{equation} The formal adjoint of the X-ray transform is also known as the back projection operator. There is a certain kind of duality between points and lines. It might be more illuminating to describe the situation in words: \begin{itemize} \item For $f\in C_c(\mathbb{R}^n)$ and $\gamma\in\Gamma$, the X-ray transform~$\mathcal{I} f(\gamma)$ is the integral of~$f(x)$ over all~$x$ for which $x\in\gamma$. \item For $g\in C_c(\Gamma)$ and $x\in\mathbb{R}^n$, the back projection $\mathcal{I}^* g(x)$ is the integral of~$g(\gamma)$ over all~$\gamma$ for which $x\in\gamma$. \end{itemize} \noindent Now that we have found the adjoint, it remains to find the normal operator. \begin{ex} Show that for $f\in C_c(\mathbb{R}^n)$ we have \begin{equation} \label{eq:xrt-normal} \mathcal{I}^*\mathcal{I} f(x) = 2\int_{\mathbb{R}^n}f(x+y)\abs{y}^{1-n}\,\der y. \end{equation} This is the (formal) normal operator that we have been looking for. \end{ex} \subsection{Convolutions and Riesz potentials} Now, we ought to show that the normal operator $\mathcal{I}^*\mathcal{I}\colon C_c(\mathbb{R}^n)\to C(\mathbb{R}^n)$ defined by~\eqref{eq:xrt-normal} is injective. \begin{ex} \label{ex:left-inv} Consider a function $F\colon X\to Y$ between any two sets. Show that there is a left inverse $F^{-1}_L\colon Y\to X$ so that $F^{-1}_L\circ F=\id_X$ if and only if~$F$ is injective. (Similarly, invertibility from the right is equivalent with surjectivity, but we do not need this side. In fact, this equivalence for right inverses is equivalent with the axiom of choice, but for left inverses it is not. One-sided inverse functions are typically not unique.) Suppose we have a left inverse~$A$ for~$\mathcal{I}^*\mathcal{I}$. What is a left inverse of~$\mathcal{I}$? 
\end{ex} The convolution of two functions $f,h\colon\mathbb{R}^n\to\mathbb{C}$ is the function $f*h\colon\mathbb{R}^n\to\mathbb{C}$ defined by \begin{equation} f*h(x) = \int_{\mathbb{R}^n}f(x-y)h(y)\,\der y \end{equation} whenever this integral makes sense. \begin{ex} The normal operator is a convolution: $\mathcal{I}^*\mathcal{I} f=f*h$. What is~$h$? \end{ex} \begin{definition} For $\alpha\in(0,n)$, the Riesz potential~$I_\alpha$ is an integral operator defined by \begin{equation} I_\alpha f = f*h_\alpha, \end{equation} where \begin{equation} h_\alpha(x) = c_\alpha^{-1} \abs{x}^{\alpha-n} \end{equation} and~$c_\alpha$ is a constant. \end{definition} The Riesz representation theorem and the Riesz potential are named after two different people. They were brothers. To prove injectivity of the X-ray transform, we will show that the Riesz potentials are injective. \begin{theorem} \label{thm:riesz-potential} The Riesz potential $I_\alpha\colon C_c(\mathbb{R}^n)\to C(\mathbb{R}^n)$ is an injection for every $\alpha\in(0,n)$. \end{theorem} The proof of this property of Riesz potentials is postponed to the next section. We present the statement here because of its corollary: \begin{theorem} \label{xrtthm:riesz} The X-ray transform is injective on~$C_c(\mathbb{R}^n)$. \end{theorem} \begin{ex} Prove theorem~\ref{xrtthm:riesz} using the results and ideas obtained in this section. \end{ex} \subsection{Remarks} The normal operator depends on the choice of the target space and the inner product on it. If we parametrized lines redundantly with $\mathbb{R}^n\times S^{n-1}$, the adjoint of the X-ray transform would take $g\in C_c(\mathbb{R}^n\times S^{n-1})$ into the function \begin{equation} \mathcal{I}^*g(x) = \int_{S^{n-1}}\int_\mathbb{R} g(x+tv,v)\,\der t\,\der S(v). \end{equation} This is very similar to what we found before, but there is an additional integral over~$\mathbb{R}$. 
If we now try to compute the normal operator, we find that \begin{equation} \mathcal{I}^*\mathcal{I} f(x) = \int_{S^{n-1}}\int_\mathbb{R} \int_\mathbb{R} f(x+tv+sv)\,\der s\,\der t\,\der S(v). \end{equation} This differs from our earlier normal operator by the factor~$\int_\mathbb{R}\,\der t$ which is famously infinite. This is why redundancy in the parametrization of geodesics is problematic. We had two-fold redundancy, so we ended up with a factor~$2$ in our normal operator. It is easier to divide by~$2$ than by~$\infty$. Let us then see what the effect of changing inner products is. Let $P\in\mathbb{R}^{n\times n}$ and $Q\in\mathbb{R}^{m\times m}$ be symmetric and positive definite. Equip~$\mathbb{R}^n$ with the inner product \begin{equation} \ip{x}{y}_P = x^TPy \end{equation} and similarly~$\mathbb{R}^m$ with $\ip{{\,\cdot\,}}{{\,\cdot\,}}_Q$. Let $A\colon\mathbb{R}^n\to\mathbb{R}^m$ be a linear operator (matrix). Let $B\colon\mathbb{R}^m\to\mathbb{R}^n$ be the adjoint with respect to these inner products. That is, suppose \begin{equation} \ip{x}{Ay}_Q = \ip{Bx}{y}_P \end{equation} for all $x\in\mathbb{R}^m$ and $y\in\mathbb{R}^n$. \begin{ex} Show that $B=P^{-1}A^TQ$. Therefore the normal operator is $BA=P^{-1}A^TQA$. \end{ex} The conclusion is that changing the inner product can introduce an operator between~$\mathcal{I}^*$ and~$\mathcal{I}$, where~$\mathcal{I}^*$ is understood as the $L^2$-adjoint. This is popular in X-ray tomography. The corresponding inversion method is known as filtered back projection. \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{Riesz potentials} \label{sec:riesz} In this section we will study the injectivity of Riesz potentials. We lack the prerequisite theory of distributions to give a precise proof of theorem~\ref{thm:riesz-potential}, but we will discuss two approaches to prove injectivity.
\subsection{The Fourier approach} The first approach makes use of the Fourier transform. The calculations in this section are heuristic. It is possible to make rigorous sense of them and give a precise proof of theorem~\ref{thm:riesz-potential}, but we will avoid the technicalities. Consider a function $f\colon\mathbb{R}^n\to\mathbb{C}$ which we want to reconstruct from $I_\alpha f$ for some $\alpha\in(0,n)$. As in section~\ref{sec:normal}, denote $h_\alpha(x)=c_\alpha^{-1}\abs{x}^{\alpha-n}$. Now $I_\alpha f=f*h_\alpha$. We will take a Fourier transform of this identity. \begin{ex} Show that if $f,g\in C_c(\mathbb{R}^n)$, then $\mathcal{F}(f*g)(\xi)=\mathcal{F} f(\xi)\mathcal{F} g(\xi)$ for all $\xi\in\mathbb{R}^n$. \end{ex} The exercise is a simple calculation using definitions. However, the function~$h_\alpha$ is not in~$C_c(\mathbb{R}^n)$. It is locally integrable, but not in any~$L^p$ space. Nevertheless, the same property of convolutions and Fourier transforms does hold in more generality, and we have \begin{equation} \label{eq:riesz-ft} \mathcal{F}(I_\alpha f) = \mathcal{F} f\cdot\mathcal{F} h_\alpha \end{equation} in the sense of distributions. Both~$h_\alpha$ and~$I_\alpha f$ are distributions. Then we want to compute the Fourier transform~$\mathcal{F} h_\alpha$. Our definition of the Fourier transform is not applicable, but the definition can be extended to distributions. With such an extended definition one can calculate that \begin{equation} \mathcal{F} h_\alpha(\xi) = b_\alpha\abs{\xi}^{-\alpha} \end{equation} for some constant $b_\alpha>0$. Now if~$I_\alpha f$ vanishes, then by~\eqref{eq:riesz-ft} also~$\mathcal{F} f\cdot\mathcal{F} h_\alpha$ vanishes. That is, $\abs{\xi}^{-\alpha}\mathcal{F} f(\xi)=0$ for all~$\xi\neq0$. This implies that~$\mathcal{F} f$ vanishes, and so $f=0$. This shows injectivity of~$I_\alpha$. However, a number of steps were far from rigorous, including dividing by~$\abs{\xi}^{-\alpha}$ on the Fourier side.
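The convolution identity in the exercise above can also be checked numerically. In the sketch below (assuming NumPy; the indicator and triangle test functions are illustrative choices) the discrete sums satisfy $\mathcal{F}(f*g)=\mathcal{F} f\cdot\mathcal{F} g$ essentially to machine precision, because the identity holds exactly for the discretized integrals.

```python
import numpy as np

# Numerical check of F(f*g) = F f . F g for two compactly supported
# functions on the line; all integrals are plain Riemann sums.
n = 2001
x = np.linspace(-3, 3, n)
dx = x[1] - x[0]
f = np.where(np.abs(x) <= 1, 1.0, 0.0)              # indicator of [-1, 1]
g = np.where(np.abs(x) <= 1, 1.0 - np.abs(x), 0.0)  # triangle on [-1, 1]

# Samples of f*g: np.convolve(f, g)[m] = sum_k f[k] g[m-k]; multiplying by
# dx turns the sum into a quadrature of the convolution integral, sampled
# on the grid xc below (supp(f*g) is contained in [-2, 2]).
conv = np.convolve(f, g) * dx
xc = np.linspace(-6, 6, 2 * n - 1)

def ft(h, grid, xi):
    d = grid[1] - grid[0]
    return np.array([(h * np.exp(-1j * w * grid)).sum() * d for w in xi])

xi = np.linspace(-4, 4, 17)
# The discrete sums satisfy the identity exactly, so only rounding remains.
assert np.allclose(ft(conv, xc, xi), ft(f, x, xi) * ft(g, x, xi), atol=1e-8)
```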
This approach also gives an inversion formula: \begin{equation} f = \mathcal{F}^{-1}(\mu_\alpha\mathcal{F}(I_\alpha f)) , \end{equation} where $\mu_\alpha(\xi)=b_\alpha^{-1}\abs{\xi}^\alpha$. \subsection{The Laplace approach} The second approach makes use of the Laplace operator. We found in section~\ref{sec:normal} that the normal operator~$\mathcal{I}^*\mathcal{I}$ is, up to a multiplicative constant, the Riesz potential~$I_1$. To show that~$I_1$ is injective, we show that $I_1\circ I_1$ is injective. To make the argument rigorous, we assume $n\geq3$ and we will also assume more regularity in a moment. But first, let us see what the operator~$\mathcal{I}^*\mathcal{I}\,\mathcal{I}^*\mathcal{I}$ or~$I_1I_1$ does. \begin{lemma} \label{lma:I1I1=I2} If $f\in C_c(\mathbb{R}^n)$, $n\geq3$, there is a constant $c>0$ so that $I_1I_1f=cI_2f$. \end{lemma} In general, the Riesz potentials satisfy $I_\alpha I_\beta=I_{\alpha+\beta}$ (when $\alpha+\beta<n$), but we will not try to prove this in full generality. \begin{proof}[Proof of lemma~\ref{lma:I1I1=I2}] First, a simple calculation gives \begin{equation} \begin{split} c_1^2I_1I_1f(x) &= c_1 \int_{\mathbb{R}^n} I_1f(x-y)\abs{y}^{1-n} \,\der y \\&= \int_{\mathbb{R}^n} \left( \int_{\mathbb{R}^n} f(x-y-z) \abs{z}^{1-n} \,\der z \right) \abs{y}^{1-n} \,\der y \\&= \int_{\mathbb{R}^n} \left( \int_{\mathbb{R}^n} f(x-w) \abs{w-y}^{1-n} \,\der w \right) \abs{y}^{1-n} \,\der y \\&= \int_{\mathbb{R}^n} f(x-w) \left( \int_{\mathbb{R}^n} \abs{w-y}^{1-n} \abs{y}^{1-n} \,\der y \right) \,\der w. \end{split} \end{equation} By rotation invariance, the inner integral is \begin{equation} \int_{\mathbb{R}^n} \abs{w-y}^{1-n} \abs{y}^{1-n} \,\der y = \phi(\abs{w}) \end{equation} for some function~$\phi$. When $r>0$, a simple scaling argument (exercise) shows that $\phi(r)=r^{2-n}\phi(1)$. Therefore \begin{equation} c_1^2I_1I_1f(x) = \phi(1) \int_{\mathbb{R}^n} f(x-w) \abs{w}^{2-n} \,\der w = \phi(1)c_2I_2f(x) . \end{equation} This is the desired conclusion.
\end{proof} \begin{ex} Explain why~$\phi(1)$ is a finite positive number. \end{ex} \begin{ex} Make the simple scaling argument. \end{ex} Now we will turn to inverting~$I_2$. The inverse operator is simply --- and perhaps surprisingly --- the Laplacian. In fact, we will show that $-b\Delta I_2f=f$ for a suitable constant $b>0$. For technical convenience, we assume $f\in C^2_c(\mathbb{R}^n)$ and $n\geq3$. We have \begin{equation} I_2f(x) = c_2^{-1} \int_{\mathbb{R}^n} f(x-y) \abs{y}^{2-n} \,\der y. \end{equation} Since $f\in C^2_c(\mathbb{R}^n)$ and $y\mapsto\abs{y}^{2-n}$ is locally integrable, application of lemma~\ref{lma:d-int} gives \begin{equation} c_2 \Delta I_2f(x) = \int_{\mathbb{R}^n} (\Delta f)(x-y) \abs{y}^{2-n} \,\der y. \end{equation} We split the integral into two parts, integrating separately near the singularity at $y=0$ and far from it. For any $\varepsilon>0$ (which we will later send to zero) we have \begin{equation} \begin{split} c_2 \Delta I_2f(x) &= \underbrace{ \int_{B(0,\varepsilon)} (\Delta f)(x-y) \abs{y}^{2-n} \,\der y }_{\eqqcolon P(x,\varepsilon)} \\&\qquad+ \underbrace{ \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} (\Delta f)(x-y) \abs{y}^{2-n} \,\der y }_{\eqqcolon Q(x,\varepsilon)}. \end{split} \end{equation} By a direct computation \begin{equation} \abs{P(x,\varepsilon)} \leq \max\abs{\Delta f} \cdot \int_{B(0,\varepsilon)} \abs{y}^{2-n} \,\der y \to 0 \end{equation} as $\varepsilon\to0$. \begin{ex} Verify this limit, either by direct calculation in spherical coordinates or by appealing to local integrability of $y\mapsto\abs{y}^{2-n}$ and absolute continuity of the Lebesgue integral. \end{ex} The second integral contains no singularities, and we may integrate by parts. Let us first recall a more general result: \begin{ex} Let $\Omega\subset\mathbb{R}^n$ be an open set with smooth boundary and denote the exterior unit normal vector by~$\nu$. Denote the surface measure on~$\partial\Omega$ by~$S$.
Suppose $u\in C^2(\mathbb{R}^n)$ and $v\in C^2_c(\mathbb{R}^n)$. Show that \begin{equation} \begin{split} \int_\Omega u(x)\Delta v(x) \,\der x &= \int_\Omega v(x)\Delta u(x) \,\der x \\&\qquad+ \int_{\partial\Omega}(u(x)\nabla v(x)-v(x)\nabla u(x))\cdot\nu(x)\,\der S(x). \end{split} \end{equation} Find or recall suitable integration by parts formulas. \end{ex} We will use this exercise in our specific case. Note that when $\Omega=\mathbb{R}^n\setminus\bar B(0,\varepsilon)$, we may freely change the values of our functions near the origin to make them smooth. We find \begin{equation} \begin{split} Q(x,\varepsilon) &= \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} (\Delta_x f)(x-y) \abs{y}^{2-n} \,\der y \\&= \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} (\Delta_y f)(x-y) \abs{y}^{2-n} \,\der y \\&= \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} f(x-y) \Delta_y\abs{y}^{2-n} \,\der y \\&\quad+ \int_{\partial B(0,\varepsilon)}\abs{y}^{2-n}\nabla f(x-y)\cdot y\abs{y}^{-1}\,\der S(y) \\&\quad+ \int_{\partial B(0,\varepsilon)}f(x-y)\nabla\abs{y}^{2-n}\cdot y\abs{y}^{-1}\,\der S(y) . \end{split} \end{equation} The exterior unit normal vector at $x\in\partial(\mathbb{R}^n\setminus\bar B(0,\varepsilon))$ is $-x/\abs{x}$ and the gradient of $f(x-y)$ with respect to~$y$ is $-\nabla f$ evaluated at $x-y$. This makes all the signs as they are. This integral can be simplified significantly. \begin{ex} Show that $\nabla_y\abs{y}^{2-n}=(2-n)\abs{y}^{-n}y$. \end{ex} \begin{ex} Show that $\Delta_y\abs{y}^{2-n}=0$ when $y\neq0$. \end{ex} \begin{ex} Show that \begin{equation} \int_{\partial B(0,\varepsilon)}\abs{y}^{2-n}\,\der S(y) = a\varepsilon \end{equation} for some constant $a>0$ depending on dimension. Therefore the corresponding term vanishes as $\varepsilon\to0$.
\end{ex} The only term of~$Q(x,\varepsilon)$ that does not vanish as $\varepsilon\to0$ is \begin{equation} \begin{split} & \int_{\partial B(0,\varepsilon)}f(x-y)\nabla\abs{y}^{2-n}\cdot y\abs{y}^{-1}\,\der S(y) \\&= \int_{\partial B(0,\varepsilon)}f(x-y)(2-n)\abs{y}^{1-n}\,\der S(y) \\&= \int_{\partial B(0,\varepsilon)}f(x-y)(2-n)\varepsilon^{1-n}\,\der S(y) \\&= \int_{\partial B(0,1)}f(x-\varepsilon z)(2-n)\,\der S(z). \end{split} \end{equation} As $\varepsilon\to0$, we have $f(x-\varepsilon z)\to f(x)$ uniformly for $z\in\bar B(0,1)$. \begin{ex} Collect the observations we have made and show that \begin{equation} c_2 \Delta I_2f(x) = (2-n)af(x). \end{equation} Therefore with a suitable choice of $b>0$ we have $-b\Delta I_2f=f$. \end{ex} We have proven a lemma for the inversion of~$I_2$: \begin{lemma} \label{lma:laplace-I2} Let $n\geq3$. There is a constant $b>0$ depending on~$n$ so that \begin{equation} -b\Delta I_2f=f \end{equation} for all $f\in C^2_c(\mathbb{R}^n)$. \end{lemma} This allows us to prove some injectivity results for Riesz potentials. \begin{theorem} If $n\geq3$, the Riesz potentials~$I_1$ and~$I_2$ are injective on the space~$C^2_c(\mathbb{R}^n)$. \end{theorem} \begin{ex} Prove the theorem using lemmas~\ref{lma:I1I1=I2} and~\ref{lma:laplace-I2}. \end{ex} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \section{Partial data} \label{sec:partial} In this section we will give our sixth injectivity proof based on Fourier analysis on~$\mathbb{R}^n$. This method will also give a partial data result. \subsection{Various kinds of limitations} In real life measurement situations there are various kinds of limitations to the measurements. Sometimes one can fire X-rays through an object in any position and direction, but not always. One might need to avoid hitting something sensitive, or the geometry of the measurement situation restricts the available directions.
In general, one might ask how large a set of lines is needed so that a function is uniquely determined by its integrals over them. So far we have mostly studied X-ray tomography with full data. We have had one partial data result, namely Helgason's support theorem (theorem~\ref{thm:helgason}), which concerns tomography around a convex obstacle. In this section we will study a particular partial data scenario, where the set of directions of X-rays is not the whole sphere. This is called limited angle tomography. \subsection{Full data with the Fourier transform} Before embarking on the study of partial data, let us first solve the simpler full data problem with these tools. Consider a function $f\in C_c(\mathbb{R}^n)$ and define the X-ray transform as \begin{equation} \mathcal{I} f(x,v) = \int_\mathbb{R} f(x+tv)\,\der t. \end{equation} We could also have used the notation~$\mathcal{I}_vf(x)$ as in section~\ref{sec:torus}. We will restrict the parameters $(x,v)\in\mathbb{R}^n\times S^{n-1}$ so that $x\cdot v=0$. In other words, $x\in v^\perp$, where~$v^\perp$ denotes the subspace orthogonal to~$v$. Since $\mathcal{I} f(x+sv,v)=\mathcal{I} f(x,v)$ for any $s\in\mathbb{R}$, this restriction does not reduce our data. In other words, $\mathcal{I} f(x,v)$ for all parameters $(x,v)\in\mathbb{R}^n\times S^{n-1}$ is uniquely determined by the restriction to $v\in S^{n-1}$ and $x\in v^\perp$. Fix any $v\in S^{n-1}$ and consider the function $\mathcal{I} f({\,\cdot\,},v)$ on the $(n-1)$-dimensional space $v^\perp\subset\mathbb{R}^n$. We can calculate the Fourier transform of $\mathcal{I} f({\,\cdot\,},v)$ on this space~$v^\perp$. This function is continuous and compactly supported: $\mathcal{I} f({\,\cdot\,},v)\in C_c(v^\perp)$. For $\xi\in v^\perp$, we denote \begin{equation} (\mathcal{F}_{v^\perp}\mathcal{I} f({\,\cdot\,},v))(\xi) = \int_{v^\perp} e^{-i\xi\cdot x} \mathcal{I} f(x,v) \,\der\mathcal{H}^{n-1}(x).
\end{equation} \begin{ex} \label{ex:og-fourier} Fix any $v\in S^{n-1}$. Suppose $\xi\in\mathbb{R}^n$ is orthogonal to~$v$. Show that $(\mathcal{F}_{v^\perp}\mathcal{I} f({\,\cdot\,},v))(\xi)=\mathcal{F} f(\xi)$. This result is known as the Fourier slice theorem. \end{ex} \begin{theorem} \label{xrtthm:fourier} If $f\in C_c(\mathbb{R}^n)$ integrates to zero over all lines, then $f=0$. \end{theorem} \begin{proof} Take any $\xi\in\mathbb{R}^n$ and choose $v\in S^{n-1}$ so that $v\cdot\xi=0$. Since $\mathcal{I} f({\,\cdot\,},v)=0$, exercise~\ref{ex:og-fourier} gives $\mathcal{F} f(\xi)=0$. Therefore $\mathcal{F} f=0$, and by injectivity of the Fourier transform also $f=0$. \end{proof} \subsection{Limited angle tomography} Now we turn to our partial data problem. Let $D\subset S^{n-1}$ be the set of allowed directions. The question is whether $f\in C_c(\mathbb{R}^n)$ is uniquely determined by~$\mathcal{I} f(x,v)$ for all $x\in\mathbb{R}^n$ and $v\in D$. If $D=S^{n-1}$, then the result is stated in theorem~\ref{xrtthm:fourier} --- and in our other injectivity theorems. However, if~$D$ is finite, it turns out that there is an infinite-dimensional subspace of functions $f\in C_c(\mathbb{R}^n)$ for which the data vanishes: $\mathcal{I} f|_{\mathbb{R}^n\times D}=0$. The simplest case is not hard to see. \begin{ex} Suppose $D=\{v\}$ is a singleton. Show that there is a function $f\in C_c(\mathbb{R}^n)\setminus\{0\}$ for which $\mathcal{I} f({\,\cdot\,},v)=0$. \end{ex} The general case for any finite set follows from a convolution argument: \begin{ex} Fix any $v\in S^{n-1}$. Let $f,g\in C_c(\mathbb{R}^n)$. Show that $\mathcal{I}(f*g)({\,\cdot\,},v)=(\mathcal{I} f({\,\cdot\,},v))*g$. (It then follows that if $\mathcal{I} f({\,\cdot\,},v_1)=0$ and $\mathcal{I} g({\,\cdot\,},v_2)=0$, then $\mathcal{I}(f*g)({\,\cdot\,},v)=0$ for both $v\in\{v_1,v_2\}$.
The only thing left to worry about is that the convolution of two non-trivial compactly supported functions cannot vanish identically, but we shall not worry about it here.) \end{ex} Let us denote \begin{equation} D^\perp = \{\xi\in\mathbb{R}^n;\xi\cdot v=0\text{ for some }v\in D\}. \end{equation} Notice that~$D^\perp$ is not the orthogonal complement of the linear space spanned by~$D$. Instead, it is the union of the orthogonal complements of the elements in~$D$. In general~$D^\perp$ is not a vector space. Now we are ready to state and prove our theorem: \begin{theorem} \label{thm:limited-angle} Let $f\in C_c(\mathbb{R}^n)$. Suppose $D\subset S^{n-1}$ is such that $D^\perp\subset\mathbb{R}^n$ contains an interior point. If $\mathcal{I} f(x,v)=0$ for all $x\in\mathbb{R}^n$ and $v\in D$, then $f=0$. \end{theorem} \begin{proof} Take any $\xi\in D^\perp$. Then there is $v\in D$ so that $v\cdot\xi=0$. Since $\mathcal{I} f({\,\cdot\,},v)=0$, exercise~\ref{ex:og-fourier} gives $\mathcal{F} f(\xi)=0$. Therefore the Fourier transform~$\mathcal{F} f$ vanishes in~$D^\perp$. The continuous function~$f$ is compactly supported, so by theorem~\ref{thm:pw} the Fourier transform~$\mathcal{F} f$ is real analytic. By assumption~$D^\perp$ contains a non-empty open set, and~$\mathcal{F} f$ vanishes in it. Now exercise~\ref{ex:analytic-open} implies that~$\mathcal{F} f$ has to vanish identically due to analyticity. Since the Fourier transform is injective as mentioned in theorem~\ref{thm:ft}, we conclude that the function~$f$ vanishes identically. \end{proof} A new question arises: How much is needed about the set~$D$ of admissible directions to ensure that~$D^\perp$ contains an interior point? We will look at a couple of examples. First, it is clear that~$D$ needs to be uncountable. If~$D$ is countable, then~$D^\perp$ is a union of countably many hyperplanes, and such a union cannot have interior points. 
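The last claim can be justified with the Baire category theorem; the following sketch records the argument. If $D=\{v_1,v_2,\dots\}$ is countable, then \begin{equation} D^\perp = \bigcup_{k=1}^\infty v_k^\perp, \end{equation} where each hyperplane~$v_k^\perp$ is closed and has empty interior. If~$D^\perp$ had an interior point, some closed ball would be covered by countably many nowhere dense sets, which the Baire category theorem forbids. Hence~$D^\perp$ has empty interior whenever~$D$ is countable.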
By a simple approximation argument one can replace~$D$ with~$\bar D$, so we may in fact assume that~$D$ is closed if we want to. If~$D$ contains an interior point, so does~$D^\perp$ (exercise~\ref{ex:perp-interior}). In fact, much less is needed, as the next exercise shows. \begin{ex} Consider the spacetime $\mathbb{R}^4=\mathbb{R}^3\times\mathbb{R}$ and imagine that measurements are only done along light rays. In natural units ($c=1$) this means that our set~$D$ is $\{(v_1,v_2)\in\mathbb{R}^3\times\mathbb{R};\abs{v_1}^2+\abs{v_2}^2=1,\abs{v_1}=\abs{v_2}\}$. The restriction to unit length is just a feature of our framework and unimportant for this case; the important restriction is $\abs{v_1}=\abs{v_2}$. This condition defines the light cone (the set of lightlike directions). Show that \begin{equation} D^\perp = \{(\eta_1,\eta_2)\in\mathbb{R}^3\times\mathbb{R};\abs{\eta_1}\geq\abs{\eta_2}\}. \end{equation} This contains the spacelike and lightlike directions but not the timelike ones. \end{ex} \begin{ex} \label{ex:perp-interior} Show that if $v\in S^{n-1}$ is an interior point of~$D$, then any non-zero $\xi\in\mathbb{R}^n$ orthogonal to~$v$ is an interior point of~$D^\perp$. \end{ex} \subsection{On stability and singularities} We found out above that~$D^\perp$ having an interior point is sufficient for injectivity. However, it is insufficient for stability. Stability (of Lipschitz type) would mean an estimate of the kind \begin{equation} \aabs{f} \leq C \aabs{\mathcal{I} f} \end{equation} with suitable norms and~$\mathcal{I} f$ restricted to the set where data is available. No matter which Sobolev norms one chooses for functions on~$\mathbb{R}^n$ and the relevant subset of~$\Gamma$, there is no continuous left inverse for the partial data X-ray transform. Stable inversion is possible if $D=S^{n-1}$, and more generally whenever $D^\perp=\mathbb{R}^n$. The reason for instability is that some kinds of singularities are undetected.
Using microlocal analysis one can have very fine control over singularities, and it is possible to ask whether a distribution is smooth or singular at a given point in a given direction. This requires the introduction of the wave front set. To be able to detect a singularity at a point $x\in\mathbb{R}^n$ in direction $v\in S^{n-1}$, the data must contain a line through~$x$ in a direction orthogonal to~$v$. A precise formulation of results of this kind is beyond our reach here. In our limited angle tomography situation the condition mentioned above amounts to $D^\perp=\mathbb{R}^n$. In this case one can reconstruct the Fourier transform everywhere directly, and invert the Fourier transform to obtain the original function. This is stable. On the other hand, if~$D^\perp$ contains interior points but is not the whole space, then the data needs to be analytically continued (by virtue of exercise~\ref{ex:analytic-open}), and analytic continuation is unstable without strong a priori estimates. \begin{ex} Let $\Gamma'\subset\Gamma$ be a set of unoriented lines in~$\mathbb{R}^n$. Suppose~$\Gamma'$ satisfies the stability condition mentioned above: For every $x\in\mathbb{R}^n$ and $v\in S^{n-1}$ there is a line $\gamma\in\Gamma'$ going through~$x$ in a direction orthogonal to~$v$. Show that if $n=2$, then $\Gamma'=\Gamma$, but if $n\geq3$, this is not necessarily the case. \end{ex} \subsection{Local reconstruction} Another interesting question is whether a function can be reconstructed at a point from integrals over lines near that point. More precisely, let $x\in\mathbb{R}^n$ and let $U\ni x$ be a neighborhood (a region of interest). Does the knowledge of $\mathcal{I} f(\gamma)$ for all~$\gamma$ that meet~$U$ determine~$f|_U$ for some class of functions~$f$? It turns out that this is not possible. However, this data is enough to detect the singularities of~$f|_U$.
That is, one can locally reconstruct jumps and other singularities accurately, but not a smooth function. Microlocal reconstruction is possible, local is not. In many practical applications it is indeed important to find the singularities of the unknown to identify sharp features, and it is not a big issue if the smooth part remains beyond reach. \begin{ex} \label{ex:roi-normal} Let $f\in C_c(\mathbb{R}^n)$ and let $U\subset\mathbb{R}^n$ be an open set. Explain why the integrals of~$f$ over all lines that meet~$U$ determine~$\mathcal{I}^*\mathcal{I} f|_U$. \end{ex} However, local reconstruction is possible for the Radon transform in three dimensions. Consider a point $x\in\mathbb{R}^3$ and a neighborhood $U\ni x$. Then the integrals of $f\in C_c(\mathbb{R}^3)$ over all hyperplanes that meet~$U$ determine~$f|_U$. As in exercise~\ref{ex:roi-normal}, this data determines~$R^*Rf|_U$, where~$R$ stands for the Radon transform. It turns out that for some constant $c>0$ we have $-c\Delta R^*Rf=f$, where~$\Delta$ is the Laplace operator, so that~$f$ can be recovered from~$R^*Rf$ by differentiation. In~$\mathbb{R}^n$, the normal operator of the X-ray transform can be inverted by the non-local operator~$(-\Delta)^{1/2}$ and that of the Radon transform by $(-\Delta)^{(n-1)/2}$. Local reconstruction is possible for the Radon transform in odd dimensions starting at three: there the exponent $(n-1)/2$ is an integer, so the inversion operator is an integer power of the Laplacian and therefore a local (differential) operator. Non-integer powers of the Laplace operator can be defined via Fourier transform. \begin{ex} We saw above that local reconstruction for the Radon transform is possible in~$\mathbb{R}^3$. On the other hand, we saw in exercise~\ref{ex:radon-xrt} that injectivity of the Radon transform implies injectivity for the X-ray transform. Why does this not lead to local reconstruction for the X-ray transform in~$\mathbb{R}^3$? \end{ex} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear?
Were there mistakes?\end{ex} \section{Outlook} \label{sec:outlook} To conclude the course we review some directions of further study on the subject. The written descriptions are brief; they will be elaborated on and discussed in the lecture. The statements will not be made fully precise; the purpose is to give a flavor of what is known, not all details. Some results have been weakened for technical convenience. Before looking further, let us briefly summarize the course. \subsection{Overview of the course} It is now time to look back and see what, if anything, we have accomplished during the course. The course started with the physical problem of X-ray tomography and its mathematical formulation. We then proved a uniqueness result in six different ways. These results are collected in theorems \ref{xrtthm:torus}, \ref{xrtthm:cormack}, \ref{xrtthm:radon}, \ref{xrtthm:pestov}, \ref{xrtthm:riesz}, and \ref{xrtthm:fourier}. However, these do not exhaust all known inversion methods. Our methods made various different assumptions, but they all proved this: \begin{theorem} Suppose $f\in C^\infty_c(\mathbb{R}^n)$, $n\geq2$. If $\mathcal{I} f=0$, then $f=0$. \end{theorem} In addition, we proved a number of related results. We proved an injectivity result for the X-ray transform on tori (theorem~\ref{thm:xrt-torus}) and solenoidal injectivity for vector field tomography (theorem~\ref{thm:vf}). We also proved two partial data results, namely Helgason's support theorem (theorem~\ref{thm:helgason}) and a result in limited angle tomography (theorem~\ref{thm:limited-angle}). \begin{ex} We gave six uniqueness proofs for the X-ray transform. Give a quick overview of the mathematical tools needed for each of the six proofs. Give six lists, one for each proof. Which proofs did you find most accessible and simple, and which ones hardest to follow?
\end{ex} \subsection{Geodesic X-ray tomography} So far we have studied functions in Euclidean domains and integrated them over straight lines. But what if the domain is replaced by a Riemannian manifold with boundary and lines by geodesics? Most of our methods are inapplicable on manifolds, but not all: most of our tools (Fourier series, Fourier transform, convolutions, polar coordinates) fail on a general manifold. If the manifold happens to be spherically symmetric, then our radial Fourier series approach works: \begin{theorem} Equip the closed unit ball $\bar B\subset\mathbb{R}^n$ with a rotation symmetric Riemannian metric so that every maximal geodesic meets the boundary and therefore has finite length. Such manifolds are called non-trapping. On a non-trapping spherically symmetric Riemannian manifold a function is uniquely determined by its integrals over all geodesics. \end{theorem} Spherical symmetry is a strong requirement. Our proof idea with the sphere bundle and the Pestov identity works in more generality: \begin{theorem} A compact manifold with boundary is called simple if any two points can be joined with a unique geodesic and the geodesic depends smoothly on its endpoints. On a simple Riemannian manifold a function is uniquely determined by its integrals over all geodesics. \end{theorem} The second theorem does not contain the first one; there are non-simple but non-trapping rotation symmetric manifolds. \subsection{Tensor tomography} \label{sec:outlook-tt} A scalar function can be replaced with a tensor field of any rank. So far we have studied only rank zero (scalar fields) and rank one (vector fields). To go further, one must first understand what a tensor field is, and then figure out how to integrate such fields along lines (or geodesics). As in the case of vector fields, there is non-uniqueness for any non-zero rank. The goal is then to characterize this non-uniqueness.
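Anticipating the precise definitions given below, the source of this non-uniqueness can be recorded in a short computation; we sketch it on~$\mathbb{R}^n$ with straight lines, using the averaging convention for the symmetrized derivative~$\mathrm{d}^s$. Let~$h$ be a compactly supported symmetric tensor field of rank $m-1$ and let $f=\mathrm{d}^sh$. When all~$m$ slots are evaluated on the same vector~$v$, the symmetrization plays no role and \begin{equation} f(x;v,\dots,v) = v\cdot\nabla_x\left[h(x;v,\dots,v)\right]. \end{equation} Along a line $\gamma(t)=x+tv$ the integrand is then a total derivative, so \begin{equation} \int_\gamma f = \int_\mathbb{R} \frac{\mathrm{d}}{\mathrm{d} t}\,h(x+tv;v,\dots,v)\,\mathrm{d} t = 0 \end{equation} by compact support. Every such potential field is invisible to the data, exactly as in the vector field case $m=1$.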
A symmetric rank~$m$ tensor field on~$\mathbb{R}^n$ is a function $f\colon\mathbb{R}^n\times\mathbb{R}^{nm}\to\mathbb{R}$ so that $f(x;v_1,\dots,v_m)$ is smooth in~$x$, linear in each~$v_i$ and invariant under the exchange of any two~$v_i$ and~$v_j$. The integral of such a tensor field over a line $\gamma\colon\mathbb{R}\to\mathbb{R}^n$ is \begin{equation} \int_\gamma f = \int_\mathbb{R} f(\gamma(t);\dot\gamma(t),\dots,\dot\gamma(t))\,\mathrm{d} t. \end{equation} That is, the velocity~$\dot\gamma$ is plugged into each of the~$m$ slots. A scalar function is a tensor field of rank zero, and a vector field is one of rank one. If $m=1$, there is only one slot for~$v$, and this is the integral of a vector field along a line as defined earlier. If $m=0$, there are no slots at all and the function only depends on~$x$. The resulting integral is the usual integral of a scalar function we have studied in this course. \begin{theorem} On a two-dimensional simple manifold a tensor field~$f$ of order~$m$ integrates to zero over all geodesics if and only if there is a tensor field~$h$ of order $m-1$ which vanishes at the boundary and satisfies $f=\mathrm{d}^s h$, where~$\mathrm{d}^s$ is a symmetrized covariant derivative. \end{theorem} \subsection{Linearization of travel time} As one specific example of an application, we can consider travel time tomography in seismology. The problem can be recast as a geometrical one, once the Earth is treated as a geometrical object. The linearized problem has non-uniqueness, but it corresponds to the non-uniqueness inherent to the geometrical problem. This has physical repercussions. \begin{theorem} \label{thm:outlook-lin} Let~$M$ be a manifold and~$g_s$ a family of Riemannian metrics on it, depending on a parameter $s\in\mathbb{R}$. Consider two points $x,y\in M$. Let~$\gamma_s$ be a geodesic with respect to the metric~$g_s$ joining these two points. Suppose~$\gamma_s$ depends smoothly on~$s$ and denote the length of~$\gamma_s$ by~$\ell_s(\gamma_s)$.
Denote by $f_s=\partial_sg_s$ the second order tensor field obtained by differentiating the metric with respect to the parameter. Then \begin{equation} \frac{\mathrm{d}}{\mathrm{d} s}\ell_s(\gamma_s) = \frac12\int_{\gamma_s}f_s. \end{equation} \end{theorem} The boundary distance function of a manifold~$M$ with boundary is the restriction of the distance function $d\colon M\times M\to\mathbb{R}$ to $\partial M\times\partial M$. Then one can ask whether the Riemannian manifold $(M,g)$ is uniquely determined by its boundary distance function. This problem is hard and non-linear, but the linearization is simpler. Linearized travel time tomography is tensor tomography of rank two. If one studies conformal variations of a metric, then each~$g_s$ is a conformal multiple of~$g_0$. In that case~$f_s$ is also a conformal multiple of~$g_s$, and the tensor tomography problem reduces to a scalar tomography problem for the conformal factor. Both the original problem and the linearized one have a gauge freedom. In the original problem one can change coordinates on~$M$ by a diffeomorphism $\phi\colon M\to M$ and change the metric from~$g$ to~$\phi^*g$; the data stays the same as long as $\phi|_{\partial M}=\id$. In the linearized problem one can only hope to reconstruct the metric perturbation~$f_s$ up to tensor fields of the form~$\mathrm{d}^sh$, where~$h$ is a covector field (one-form) vanishing at the boundary. It turns out that this gauge freedom in the linearized problem is the linearization of the gauge freedom of coordinate changes in the non-linear one. We also discussed a linearized travel time tomography problem in section~\ref{sec:doppler-application}. \subsection{Other classes of curves} So far we have only integrated functions (and vector fields) over straight lines. We also mentioned Riemannian geodesics, but there are several other options as well.
Of course, this can be seen as the mathematical art of (over)generalization, but a great number of different geometrical situations turn out to be physically relevant. As a broad term, inverse problems in integral geometry ask to recover an object from its integrals over a collection of subsets of the space. Integral geometry questions are often asked in a differential geometric setting, but the apparent duality of the concepts is coincidental. Some classes of curves to consider: \begin{itemize} \item Lines in~$\mathbb{R}^n$. \item Circles in~$\mathbb{R}^2$. \item Geodesics on a Riemannian manifold. \item Magnetic geodesics on a manifold or~$\mathbb{R}^n$. \item Geodesics on a Finsler manifold. \item Integral curves of a dynamical system. \item Curves which reflect or split in some way. \end{itemize} \subsection{Further reading} The following selection of books, lecture notes and review articles is by no means complete. The sources listed here are available online. \begin{itemize} \item Gunther Uhlmann, ``Inverse problems: seeing the unseen'', Bulletin of Mathematical Sciences, volume 4, issue 2, pp. 209--279, 2014: An overview of two important inverse problems, namely travel time tomography and Calder\'on's problem. Both are related to X-ray transforms of some kind. \\[-.2em]{\small\url{https://link.springer.com/article/10.1007/s13373-014-0051-9}} \item Sigur\dh{}ur (Sigurdur) Helgason, ``The Radon Transform'', second edition, 1999 and ``Integral Geometry and Radon Transforms'', first edition, 2011: The books are freely available on the author's homepage. They give a very thorough treatment of the theory of X-ray transforms and particularly Radon transforms. \\[-.2em]{\small\url{http://www-math.mit.edu/~helgason/publications.html}} \item Vladimir Sharafutdinov, ``Ray Transform on Riemannian Manifolds. Eight Lectures on Integral Geometry'', 1999: Lecture notes on X-ray tomography on Riemannian manifolds. 
\\[-.2em]{\small\url{http://www.math.nsc.ru/~sharafutdinov/publ.html}} \item Gabriel Paternain, Mikko Salo, and Gunther Uhlmann, ``Tensor tomography: progress and challenges'', Chinese Ann. Math. Ser. B 35, no. 3, 399--428, 2014: A review article on X-ray tomography on manifolds, with focus on tensor fields. \\[-.2em]{\small\url{https://arxiv.org/abs/1303.6114}} \item Will Merry and Gabriel Paternain, ``Inverse Problems in Geometry and Dynamics'', 2011: Lecture notes on geodesic X-ray tomography with a dynamical focus. \\[-.2em]{\small\url{https://www.dpmms.cam.ac.uk/~gpp24/ipgd(3).pdf}} \item Gunther Uhlmann and Hanming Zhou, ``Journey to the Center of the Earth'', 2016: A review article on travel time tomography, which is closely related to tensor tomography. \\[-.2em]{\small\url{https://arxiv.org/abs/1604.00630}} \end{itemize} \begin{ex}Do you have any questions or comments regarding section~\thesection? Was something confusing or unclear? Were there mistakes?\end{ex} \subsection{Feedback} This course is somewhat unusual, compared to most courses in mathematics. To make the course more suitable for students in the future, the last exercises concern the course itself. \begin{ex} This course introduced a physical problem and a number of mathematical tools related to that application. Should there have been more focus on physics --- more applications, more details, deeper explanations, or something else? Was the balance between physics and mathematics good for your interests? (Also, what is your major?) \end{ex} \begin{ex} The course was designed to be broad but shallow. We discussed a number of different mathematical tools and ideas related to X-ray tomography, but we did not go very deep into any of them. This was done so as to give you a broad overview of the topic and an example of how various different tools in analysis can be used to tackle the same applied problem.
On the other hand, we could have developed a theory with optimal regularity (not restricting to continuous functions all the time), stability estimates, range characterizations, various function spaces and the like. Should the course have been deeper as opposed to broad? Would you have preferred to see one theory developed in detail rather than several independent ideas? It would also be possible to give a broad introductory course like this one and then a deeper follow-up course. \end{ex} \begin{ex} Answer the following questions: \begin{enumerate}[(a)] \item How much time did you spend on the course? \item What were the worst things about the course? \item Would you be interested in a course in the geometry of geodesics and geodesic X-ray tomography on Riemannian manifolds? \item Do you have any feedback in mind that was not covered by other questions? \end{enumerate} Thank you for your feedback! \end{ex} Previous feedback has been of immense help in improving these notes. Many thanks to all students who contributed!
\section{Introduction} In the standard electroweak model the origin of quark masses is attributed to the Yukawa interactions and the Higgs mechanism. But the model gives no quantitative prediction for the structures of the Yukawa coupling matrices $Y^{}_{+2/3}$ and $Y^{}_{-1/3}$ in the $Q=+2/3$ and $Q=-1/3$ quark sectors, respectively. That is why there is no explanation of the observed strong hierarchies of quark masses, namely $m^{}_u/m^{}_c \sim m^{}_c/m^{}_t \sim \lambda^4$ and $m^{}_d/m^{}_s \sim m^{}_s/m^{}_b \sim \lambda^2$ with $\lambda \simeq 0.2$ \cite{XZZ}, within the standard model. In other words, why are the three eigenvalues of the Yukawa coupling matrix $Y^{}_{+2/3}$ or $Y^{}_{-1/3}$ (i.e., $f^{}_\alpha = m^{}_\alpha/v$ with $v \simeq 174$ GeV being the vacuum expectation value and $\alpha$ running over $u$, $c$ and $t$ for $Y^{}_{+2/3}$ or $d$, $s$ and $b$ for $Y^{}_{-1/3}$) so different in magnitude? This remains a highly puzzling question. As first pointed out by Harari, Haut and Weyers in 1978 \cite{Harari}, it should be very natural to conjecture that the quark fields of the same electric charge initially have identical Yukawa interactions with the Higgs field, namely, \begin{eqnarray} Y^{(0)}_Q = \frac{C^{(0)}_Q}{3} \left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \cr \end{matrix} \right) \; , \end{eqnarray} where $C^{(0)}_Q$ is a dimensionless coefficient, and $Q = +2/3$ for the up-quark sector or $Q= -1/3$ for the down-quark sector. Such a form of $Y^{(0)}_Q$ means that the corresponding quark mass matrix $M^{(0)}_Q$ must have the same ``flavor democracy'', \begin{eqnarray} M^{(0)}_Q = \frac{m^{}_3}{3} \left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \cr \end{matrix} \right) \; , \end{eqnarray} where $m^{}_3 \equiv v C^{(0)}_Q$, equal to the top-quark mass $m^{}_t$ for $Q= +2/3$ or the bottom-quark mass $m^{}_b$ for $Q= -1/3$.
The corresponding quark mass term can be written as \begin{eqnarray} \frac{m^{}_3}{3} \sum_\alpha \sum_\beta \overline{\alpha^{}_{\rm L}} \ \beta^{}_{\rm R} + {\rm h.c.} \; , \end{eqnarray} and it is completely invariant under the permutation of all the three left-handed quark fields and all the three right-handed quark fields, where $\alpha, \beta = u, c, t$ for $Q=+2/3$ or $\alpha, \beta = d, s, b$ for $Q=-1/3$. That is to say, the flavor democracy of $Y^{(0)}_Q$ or $M^{(0)}_Q$ implies that the quark mass term in Eq. (3) possesses the exact $S(3)^{}_{\rm L} \times S(3)^{}_{\rm R}$ symmetry. This symmetry must be broken, since it would force two of the three eigenvalues of $M^{}_Q$ to vanish, in conflict with the observed nonzero masses of the light quarks. The breaking of this flavor democracy leads to flavor mixing effects between the two quark sectors \cite{D1,Yang,D2}. How to break the democracy of quark flavors and to what extent to break it are two highly nontrivial questions for model building in this regard \cite{FX2000}. In the present work we are going to address ourselves to these two questions by assuming a structural parallelism between the mass matrices of $Q=+2/3$ and $Q=-1/3$ quarks. Such a phenomenological assumption makes sense if the generation of quark masses in the two sectors is governed by the same dynamics, and combining it with a nontrivial parametrization of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix proposed by Fritzsch and Xing \cite{FX} allows one to figure out the texture and strength of flavor democracy breaking in each quark sector in terms of the observed values of quark masses and flavor mixing parameters. Some interesting implications of such flavor-democratized quark mass matrices, including their variations in the hierarchy basis and their evolution with the energy scales, are also discussed.
\section{Flavor democracy breaking} Let us begin by diagonalizing the flavor-democratized quark mass matrix $M^{(0)}_Q$ as follows: \begin{eqnarray} V^\dagger_0 M^{(0)}_Q V^{}_0 = m^{}_3 \left(\begin{matrix} 0 & 0 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & 1 \cr\end{matrix}\right) \; , \end{eqnarray} where \begin{eqnarray} V^{}_0 = \left(\begin{matrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \cr -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \cr 0 & -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \cr\end{matrix}\right) \; . \end{eqnarray} We therefore arrive at $m^{}_1 = m^{}_2 =0$, which is qualitatively consistent with the experimental fact $m^{}_u, m^{}_c \ll m^{}_t$ or $m^{}_d, m^{}_s \ll m^{}_b$. However, there is no flavor mixing in this special case, because the resulting CKM matrix $V = V^\dagger_0 V^{}_0 = {\bf 1}$ is the identity matrix. The realistic CKM quark mixing matrix \begin{eqnarray} V = V^\dagger_{+2/3} V^{}_{-1/3} = (V^{}_0 V^{}_{+2/3})^\dagger (V^{}_0 V^{}_{-1/3}) \end{eqnarray} measures a mismatch between the diagonalization of the $Q=+2/3$ quark mass matrix $M^{}_{+2/3}$ and that of the $Q=-1/3$ quark mass matrix $M^{}_{-1/3}$, and thus it provides a natural description of the observed phenomena of quark flavor mixing. Notice that $M^{}_{+2/3}$ and $M^{}_{-1/3}$ can always be arranged to be Hermitian, thanks to a proper choice of the flavor basis in the standard model or its extensions which have no flavor-changing right-handed currents \cite{Frampton}.
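Let us note in passing that the diagonalization in Eqs. (4) and (5) can be verified at a glance, because the democratic mass matrix has rank one. Writing $w = (1, 1, 1)^T/\sqrt{3}$, one has \begin{eqnarray} M^{(0)}_Q = m^{}_3 \, w w^T \; , \end{eqnarray} so that $M^{(0)}_Q \, x = m^{}_3 (w \cdot x) \, w$ holds for any vector $x$. The first two columns of $V^{}_0$ are orthogonal to $w$ and are therefore annihilated by $M^{(0)}_Q$, while the third column equals $w$ itself and is an eigenvector with the eigenvalue $m^{}_3$. This makes the two vanishing mass eigenvalues in Eq. (4) manifest.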
So let us simply focus on Hermitian quark mass matrices in the following and take into account the corresponding flavor democracy in such a basis, namely, \begin{eqnarray} (V^{}_0 V^{}_{Q})^\dagger M^{}_{Q} (V^{}_0 V^{}_{Q}) = \widehat{M}^{}_{Q} \equiv \left(\begin{matrix} m^{}_1 & 0 & 0 \cr 0 & m^{}_2 & 0 \cr 0 & 0 & m^{}_3 \cr\end{matrix} \right) \; , \end{eqnarray} where $m^{}_1 = \pm m^{}_u$, $m^{}_2 = \pm m^{}_c$ and $m^{}_3 = m^{}_t$ for $Q=+2/3$, or $m^{}_1 = \pm m^{}_d$, $m^{}_2 = \pm m^{}_s$ and $m^{}_3 = m^{}_b$ for $Q=-1/3$. Here the sign ambiguity of $m^{}_1$ or $m^{}_2$ is attributed to the fact that the eigenvalues of the Hermitian matrix $M^{}_Q$ can be either positive or negative under the above unitary transformation. To reconstruct the pattern of $M^{}_Q$ in terms of $V^{}_0$, $V^{}_Q$ and $\widehat{M}^{}_Q$, however, one must specify the form of $V^{}_Q$ with the help of the parameters of $V$. We find that the most suitable parametrization of the CKM matrix $V$ for our purpose is the one advocated by two of us in Ref. \cite{FX}: \begin{eqnarray} V=\left( \begin{matrix} \sin\theta^{}_{\rm u}\sin\theta^{}_{\rm d} \cos\theta + \cos\theta^{}_{\rm u} \cos\theta^{}_{\rm d} e^{-{\rm i}\phi} & \sin\theta^{}_{\rm u} \cos\theta^{}_{\rm d} \cos\theta - \cos\theta^{}_{\rm u} \sin\theta^{}_{\rm d} e^{-{\rm i}\phi} & \sin\theta^{}_{\rm u} \sin\theta \cr \cos\theta^{}_{\rm u} \sin\theta^{}_{\rm d} \cos\theta - \sin\theta^{}_{\rm u} \cos\theta^{}_{\rm d} e^{-{\rm i}\phi} & \cos\theta^{}_{\rm u} \cos\theta^{}_{\rm d} \cos\theta + \sin\theta^{}_{\rm u} \sin\theta^{}_{\rm d} e^{-{\rm i}\phi} & \cos\theta^{}_{\rm u} \sin\theta \cr -\sin\theta^{}_{\rm d} \sin\theta & -\cos\theta^{}_{\rm d} \sin\theta & \cos\theta \end{matrix} \right) \; \end{eqnarray} with the subscripts ``u" and ``d" denoting ``up" ($Q=+2/3$) and ``down" ($Q=-1/3$), respectively. 
The reason is simply that this form of $V$ can be decomposed into $V^{}_{+2/3}$ and $V^{}_{-1/3}$ in an exactly {\it parallel} way as follows: \begin{eqnarray} V^{}_{+2/3} = \left(\begin{matrix} 1 & 0 & 0 \cr 0 & \cos (+\frac{2}{3}\theta) & -\sin (+\frac{2}{3}\theta) \cr 0 & \sin (+\frac{2}{3}\theta) & \cos (+\frac{2}{3}\theta) \end{matrix}\right) \left(\begin{matrix} \exp (+{\rm i} \frac{2}{3}\phi) & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 1 \end{matrix}\right) \left(\begin{matrix} \cos\theta^{}_{\rm u} & -\sin\theta^{}_{\rm u} & 0 \cr \sin\theta^{}_{\rm u} & \cos\theta^{}_{\rm u} & 0 \cr 0 & 0 & 1\end{matrix}\right) \; , \nonumber \\ V^{}_{-1/3} = \left(\begin{matrix} 1 & 0 & 0 \cr 0 & \cos (-\frac{1}{3}\theta) & -\sin (-\frac{1}{3}\theta) \cr 0 & \sin (-\frac{1}{3}\theta) & \cos (-\frac{1}{3}\theta) \end{matrix}\right) \left(\begin{matrix} \exp(-{\rm i} \frac{1}{3}\phi) & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 1 \end{matrix}\right) \left(\begin{matrix} \cos\theta^{}_{\rm d} & -\sin\theta^{}_{\rm d} & 0 \cr \sin\theta^{}_{\rm d} & \cos\theta^{}_{\rm d} & 0 \cr 0 & 0 & 1 \end{matrix}\right) \; . \end{eqnarray} Since all the four parameters in this parametrization of $V$ can be determined to a good degree of accuracy by using current experimental data, one may therefore fix the patterns of $V^{}_{+2/3}$ and $V^{}_{-1/3}$. Of course, the decomposition made in Eq. (9) depends also on a purely phenomenological assumption: the up- and down-type components of the flavor mixing angle $\theta$ are demanded to be proportional to the corresponding charges of these two quark sectors, so are the components of the CP-violating phase $\phi$. 
Such an assumption is another reflection of the {\it up-down parallelism}, which has been taken as the main guiding principle of our treatment, although it is very hard to argue for any potential connection between the quark mass textures and the quark charges at this stage \footnote{However, it has been argued that the origin of some differences between the up- and down-quark sectors might simply represent a difference between their charges in a dynamical model which can explain the observed family structure, rather than a fundamental difference between the two sectors \cite{Hung}.}. One is certainly allowed to try some other possibilities of decomposing $V$ into $V^{}_{+2/3}$ and $V^{}_{-1/3}$ \cite{D2}, but the key point should be the same as ours --- to minimize the number of free parameters within reason, at least at the phenomenological level. Given Eqs. (7) and (9), we are now in a position to reconstruct the quark mass matrices $M^{}_{+2/3}$ and $M^{}_{-1/3}$ based on the flavor democracy. The texture of $M^{}_Q$ can be expressed as \begin{eqnarray} M^{}_{Q} = A^{2}_Q M^{(0)}_{Q} + M^{(1)}_{Q} + M^{(2)}_{Q} \; , \end{eqnarray} where $A^{}_{Q} = -\sin{(Q\theta)}/\sqrt{2}+\cos{(Q\theta)}$, $M^{(0)}_Q$ has been defined in Eq.
(2), and \begin{eqnarray} \begin{aligned} M^{(1)}_{Q}&=C^{(11)}_{Q}\left( \begin{matrix} 1 & 1 & -r^{}_Q \cr 1 & 1 & -r^{}_Q \cr -r^{}_Q & -r^{}_Q & r^{2}_Q \end{matrix} \right)+C^{(12)}_{Q}\left( \begin{matrix} 0 & 0 & r^{}_Q \cr 0 & 0 & r^{}_Q \cr r^{}_Q & r^{}_Q & 2+r^{}_Q \end{matrix} \right) \; , \\ M^{(2)}_{Q}&=C^{(21)}_{Q}\left[\cos{(Q\phi)}\left(\begin{matrix} 1 & 0 & -1\cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right)+{\rm i} \sin{(Q\phi)} \left(\begin{matrix} 0 & 1 & -1\cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &+C^{(22)}_{Q}\left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix} \right) \\ &+C^{(23)}_{Q}\left[\cos{(Q\phi)} \left(\begin{matrix} 2 & 0 & 1\cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix}\right)-{\rm i}\sin{(Q\phi)}\left(\begin{matrix} 0 & -2 & -1\cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix}\right) \right] \;, \end{aligned} \end{eqnarray} in which $r^{}_Q = 2A^{}_Q/B^{}_Q$ with $B^{}_{Q}=\sqrt{2} \sin{(Q\theta)}+\cos{(Q\theta)}$, and \begin{equation} \begin{aligned} & C^{(11)}_{Q} = \frac{1}{6}\left(m^{}_{1}\sin^{2}{\theta^{}_{\rm q}} + m^{}_{2}\cos^{2} {\theta^{}_{\rm q}}\right) B^{2}_{Q} \; , \\ & C^{(12)}_{Q} = \frac{1}{2\sqrt{2}}m^{}_{3}\sin{(Q\theta)}B^{}_{Q} \; , \\ & C^{(21)}_{Q} = \frac{1}{2\sqrt{3}}(m^{}_{1}-m^{}_{2})\cos{(Q\theta)} \sin{2\theta^{}_{\rm q}} \; , \\ & C^{(22)}_{Q} = \frac{1}{2}\left(m^{}_{1}\cos^{2}{\theta^{}_{\rm q}} + m^{}_{2}\sin^{2}{\theta^{}_{\rm q}}\right) \; , \\ & C^{(23)}_{Q} = \frac{1}{2\sqrt{6}}(m^{}_{1}-m^{}_{2})\sin{(Q\theta)} \sin{2\theta^{}_{\rm q}} \; \end{aligned} \end{equation} with $\rm q=u$ for $Q=+2/3$ and $\rm q=d$ for $Q=-1/3$. It is obvious that the matrices $M^{(0)}_{Q}$, $M^{(1)}_{Q}$ and $M^{(2)}_{Q}$ perform the $S(3)^{}_{\rm L}\times S(3)^{}_{\rm R}$, $S(2)^{}_{\rm L}\times S(2)^{}_{\rm R}$ and $S(1)^{}_{\rm L}\times S(1)^{}_{\rm R}$ flavor symmetries, respectively. Among the five coefficients of $M^{(1)}_{Q}$ and $M^{(2)}_{Q}$ in Eq. 
(12), $C^{(12)}_{Q}$ is proportional to $m^{}_3 \sin(Q\theta)$ and the others are all dominated by the terms proportional to $m^{}_2$. Hence their ratios to the coefficient of $M^{(0)}_Q$ (i.e., $m^{}_3/3$) are suppressed at the levels of $\sin(Q\theta)$ and $m^{}_2/m^{}_3$, respectively. Because $\theta \sim \lambda^2$ \cite{FX} and $|m^{}_2/m^{}_3| \sim \lambda^4$ (for $Q=+2/3$) or $\lambda^2$ (for $Q=-1/3$) \cite{XZZ}, the relevant suppression is at least at the percent level. In other words, the strength of flavor democracy breaking must be at or below the percent level. To see this point more clearly, let us take account of the strong quark mass hierarchy and the smallness of three flavor mixing angles to make a reasonable analytical approximation for the expression of $M^{}_Q$ in Eq. (10). Then we arrive at \begin{eqnarray} \begin{aligned} M^{}_{Q} & \simeq \frac{1}{3}m^{}_{3}\left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix}\right)\right. +\left[\frac{1}{2}\frac{m^{}_{2}}{m^{}_{3}}\left( \begin{matrix} 1 & 1 & -r \cr 1 & 1 & -r \cr -r & -r & r^2 \end{matrix} \right) +\frac{3\sqrt{2}}{4}Q\theta\left( \begin{matrix} 0 & 0 & r \cr 0 & 0 & r \cr r & r & 2+r \end{matrix} \right)\right] \\ &-\sqrt{3} \ \theta^{}_{\rm q}\frac{m^{}_{2}}{m^{}_{3}}\left[\cos{(Q\phi)} \left(\begin{matrix}1 & 0 & -1\cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right)+{\rm i}\sin{(Q\phi)} \left(\begin{matrix} 0 & 1 & -1\cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &-\frac{\sqrt{6}}{2}Q\theta\theta^{}_{\rm q}\frac{m^{}_{2}}{m^{}_{3}} \left[\cos{(Q\phi)}\left(\begin{matrix} 2 & 0 & 1\cr 0& -2 & -1 \cr 1 & -1 & 0 \end{matrix}\right)-{\rm i}\sin{(Q\phi)}\left(\begin{matrix} 0 & -2 & -1\cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix}\right)\right] \\ &\left.+\frac{3}{2}\left(\frac{m^{}_{1}}{m^{}_{3}}+\theta^{2}_{\rm q}\frac{m^{}_{2}} {m^{}_{3}}\right)\left(\begin{matrix} 1 & -1 & 0\cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix} \right)\right\} \;, \end{aligned} 
\end{eqnarray} in which the subscript of $r^{}_Q$ has been omitted. In fact, $r^{}_Q \simeq 2-3\sqrt{2} \ Q\theta$ is not very sensitive to the value of $Q$ due to the smallness of $\theta$. The result in Eq. (13) shows a hierarchical chain of flavor democracy breaking in the quark sector. First, the $S(3)^{}_{\rm L}\times S(3)^{}_{\rm R}$ symmetry is broken down to the $S(2)^{}_{\rm L}\times S(2)^{}_{\rm R}$ symmetry, and the strength of this effect is characterized by the small quantities $m^{}_2/m^{}_3$ and $\theta$. Second, the $S(2)^{}_{\rm L}\times S(2)^{}_{\rm R}$ symmetry is further broken down to $S(1)^{}_{\rm L}\times S(1)^{}_{\rm R}$, and the corresponding effect is further suppressed because it is characterized by the much smaller quantities $\theta^{}_{\rm q} m^{}_2/m^{}_3$, $\theta \theta^{}_{\rm q} m^{}_2/m^{}_3$, $\theta^{2}_{\rm q} m^{}_2/m^{}_3$ and $m^{}_1/m^{}_3$. In particular, the CP-violating phase $\phi$ comes in at the second symmetry-breaking stage and hence the effect of CP violation is strongly suppressed. We proceed to evaluate the strength of flavor democracy breaking in a numerical way. To do so, we make use of the central values of six quark masses renormalized to the electroweak scale characterized by the $Z$-boson mass \cite{XZZ}: \begin{eqnarray} \begin{aligned} & m^{}_{u} \simeq 1.38 ~{\rm MeV} \; , ~~~ m^{}_{c} \simeq 638 ~ {\rm MeV} \; , ~~~ m^{}_t \simeq 172.1 ~{\rm GeV} \; ; \\ & m^{}_{d} \simeq 2.82 ~{\rm MeV} \; , ~~~ m^{}_s \simeq 57 ~{\rm MeV} \; , ~~~ m^{}_{b} \simeq 2.86 ~{\rm GeV} \; . 
\end{aligned} \end{eqnarray} The values of the flavor mixing parameters $\theta^{}_{\rm u}$, $\theta^{}_{\rm d}$, $\theta$ and $\phi$ can be obtained by establishing their relations with the well-known Wolfenstein parameters \cite{L}, whose values have been determined to an impressively good degree of accuracy \cite{PDG,CKM}: \begin{eqnarray} \begin{aligned} &\theta^{}_{\rm u} \simeq \arctan{\left(\lambda\sqrt{\overline{\rho}^{2}+\overline{\eta}^{2}} \right)} \simeq 0.086 \; , \\ &\theta^{}_{\rm d} \simeq \arctan{\left(2\lambda\sqrt{\frac{(1-\overline{\rho})^{2}+ \overline{\eta}^{2}}{\left[\lambda^{2}(1-2\overline{\rho})-2\right]^{2} +4\lambda^{4} \overline{\eta}^2}}\right)} \simeq 0.206 \; , \\ &\theta \simeq \arcsin{\left(A\lambda^{2}\sqrt{1+\lambda^{2} \left(\overline{\rho}^2+ \overline{\eta}^2\right)}\right)} \simeq 0.042 \; , \\ &\phi \simeq \arccos{\left(\frac{\sin^2{\theta^{}_{\rm u}}\cos^2{\theta^{}_{\rm d}}\cos^2{\theta} + \cos^2{\theta{}_{\rm u}}\sin^2{\theta^{}_{\rm d}} - \lambda^{2}}{2\sin{\theta^{}_{\rm u}} \cos{\theta^{}_{\rm u}}\sin{\theta^{}_{\rm d}}\cos{\theta^{}_{\rm d}}\cos{\theta}}\right)} \simeq 1.636 \; , \end{aligned} \end{eqnarray} where the best-fit values $A \simeq 0.825$, $\lambda \simeq 0.2251$, $\overline{\rho} \simeq 0.160$ and $\overline{\eta} \simeq 0.350$ \cite{CKM} have been input. Namely, we have \begin{eqnarray} \theta^{}_{\rm u} \simeq 4.951^{\circ} \; , ~~~ \theta^{}_{\rm d} \simeq 11.772^{\circ} \; , ~~~ \theta \simeq 2.405^{\circ} \; , ~~~ \phi \simeq 93.730^{\circ} \; , \end{eqnarray} implying $\theta^{}_{\rm u} \sim 2\lambda^2$, $\theta^{}_{\rm d} \sim \lambda$ and $\theta \sim \lambda^2$ in terms of the expansion parameter $\lambda \simeq 0.2$. The fact that $\phi$ is very close to $\pi/2$ proves to be quite suggestive in quark flavor phenomenology, as already discussed in Ref. \cite{LX}. With the help of the central values of six quark masses and four flavor mixing parameters given in Eqs. 
(14) and (16), one may start from Eq. (10) to numerically calculate the elements of $M^{}_{+2/3}$ and $M^{}_{-1/3}$ in two typical possibilities: (a) $(m^{}_{1}, m^{}_2) = (-m^{}_u, +m^{}_c)$ for $Q=+2/3$ and $(-m^{}_d, +m^{}_s)$ for $Q=-1/3$, leading to \begin{eqnarray} \begin{aligned} M^{}_{+2/3} & \simeq 55.07 ~{\rm GeV} \times \left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1\cr 1 & 1 & 1 \end{matrix}\right) + \left[3.2\times10^{-2}\left( \begin{matrix} 0 & 0 & 1.89 \cr 0 & 0 & 1.89 \cr 1.89 & 1.89 & 3.89 \end{matrix}\right) \right.\right. \\ & \left. -2.07 \times 10^{-3}\left( \begin{matrix} -1 & -1 & 1.89 \cr -1 & -1 & 1.89 \cr 1.89 & 1.89 & -3.56 \end{matrix}\right)\right] -\left[ 2.65 \times 10^{-4}\left(\begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right) \right. \\ & \left. -3.03 \times 10^{-5}\left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix}\right)+5.24 \times 10^{-6}\left(\begin{matrix} 2 & 0 & 1 \cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix}\right) \right] \\ & \left. - {\rm i} \left[5.09 \times 10^{-4} \left(\begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1\cr 1 & -1 & 0 \end{matrix}\right) -1.01 \times 10^{-5} \left(\begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr1 & -1 & 0 \end{matrix}\right) \right]\right\} \; , \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} M_{-1/3} & \simeq 0.97 ~{\rm GeV} \times \left\{\left(\begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix}\right)-\left[1.43 \times 10^{-2}\left(\begin{matrix} 0 & 0 & 2.06 \cr 0 & 0 & 2.06 \cr 2.06 & 2.06 & 4.06 \end{matrix}\right) \right.\right. \\ &\left.+8.98 \times 10^{-3}\left(\begin{matrix} -1 & -1 & 2.06 \cr -1 & -1 & 2.06 \cr 2.06 & 2.06 & -4.25 \end{matrix}\right)\right]-\left[ 6.08 \times 10^{-3}\left(\begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right) \right. \\ &\left. 
+1.63 \times 10^{-4}\left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix}\right)-6.02 \times 10^{-5} \left(\begin{matrix} 2 & 0 & 1 \cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix}\right)\right] \\ &+\left. {\rm i} \left[ 3.69 \times 10^{-3} \left(\begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix}\right)+3.65 \times 10^{-5} \left(\begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \right\} \; ; \end{aligned} \end{eqnarray} (b) $(m^{}_{1}, m^{}_2) = (+m^{}_u, +m^{}_c)$ for $Q=+2/3$ and $(+m^{}_d, +m^{}_s)$ for $Q=-1/3$, leading to \begin{eqnarray} \begin{aligned} M^{}_{+2/3}& \simeq 55.07 ~{\rm GeV} \times\left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix}\right) +\left[ 3.2 \times 10^{-2}\left(\begin{matrix} 0 & 0 & 1.89 \cr 0 & 0 & 1.89 \cr 1.89 & 1.89 & 3.89 \end{matrix}\right) \right.\right. \\ & \left. -2.07 \times 10^{-3}\left(\begin{matrix} -1 & -1 & 1.89 \cr -1 & -1 & 1.89 \cr 1.89 & 1.89 & -3.56 \end{matrix} \right) \right]-\left[ 2.64 \times 10^{-4}\left(\begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right) \right. \\ & \left. -5.52 \times 10^{-5}\left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix}\right) + 5.22 \times 10^{-6}\left(\begin{matrix} 2 & 0 & 1 \cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix}\right) \right] \\ &-\left. {\rm i} \left[ 5.06 \times 10^{-4} \left(\begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix}\right) - 1.00 \times 10^{-5} \left( \begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr1 & -1 & 0 \end{matrix} \right) \right] \right\}\;, \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} M_{-1/3}& \simeq 0.97 ~{\rm GeV} \times \left\{ \left(\begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix}\right) - \left[ 1.43 \times 10^{-2}\left(\begin{matrix} 0 & 0 & 2.06 \cr 0 & 0 & 2.06 \cr 2.06 & 2.06 & 4.06 \end{matrix}\right)\right. \right. \\ & \left. 
+9.01 \times 10^{-3} \left(\begin{matrix} -1 & -1 & 2.06 \cr -1 & -1 & 2.06 \cr 2.06 & 2.06 & -4.25 \end{matrix}\right)\right] - \left[5.51 \times 10^{-3} \left(\begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right) \right. \\ &\left.-2.62 \times 10^{-3} \left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix}\right)-5.45 \times 10^{-5}\left(\begin{matrix} 2 & 0 & 1 \cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix}\right)\right] \\ &+\left. {\rm i} \left[3.34 \times 10^{-3} \left(\begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) + 3.31 \times 10^{-5} \left(\begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \right\}\;. \end{aligned} \end{eqnarray} Some comments on the implications of these results are in order. \begin{itemize} \item The other two possibilities, corresponding to $(m^{}_1, m^{}_2) = (+m^{}_u, -m^{}_c)$ and $(-m^{}_u, -m^{}_c)$ in the $Q=+2/3$ quark sector or $(m^{}_1, m^{}_2) = (+m^{}_d, -m^{}_s)$ and $(-m^{}_d, -m^{}_s)$ in the $Q=-1/3$ quark sector, are numerically found to be very similar to cases (a) and (b) shown above. Hence they will not be separately discussed. \item The $S(2)^{}_{\rm L} \times S(2)^{}_{\rm R}$ terms of $M^{}_Q$ are not sensitive to the sign ambiguities of $m^{}_1$ and $m^{}_2$, but the latter can affect those $S(1)^{}_{\rm L} \times S(1)^{}_{\rm R}$ terms of $M^{}_Q$ to some extent. In other words, a specific model-building exercise should take into account the fine structure of $M^{}_Q$ which is associated with both the lightest quark mass and the CP-violating phase in each quark sector. \item It is always possible to combine the two $S(2)^{}_{\rm L} \times S(2)^{}_{\rm R}$ terms of $M^{}_Q$, and such a combination does not violate the $S(2)^{}_{\rm L} \times S(2)^{}_{\rm R}$ symmetry. 
Since the coefficients of five $S(1)^{}_{\rm L} \times S(1)^{}_{\rm R}$ terms are very different in magnitude, it is reasonable to neglect the most strongly suppressed ones when building a phenomenologically viable quark mass model. In particular, Eqs. (17)---(20) suggest that $C^{(22)}_Q \simeq 0$ and $C^{(23)}_Q \simeq 0$ should be two good approximations, which can also be observed from their analytical expressions in Eq. (12) or (13) by considering $|m^{}_1| \ll |m^{}_2| \ll m^{}_3$ and the smallness of $\theta$ and $\theta^{}_{\rm q}$. In this situation the analytical approximation of $M^{}_Q$ in Eq. (13) is further simplified to \begin{eqnarray} \begin{aligned} M^{}_{Q} & \simeq \frac{1}{3}m^{}_{3}\left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix}\right)\right. +\left[\frac{1}{2}\frac{m^{}_{2}}{m^{}_{3}}\left( \begin{matrix} 1 & 1 & -2 \cr 1 & 1 & -2 \cr -2 & -2 & 4 \end{matrix} \right) +\frac{3\sqrt{2}}{4}Q\theta\left( \begin{matrix} 0 & 0 & 2 \cr 0 & 0 & 2 \cr 2 & 2 & 4 \end{matrix} \right)\right] \\ & \left. -\sqrt{3} \ \theta^{}_{\rm q}\frac{m^{}_{2}}{m^{}_{3}}\left[\cos{(Q\phi)} \left(\begin{matrix}1 & 0 & -1\cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix}\right)+{\rm i}\sin{(Q\phi)} \left(\begin{matrix} 0 & 1 & -1\cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right]\right\} \;, \end{aligned} \end{eqnarray} where $r\simeq 2$ has been taken into account. \end{itemize} In short, the strength of $S(3)^{}_{\rm L} \times S(3)^{}_{\rm R} \to S(2)^{}_{\rm L} \times S(2)^{}_{\rm R}$ breaking is at the percent level for both up- and down-quark sectors, while the effects of $S(2)^{}_{\rm L} \times S(2)^{}_{\rm R} \to S(1)^{}_{\rm L} \times S(1)^{}_{\rm R}$ breaking are at the percent and ten percent levels for the up- and down-quark sectors, respectively. \section{On the hierarchy basis} It is sometimes convenient to ascribe the hierarchy of the quark mass spectrum directly to the hierarchy of the corresponding quark mass matrix. 
In the latter basis, which is usually referred to as the hierarchy basis, the quark mass matrix $M^\prime_Q$ is related to its democratic counterpart $M^{}_Q$ via the following transformation: \begin{eqnarray} M^\prime_Q = V^{\dag}_{0}M^{}_Q V^{}_{0} \; , \end{eqnarray} where $V^{}_0$ and $M^{}_Q$ have been given in Eqs. (5) and (10), respectively. To be explicit, we obtain \begin{eqnarray} M^\prime_Q = \left(\begin{matrix} 2C^{(22)}_Q & \sqrt{3} \ C^{(21)}_Q e^{{\rm i}Q\phi} & \sqrt{6} \ C^{(23)}_Q e^{{\rm i}Q\phi} \cr \sqrt{3} \ C^{(21)}_Q e^{-{\rm i}Q\phi} & X^{}_Q & Y^{}_Q \cr \sqrt{6} \ C^{(23)}_Q e^{-{\rm i}Q\phi} & Y^{}_Q & Z^{}_Q \end{matrix}\right) \; , \end{eqnarray} where \begin{eqnarray} && X^{}_Q = \frac{2}{3}\left[\left(r^{}_Q+1\right)^2 C^{(11)}_Q - \left(r^{}_Q-2\right) C^{(12)}_Q \right] \; , \nonumber \\ && Y^{}_Q = -\frac{\sqrt{2}}{3}\left(r^{}_Q+1\right)\left[\left(r^{}_Q-2 \right) C^{(11)}_Q +2C^{(12)}_Q \right] \; , \nonumber \\ && Z^{}_Q = \frac{1}{3}\left[\left(r^{}_Q-2\right)^{2}C^{(11)}_Q + \left(5 \ r^{}_Q + 2\right)C^{(12)}_Q \right] + A^{2}_Q m^{}_{3} \; . \end{eqnarray} The exact expression of $M^\prime_Q$ in Eq. (23) can be simplified, if the analytical approximation made in Eq. (13) for $M^{}_Q$ is taken into account. In this case, \begin{eqnarray} M^\prime_Q \simeq \left(\begin{matrix} m^{}_{1}+\theta^{2}_{\rm q}m^{}_{2} & -\theta^{}_{\rm q} m^{}_{2} e^{{\rm i}Q\phi} & -Q\theta\theta^{}_{\rm q}m^{}_{2}e^{{\rm i}Q\phi} \cr -\theta^{}_{\rm q} m^{}_{2} e^{-{\rm i}Q\phi} & m^{}_{2}+Q^{2}\theta^{2}m^{}_{3} & -Q\theta m^{}_{3} \cr -Q\theta\theta^{}_{\rm q} m^{}_{2}e^{-{\rm i}Q\phi} & -Q\theta m^{}_{3} & m^{}_{3} \end{matrix} \right) \; . \end{eqnarray} The hierarchical structure of $M^\prime_Q$ is therefore determined by the hierarchy $|m^{}_1| \ll |m^{}_2| \ll m^{}_3$ and the smallness of $\theta$ and $\theta^{}_{\rm q}$. Corresponding to the numerical illustration of $M^{}_Q$ in Eqs. 
(17)---(20), the results of $M^\prime_Q$ with the same inputs are given below. (a) $(m^{}_{1}, m^{}_2) = (-m^{}_u, +m^{}_c)$ for $Q=+2/3$ and $(-m^{}_d, +m^{}_s)$ for $Q=-1/3$, leading to \begin{eqnarray} \begin{aligned} M^{\prime}_{+2/3} \simeq \left(\begin{matrix} 3.337 & -54.695e^{1.091{\rm i}} & -1.532e^{1.091{\rm i}} \cr -54.695e^{-1.091{\rm i}} & 767.678 & -4798.559 \cr -1.532e^{-1.091{\rm i}} & -4798.559 & 171965.605 \end{matrix}\right) {\rm MeV} \;, \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} M^{\prime}_{-1/3} \simeq \left(\begin{matrix} -0.317 & -11.976e^{-0.545{\rm i}} & 0.168e^{-0.545{\rm i}} \cr -11.976e^{0.545{\rm i}} & 55.047 & 39.272 \cr 0.168e^{0.545{\rm i}} & 39.272 & 2859.450\end{matrix}\right){\rm MeV} \; ; \end{aligned} \end{eqnarray} (b) $(m^{}_{1}, m^{}_2) = (+m^{}_u, +m^{}_c)$ for $Q=+2/3$ and $(+m^{}_d, +m^{}_s)$ for $Q=-1/3$, leading to \begin{eqnarray} \begin{aligned} M^{\prime}_{+2/3} \simeq \left(\begin{matrix} 6.077 & -54.458e^{1.091{\rm i}} & -1.525e^{1.091{\rm i}} \cr -54.458e^{-1.091{\rm i}} & 767.698 & -4798.559 \cr -1.525e^{-1.091{\rm i}} & -4798.559 & 171965.605 \end{matrix}\right) {\rm MeV} \; , \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} M^\prime_{-1/3} \simeq \left(\begin{matrix} 5.087 & -10.847e^{-0.545{\rm i}} & 0.152e^{-0.545{\rm i}} \cr -10.847e^{0.545{\rm i}} & 55.283 & 39.269 \cr 0.152e^{0.545{\rm i}} & 39.269 & 2859.450 \end{matrix}\right){\rm MeV} \; . \end{aligned} \end{eqnarray} One can see that the sign ambiguities of $m^{}_1$ and $m^{}_2$ mainly affect the magnitude of the $(1,1)$ element of $M^\prime_Q$. The smallness of this matrix element is especially guaranteed if $m^{}_1$ and $m^{}_2$ take opposite signs, as numerically shown in Eqs. (26) and (27). In the hierarchy basis the language of texture ``zeros'' has proved to be very useful in establishing some experimentally testable relations between the ratios of quark masses and the flavor mixing angles \cite{F77,F78}.
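Since Eq. (22) is a unitary similarity transformation, the exact hierarchy-basis matrices above must carry exactly the eigenvalues $(m^{}_1, m^{}_2, m^{}_3)$ of $M^{}_Q$. A minimal pure-Python check (our own sketch, assuming the rounded entries of Eq. (27) and the quark masses of Eq. (14); small deviations are expected from the three-decimal truncation) compares the trace, the sum of principal $2\times 2$ minors and the determinant with the elementary symmetric functions of $(-m^{}_d, +m^{}_s, m^{}_b)$:

```python
import cmath

# Entries of M'_{-1/3} as displayed in Eq. (27) (case (a)), in MeV.
a = -11.976 * cmath.exp(-0.545j)   # (1,2) element
b = 0.168 * cmath.exp(-0.545j)     # (1,3) element
M = [
    [-0.317, a, b],
    [a.conjugate(), 55.047, 39.272],
    [b.conjugate(), 39.272, 2859.450],
]

# Eigenvalues expected from Eq. (22): (m1, m2, m3) = (-m_d, +m_s, m_b).
m1, m2, m3 = -2.82, 57.0, 2860.0

# For a Hermitian matrix the trace, the sum of principal 2x2 minors and
# the determinant equal the elementary symmetric functions of its
# (real) eigenvalues.
trace = sum(M[i][i] for i in range(3)).real

minors = sum((M[i][i] * M[j][j] - M[i][j] * M[j][i]).real
             for i in range(3) for j in range(3) if i < j)

det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])).real

assert abs(trace - (m1 + m2 + m3)) / (m1 + m2 + m3) < 1e-3
assert abs(minors - (m1 * m2 + m1 * m3 + m2 * m3)) / (m1 * m2 + m1 * m3 + m2 * m3) < 1e-3
assert abs(det - m1 * m2 * m3) / abs(m1 * m2 * m3) < 1e-3
```

In particular, note how strongly the (1,1) and (1,3) entries are suppressed relative to their neighbors; this is precisely the suppression that the texture ``zeros'' encode.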
Those zeros dynamically mean that the corresponding matrix elements are sufficiently suppressed as compared with their neighboring counterparts, and this kind of suppression may reasonably arise from an underlying flavor symmetry \cite{FN}. In this sense Eqs. (26)---(29) motivate us to conjecture the well-known four-zero textures of Hermitian quark mass matrices \cite{Du} as the fairest extension of the original Fritzsch ansatz which contains six texture zeros \cite{F78}: \begin{eqnarray} M^\prime_{Q} = \left(\begin{matrix} {\bf 0} & \diamondsuit^{}_Q & {\bf 0} \cr \diamondsuit^*_Q & \heartsuit^{}_Q & \triangle^{}_Q \cr {\bf 0} & \triangle^{*}_Q & \Box^{}_Q \cr\end{matrix}\right) \; , \end{eqnarray} where the relevant symbols denote the nonzero matrix elements. In fact, the pattern of $M^{}_Q$ with an approximate flavor democracy obtained in Eq. (21) just leads us to the four-zero textures of $M^\prime_Q$ in the hierarchy basis, if one takes $r \simeq 2 - 3\sqrt{2} \ Q\theta$ instead of $r \simeq 2$: \begin{eqnarray} M^\prime_Q \simeq \left(\begin{matrix} {\bf 0} & -\theta^{}_{\rm q} m^{}_{2} e^{{\rm i}Q\phi} & {\bf 0} \cr -\theta^{}_{\rm q} m^{}_{2} e^{-{\rm i}Q\phi} & m^{}_{2}+Q^{2}\theta^{2}m^{}_{3} & -Q\theta m^{}_{3} \cr {\bf 0} & -Q\theta m^{}_{3} & m^{}_{3} \end{matrix} \right) \; , \end{eqnarray} which can also be read off from Eq. (25) if similar approximations are made. As pointed out in Refs. \cite{FX2003,XZ2015}, current experimental data require that the (2,2) and (2,3) elements of $M^\prime_{-1/3}$ be comparable in magnitude. In any case the pattern of $M^{}_Q$ in Eq. (21) or the texture of $M^\prime_Q$ in Eq. (31) can be very helpful for building a viable quark mass model. \section{On the scale dependence} In the above discussions we have restricted ourselves to the quark mass matrices at the electroweak scale characterized by $\mu = M^{}_Z$. 
Since the flavor democracy might be realized at a much higher energy scale $M^{}_X$, where a kind of fundamental new physics may occur, it makes sense to study the scale dependence of $M^{}_Q$ by means of the one-loop renormalization-group equations (RGEs) for the Yukawa coupling matrices and the CKM flavor mixing matrix \cite{RGE1}. For the sake of simplicity, here we work in the framework of the minimal supersymmetric standard model (MSSM) and calculate the relevant RGEs by taking account of the strong hierarchies of charged fermion masses and that of the CKM parameters. The approximate analytical results turn out to be \cite{RGE2} \begin{eqnarray} && m^{}_{t}(M^{}_{Z}) \simeq m^{}_{t}(M^{}_{X}) \left(\zeta^{}_{\rm u} \xi^{6}_{t} \xi^{}_{b}\right) \;, \nonumber \\ && m^{}_{b}(M^{}_{Z}) \simeq m^{}_{b}(M^{}_{X}) \left(\zeta^{}_{\rm d} \xi^{}_{t} \xi^{6}_{b} \xi^{}_{\tau}\right) \;; \hspace{0.8cm} \end{eqnarray} and \begin{eqnarray} \begin{aligned} & \frac{ m^{}_{u} (M^{}_{X}) }{ m^{}_{t} (M^{}_{X}) } \simeq \frac{ m^{}_{u} (M^{}_{Z}) }{ m^{}_{t} (M^{}_{Z}) } \left(\xi^{3}_{t} \xi^{}_{b}\right) \;, \\ & \frac{ m^{}_{c} (M^{}_{X}) }{ m^{}_{t} (M^{}_{X}) } \simeq \frac{ m^{}_{c} (M^{}_{Z}) }{ m^{}_{t} (M^{}_{Z}) } \left(\xi^{3}_{t} \xi^{}_{b}\right) \;, \\ & \frac{ m^{}_{d} (M^{}_{X}) }{ m^{}_{b} (M^{}_{X}) } \simeq \frac{ m^{}_{d} (M^{}_{Z}) }{ m^{}_{b} (M^{}_{Z}) } \left(\xi^{}_{t} \xi^{3}_{b}\right) \;, \\ & \frac{ m^{}_{s} (M^{}_{X}) }{ m^{}_{b} (M^{}_{X}) } \simeq \frac{ m^{}_{s} (M^{}_{Z}) }{ m^{}_{b} (M^{}_{Z}) } \left(\xi^{}_{t} \xi^{3}_{b}\right) \; ; \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} \theta^{}_{\rm u} (M^{}_{X}) \simeq \theta^{}_{\rm u} (M^{}_{Z}) \;, ~~~ \theta^{}_{\rm d} (M^{}_{X}) \simeq \theta^{}_{\rm d} (M^{}_{Z}) \;, ~~~ \theta (M^{}_{X}) \simeq \theta (M^{}_{Z}) \left(\xi^{}_{t} \xi^{}_{b}\right) \;, ~~~ \phi (M^{}_{X}) \simeq \phi (M^{}_{Z}) \; , ~~ \end{aligned} \end{eqnarray} where \begin{eqnarray} \begin{aligned} 
&\zeta^{}_{\rm q} \equiv \exp \left[ \frac{1}{2} \int^{{\rm ln}(M^{}_{X}/M^{}_{Z})}_{0} \sum^{3}_{i=1} \frac{c^{\rm q}_{i} g^{2}_{i} (0)}{8\pi^{2} - b^{}_{i}g^{2}_{i} (0) \chi} {\rm d} \chi \right] \;, \\ &\xi^{}_{\alpha} \equiv \exp \left[ - \frac{1}{16\pi^2} \int^{{\rm ln}(M^{}_{X}/M^{}_{Z})}_{0} f^{2}_{\alpha}(\chi) {\rm d} \chi \right] \; \end{aligned} \end{eqnarray} with $\rm q = u$ or $\rm d$, $\alpha = t$, $b$ or $\tau$, and $\chi = {\rm ln}(\mu / M^{}_{Z})$. In Eq. (35) $c^{\rm q}_{i}$ and $b^{}_{i}$ are the model-dependent coefficients whose values can be found in Ref. \cite{RGE1}. With the help of Eqs. (13) and (32)---(34), one can then express the democratic quark mass matrices at $M^{}_{X}$ by using the quark masses and flavor mixing parameters at $M^{}_{Z}$ and taking into account their RGE evolution effects: \begin{eqnarray} \begin{aligned} M^{}_{+2/3}(M^{}_{X}) & \simeq \frac{m^{}_{t}} {3\zeta^{}_{\rm u} \xi^{6}_{t} \xi^{}_{b}} \left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix} \right) \right. 
+ \xi^{}_t \xi^{}_b \left[ \frac{1}{2} \xi^{2}_{t} \frac{m^{}_{c}}{m^{}_{t}} \left( \begin{matrix} 1 & 1 & -2 \cr 1 & 1 & -2 \cr -2 & -2 & 4 \end{matrix} \right) + \frac{\sqrt{2}}{2} \theta \left( \begin{matrix} 0 & 0 & 2 \cr 0 & 0 & 2 \cr 2 & 2 & 4 \end{matrix} \right) \right] \\ &-\sqrt{3} \xi^{3}_{t} \xi^{}_{b} \theta^{}_{\rm u} \frac{m^{}_{c}}{m^{}_{t}} \left[ \cos{ \left( + \frac{2}{3} \phi \right)} \left( \begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix} \right) + {\rm i} \sin{ \left(+ \frac{2}{3} \phi \right)} \left( \begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &-\frac{\sqrt{6}}{3} \xi^{4}_{t} \xi^{2}_{b} \theta \theta^{}_{\rm u} \frac{m^{}_{c}}{m^{}_{t}} \left[\cos{ \left(+ \frac{2}{3} \phi \right)} \left( \begin{matrix} 2 & 0 & 1 \cr 0& -2 & -1 \cr 1 & -1 & 0 \end{matrix} \right) - {\rm i} \sin{ \left(+ \frac{2}{3} \phi \right)} \left( \begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right)\right] \\ &\left.+\frac{3}{2} \xi^{3}_{t} \xi^{}_{b} \left( \frac{m^{}_{u}}{m^{}_{t}} + \theta^{2}_{\rm u} \frac{m^{}_{c}}{m^{}_{t}} \right)\left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix} \right)\right\} \;, \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} M^{}_{-1/3}(M^{}_{X}) & \simeq \frac{m^{}_{b}} {3\zeta^{}_{\rm d} \xi^{}_{t} \xi^{6}_{b}\xi^{}_\tau} \left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix} \right) \right. 
+ \xi^{}_t\xi^{}_b \left[ \frac{1}{2} \xi^{2}_{b} \frac{m^{}_{s}}{m^{}_{b}} \left( \begin{matrix} 1 & 1 & -2 \cr 1 & 1 & -2 \cr -2 & -2 & 4 \end{matrix} \right) - \frac{\sqrt{2}}{4} \theta \left( \begin{matrix} 0 & 0 & 2 \cr 0 & 0 & 2 \cr 2 & 2 & 4 \end{matrix} \right)\right] \\ &-\sqrt{3} \xi^{}_{t} \xi^{3}_{b} \theta^{}_{\rm d} \frac{m^{}_{s}}{m^{}_{b}} \left[ \cos{ \left( - \frac{1}{3} \phi \right)} \left( \begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix} \right) + {\rm i} \sin{ \left(- \frac{1}{3} \phi \right)} \left( \begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &+ \frac{\sqrt{6}}{6} \xi^{2}_{t} \xi^{4}_{b} \theta \theta^{}_{\rm d} \frac{m^{}_{s}}{m^{}_{b}} \left[\cos{ \left(- \frac{1}{3} \phi \right)} \left( \begin{matrix} 2 & 0 & 1 \cr 0& -2 & -1 \cr 1 & -1 & 0 \end{matrix} \right) - {\rm i} \sin{ \left(- \frac{1}{3} \phi \right)} \left( \begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &\left. + \frac{3}{2} \xi^{}_{t} \xi^{3}_{b} \left( \frac{m^{}_{d}}{m^{}_{b}} + \theta^{2}_{\rm d} \frac{m^{}_{s}}{m^{}_{b}} \right)\left(\begin{matrix} 1 & -1 & 0 \cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix} \right)\right\} \;, \end{aligned} \end{eqnarray} where $r^{}_{Q} \simeq 2$ has been taken. Typically taking $M^{}_{X}=10^{16} ~{\rm GeV}$, $M^{}_{Z}=91.187 ~{\rm GeV}$ and $\tan{\beta^{}_{\rm MSSM}}=10$ for illustration, we numerically obtain $\zeta^{}_{\rm u} \simeq 3.47$, $\zeta^{}_{\rm d} \simeq 3.38$, $\xi^{}_{t} \simeq 0.854$, $\xi^{}_{b} \simeq 0.997$ and $\xi^{}_{\tau} \simeq 0.998$ from the one-loop RGEs \cite{RGE2}. In this case the expressions of $M^{}_{+2/3}$ and $M^{}_{-1/3}$ at $M^{}_X$ turn out to be \begin{eqnarray} \begin{aligned} M^{}_{+2/3}(M^{}_{X}) & \simeq 0.75 \cdot \frac{1}{3} m^{}_{t} \left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix} \right)\right. 
+ 0.85 \left[ 0.73 \cdot \frac{1}{2} \frac{m^{}_{c}}{m^{}_{t}} \left( \begin{matrix} 1 & 1 & -2 \cr 1 & 1 & -2 \cr -2 & -2 & 4 \end{matrix} \right) + \frac{ \sqrt{2}}{2} \theta \left( \begin{matrix} 0 & 0 & 2 \cr 0 & 0 & 2 \cr 2 & 2 & 4 \end{matrix} \right) \right] \\ &- 0.62 \cdot \sqrt{3} \theta^{}_{\rm u} \frac{m^{}_{c}}{m^{}_{t}} \left[ \cos{ \left( + \frac{2}{3} \phi \right)} \left( \begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix} \right) + {\rm i} \sin{ \left(+ \frac{2}{3} \phi \right)} \left( \begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &- 0.53 \cdot \frac{\sqrt{6}}{3} \theta \theta^{}_{\rm u} \frac{m^{}_{c}}{m^{}_{t}} \left[\cos{ \left(+ \frac{2}{3} \phi \right)} \left( \begin{matrix} 2 & 0 & 1 \cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix} \right) - {\rm i} \sin{ \left(+\frac{2}{3} \phi\right)} \left( \begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix}\right)\right] \\ &\left.+ 0.62 \cdot \frac{3}{2} \left( \frac{m^{}_{u}}{m^{}_{t}} + \theta^{2}_{\rm u} \frac{m^{}_{c}}{m^{}_{t}} \right)\left(\begin{matrix} 1 & -1 & 0\cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix} \right)\right\} \;, \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{aligned} M^{}_{-1/3}(M^{}_{X}) & \simeq 0.35 \cdot \frac{1}{3} m^{}_{b} \left\{\left( \begin{matrix} 1 & 1 & 1 \cr 1 & 1 & 1 \cr 1 & 1 & 1 \end{matrix} \right) \right. 
+ 0.85 \left[ 1.00 \cdot \frac{1}{2} \frac{m^{}_{s}}{m^{}_{b}} \left( \begin{matrix} 1 & 1 & -2 \cr 1 & 1 & -2 \cr -2 & -2 & 4 \end{matrix} \right) - \frac{\sqrt{2}}{4} \theta \left( \begin{matrix} 0 & 0 & 2 \cr 0 & 0 & 2 \cr 2 & 2 & 4 \end{matrix} \right) \right] \\ &- 0.85 \cdot \sqrt{3} \theta^{}_{\rm d} \frac{m^{}_{s}}{m^{}_{b}} \left[ \cos{\left( - \frac{1}{3} \phi \right)} \left( \begin{matrix} 1 & 0 & -1 \cr 0 & -1 & 1 \cr -1 & 1 & 0 \end{matrix} \right) + {\rm i} \sin{ \left(- \frac{1}{3} \phi \right)} \left( \begin{matrix} 0 & 1 & -1 \cr -1 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &+ 0.72 \cdot \frac{\sqrt{6}}{6} \theta \theta^{}_{\rm d} \frac{m^{}_{s}}{m^{}_{b}} \left[\cos{ \left(- \frac{1}{3} \phi \right)} \left( \begin{matrix} 2 & 0 & 1 \cr 0 & -2 & -1 \cr 1 & -1 & 0 \end{matrix} \right) - {\rm i} \sin{ \left(- \frac{1}{3} \phi \right)} \left( \begin{matrix} 0 & -2 & -1 \cr 2 & 0 & 1 \cr 1 & -1 & 0 \end{matrix} \right) \right] \\ &\left. + 0.85 \cdot \frac{3}{2} \left( \frac{m^{}_{d}}{m^{}_{b}} + \theta^{2}_{\rm d} \frac{m^{}_{s}}{m^{}_{b}} \right)\left(\begin{matrix} 1 & -1 & 0\cr -1 & 1 & 0 \cr 0 & 0 & 0 \end{matrix} \right)\right\} \;, \end{aligned} \end{eqnarray} from which one can clearly see the RGE-induced corrections to the relevant terms in each quark sector. Hence such quantum effects should not be ignored when building a specific quark mass model based on the flavor democracy at $M^{}_X$ and confronting its predictions with the experimental data at $M^{}_Z$. At this point it is worth mentioning that the approximate four-zero textures of $M^{}_{+2/3}$ and $M^{}_{-1/3}$ in the hierarchy basis are essentially stable against the RGE running effects. 
Here the stability of the texture zeros means that the (1,1), (1,3) and (3,1) elements of each quark mass matrix at $M^{}_X$ remain strongly suppressed in magnitude as compared with their neighboring counterparts, and thus it is a reasonable approximation, from a phenomenological point of view, to take them to be vanishing at any energy scale between $M^{}_Z$ and $M^{}_X$ \cite{XZ2015}. Such an observation makes sense because the four-zero textures of Hermitian quark mass matrices, or their variations, are especially favored by current experimental data and hence deserve special attention in model-building exercises. \section{Summary} It has been known for quite a long time that the democracy of quark flavors is one of the well-motivated flavor symmetries for building a viable quark mass model, but how this symmetry should be broken, and to what extent, remains a highly nontrivial question. To minimize the number of free parameters, in this work we have assumed a structural parallelism between the $Q=+2/3$ and $Q=-1/3$ quark sectors, and proposed a novel way to reconstruct the texture of flavor democracy breaking and to evaluate its strength in each sector with the help of the Fritzsch-Xing parametrization of the CKM flavor mixing matrix. Some phenomenological implications of such flavor-democratized quark mass matrices, in particular their variations with possible texture zeros in the hierarchy basis and their RGE evolution from the electroweak scale to a superhigh-energy scale, have also been discussed. We hope that this kind of study will prove useful for a deeper exploration of the underlying correlation between the quark flavor structures and the observed quark mass spectrum and flavor mixing pattern. \vspace{0.3cm} This research work was supported in part by the National Natural Science Foundation of China under grant No. 11375207 and the National Basic Research Program of China under grant No. 2013CB834300.
\section{Introduction} \label{sec:intro} Far-field wireless power transfer (WPT) is considered a promising technique to revolutionize the powering of low-power devices and to enable 1G mobile power networks \cite{BrunoToward}. Nevertheless, boosting the efficiency of WPT remains a key challenge \cite{clerckx2021wireless}. To this end, early efforts in the RF community focused on the design of efficient rectennas \cite{suh2002high,1556784}, while recent efforts in the communication community have emphasized the crucial benefits of efficient signal designs for WPT \cite{Clerckx2016Waveform}. Of notable importance is the work in \cite{Clerckx2016Waveform}, which developed a systematic framework for the design and optimization of waveforms that maximize the harvested DC power at the output of the rectenna. Such waveform optimization was further extended to other scenarios such as limited feedback \cite{Huang1}, large-scale systems \cite{HuangLarge2017}, multi-user settings \cite{HuangLarge2017,abeywickrama2021refined}, opportunistic/fair scheduling \cite{kim2020opportunistic,8476162}, multiple-input multiple-output \cite{shen2020beamforming}, low-complexity designs \cite{ClerckxA}, prototyping and experimentation \cite{KimSignal}, wireless information and power transfer (WIPT) \cite{clerckx2017wireless} and wireless powered backscatter communications \cite{clerckx2017wirelessly}. Despite this progress, the above waveform optimization was performed without much consideration for the non-linearity of the high-power amplifier (HPA) at the transmitter. Indeed, it has been verified that the HPA's non-linearity distorts the amplitude and phase of its input signal \cite{santella1998hybrid} and results in unexpected performance degradation, particularly with multi-sine waveform transmission, where the high amplitude variations make the input signal more vulnerable to the HPA's non-linearity \cite{park2020performance}.
To combat the HPA's non-linear effect, two main lines of methods have been put forward: designing signals less susceptible to the HPA's non-linearity, and applying digital pre-distortion (DPD). The former decreases the input signals' exposure to the HPA's non-linear region by limiting their amplitude variations, e.g., through peak-to-average-power-ratio (PAPR) reduction \cite{kryszkiewicz2018amplifier}, distortion power reduction across the desired bandwidth \cite{kryszkiewicz2018amplifier} and leakage power reduction across the adjacent channel \cite{goutay2021end}. Indeed, PAPR reduction has been introduced as a transmit waveform constraint in WPT in \cite{Clerckx2016Waveform}. However, this class of methods may be less efficient in WPT, both because the HPA's power efficiency is often higher in the non-linear region and because such methods are not adaptive to the HPA's characteristics. In contrast, DPD pre-distorts the desired input signal according to the HPA's transfer characteristics so as to linearize the transfer function of the joint pre-distorter-and-HPA structure \cite{fu2014frequency}. Recent literature has revealed the performance gain of using DPD in simultaneous WIPT (SWIPT) systems, observing an improved rate-energy region \cite{2020WIPTNON}. However, those papers did not propose a waveform design strategy that accounts for the HPA's non-linearity and the energy harvester's (EH's) non-linearity simultaneously in WPT/SWIPT \cite{krikidis2020information}. This letter proposes a practical WPT system model accounting for both HPA and rectenna non-linearity, and derives the optimal waveform solution for the non-linear system based on a non-linear solid-state power amplifier (SSPA) and the non-linear rectenna in \cite{Clerckx2016Waveform}. Simulations verify the benefit of the proposed waveform, which compensates for the power loss caused by the HPA's non-linearity. The paper is organised as follows. Section \ref{section_WPT_system_model} models the non-linear WPT architecture.
Section \ref{section_optimization} states the optimization problem and reformulates it into a tractable form, which is solved by successive convex programming (SCP) combined with Barrier's method and the gradient descent (GD) method. Section \ref{section_simulations} presents simulation results, and Section \ref{section_conclusion} draws the conclusions. \begin{figure}[htb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=9cm]{whole_structure1.pdf}} \end{minipage} \caption{The WPT structure with HPA and rectenna non-linearity.} \label{Fig_whole_structure} \end{figure} \section{WPT System Model} \label{section_WPT_system_model} Consider the system depicted in Fig. \ref{Fig_whole_structure}. The transmitter consists of $M$ antennas, with each antenna transmitting over $N$ evenly frequency-spaced sub-carriers. The complex input signal at the amplifier of the $m^{\text{th}}\:\:(m=1,2,...,M)$ antenna is written as: \begin{align} \label{eq_input_signal_complex} \widetilde{x}^{\text{in}}_m(t)&=\sum_{n=0}^{N-1}\widetilde{w}^{\text{in}}_{n,m}e^{j2\pi f_nt}, \end{align} where $\widetilde{w}^{\text{in}}_{n,m}$ denotes the complex weight of the $n^{\text{th}}\:\:(n=0,1,...,N-1)$ sub-carrier at the $m^{\text{th}}$ antenna, and $f_n=f_0+n\Delta_f$ denotes the frequency of the $n^{\text{th}}$ sub-carrier, with $f_0$ being the lowest sub-carrier frequency and $\Delta_f$ being the frequency spacing. The input signal $\widetilde{x}^{\text{in}}_m(t)$ is amplified and filtered before being transmitted.
Adopting the SSPA model of \cite{rapp1991effects}, the complex signal at the output of the SSPA at the $m^{\text{th}}$ antenna becomes: \begin{equation} \label{eq_HPA_model} \widetilde{x}^{\text{HPA}}_m(t)=f_{\text{SSPA}}(\widetilde{x}^{\text{in}}_m(t))=\frac{G\widetilde{x}^{\text{in}}_m(t)}{[1+(\frac{Gx^{\text{in}}_m(t)}{A_s})^{2\beta}]^{\frac{1}{2\beta}}}, \end{equation} where $x^{\text{in}}_m(t)=|\widetilde{x}^{\text{in}}_m(t)|$ is the amplitude envelope of the complex input signal $\widetilde{x}^{\text{in}}_m(t)$, $G$ denotes the small-signal gain of the SSPA, $A_s$ denotes the saturation voltage of the SSPA, and $\beta$ denotes the smoothing parameter of the SSPA. After propagating through a band-pass filter (BPF), $\widetilde{x}^{\text{HPA}}_m(t)$ becomes the complex transmit signal $\widetilde{x}^{\text{tr}}_m(t)$. Denote by $\widetilde{w}^{\text{tr}}_{n,m}$ the complex weight of the $n^{\text{th}}$ sub-carrier at the $m^{\text{th}}$ antenna. We have: \begin{align} \label{eq_transmit_signal_complex} \widetilde{x}^{\text{tr}}_m(t)&=\sum_{n=0}^{N-1}\widetilde{w}^{\text{tr}}_{n,m}e^{j2\pi f_nt}. \end{align} After propagating through the frequency-selective channel, the complex received signal at the receiver is: \begin{align} \label{eq_WPT_received_signal} \widetilde{y}(t)&=\sum_{m=1}^{M}\sum_{n=0}^{N-1}\widetilde{h}_{n,m}\widetilde{w}^{\text{tr}}_{n,m}e^{j2\pi f_nt}, \end{align} where $\widetilde{h}_{n,m}\sim \mathcal{CN}(0,1)$ denotes the complex channel gain of the $n^{\text{th}}$ sub-carrier from the $m^{\text{th}}$ transmit antenna. At the receiver, the wireless signal $\widetilde{y}(t)$ is picked up and converted into DC as a power supply via a rectenna.
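As a quick numerical sanity check (not part of the letter's simulation setup), the SSPA transfer function above can be sketched in a few lines of Python; the gain, saturation voltage and smoothing parameter below are illustrative unit values:

```python
def sspa(x_in, G=1.0, A_s=1.0, beta=1.0):
    # Rapp SSPA model: the complex envelope x_in is amplified by G while
    # its amplitude is smoothly compressed towards the saturation
    # voltage A_s; beta controls how sharp the saturation knee is.
    amp = abs(x_in)
    return G * x_in / (1.0 + (G * amp / A_s) ** (2 * beta)) ** (1.0 / (2 * beta))

small = sspa(0.01 + 0j)   # near-linear regime: output ~ G * input
large = sspa(100.0 + 0j)  # saturated regime: |output| approaches A_s
```

The two evaluations illustrate the two operating regimes that drive the rest of the letter: the SSPA is essentially transparent for small envelopes, while large envelopes are clipped near $A_s$, which is exactly what distorts high-PAPR multi-sine waveforms.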
We model the non-linear rectenna based on \cite{Clerckx2016Waveform}, whose output DC power is approximately proportional to the scaling term: \begin{align} \label{eq_scaling_term0} z_{DC}&=k_2R_{\text{ant}}\varepsilon\{y(t)^2\}+k_4R_{\text{ant}}^2\varepsilon\{y(t)^4\}\\ \label{eq_SSPA_poly_x_tr} \nonumber&=\frac{k_2R_{\text{ant}}}{2}(\sum_{m=1}^{M}\sum_{n=0}^{N-1}|\widetilde{w}^{\text{tr}}_{n,m}\widetilde{h}_{n,m}|^2)\\ \nonumber&\quad +\frac{3k_4R_{\text{ant}}^2}{8}(\sum_{\tiny{\begin{array}{c}m_0,m_1\\m_2,m_3\end{array}}}\sum_{\tiny{\begin{array}{c} n_0,n_1,n_2,n_3\\n_0+n_1=n_2+n_3\end{array}}} \widetilde{h}_{n_0,m_0}\widetilde{w}^{\text{tr}}_{n_0,m_0}\times\\ &\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\widetilde{h}_{n_1,m_1}\widetilde{w}^{\text{tr}}_{n_1,m_1}\widetilde{h}^*_{n_2,m_2}\widetilde{w}^{\text{tr}^*}_{n_2,m_2}\widetilde{h}^*_{n_3,m_3}\widetilde{w}^{\text{tr}^*}_{n_3,m_3}), \end{align} where $y(t)=\mathfrak{R}\{\widetilde{y}(t)\}$ is the real received signal, and $k_i=i_s/(i!(\eta_0 V_0)^i)$, with $i_s$ being the reverse bias saturation current, $\eta_0$ the ideality factor, $V_0$ the thermal voltage of the diode and $R_{\text{ant}}$ the characteristic impedance of the receiving antenna.
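The structure of the scaling term can be illustrated with a small Python sketch, specialized to a single transmit antenna ($M=1$) and with $k_2$, $k_4$ and $R_{\text{ant}}$ set to illustrative unit values rather than the diode parameters above:

```python
from itertools import product

def z_dc(w, h, k2=1.0, k4=1.0, R=1.0):
    # Second- and fourth-order terms of the rectenna scaling term,
    # specialized to M = 1 antenna with N sub-carriers.  k2, k4, R are
    # illustrative constants, not real diode parameters.
    N = len(w)
    hw = [h[n] * w[n] for n in range(N)]
    quad = 0.5 * k2 * R * sum(abs(x) ** 2 for x in hw)
    quart = 0.0
    for n0, n1, n2, n3 in product(range(N), repeat=4):
        if n0 + n1 == n2 + n3:  # only frequency-matched terms contribute DC
            quart += (hw[n0] * hw[n1]
                      * hw[n2].conjugate() * hw[n3].conjugate()).real
    return quad + 3.0 * k4 * R ** 2 / 8.0 * quart

# With a flat channel and fixed total power, spreading the power over
# more in-phase sub-carriers increases the fourth-order term.
z1 = z_dc([1.0 + 0j], [1.0 + 0j])
s = 0.5 ** 0.5
z2 = z_dc([s + 0j, s + 0j], [1.0 + 0j, 1.0 + 0j])
```

Here $z_2 > z_1$: the fourth-order term rewards in-phase multi-sine signals, which is precisely the rectenna's preference for high-PAPR waveforms discussed later in the simulations.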
\section{Optimization Solutions} \label{section_optimization} Subject to a transmit power constraint and an input power constraint, the optimization problem to maximize the end-to-end harvested DC power in WPT is written as: \begin{maxi!} {{\{\widetilde{w}^{\text{in}}_{n,m}\}}}{z_{DC}(\{\widetilde{w}^{\text{in}}_{n,m}\}),}{\label{eq_optimization_P1}}{\label{eq_optimization_P1_1}} \addConstraint{\frac{1}{2}\sum_m^M \sum_n^N |\widetilde{w}^{\text{in}}_{n,m}|^2 \leq P^{\max}_{\text{in}}}\label{eq_optimization_P1_2} \addConstraint{\frac{1}{2}\sum_m^M \sum_n^N |\widetilde{w}^{\text{tr}}_{n,m}(\{\widetilde{w}^{\text{in}}_{n,m}\})|^2 \leq P^{\max}_{\text{tr}},}\label{eq_optimization_P1_3} \end{maxi!} where $P^{\max}_{\text{in}}$ and $P^{\max}_{\text{tr}}$ are the input power constraint and the transmit power constraint, respectively \footnote{ Eq. \eqref{eq_optimization_P1_2} prevents the power of the SSPA's input signal from significantly exceeding the SSPA's saturation power (the maximal output power), and thus avoids poor amplifier efficiency. Eq. \eqref{eq_optimization_P1_3} limits the transmit signal's RF exposure to human beings.}. Unfortunately, the scaling term $z_{DC}$ as a function of $\{\widetilde{w}^{\text{in}}_{n,m}\}$ in Eq. \eqref{eq_optimization_P1_1} can hardly be expressed explicitly, while $z_{DC}$ as a function of $\{\widetilde{w}^{\text{tr}}_{n,m}\}$ has been written explicitly in Eq. \eqref{eq_SSPA_poly_x_tr}. Thus, to solve problem \eqref{eq_optimization_P1}, we change the optimization variables in problem \eqref{eq_optimization_P1} from $\{\widetilde{w}^{\text{in}}_{n,m}\}$ to $\{\widetilde{w}^{\text{tr}}_{n,m}\}$ and express $\{\widetilde{w}^{\text{in}}_{n,m}\}$ in Eq. \eqref{eq_optimization_P1_2} in terms of $\{\widetilde{w}^{\text{tr}}_{n,m}\}$.
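This change of variables hinges on the SSPA transfer function being invertible on amplitudes below the saturation voltage. A minimal Python sketch (with assumed unit parameters) verifies the round trip:

```python
def sspa(x, G=1.0, A_s=1.0, beta=1.0):
    # Rapp SSPA model with illustrative unit parameters.
    return G * x / (1.0 + (G * abs(x) / A_s) ** (2 * beta)) ** (1.0 / (2 * beta))

def sspa_inverse(y, G=1.0, A_s=1.0, beta=1.0):
    # Closed-form inverse used in the change of variables: valid for
    # |y| < A_s, since the SSPA output never reaches the saturation voltage.
    return (y / G) / (1.0 - (abs(y) / A_s) ** (2 * beta)) ** (1.0 / (2 * beta))

y = 0.7 + 0j
x = sspa_inverse(y)   # required amplifier input for a desired output y
roundtrip = sspa(x)   # recovers y up to floating-point error
```

Note that the required input amplitude is larger than the desired output amplitude: the inverse pre-expands the envelope to compensate for the SSPA's compression, which is why the input power constraint cannot be dropped.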
Consequently, an equivalent optimization problem is formed as: \begin{maxi!} {\substack{\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}}}{z_{DC}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}),}{\label{eq_optimization_P3}}{\label{eq_optimization_P3_1}} \addConstraint{{\sum_{m=1}^M\frac{1}{2T}\int_{T}\{\frac{x^{\text{tr}}_m(t)}{G}[\frac{1}{1-(\frac{x^{\text{tr}}_m(t)}{A_s})^{2\beta}}]^{\frac{1}{2\beta}}\}^2 dt}\nonumber\breakObjective{\leq P^{\max}_{\text{in}}}}\label{eq_optimization_P3_3} \addConstraint{\frac{1}{2}\sum_m^M \sum_n^N {\overline{w}^{\text{tr}^2}_{n,m}}+\widehat{w}^{\text{tr}^2}_{n,m} \leq P^{\max}_{\text{tr}},}\label{eq_optimization_P3_2} \end{maxi!} where $\{\overline{w}^{\text{tr}}_{n,m}\}$ and $\{\widehat{w}^{\text{tr}}_{n,m}\}$ are the real and imaginary parts of $\{\widetilde{w}^{\text{tr}}_{n,m}\}$ respectively, and $x^{\text{tr}}_m(t)$ in Eq. \eqref{eq_optimization_P3_3} is the amplitude of $\widetilde{x}^{\text{tr}}_m(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\},t)$. Both the objective function and the constraints in problem \eqref{eq_optimization_P3} can be proved convex. Since problem \eqref{eq_optimization_P3} maximizes a convex objective function, it can be solved by SCP. In SCP, the objective term is linearly approximated by its first-order Taylor expansion at a fixed operating point, forming a new tractable optimization problem whose optimal solution is used as the operating point of the next iteration. The procedure is repeated until two successive solutions are close enough, and the last solution is taken as the solution of problem \eqref{eq_optimization_P3}. Assume $(\{\overline{w}^{\text{tr},(l-1)}_{n,m}\},\{\widehat{w}^{\text{tr},(l-1)}_{n,m}\})$ is the operating point at the beginning of the $l^{\text{th}}$ iteration.
Then, $z_{DC}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})$ at the $l^{\text{th}}$ iteration is linearly approximated as: \begin{align} \label{eq_first_order_Taylor} z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})=\sum_{m=1}^M\sum_{n=0}^{N-1} \overline{\alpha}^{(l)}_{n,m}\overline{w}^{\text{tr}}_{n,m}+\widehat{\alpha}^{(l)}_{n,m}\widehat{w}^{\text{tr}}_{n,m}, \end{align} where $(\{\overline{\alpha}^{(l)}_{n,m}\},\{\widehat{\alpha}^{(l)}_{n,m}\})$ are the first-order Taylor coefficients of $(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})$ respectively at the $l^{\text{th}}$ iteration. Hence, at the $l^{\text{th}}$ iteration, problem \eqref{eq_optimization_P3} is approximated as: \begin{maxi!} {\substack{\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}}}{z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}),}{\label{eq_optimization_P4}}{\label{eq_optimization_P4_1}} \addConstraint{\text{Eq}. \eqref{eq_optimization_P3_2},\quad \text{Eq}. \eqref{eq_optimization_P3_3}.}{\label{eq_optimization_P4_2}} \end{maxi!} Problem \eqref{eq_optimization_P4} is solved by using Barrier's method, where the non-linear constraints in Eq. 
\eqref{eq_optimization_P4_2} are absorbed into the objective by reformulating problem \eqref{eq_optimization_P4} into: \begin{align} \label{eq_optimization_P4_l} \nonumber\min_{\{\overline{w}^{tr}_{n}\},\{\widehat{w}^{tr}_{n}\}} \quad &-z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})\\ &\quad +\sum_{i=1}^{2}I_-(f_{c,i}(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})), \end{align} where \begin{align} \label{eq_interpratation} I_-(x)=&\lbrace\begin{matrix} 0,\:\:\:\:&x\leq 0,\\ \infty,\:\:\:\:&x> 0, \end{matrix}\\\label{eq_interpratation1} f_{c,1}(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})=&\frac{1}{2}\sum_m^M \sum_n^N {\overline{w}^{\text{tr}^2}_{n,m}}+\widehat{w}^{\text{tr}^2}_{n,m} - P^{\max}_{\text{tr}},\\ \nonumber f_{c,2}(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})=&\sum_{m=1}^M\frac{1}{2T}\int_{T}\{\frac{x^{\text{tr}}_m(t)}{G}[\frac{1}{1-(\frac{x^{\text{tr}}_m(t)}{A_s})^{2\beta}}]^{\frac{1}{2\beta}}\}^2 dt\\ & - P^{\max}_{\text{in}}. \end{align} Further, to make problem \eqref{eq_optimization_P4_l} differentiable, $I_-(x)$ is approximated as: \begin{equation} \label{eq_I_-} \widehat{I}_-(x)=-(\frac{1}{t})\log(-x), \end{equation} where $t$ is a parameter that sets the accuracy of the approximation. The larger the $t$, the closer $\widehat{I}_-(x)$ is to ${I}_-(x)$. Consequently, for a specific $t$, the optimization problem \eqref{eq_optimization_P4_l} becomes: \begin{align} \label{eq_optimization_barrier_approx} \nonumber \min_{\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\}} \quad &-z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})-\\&\frac{1}{t}\sum_{i=1}^{2}\log(-f_{c,i}(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})), \end{align} which can be solved by gradient-based methods such as Newton's method. In summary, the optimization problem \eqref{eq_optimization_P3} is solved in an iterative manner by adopting SCP.
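The behaviour of the log-barrier approximation can be illustrated numerically; the sketch below (with an arbitrary strictly feasible point $x=-0.5$) shows the penalty vanishing as $t$ grows:

```python
import math

def barrier(x, t):
    # Log-barrier approximation of the indicator I_-(x): finite for
    # x < 0 and diverging as x approaches 0 from below.
    return -(1.0 / t) * math.log(-x)

# At a strictly feasible point the penalty shrinks as t grows,
# recovering I_-(x) = 0 for x < 0.
p_small_t = barrier(-0.5, t=1.0)
p_large_t = barrier(-0.5, t=1000.0)
```

This is the reason Algorithm 2 multiplies $t$ by $\mu_B$ at each round: each increase of $t$ tightens the approximation while the previous solution remains a good warm start for Newton's method.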
In each SCP round, the corresponding optimization problem \eqref{eq_optimization_P4} is solved by Barrier's method iteratively, exiting once $t$ is sufficiently large that problem \eqref{eq_optimization_barrier_approx} approximates problem \eqref{eq_optimization_P4} to a satisfactory accuracy. The whole optimization process is described in Algorithm \ref{SCP}. \begin{algorithm}[h] \SetAlgoLined $\textbf{Input}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(0)},\epsilon_0>0,l\leftarrow 1$\; $\textbf{Output}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{\star}$\; $\textbf{Repeat}$: \\ $\:\:\:\:\:\:1: \:$Compute $(\{\overline{\alpha}\},\{\widehat{\alpha}\})^{(l)}$ at the operating point $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l-1)}$ using Taylor expansion\; $\:\:\:\:\:\:2: \text{Compute } (\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$ using Algorithm \ref{algorithm_barrier}\; $\:\:\:\:\:\:3: \:$Update $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{\star}\leftarrow (\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$\; $\:\:\:\:\:\:4: \:$Quit if \\ $\:\:\:\:\:\:\:\:\:\:\:\:\:|{(\{\mathbf{\overline{w}}^{\text{tr}}_{n}\},\{\mathbf{\widehat{w}}^{\text{tr}}_{n}\})^{(l)}}-{(\{\mathbf{\overline{w}}^{\text{tr}}_{n}\},\{\mathbf{\widehat{w}}^{\text{tr}}_{n}\})^{(l-1)}}|< \epsilon_0$\; $\:\:\:\:\:\:\:5: \:l\leftarrow l+1$\; \caption{Successive convex programming (SCP)} \label{SCP} \end{algorithm} \begin{algorithm}[h] \SetAlgoLined $\textbf{Input}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(B_0)}\leftarrow(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l-1)},\:t>0,$\\ $\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\mu_B>0,\epsilon_B>0$\; $\textbf{Output}$:
$(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$; \\ $\textbf{Repeat}$: \\ $\:\:\:\:\:\:\:1:\:$Compute $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})$ by minimizing problem \eqref{eq_optimization_barrier_approx} using Newton's Method with initialised point $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(B_0)}$\; $\:\:\:\:\:\:\:2:\text{Update }(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}\leftarrow(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})$\; $\:\:\:\:\:\:\:3: \:\text{Quit if } 2/t < \epsilon_B$\; $\:\:\:\:\:\:\:4: \:t\leftarrow\mu_Bt,\:(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(B_0)}\leftarrow(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$\; \caption{Barrier's method} \label{algorithm_barrier} \end{algorithm} \textit{Remark 1:} Current literature optimizes the WPT transmit waveform based on different choices of optimization variables, such as the amplitude and phase of the weights \cite{2017Communications,shen2020beamforming}, the real and imaginary parts of the weights \cite{abeywickrama2021refined}, and the complex weight vector \cite{HuangLarge2017}. This letter solves problem \eqref{eq_optimization_P3} by optimizing the real and imaginary parts of the weights, because the non-linear SSPA constraint in Eq. \eqref{eq_optimization_P3_3} has only been proved convex with respect to the real and imaginary parts of the sub-carrier weights. \section{Simulations} \label{section_simulations} The power efficiency of the proposed waveform is evaluated in a Wi-Fi-like scenario with $f_0=5.18$ GHz. For the SSPA, set the smoothing parameter to $\beta=1$ and the small-signal gain to $G=1$; for the rectenna, set $i_s=5\:\mu$A, $\eta_0=1.05$, $V_0=25.86$ mV, and $R_{\text{ant}}=50\:\Omega$. Fig.
\ref{fig_diff_P_tr} compares the energy harvesting performance of the proposed input waveform with that of the waveform considering only the rectenna's non-linearity, obtained by feeding the optimal transmit waveform of \cite{Clerckx2016Waveform} directly into the SSPA. The energy harvesting performance assuming an ideal linear HPA is plotted as a benchmark (black), demonstrating the power loss caused by the HPA's non-linearity compared with the other curves. The comparison with an ideal HPA also reveals that, although a larger transmit power yields more harvested energy in practical WPT systems, it also leads to more severe power loss caused by the HPA's non-linearity. When the transmit power constraint grows sufficiently large, the harvested energy is limited by the saturation power of the SSPA. Fig. \ref{fig_diff_P_tr} also verifies that, until the transmit power constraint reaches the SSPA's saturation power ($-35\:$dBW), the proposed waveform always outperforms all the other solutions, which are optimized only for the rectenna's non-linearity. This result highlights the significance of considering the HPA's non-linearity in waveform design. Interestingly, Fig. \ref{fig_diff_P_tr} also shows that, although the non-linear HPA prefers low-PAPR input signals, using the transmit waveform with PAPR constraints in \cite{Clerckx2016Waveform} as the input waveform (PAPR=$20$) does not necessarily outperform using the transmit waveform without PAPR constraints in \cite{Clerckx2016Waveform} as the input waveform. This might originate from a trade-off between the HPA non-linearity and the rectenna non-linearity, since high-PAPR signals are preferred by the rectenna's non-linearity, whereas the opposite holds for the SSPA \cite{Clerckx2016Waveform}. This phenomenon indicates that adding a PAPR constraint alone is not sufficient to capture the HPA's non-linearity in optimal input waveform design, and thus highlights the significance of designing waveforms adaptive to the SSPA's transfer characteristics.
However, the fact that the PAPR$=12$ curve outperforms the PAPR$=20$ curve still illustrates the SSPA's preference for low-PAPR signals. \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{diff_P_tr-eps-converted-to.pdf}} \end{minipage} \caption{{Energy harvesting performance with $G=1, A_s=-35 \:$dBV,$\: P^{\max}_{\text{in}}=-20\:$dBW,$\: N=8$. `Ideal HPA' stands for feeding the optimal transmit waveform in \cite{Clerckx2016Waveform} to the input of an ideal HPA; `OPT' stands for the proposed optimal solution accounting for the SSPA's and rectenna's non-linearity; `Decoupling' stands for feeding the optimal transmit waveform in \cite{Clerckx2016Waveform} to the input of the SSPA; `PAPR=12' and `PAPR=20' stand for the optimal transmit waveform in \cite{Clerckx2016Waveform} with different PAPR constraints.}} \label{fig_diff_P_tr} \end{figure} The effect of the HPA's non-linearity on energy harvesting performance is further verified in Fig. \ref{fig_diff_N}, where $z_{DC}$ is plotted as a function of the number of sub-carriers for different saturation voltages. Fig. \ref{fig_diff_N} shows that the harvested energy increases linearly with the number of sub-carriers when the optimal transmit waveform in \cite{Clerckx2016Waveform} is fed into an ideal amplifier (black). However, when the same waveform is used with a non-linear SSPA (blue), the harvested energy tends to saturate as the number of sub-carriers keeps increasing, especially for a low SSPA saturation voltage. This is because the PAPR of the optimal waveform in \cite{Clerckx2016Waveform} increases with the number of sub-carriers, giving larger maximal signal amplitudes and exposing the signal more severely to the SSPA's non-linear regime, which results in more power loss.
In contrast, using the proposed input waveform (red) can compensate for the SSPA's non-linear effect and guarantee the same harvested energy as with an ideal amplifier, as long as the input signal does not drive the SSPA into a highly non-linear regime (e.g. $A_s=-24\:$dBV, $N=16$). \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{diff_N-eps-converted-to.pdf}} \end{minipage} \caption{{$z_{DC}$ as a function of $N$ with different $A_s$, $G=1, \: P^{\max}_{\text{in}}=-20\:$dBW, $P^{\max}_{\text{tr}}=-40\:$dBW.}} \label{fig_diff_N} \end{figure} \section{Conclusions} \label{section_conclusion} This letter proposes an input waveform design strategy that maximizes the harvested energy in WPT, considering both HPA and rectenna non-linearity. The power loss caused by the HPA's non-linearity is evaluated through simulations. The simulations also verify that the proposed input waveform achieves better energy harvesting performance than the waveform that accounts only for the rectenna's non-linearity, emphasizing the significance of considering the transmitter's non-linearity in the design of efficient wireless powered networks. \bibliographystyle{IEEEtran}
\section{Introduction} Recently, it has been found that blocked clause decomposition (BCD) can not only efficiently find backbone variables \cite{backbone:97} and implied binary equivalences through SAT sweeping, but also improve the performance of state-of-the-art SAT solvers such as Lingeling \cite{Lingeling:13} on hard application benchmarks \cite{sbliter:13,EagerMover:14}. In our experiments with the solver abcdSAT \cite{abcdSAT}, winner of the main track of SAT Race 2015, abcdSAT with BCD performed better than abcdSAT without BCD. This further confirms that BCD is a useful technique, and the subject has since attracted the attention of many researchers. A set of clauses is said to be a blocked set if it can be removed completely by Blocked Clause Elimination (BCE) \cite{BCE:99,BCE:12}. Any CNF formula can be decomposed into two blocked subsets. To make a blocked clause decomposition more useful, one always wants the two blocked subsets to be as unbalanced as possible. The problem is that it is not easy to find the most unbalanced subsets. In theory, it has been proven that finding a maximal blocked subset of a CNF formula with the largest cardinality (\emph{MaxBS} for short) is NP-hard \cite{sbliter:13}. In other words, it is impossible to find the best decomposition in polynomial time unless $P = NP$. So far, a few decomposition algorithms have been proposed; however, none of them is optimal in all respects. \emph{PureDecompose} \cite{sbliter:13} is the fastest, but its quality is poor. To improve the quality, Heule et al.\ \cite{sbliter:13} presented \emph{QuickDecompose}. However, \emph{QuickDecompose} is time-consuming. Soon after, to improve the speed, Balyo et al.\ \cite{EagerMover:14} developed a post-processing algorithm called \emph{EagerMover}.
Through an exhaustive series of experiments, we noted that although the decomposition quality of \emph{PureDecompose}+\emph{EagerMover} (\emph{PureEager} for short) and \emph{QuickDecompose} can outperform that of \emph{PureDecompose}, their quality is still not high. This paper aims at improving the decomposition quality while keeping the runtime of the algorithms under control. To achieve this goal, we present two new variants of \emph{PureDecompose}, a new decomposition algorithm based on clause correlation degree, and a new post-processing algorithm. In addition, we improve the existing BCE to speed up the decomposition. The algorithm resulting from integrating these new techniques is called \emph{MixDecompose}, and it improves the quality of decomposition significantly. On application instances, the decomposition quality of \emph{MixDecompose} is better than that of \emph{PureEager}; there is no application formula on which the quality of \emph{PureEager} is better than that of \emph{MixDecompose}. In terms of speed, \emph{MixDecompose} is still fast: on average, it took 8.97 seconds on our machine, a little slower than \emph{PureEager}, which took 7.41 seconds. However, in the worst case, \emph{MixDecompose} was faster than \emph{PureEager}: the latter exceeded 300 seconds in some cases, whereas the former took at most 110 seconds. \section{Preliminaries} In this section, we present basic concepts that will be used in the subsequent algorithms for blocked clause decomposition. \vspace{0.5em} \noindent \textbf{CNF}. It is short for conjunctive normal form. A formula in CNF is a conjunction of clauses, where each clause is a disjunction of literals, each literal being either a Boolean variable or its negation. The negation of a variable $x$ is denoted by $\bar{x}$ or $\neg x$. In general, a clause $C$ is written as $C = x_1 \vee \cdots \vee x_m$, where $x_i (1 \leq i \leq m)$ is a literal.
A formula $F$ is written as $F = C_1 \wedge \cdots \wedge C_n $, where $C_i (1 \leq i \leq n)$ is a clause. The symbols $var(F)$ and $lit(F)$ denote the sets of variables and literals occurring in a formula $F$, respectively. \vspace{0.5em} \noindent \textbf{Resolution}. Given two clauses $C_1 = l \vee a_1 \vee \cdots \vee a_m$ and $C_2 = \bar{l} \vee b_1 \vee\cdots \vee b_n$, the clause $C = a_1 \vee \cdots \vee a_m \vee b_1 \vee \cdots \vee b_n$ is called the resolvent of $C_1$ and $C_2$ on the literal $l$, which is denoted by $C = C_1 {\otimes}_l C_2$. \vspace{0.5em} \noindent \textbf{Blocked Clauses}. Given a CNF formula $F$ and a clause $C$, a literal $l \in C$ is said to block $C$ w.r.t. $F$ if (i) $C$ is a tautology w.r.t. $l$, or (ii) for each clause $C' \in F$ with $\bar{l} \in C'$, $C' {\otimes}_l C$ is a tautology. A clause is a tautology if it contains both $x$ and $\bar{x}$ for some variable $x$. When $l$ blocks $C$ w.r.t. $F$, the literal $l$ and the clause $C$ are called a blocking literal and a blocked clause, respectively. \vspace{0.5em} \noindent \textbf{BCE}. It is short for blocked clause elimination, which removes blocked clauses from CNF formulas. By BCE($F$) we mean the CNF formula resulting from repeating the following operation until fixpoint: if there is a blocked clause $C \in F$ w.r.t. $F$, let $F := F - \{C\}$. It is said that BCE can solve a formula $F$ if and only if BCE($F) = \emptyset$. The seminal work on BCE is due to Kullmann \cite{BCE:99}. \section{Blocked Clause Decomposition} In theory, any CNF formula can be decomposed into two blocked subsets. However, not all decompositions are equally effective. In general, the larger one of the two blocked sets is, the better the decomposition quality, since the larger it is, the more it resembles the original formula. Therefore, the size difference of the two sets is considered a measure of the decomposition quality.
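The blocked-literal test of case (ii) can be sketched in Python, encoding literals as signed integers and clauses as sets; the three-clause formula below is a hypothetical toy example, and case (i) (a tautological $C$) is not treated separately:

```python
def is_tautology(clause):
    # A clause is a tautology if it contains some literal and its negation.
    return any(-lit in clause for lit in clause)

def blocks(lit, clause, formula):
    # Case (ii) of the definition: `lit` blocks `clause` w.r.t. `formula`
    # if every resolvent with a clause containing the complement of `lit`
    # is a tautology.  Literals are signed integers, clauses are sets.
    return all(is_tautology((clause - {lit}) | (other - {-lit}))
               for other in formula if -lit in other)

F = [{1, 2}, {-1, -2}, {-2, 3}]
b1 = blocks(1, {1, 2}, F)   # the only resolvent, {2, -2}, is a tautology
b2 = blocks(2, {1, 2}, F)   # the resolvent {1, 3} with {-2, 3} is not
```

Note that a literal with no complementary occurrence in $F$ blocks its clause vacuously; this is exactly the case exploited by the pure-decomposition algorithms of the next section.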
Nevertheless, computing the largest blocked set of a CNF formula is NP-hard. Hence, we aim here at finding a fast decomposition of higher quality, rather than the highest quality. This paper improves \emph{pure decomposition} by defining two possible variable orderings for variable elimination. The version based on the ordering from the lowest number of occurrences to the highest is called \emph{min pure decomposition}; the version based on the opposite ordering is called \emph{max pure decomposition}. In addition, we present a simple and limited BCE and, based on it, a new decomposition algorithm called \emph{less interfere decomposition}. To further improve the quality of decomposition, we propose a new post-processing algorithm called \emph{right set guided decomposition}. We do not know beforehand which one of these algorithms is the best. However, since these algorithms are lightweight, running all of them one after the other is still fast. We obtain a fast and high-quality algorithm called \emph{MixDecompose} by integrating them sequentially. In the subsequent subsections, we introduce these algorithms one by one. \subsection{Pure Decomposition} This is viewed as the simplest decomposition algorithm. Here we call it \emph{PureDecompose} for short. Fig. 1 shows its basic idea. Let the symbols $L$ and $R$ denote the \emph{left} (\emph{large}) subset and the \emph{right} (\emph{remainder}) subset, respectively. For each variable $x$, this algorithm always adds the larger of $F_x$ and $F_{\bar x}$ to $L$ and the smaller to $R$, where $F_x$ ($F_{\bar x}$) is the set of clauses of $F$ in which $x$ occurs positively (negatively). At the termination of this algorithm, we have $F = L \cup R $ with $|L| \geq |R|$. In Fig. 1, $\max\{ F_x, F_{\bar x} \}$ means the set with the larger cardinality between $F_x$ and $F_{\bar x}$.
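The idea can be sketched in a few lines of Python (clauses as sets of signed integer literals; the five-clause formula is a hypothetical toy example):

```python
def pure_decompose(formula, num_vars):
    # For each variable x, move the larger of F_x / F_{-x} to the left
    # set L and the smaller to the remainder R, then drop every clause
    # mentioning x.  Clauses are sets of signed integer literals.
    L, R, remaining = [], [], list(formula)
    for x in range(1, num_vars + 1):
        pos = [c for c in remaining if x in c]
        neg = [c for c in remaining if -x in c]
        big, small = (pos, neg) if len(pos) >= len(neg) else (neg, pos)
        L += big
        R += small
        remaining = [c for c in remaining if x not in c and -x not in c]
    return L, R

F = [{1, 2}, {1, -2}, {-1, 2}, {2, 3}, {-3}]
L, R = pure_decompose(F, num_vars=3)   # |L| = 4, |R| = 1
```

Each clause moved to $L$ at the step for variable $x$ keeps $x$ (or $\bar{x}$) as a blocking literal once the earlier-moved clauses are eliminated, so $L$ is removable by BCE in the order of addition.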
The advantage of this algorithm is that it can be easily implemented to run in time linear in the size of $F$, using a standard structure of occurrence lists; therefore it is very fast. The drawback is that its decomposition quality is not high on many formulas. For this reason, we next improve it by defining two possible variable orderings for variable elimination. \begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 12mm $PureDecompose (F)$\\ \hskip 16mm $ L := \emptyset $\\ \hskip 16mm {\bf for } each variable $ x \in var(F) $ {\bf do}\\ \hskip 20mm $L := L \cup \max\{ F_x, F_{\bar x} \}$\\ \hskip 20mm $F := F - (F_x \cup F_{\bar x})$\\ \hskip 16mm {\bf return} $L$. \vspace{1em} \hskip 8mm \textrm{Fig. 1. Pseudo-code of \emph{PureDecompose} algorithm} \end{footnotesize} \end{sf} \end{flushleft} \subsection{Min Pure Decomposition} From a few empirical observations, we found that the performance of \emph{PureDecompose} relies significantly on the order in which variables are eliminated. Here, we present the first variant of \emph{PureDecompose}, called \emph{min pure decomposition}. Fig. 2 shows its pseudo-code. Its variable elimination order differs from that of \emph{PureDecompose}: one fifth of the variables are eliminated in the same order as in \emph{PureDecompose}, while the remaining variables are eliminated in order from the lowest literal occurrence count to the highest. If there are multiple literals with the lowest occurrence count, the literal with the minimum total size of the clauses containing it is eliminated first; the total clause size of a literal $x$ is formulated as $\sum_{C \in F_x} |C|$. \begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 12mm $MinPureDecompose (F)$\\ \hskip 16mm $ L := \emptyset $\\ \hskip 16mm $ k := 0 $\\ \hskip 16mm {\bf while } $ F \neq \emptyset $ {\bf do}\\ \hskip 20mm {\bf if } $k$ $\mathrm{mod}$ $5 = 0 $ {\bf then } select $u \in vars(F)$ in the order of variable No.
\\ \hskip 20mm {\bf else} $ m = \min_{x \in lit(F)} |F_x|$\\ \hskip 27mm $ u :=\arg \min_{|F_x|=m } \sum_{C \in F_x} |C|$\\ \hskip 20mm $L := L \cup \max\{ F_u, F_{\bar u} \}$\\ \hskip 20mm $F := F - (F_u \cup F_{\bar u})$\\ \hskip 20mm $ k := k + 1 $\\ \hskip 16mm {\bf return} $L$. \vspace{1em} \hskip 8mm \textrm{Fig. 2. Pseudo-code of \emph{MinPureDecompose} algorithm} \end{footnotesize} \end{sf} \end{flushleft} Compared to \emph{PureDecompose}, this algorithm adds only the search for the next variable to eliminate. This search can be done in $O(n \log n)$ time using an ordered heap, where $n$ is the number of variables. In the actual implementation, the number $\gamma$ of variables considered when computing $\min_{x \in lit(F)} |F_x|$ is limited to 30000 when $n < 70000$, and to 1500 otherwise. That is, the computation of $m$ in Fig. 2 is replaced with $m = \min_{x \leq n \wedge s \leq x \leq s+\gamma} \min \{|F_x|,|F_{\bar x}|\}$, where $s$ is the previous literal $u$ with the lowest occurrence count in the given range. This limit guarantees that \emph{MinPureDecompose} remains very fast even when $n$ is very large. In terms of decomposition quality, this algorithm is superior to the other algorithms on some application instances such as \emph{ctl\_4291\_567\_5\_unsat\_pre}. \subsection{Max Pure Decomposition} Now we consider the second variant of \emph{PureDecompose}, whose variable elimination order is opposite to that of the first variant. We call this variant \emph{max pure decomposition}; it is shown in Fig. 3. It always eliminates first a literal with the highest occurrence count. When multiple literals share the highest occurrence count, we select a variable with the lowest difference between its two literal occurrence counts, by computing $\min_{|F_x|=m} ||F_x| - |F_{\bar x}|| $, where $m$ is defined as $\max_{x \in lit(F)} |F_x|$. The first variant could also adopt this tie-break method.
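The two ordering-based variants thus differ only in the rule for picking the next literal to eliminate. The two selection rules, including their tie-breaks, can be sketched as follows (an illustrative sketch with our own helper names, clauses as frozensets of signed integers, and no occurrence-range limiting):

```python
def min_occurrence_pick(formula):
    """MinPure-style rule: a literal with the fewest occurrences;
    ties broken by the smallest total size of the clauses containing it."""
    lits = {l for clause in formula for l in clause}
    def key(l):
        containing = [c for c in formula if l in c]
        return (len(containing), sum(len(c) for c in containing))
    return min(lits, key=key)

def max_occurrence_pick(formula):
    """MaxPure-style rule: a literal with the most occurrences; ties broken
    by the smallest difference between the two phase counts of its variable."""
    lits = {l for clause in formula for l in clause}
    count = {l: sum(1 for c in formula if l in c) for l in lits}
    m = max(count.values())
    candidates = [l for l in lits if count[l] == m]
    return min(candidates, key=lambda l: abs(count[l] - count.get(-l, 0)))
```

In the actual algorithms, the variable of the selected literal is then eliminated exactly as in \emph{PureDecompose}.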
\begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 12mm $MaxPureDecompose (F)$\\ \hskip 16mm $ L := \emptyset $\\ \hskip 16mm {\bf while } $ F \neq \emptyset $ {\bf do}\\ \hskip 20mm $ m := \max_{x \in lit(F)} |F_x|$\\ \hskip 20mm $ u :=\arg \min_{|F_x|=m} ||F_x| - |F_{\bar x}|| $\\ \hskip 20mm $L := L \cup \max\{ F_u, F_{\bar u} \}$\\ \hskip 20mm $F := F - (F_u \cup F_{\bar u})$\\ \hskip 16mm {\bf return} $L$. \vspace{1em} \hskip 8mm \textrm{Fig. 3. Pseudo-code of \emph{MaxPureDecompose} algorithm} \end{footnotesize} \end{sf} \end{flushleft} Unlike the first variant, \emph{MaxPureDecompose} need not compute the total clause size $\sum_{C \in F_x} |C|$ for each literal $x$, so it should run faster than the first variant. To ensure that it is still very fast even when the number of variables is very large, the number $\gamma$ of variables considered when searching for a literal with the highest occurrence count is limited to 5000 when $n < 800000$, and to 500 otherwise. In other words, in the actual implementation, the computation of $m$ in Fig. 3 is replaced with $m = \max_{x \leq n \wedge s \leq x \leq s+\gamma} \max \{|F_x|,|F_{\bar x}|\}$, where $s$ is the previous literal $u$ with the highest occurrence count. The decomposition quality of this algorithm is superior to that of the other algorithms on some application instances such as \emph{complete-500}. \subsection{A Simple and Limited BCE} BCE is applied not only to BCD, but also to CNF preprocessing, so using a good BCE is very important. To improve the efficiency of \emph{LessInterfereDecompose}, which will be given in the next subsection, in Fig. 4 we present a simple and efficient BCE that differs from the one presented in \cite{BCE:12}: the BCE in \cite{BCE:12} is based on a literal-based priority queue, while ours is based on a clause-based linear linked list. Another important difference from the usual BCE is that, when $|F|\geq300000$, we do not try to test whether each literal $l$ in $C$ is a blocking literal.
We test only literals $l$ with $|F_{\bar l}|<2$. That is, we replace the statement `` {\sf{\bf for } $ l \in C $ {\bf do} }'' in the usual BCE with the statement `` {\sf {\bf for } $ l \in C $ with ($|F_{\bar l}|<2$ or $|F|<300000$ or \emph{isFirst}) {\bf do}}'', where \emph{isFirst} is a Boolean variable indicating whether BCE is invoked for the first time. On the first call to BCE, we run the usual BCE. The condition ``$|F_{\bar l}|<2$'' does not prevent BCE from being applied forever, since we always select a literal $\bar l$ with the minimum number of occurrences and move at least one clause from $F_{\bar l}$ to $R$ each time, i.e., $|F_{\bar l}|$ decreases steadily. In the decomposition algorithm given in the next subsection, using this simple BCE is much faster than using the usual BCE. Surprisingly, the decomposition quality remains unchanged in most cases, and even when it changes, the change is very small. In addition, our \emph{touch} function is different from that in \cite{BCE:12}. It is defined as \[\mathit{touch}(C,F)= \left\{ \begin{array}{l@{\quad\quad}l} \bigcup\limits_{x \in C}F_{\bar x} & |F| < 800000 \hskip 1mm \mathrm{or} \hskip 1mm isFirst \\ \bigcup\limits_{x \in C \wedge |F_x|<2}F_{\bar x} & \mathrm{otherwise} \end{array} \right. \] When $|F| \geq 800000$ and it is not the first call to BCE, we consider only the clauses touched by the negation of literals with fewer than 2 occurrences. This speeds up the decomposition of large instances. For example, using the above \emph{touch}, the runtime required by \emph{LessInterfereDecompose} to decompose \emph{q\_query\_3\_L90} drops from 600 seconds to less than 9 seconds, while the decomposition quality remains unchanged.
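Independently of the queue or list bookkeeping, the core test inside any BCE is whether every resolvent of a clause $C$ on a literal $l$ is a tautology. A minimal illustrative sketch of that check (clauses as frozensets of signed integers; names are ours, not from our implementation):

```python
def is_tautology(clause):
    """A clause is a tautology if it contains a complementary literal pair."""
    return any(-lit in clause for lit in clause)

def resolvent(c1, lit, c2):
    """Resolvent of c1 and c2 on lit, assuming lit in c1 and -lit in c2."""
    return (c1 - {lit}) | (c2 - {-lit})

def is_blocked_on(clause, lit, formula):
    """clause is blocked on lit w.r.t. formula if every resolvent with a
    clause containing -lit is a tautology (vacuously true if none exists)."""
    return all(is_tautology(resolvent(clause, lit, other))
               for other in formula
               if -lit in other and other != clause)
```

The limited BCE above simply restricts this test to literals $l$ with $|F_{\bar l}|<2$ once the formula is large.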
\begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 12mm $BCE($touched clauses $T$, formula $F$, blocked set $L)$\\ \hskip 16mm {\bf for } each clause $ C \in T \cap F$ {\bf do}\\ \hskip 20mm {\bf for } $ l \in C $ with ($|F_{\bar l}|<2$ or $|F|<300000$ or \emph{isFirst}) {\bf do}\\ \hskip 24mm {\bf if } all resolvents of $C$ on $l$ are tautologies, i.e., $C$ is blocked {\bf then}\\ \hskip 28mm $L := L \cup \{C\}$\\ \hskip 28mm $F := F - \{C\}$\\ \hskip 28mm $T := T \cup touch(C, F)$\\ \hskip 28mm {\bf continue} with next $C$ in outer loop\\ \hskip 16mm {\bf return} $L$ \vspace{1em} \hskip 8mm \textrm{Fig. 4. Pseudo-code of \emph{BCE} algorithm} \end{footnotesize} \end{sf} \end{flushleft} \subsection{Less Interfere Decomposition} Both variants of \emph{PureDecompose} are based on the variable elimination order. In fact, however, it is difficult to obtain an optimal algorithm by optimizing the variable elimination order alone, so it is necessary to find a different decomposition technique. Below we present a new algorithm called \emph{less interfere decomposition}, which is based on the order of clause elimination. Fig. 5 shows its pseudo-code. Its basic outline is as follows: move the blocked clauses in $F$ to $L$ by BCE, compute the candidate set $S$, and move each clause $C \in S \cap F$ to $R$; these steps are repeated until $F$ is empty. The computation of the candidate set $S$ is based on the notion of interfering degree. The interfering degree of a clause $C$ is defined as $\sum\limits_{C' \in F \wedge l \in C \wedge \bar{l} \in C'} Ntaut(C' {\otimes}_l C) $, where $Ntaut(X)$ is zero if $X$ is a tautology and one otherwise. The probability that $C' {\otimes}_l C$ is not a tautology is very high, so to save computing cost we may approximate the interfering degree by $\sum\limits_{C' \in F} |\{l \mid l \in C \wedge \bar{l} \in C'\}|$. \emph{LessInterfereDecompose} in Fig.
5 uses this approximate version to compute the interfering degree, and calls this measure $score$, i.e., $score[C]= \sum\limits_{C' \in F} |\{l \mid l \in C \wedge \bar{l} \in C'\}|$. To find the clauses with the maximum score, all the clauses in $F$ are traversed. Moving only one clause with the maximum score from $F$ to $R$ per traversal of $F$ would be time consuming, so we move $p$ clauses at a time, where $p=\frac{|F|}{\theta}$ for a constant $\theta$. Table\,1 shows the performance for different $\theta$'s on ACG-20-5p1 with $|F|=1416850$. For this instance, selecting 400 as the value of $\theta$ is a better choice. However, taking the other instances into account, for application instances $\theta$ is actually set to 200 when $|F| \geq 8\times10^5$, and to 2300 otherwise. For large instances we put great stock in speed and select a smaller $\theta$; for small instances we put great stock in quality and select a larger $\theta$. This is a compromise between speed and quality. For random instances, $\theta$ is set to 400 in any case. When $\frac{|F|}{\theta} < 18 $, $p$ is set to 18. As shown in Fig. 5, the $p$ clauses with the highest scores are stored in $S$ as the candidates to be moved to $R$. To save further time, we compute the interfering degree contributed only by literals with the lowest occurrence count, not by all literals.
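Because $\sum_{C' \in F} |\{l \mid l \in C \wedge \bar{l} \in C'\}|$ equals $\sum_{l \in C} |F_{\bar l}|$, the approximate interfering degrees can be computed in a single pass over the occurrence counts. An illustrative sketch (unrestricted to lowest-occurrence literals for clarity; helper names and the `heapq`-based selection are ours, not the linear-time selection of the implementation):

```python
import heapq
from collections import Counter

def interfering_scores(formula):
    """score[C] = sum over l in C of the number of clauses containing -l,
    which equals sum_{C' in F} |{l : l in C and -l in C'}|."""
    occ = Counter(l for clause in formula for l in clause)
    return [sum(occ[-l] for l in clause) for clause in formula]

def candidate_set(formula, theta=400, floor=18):
    """Indices of the p clauses with the highest scores,
    where p = |F| / theta, but at least `floor`."""
    p = max(len(formula) // theta, floor)
    scores = interfering_scores(formula)
    return set(heapq.nlargest(p, range(len(formula)), key=scores.__getitem__))
```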
\begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 12mm $LessInterfereDecompose(F)$\\ \hskip 16mm $ L := R := S := \emptyset $\\ \hskip 16mm $BCE(F,F,L)$\\ \hskip 16mm {\bf while } $ F \neq \emptyset $ {\bf do}\\ \hskip 20mm {\bf if } $ S \cap F = \emptyset $ {\bf then} \\ \hskip 25mm $ m = \min_{x \in lit(F)} |F_x|$\\ \hskip 25mm {\bf for } each clause $ C \in F ${\bf do}\\ \hskip 28mm {\bf for } each clause $e \in F_l$ with $ l \in C $ and $|F_l|=m$ {\bf do}\\ \hskip 32mm $ score[e] := score[e]+1$\\ \hskip 25mm $S :=$ \{$x | score[x] \geq \alpha$, where the $p$-th highest score is $\alpha$\}\\ \hskip 20mm select a clause $C \in S \cap F $\\ \hskip 20mm $F := F - \{C\}$\\ \hskip 20mm $BCE(touch(C, F),F, L)$\\ \hskip 16mm {\bf return} $L$ \vspace{1em} \hskip 8mm \textrm{Fig. 5. Pseudo-code of \emph{LessInterfereDecompose} algorithm} \end{footnotesize} \end{sf} \end{flushleft} \begin{table} \caption{ Performance of \emph{LessInterfereDecompose} + post-processing given in Fig.\,7 for different $\theta$'s on ACG-20-5p1. Time is in seconds} \begin{center} \renewcommand{\arraystretch}{0.95} \setlength\tabcolsep{4pt} \begin{tabular}{|r|r|c|} \hline \hline \ $\theta$ & Time & $\frac{|L|}{|F|}$ \\ \hline 50 & 6.95 & 79.66\% \\ 100 & 6.03 & 78.87\% \\ 200 & 5.44 & 79.11\% \\ 400 & 5.57 & 79.32\% \\ 800 & 6.52 & 79.31\% \\ 1600 & 8.62 & 79.77\% \\ 2400 & 10.64 & 79.78\% \\ \hline \end{tabular} \end{center} \end{table} \nopagebreak[3] The runtime of \emph{LessInterfereDecompose} consists of three parts: \emph{BCE}, computing the scores, and determining the candidate sets $S$. The runtime of computing the scores is $O(\frac{|F|^2}{p})=O(\theta|F|)$. Given the scores, determining an $S$ can be done in time linear in $|F|$, since there exist linear-time algorithms for finding the $p$-th highest score \cite{find:97,find:00}. The total runtime of computing the scores plus determining the $S$'s does not exceed $O(\theta|F|)$.
Suppose the number of clauses touched by each clause does not exceed a constant $\delta$; note that $\delta$ is certainly smaller than the maximal number of literal occurrences times the maximal clause size, i.e., $\max_{x \in lit(F)} |F_x| \times \max_{C \in F}|C|$. Then the total time required by all calls to \emph{BCE} is at most $O(\delta |F|)$, and thus the total runtime of \emph{LessInterfereDecompose} is at most $O((\delta+\theta)|F|)$. In practice, $\delta$ is generally very small. Should $\delta$ be very large, we can remove a part of the touched clauses to reduce the time required by \emph{BCE} to test whether a clause in the touch list is blocked, or limit the size of the touch list to a small constant, say 2000. Such a policy guarantees that the time complexity of \emph{LessInterfereDecompose} is linear in $|F|$. Compared with \emph{EagerMover} \cite{EagerMover:14}, the runtime spent in \emph{BCE} by \emph{LessInterfereDecompose} is smaller. \emph{EagerMover} calls \emph{BCE} at least four times on a subset of size $0.75|F|$. In \emph{LessInterfereDecompose}, all the calls to \emph{BCE} triggered by the individual clauses $C$ of $F$ can together be viewed as one call to \emph{BCE} on the whole of $F$, so its total \emph{BCE} runtime corresponds to twice the runtime of \emph{BCE} on one $F$. As long as computing the scores and determining the candidate sets $S$ is cheaper than one run of \emph{BCE} on $F$, \emph{LessInterfereDecompose} should be faster than \emph{EagerMover}; this is in fact the case, and on some instances the former is indeed faster than the latter. \subsection{Right Set Guided Post-processing} In general, the above algorithms do not achieve a maximal blocked set decomposition; they can be improved further by post-processing. The post-processing often used is the \emph{MoveBlockedClause} algorithm shown in Fig. 7, which moves clauses that are blocked with respect to the current $L$ from $R$ to $L$.
We noted that even after this post-processing algorithm is applied, the decomposition quality can still be improved. For this reason, we present a new post-processing algorithm called \emph{right set guided decomposition}, shown in Fig. 6. It is a simplified version of \emph{LessInterfereDecompose}: replacing $S$ with $R$ yields this algorithm. It requires that the right set $R$ be given in advance, and hence it is generally used as post-processing. It is faster than \emph{LessInterfereDecompose}, since it need not compute $R$. Its time complexity depends mainly on that of \emph{BCE}. For some benchmarks, this algorithm can significantly improve the decomposition quality. For example, when decomposing \emph{SAT\_dat.k75-24\_1\_rule\_3} using \emph{MinPureDecompose}, the fractions of the large subset (i.e., $\frac{|L|}{|F|}$) with and without \emph{RsetGuidedDecompose} are $83.9\%$ and $69.9\%$, respectively. If \emph{MinPureDecompose} is replaced with \emph{LessInterfereDecompose}, the fractions are 87.8\% and 87.3\%, respectively; \emph{RsetGuidedDecompose} still raises the quality by 0.5\%. However, the speed difference among the three algorithms is large. On this instance, \emph{LessInterfereDecompose}, \emph{RsetGuidedDecompose} and \emph{MinPureDecompose} spent 25, 4 and 1 seconds, respectively. The slowest of them, \emph{LessInterfereDecompose}, is not suitable for huge instances with tens of millions of clauses. \begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 12mm $RsetGuidedDecompose($formula $F$, right set $R$)\\ \hskip 16mm $ L := \emptyset $\\ \hskip 16mm $BCE(F,F,L)$\\ \hskip 16mm {\bf while } $ F \neq \emptyset $ {\bf do}\\ \hskip 20mm select a clause $C \in (R \cap F) $\\ \hskip 20mm $F := F - \{C\}$\\ \hskip 20mm $BCE(touch(C, F),F, L)$\\ \hskip 16mm {\bf return} $L$ \vspace{1em} \hskip 8mm \textrm{Fig. 6.
Pseudo-code of \emph{RsetGuidedDecompose} algorithm} \end{footnotesize} \end{sf} \end{flushleft} \subsection{Mix Decomposition} \begin{flushleft} \begin{sf} \begin{footnotesize} \hskip 22mm $MoveBlockedClause($left blocked set $L$, right set $R)$\\ \hskip 26mm {\bf for } each clause $ C \in R$ {\bf do}\\ \hskip 30mm {\bf if } $\mathrm{BCE}(L \cup \{C\})=\emptyset$ {\bf then} $L := L \cup \{C\}$\\ \hskip 26mm {\bf return} $L$ \vspace{1em} \hskip 12mm $MixDecompose($formula $F$)\\ \hskip 16mm $L_1:=PureDecompose(F)$\\ \hskip 16mm $L_2:=MinPureDecompose(F)$\\ \hskip 16mm $L_3:=MaxPureDecompose(F)$\\ \hskip 16mm $L:=\max\{L_1,L_2,L_3\}$\\ \hskip 16mm {\bf if } $|F|<5\times10^6$ and $|var(F)|<10^6$ {\bf then} \\ \hskip 20mm $L_4:=LessInterfereDecompose(F)$\\ \hskip 20mm $L:=\max\{L,L_4\}$\\ \hskip 16mm $L:=RsetGuidedDecompose(F, F-L)$\\ \hskip 16mm $L:=MoveBlockedClause(L,F-L)$\\ \hskip 16mm {\bf return} $L$ \vspace{1em} \hskip 8mm \textrm{Fig. 7. Pseudo-code of \emph{MoveBlockedClause}, \emph{MixDecompose} algorithm} \end{footnotesize} \end{sf} \end{flushleft} In general, we do not know in advance which algorithm is best. Because all the algorithms given in the previous subsections are very fast, running them one after another does not cost much time, and we can therefore construct an algorithm with both high speed and high quality by combining them. The detailed implementation is shown in Fig. 7. We call this algorithm \emph{MixDecompose}. Its basic idea is first to take the largest of the three left sets output by the three pure decomposition algorithms as the initial $L$. If the formula to be decomposed is not large, say the numbers of clauses and variables are less than $5\times10^6$ and $10^6$, respectively, we invoke \emph{LessInterfereDecompose} to get a larger $L$. Finally, we enlarge $L$ by calling two post-processing algorithms: \emph{RsetGuidedDecompose} and \emph{MoveBlockedClause}.
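Abstracting away the size thresholds, the portfolio step of Fig. 7 is a small driver over the individual decomposers. An illustrative sketch (the decomposer and post-processor arguments are stand-ins, not our actual routines):

```python
def mix_decompose(formula, decomposers, post_processors=()):
    """Run each decomposer, keep the largest left set L (by cardinality),
    then let each post-processor try to enlarge it, as in Fig. 7."""
    best = max((d(formula) for d in decomposers), key=len)
    for post in post_processors:
        enlarged = post(formula, best)
        if len(enlarged) >= len(best):
            best = enlarged
    return best
```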
Like the usual post-processing, the task of \emph{MoveBlockedClause} is to move blocked clauses from $R$ to $L$. Note that if $F$ is large, say $|F| > 10^7$, the last post-processing step can be skipped to save running time. A blocked clause decomposition is classified as \emph{symmetric} if both subsets can be solved by BCE, and \emph{asymmetric} if only one of the subsets can be solved by BCE. Clearly, our two variants of \emph{PureDecompose} are symmetric. If blockable clauses (whose definition is given below) are allowed to move to $L$ as in \emph{EagerMover} \cite{EagerMover:14}, i.e., if \emph{MoveBlockedClause} is replaced with the procedure \emph{MoveBlockableClause} of \emph{EagerMover}, then \emph{MixDecompose} is asymmetric, since it cannot guarantee that both subsets can be solved by BCE. However, by our observation, even with this replacement the decomposition is still symmetric on almost all application instances. \section{Empirical Evaluation} We evaluated the performance of each decomposition algorithm on the 297 instances from the application track of the SAT competition 2014, except for three huge ones: zfcp-2.8-u2-nh, esawn\_uw3.debugged and post-cbmc-zfcp-2.8-u2. These three huge instances were removed because there was not enough memory to decompose them. All the algorithms were run on the following experimental platform: an Intel Core 2 Quad Q6600 CPU at 2.40GHz with 2GB of memory. Each tested algorithm is written in C. The source code of \emph{MixDecompose} is available at http://github.com/jingchaochen/MixBcd. This paper presented four decomposition algorithms. To understand the characteristics of each of them more clearly, we compared them experimentally.
Empirical results reveal that, except for the 41 application instances listed in Table\,2, the quality of \emph{LessInterfereDecompose} is superior to that of the other three algorithms, \emph{MinPureDecompose}, \emph{PureDecompose} and \emph{MaxPureDecompose}, on all the other instances. That is to say, there are 256 application instances on which \emph{LessInterfereDecompose} is superior to the other three algorithms in terms of quality. Due to limited space, however, Table\,3 lists only some of these instances. In Tables\,2--4, $|F|$ denotes the number of clauses in formula $F$, where $F$ is simplified by removing satisfied clauses but still contains unit clauses. To obtain such an $F$, before calling each decomposition algorithm, we preprocess the input formula with the same unit decomposition policy as \emph{EagerMover}, given in \cite{EagerMover:14}. Column $\frac{|L|}{|F|}$ indicates the fraction of the large set. Column Time shows the runtime in seconds. Judging from Table\,2 alone, \emph{MinPureDecompose} seems less important than the others, since there are only two instances where it is better than the others. However, on some large instances it is in fact very important. For example, for \emph{9vliw\_m\_9stage\_iq3\_C1\_b1} it is very important, because on this instance \emph{LessInterfereDecompose} is much slower than \emph{MinPureDecompose}, while their quality difference is small, as shown in the last row of Table\,3. When executing \emph{MixDecompose}, to save runtime we skip \emph{LessInterfereDecompose} and adopt the best result of the other algorithms (which may well be \emph{MinPureDecompose}) when $|F| \geq 5\times10^6$. \begin{table} \caption{All application instances where \emph{LessInterfereDecompose} (\emph{LessInterfere} for short) is inferior to the other three algorithms: \emph{MinPureDecompose} (\emph{MinPure} for short), \emph{PureDecompose}, \emph{MaxPureDecompose} (\emph{MaxPure} for short).
Time is in seconds.} \begin{center} \renewcommand{\arraystretch}{0.95} \setlength\tabcolsep{4pt} \begin{tabular}{|l|r|c|c|c|c|c|c|c|c|} \hline \hline \ & & \multicolumn{2}{c|} {\emph{MinPure}} & \multicolumn{2}{c|}{\emph{PureDecompose}} & \multicolumn{2}{c|} {\emph{MaxPure}} & \multicolumn{2}{c|}{\emph{LessInterfere}} \\ \cline{3-10} \multicolumn{1}{|c|}{\raisebox{1.5ex}[0pt]{Instances}} & \raisebox{1.0ex}[0pt]{\large $\frac{|F|}{10^4}$} & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time\\ \hline ctl\_3791\_556\_unsat & 8 & \textbf{93.1\%} & 0.14 & 88.9\% & 0.01 & 75.3\% & 0.03 & 88.1\% & 3.17\\ ctl\_4291\_567\_5\_unsat & 13 & \textbf{87.6\%} & 0.66 & 83.7\% & 0.01 & 68.6\% & 0.04 & 80.3\% & 9.94\\ atco\_enc1\_opt2\_20\_12 & 653 & 78.5\% & 1.48 & \textbf{82.9\%} & 0.51 & 77.3\% & 1.47 & 80.3\% & 57.7 \\ atco\_enc2\_opt1\_20\_11 & 971 & 84.2\% & 2.34 & \textbf{87.0\%} & 0.86 & 82.7\% & 3.75 & 86.0\% & 64.2 \\ atco\_enc2\_opt2\_20\_11 & 651 & 78.7\% & 1.40 & \textbf{83.0\%} & 0.50 & 77.3\% & 1.46 & 80.4\% & 58.1 \\ atco\_enc3\_opt1\_03\_53 & 427 & 51.0\% & 6.61 & \textbf{75.8\%} & 0.74 & 75.4\% & 12.9 & 60.5\% & 35.2 \\ atco\_enc3\_opt1\_04\_50 & 561 & 50.5\% & 8.90 & \textbf{75.1\%} & 1.01 & 75.0\% & 21.9 & 56.9\% & 60.2 \\ atco\_enc3\_opt1\_13\_48 & 608 & 50.5\% & 9.92 & \textbf{75.1\%} & 1.12 & 75.0\% & 22.8 & 56.9\% & 70.4 \\ atco\_enc3\_opt2\_05\_21 & 538 & 50.8\% & 8.63 & \textbf{75.8\%} & 1.02 & 75.7\% & 21.3 & 57.8\% & 55.2 \\ grieu-vmpc-31 & 15 & 79.7\% & 0.03 & \textbf{79.8\%} & 0.01 & 79.7\% & 0.01 & 79.7\% & 4.60 \\ openstack-p30\_3.085 & 141 & 76.7\% & 0.52 & \textbf{92.8\%} & 0.10 & 85.1\% & 0.13 & 89.6\% & 2.04 \\ openstack-s-p30\_3.085 & 141 & 76.7\% & 0.51 & \textbf{92.8\%} & 0.11 & 85.1\% & 0.14 & 89.6\% & 2.03 \\ reg\_s\_2\_unknown & 170 & 77.0\% & 1.17 & \textbf{79.4\%} & 0.16 & 69.0\% & 1.22 & 72.3\% & 101 \\ vmpc\_29 & 12 & 79.6\% & 0.01 & \textbf{79.7\%} & 0.01 & 79.6\% & 0.01 & 79.6\% & 
4.17 \\ vmpc\_32 & 16 & 79.6\% & 0.02 & \textbf{79.7\%} & 0.01 & 79.6\% & 0.01 & 79.6\% & 5.59 \\ vmpc\_33 & 18 & 79.6\% & 0.04 & \textbf{79.7\%} & 0.01 & 79.6\% & 0.01 & 79.6\% & 6.81 \\ atco\_enc3\_opt2\_10\_12 & 422 & 50.3\% & 6.61 & 75.0\% & 0.77 & \textbf{75.1\%} & 14.2 & 60.0\% & 35.7 \\ atco\_enc3\_opt2\_10\_14 & 423 & 50.3\% & 6.64 & 75.0\% & 0.78 & \textbf{75.1\%} & 14.2 & 60.0\% & 36.1 \\ atco\_enc3\_opt2\_18\_44 & 457 & 50.3\% & 7.22 & 75.0\% & 0.79 & \textbf{75.1\%} & 14.5 & 60.0\% & 41.8 \\ complete-300-0.1-18 & 3 & 90.2\% & 0.01 & 90.0\% & 0.01 & \textbf{93.8\%} & 0.01 & 90.8\% & 0.79 \\ complete-300-0.1-4 & 3 & 90.8\% & 0.01 & 89.3\% & 0.01 & \textbf{92.9\%} & 0.01 & 90.4\% & 0.72 \\ complete-300-0.1-7 & 3 & 90.2\% & 0.01 & 89.8\% & 0.01 & \textbf{93.5\%} & 0.01 & 90.8\% & 0.81 \\ complete-300-0.1-8 & 3 & 90.8\% & 0.01 & 90.2\% & 0.01 & \textbf{93.0\%} & 0.01 & 90.6\% & 0.71 \\ complete-400-0.1-12 & 5 & 92.8\% & 0.01 & 91.6\% & 0.01 & \textbf{94.6\%} & 0.01 & 92.0\% & 2.70 \\ complete-400-0.1-16 & 5 & 91.5\% & 0.01 & 91.7\% & 0.02 & \textbf{94.5\%} & 0.01 & 92.8\% & 2.71 \\ complete-400-0.1-3 & 5 & 91.9\% & 0.01 & 92.0\% & 0.02 & \textbf{94.7\%} & 0.01 & 91.8\% & 2.65 \\ complete-400-0.1-7 & 5 & 91.7\% & 0.02 & 91.3\% & 0.01 & \textbf{94.6\%} & 0.01 & 92.1\% & 2.61 \\ complete-500-0.1-1 & 7 & 92.9\% & 0.03 & 93.0\% & 0.03 & \textbf{95.5\%} & 0.01 & 93.3\% & 1.97 \\ complete-500-0.1-15 & 7 & 92.7\% & 0.03 & 93.3\% & 0.03 & \textbf{95.5\%} & 0.01 & 92.6\% & 2.11 \\ complete-500-0.1-17 & 7 & 93.3\% & 0.04 & 93.1\% & 0.04 & \textbf{95.6\%} & 0.01 & 92.8\% & 2.08 \\ complete-500-0.1-7 & 7 & 93.2\% & 0.02 & 93.1\% & 0.03 & \textbf{95.6\%} & 0.01 & 93.3\% & 2.01 \\ complete-500-0.1-8 & 7 & 93.5\% & 0.03 & 92.8\% & 0.02 & \textbf{95.7\%} & 0.01 & 93.3\% & 1.98 \\ pb\_300\_10\_lb\_07 & 55 & 58.0\% & 0.70 & 64.7\% & 0.02 & \textbf{75.5\%} & 0.10 & 71.0\% & 15.1 \\ pb\_300\_10\_lb\_08 & 56 & 58.0\% & 0.71 & 64.7\% & 0.04 & \textbf{75.5\%} & 0.09 & 70.6\% & 15.3 
\\ stable-300-0.1-20 & 2 & 88.9\% & 0.01 & 89.6\% & 0.01 & \textbf{93.3\%} & 0.01 & 87.1\% & 0.52 \\ stable-400-0.1-11 & 3 & 90.5\% & 0.01 & 91.8\% & 0.01 & \textbf{94.4\%} & 0.01 & 89.8\% & 1.51 \\ stable-400-0.1-12 & 3 & 91.2\% & 0.02 & 90.4\% & 0.01 & \textbf{94.1\%} & 0.01 & 89.2\% & 1.60 \\ stable-400-0.1-2 & 3 & 90.6\% & 0.01 & 90.5\% & 0.01 & \textbf{94.3\%} & 0.01 & 88.9\% & 1.59 \\ stable-400-0.1-4 & 3 & 90.3\% & 0.01 & 91.2\% & 0.01 & \textbf{94.3\%} & 0.01 & 89.1\% & 1.63 \\ stable-400-0.1-5 & 3 & 89.5\% & 0.01 & 91.6\% & 0.01 & \textbf{94.1\%} & 0.01 & 88.7\% & 1.61 \\ stable-400-0.1-7 & 3 & 89.7\% & 0.02 & 91.3\% & 0.01 & \textbf{94.3\%} & 0.01 & 89.0\% & 1.56 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Some application instances where the quality of \emph{LessInterfereDecompose}(\emph{LessInterfere}) is superior to that of the other three algorithms: \emph{MinPureDecompose} (\emph{MinPure}), \emph{PureDecompose}, \emph{MaxPureDecompose} (\emph{MaxPure}). 
Time is in seconds.} \begin{center} \renewcommand{\arraystretch}{0.95} \setlength\tabcolsep{4pt} \begin{tabular}{|l|r|c|c|c|c|c|c|c|c|} \hline \hline \ & & \multicolumn{2}{c|} {\emph{MinPure}} & \multicolumn{2}{c|}{\emph{PureDecompose}} & \multicolumn{2}{c|} {\emph{MaxPure}} & \multicolumn{2}{c|}{\emph{LessInterfere}} \\ \cline{3-10} \multicolumn{1}{|c|}{\raisebox{1.5ex}[0pt]{Instances}} & \raisebox{1.0ex}[0pt] {\large $\frac{|F|}{10^4}$} & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time\\ \hline 001-80-12 & 31 & 58.3\% & 0.97 & 57.7\% & 0.03 & 57.5\% & 0.06 & 99.2\% & 6.05 \\ 010-80-12 & 31 & 58.3\% & 0.94 & 57.7\% & 0.04 & 57.5\% & 0.05 & 99.2\% & 6.09 \\ 6s10 & 10 & 66.5\% & 3.18 & 63.0\% & 0.01 & 63.0\% & 0.12 & 99.9\% & 0.07 \\ 6s123 & 241 & 63.4\% & 4.99 & 62.5\% & 0.25 & 60.6\% & 0.17 & 97.7\% & 0.67 \\ 7pipe\_k & 75 & 96.7\% & 1.91 & 96.0\% & 0.12 & 96.1\% & 0.04 & 99.9\% & 9.68 \\ 8pipe\_k & 133 & 97.2\% & 1.58 & 96.7\% & 0.27 & 96.7\% & 0.09 & 97.9\% & 2.74 \\ 9dlx\_vliw\_at\_b\_iq3 & 97 & 93.4\% & 4.93 & 92.7\% & 0.12 & 92.2\% & 0.11 & 96.7\% & 1.21 \\ 9dlx\_vliw\_at\_b\_iq9 & 968 & 95.1\% & 3.52 & 94.4\% & 1.61 & 94.3\% & 3.26 & 96.5\% & 50.5 \\ ACG-15-10p0 & 92 & 71.5\% & 1.06 & 71.8\% & 0.12 & 68.7\% & 2.16 & 76.3\% & 1.93 \\ aes\_24\_4\_keyfind\_4 & 1 & 54.4\% & 0.01 & 54.3\% & 0.01 & 55.1\% & 0.01 & 66.9\% & 0.21 \\ aes\_64\_1\_keyfind\_1 & 0.3 & 52.8\% & 0.01 & 56.1\% & 0.01 & 50.7\% & 0.01 & 89.2\% & 0.01 \\ AProVE07-01 & 2 & 61.3\% & 0.30 & 61.1\% & 0.01 & 61.1\% & 0.01 & 96.5\% & 0.01 \\ AProVE09-06 & 26 & 62.2\% & 0.57 & 62.2\% & 0.02 & 62.2\% & 0.23 & 99.9\% & 0.16 \\ atco\_enc1\_opt1\_04\_32 & 55 & 69.4\% & 3.08 & 70.7\% & 0.04 & 69.5\% & 0.21 & 76.3\% & 44.3 \\ beempgsol2b1 & 8 & 65.8\% & 2.10 & 62.6\% & 0.11 & 63.1\% & 0.09 & 99.9\% & 0.30 \\ bjrb07amba10andenv & 59 & 66.7\% & 1.31 & 66.4\% & 0.08 & 66.3\% & 0.94 & 99.9\% & 4.79 \\ blocks-blocks-37-1.130 & 728 & 88.6\% & 1.88 & 
90.6\% & 0.60 & 87.4\% & 5.25 & 94.4\% & 9.16 \\ bob12m04 & 168 & 66.7\% & 3.64 & 60.9\% & 0.17 & 61.5\% & 3.12 & 99.9\% & 2.74 \\ c10bi\_i & 40 & 66.7\% & 0.99 & 62.1\% & 0.03 & 62.0\% & 0.41 & 99.9\% & 0.92 \\ countbitssrl032 & 6 & 66.7\% & 1.81 & 58.5\% & 0.01 & 62.7\% & 0.06 & 99.9\% & 0.20 \\ dated-10-11-u & 48 & 69.4\% & 0.42 & 68.2\% & 0.05 & 62.1\% & 1.05 & 81.4\% & 2.60 \\ dimacs & 1 & 58.7\% & 0.01 & 58.9\% & 0.01 & 59.0\% & 0.01 & 99.9\% & 0.01 \\ E02F22 & 130 & 99.0\% & 1.35 & 98.8\% & 0.29 & 98.2\% & 0.12 & 99.9\% & 28.6 \\ grid-strips-grid-y-3.065 & 350 & 87.1\% & 0.68 & 85.8\% & 0.29 & 90.5\% & 0.28 & 97.1\% & 1.77 \\ gss-25-s100 & 10 & 65.0\% & 3.01 & 64.8\% & 0.01 & 64.6\% & 0.17 & 99.8\% & 0.04 \\ hitag2-8-60-0--47 & 3 & 53.8\% & 0.30 & 52.1\% & 0.01 & 53.3\% & 0.01 & 98.4\% & 0.53 \\ hwmcc10-k45-pdts3p02 & 49 & 66.7\% & 1.09 & 65.2\% & 0.04 & 64.8\% & 0.71 & 99.9\% & 0.25 \\ itox\_vc1130 & 44 & 54.3\% & 0.71 & 54.9\% & 0.03 & 54.7\% & 0.78 & 97.5\% & 1.34 \\ k2fix\_gr\_rcs\_w9.shuffled & 31 & 99.6\% & 0.26 & 99.6\% & 0.06 & 99.6\% & 0.01 & 99.7\% & 0.23 \\ korf-17 & 9 & 92.9\% & 0.22 & 93.0\% & 0.01 & 90.5\% & 0.04 & 99.5\% & 0.16 \\ manol-pipe-c10nidw & 129 & 66.7\% & 3.11 & 61.9\% & 0.14 & 61.7\% & 1.43 & 99.9\% & 0.41 \\ maxxor032 & 4 & 66.6\% & 1.06 & 60.9\% & 0.01 & 60.8\% & 0.20 & 99.9\% & 0.04 \\ MD5-32-1 & 7 & 56.6\% & 0.34 & 51.5\% & 0.01 & 53.3\% & 0.01 & 99.0\% & 0.26 \\ minandmaxor128 & 75 & 66.7\% & 1.61 & 62.1\% & 0.07 & 62.1\% & 0.48 & 99.9\% & 3.14 \\ partial-10-17-s & 118 & 70.5\% & 1.05 & 69.0\% & 0.14 & 63.2\% & 3.62 & 78.3\% & 2.48 \\ post-c32s-ss-8 & 14 & 62.7\% & 4.06 & 58.3\% & 0.01 & 57.9\% & 0.15 & 96.2\% & 0.03 \\ rpoc\_xits\_15\_SAT & 18 & 97.9\% & 0.03 & 98.8\% & 0.01 & 97.6\% & 0.01 & 99.6\% & 0.16 \\ SAT\_dat.k90.debugged & 509 & 70.6\% & 8.66 & 68.9\% & 0.53 & 68.8\% & 8.71 & 87.5\% & 18.3 \\ slp-synthesis-aes-top28 & 27 & 64.3\% & 0.53 & 64.2\% & 0.01 & 64.2\% & 0.23 & 98.7\% & 0.10 \\ velev-vliw-uns-2.0-uq5 & 247 
& 94.2\% & 0.85 & 93.5\% & 0.35 & 93.2\% & 0.18 & 96.5\% & 4.18 \\ 9vliw\_m\_9s\_iq3\_C1\_b1 & 1338 & 86.0\% & 4.13 & 82.3\% & 2.27 & 82.4\% & 1.58 & 86.6\% & 266 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{ We run \emph{PureEager} and \emph{MixDecompose} on 297 application instances. Due to limited space and the fact that listing all is tedious, we list results on only a part of application ones and a random instance in the last row. Time is in seconds.} \begin{center} \renewcommand{\arraystretch}{0.95} \setlength\tabcolsep{4pt} \begin{tabular}{|l|r|c|c|c|c|} \hline \hline \ & & \multicolumn{2}{c|} {\emph{PureEager}} & \multicolumn{2}{c|}{\emph{MixDecompose}} \\ \cline{3-6} \multicolumn{1}{|c|}{\raisebox{1.5ex}[0pt]{Instances}} & \raisebox{1.0ex}[0pt]{\large $\frac{|F|}{10^4}$} & $\frac{|L|}{|F|}$ & Time & $\frac{|L|}{|F|}$ & Time \\ \hline 002-23-96 & 13 & 97.7\% & 1.4 & 99.3\% & 0.29 \\ aes\_24\_4\_keyfind\_4 & 1 & 57.5\% & 0.02 & 68\% & 0.11 \\ atco\_enc1\_opt1\_03\_56 & 26 & 79.3\% & 0.43 & 83.5\% & 7.89\\ blocks-blocks-36-0.120 & 607 & 92.3\% & 17.4 & 96.4\% & 12.65 \\ complete-500-0.1-17 & 8 & 93.9\% & 2.2 & 96.4\% & 3.04 \\ dated-10-11-u & 49 & 81.6\% & 1.43 & 82.6\% & 2.91 \\ dimacs & 1 & 99.9\% & 0.14 & 99.9\% & 0.04 \\ grid-strips-grid-y-3.035 & 167 & 85.1\% & 5.61 & 95.1\% & 4.18 \\ hitag2-7-60-0-80 & 3 & 73.8\% & 0.26 & 98.4\% & 1.02\\ MD5-29-3 & 7 & 81.4\% & 0.29 & 99.3\% & 0.51 \\ openstacks-p30\_3.085 & 141 & 93.5\% &1.73 & 94\% & 3.62 \\ partial-5-17-s & 101 & 74.5\% & 2.3 & 82.1\% & 5.66 \\ q\_query\_3\_L150\_coli.sat & 217 & 67.9\% & 52.4 & 85.8\% & 12.03 \\ q\_query\_3\_L90\_coli.sat & 118 & 67.8\% & 15.5 & 88.1\% & 9.04 \\ 9vliw\_m\_9stage\_iq3\_C1\_b7 & 1338 & & $>300$ & 86.8\% & 108.8 \\ 9vliw\_m\_9stage\_iq3\_C1\_b4 & 1335 & & $>300$ & 86.7\% & 109.2 \\ 9dlx\_vliw\_at\_b\_iq6 & 364 & 95.2\% & 14.7 & 96.6\% & 12.05\\ SAT\_dat.k75-24\_1\_rule\_3 & 415 & 78.9\% & 14.4 & 87.8\% & 33.83 \\ transport-35node-1000s-4d & 
590 & 92.5\% & 24.4 & 92.9\% & 15.66 \\ 7pipe\_k & 75 & 97.0\% & 1.92 & 99.9\% & 11.75 \\ ACG-15-10p1 & 94 & 76.4\% & 3.34 & 79.6\% & 7.07 \\ ctl\_3791\_556\_unsat & 8 & 89.0\% & 0.22 & 93.6\% & 3.43\\ korf-18 & 19 & 99.3\% & 3.61 & 99.7\% & 10.79 \\ E02F22 & 130 & 99.6\% & 125.8 & 99.9\% & 30.92 \\ MD5-30-4 & 7 & 86.1\% & 0.46 & 99.3\% & 0.83 \\ partial-10-11-s & 68 & 74.6\% & 1.99 & 83.1\% & 4.78 \\ rbcl\_xits\_08\_UNSAT & 7 & 99.7\% & 0.17 & 99.8\% & 0.12 \\ stable-400-0.1-4 & 3 & 91.2\% & 0.49 & 94.4\% & 3.89 \\ total-10-13-u & 79 & 80.9\% & 2.94 & 81.9\% & 7.60 \\ UCG-15-10p0 & 79 & 71.0\% & 3.38 & 78.0\% & 5.76 \\ UR-20-10p1 & 113 & 70.6\% & 4.84 & 76.3\% & 7.85 \\ UTI-20-5p1 & 99 & 70.6\% & 4.24 & 76.5\% & 13.9 \\ velev-vliw-uns-4.0-9-i1 & 323 & 81.3\% & 34.29 & 86.5\% & 21.04\\ IBM\_FV\_2004\_SAT\_dat.k40 & 18 & 91.9\% & 0.83 & 96.4\% & 4.22\\ unif-k3-r3.96-v1000000-c3960000 & & & & & \\ S8043316035928452744 & \raisebox{1.1ex}[0pt]{396} & \raisebox{1.1ex}[0pt]{76.3\%} & \raisebox{1.1ex}[0pt]{37.96} & \raisebox{1.1ex}[0pt]{83.2\%} &\raisebox{1.1ex}[0pt]{82.18}\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Comparing performance of two algorithms on 297 benchmarks from SAT Competition 2014 application track.} \begin{center} \renewcommand{\arraystretch}{0.95} \setlength\tabcolsep{4pt} \begin{tabular}{|l|c|c|c|c|c|} \hline \hline \multicolumn{1}{|c|}{Algorithm} & Ave $\frac{|L|}{|F|}$ & \# of best & \# of eq & Ave Time & Time Out \\ \hline \emph{PureEager} & 87.2\% & 0 & 71 & 7.41 & 7 \\ \hline \emph{MixDecompose} & 92.2\% &226 & 71 & 8.97 & 0 \\ \hline \end{tabular} \end{center} \end{table} To evaluate the performance of \emph{MixDecompose}, we select the very competitive \emph{PureDecompose}+\emph{EagerMover} (\emph{PureEager} for short) \cite{web:14,EagerMover:14} as our comparison object.
Although \emph{QuickDecompose} \cite{sbliter:13} was also proposed recently, we did not select it as our comparison object, because \emph{QuickDecompose} requires more time than \emph{EagerMover} for many instances. The large set $L$ obtained by \emph{PureEager} contains blockable clauses in addition to blocked clauses. A clause $C$ is said to be blockable w.r.t. a blocked set $L$ if no literal $l \in C$ is a blocking literal of any clause in $L$. Blockable clauses may be added to the blocked set because they do not destroy the blocked property; that is, blocked sets containing blockable clauses are still satisfiable. To stay consistent with the performance evaluation of \emph{PureEager}, the large set $L$ of our \emph{MixDecompose} also contains blockable clauses. Table\,4 compares the performance of \emph{PureEager} and \emph{MixDecompose} on application instances and a random instance from the SAT Competition 2014. Although we tested the two algorithms on 297 application instances, due to limited space, Table\,4 lists only a representative subset of the results. As seen in Table\,4, in terms of decomposition quality, \emph{MixDecompose} clearly outperforms \emph{PureEager}. In terms of speed, each algorithm is sometimes faster than the other. \emph{MixDecompose} was able to finish the decomposition on all SAT 2014 application benchmarks, excluding three huge instances, within 110 seconds. However, \emph{PureEager} was not able to finish on some benchmarks, such as \emph{9vliw\_m\_9stage\_iq3\_C1\_b7}, within 300 seconds. Table\,5 outlines the performance of the two algorithms on the 297 benchmarks from the SAT Competition 2014 application track. The second column shows the average fraction of the large set. Column `\# of best' indicates the number of best results obtained by an algorithm.
Column `\# of eq' is the number of results equivalent to those obtained by the other algorithm. On 226 out of 297 benchmarks, the size of the large set obtained by \emph{MixDecompose} is larger than that obtained by \emph{PureEager}. On the 71 remaining benchmarks, the quality of the two algorithms is identical. There is no application formula on which the quality of \emph{PureEager} is better than that of \emph{MixDecompose}. In addition, we also conducted experiments on random benchmarks. We observed that on all random instances, the quality of \emph{MixDecompose} is strictly better than that of \emph{PureEager}. As seen from the last row of Table\,4, \emph{MixDecompose} can decompose huge random instances with millions of clauses in a reasonable time. For random 3-SAT instances, it can increase the fraction of the large set by 5\%. The fifth column in Table\,5 shows the average runtime taken by each algorithm in seconds. Here, the average runtime counts only solved instances, excluding timed-out instances. The last column in Table\,5 lists the number of times the timeout was hit. The timeout for each algorithm was set to 300 seconds. \emph{MixDecompose} did not time out on the tested benchmarks, while \emph{PureEager} did on 7 benchmarks. \emph{MixDecompose} took at most 110 seconds. Although \emph{PureEager} ran faster than \emph{MixDecompose} on average in this experiment, the worst-case runtime of the former was significantly larger than that of the latter. \section{Conclusions and Future Work} In this paper, we developed a new blocked clause decomposition algorithm by combining several decomposition strategies. The new algorithm not only achieves high-quality decompositions but is also fast. Even for large instances, it ensures that the decomposition is done within 110 seconds on our machine. Because our machine is slower than the platform of SAT Competition 2014, the runtimes would be even lower on the latter.
In designing the blocked clause decomposition algorithm, we simplified Blocked Clause Elimination (BCE) by applying various cut-off heuristics, such as only ``touching'' the literals with few occurrences. We believe that this simple and limited BCD may also be applied to improve the performance of BCE for CNF preprocessing, without sacrificing much of the quality of the final result. So far, we know only that we can obtain higher-quality decompositions than existing algorithms such as \emph{PureEager}. However, this does not mean that \emph{MixDecompose} is the best. Developing an algorithm that is better and more efficient than \emph{MixDecompose} will be a future research topic. \bibliographystyle{splncs}
\section{Derivation of the Variational Formulation} In the following, we derive the mixed variational formulation of \cref{ss_weakform} for all evolution equations \cref{eq_balance_mass,eq_balance_momentum,eq_balance_energy,eq_balance_heatflux,eq_balance_stress} separately. Reordering the boundary conditions \textendash\ adding \cref{eq_bc_sigmant,eq_bc_rnt} to form \cref{eq_bcnew_rnt}, adding \cref{eq_bc_sn,eq_bc_mnnn} to form \cref{eq_bcnew_mnnn}, and reordering/scaling \cref{eq_bc_sigmant,eq_bc_sn} to obtain \cref{eq_bcnew_sigmant,eq_bcnew_sn} \textendash\ yields \begin{align} u_n &= \epsilon^\mathrm{w} \tilde{\chi} \left( (p-p^\mathrm{w}) + \sigma_{nn} \right) + u_n^{\mathrm{w}}, \label{eq_bcnew_un} \\ \frac{1}{\tilde{\chi}} \sigma_{nt} + u_t^{\mathrm{w}} - \frac{1}{5} s_t &= u_t + m_{nnt}, \label{eq_bcnew_sigmant} \\ R_{nt} &= \tilde{\chi} \frac{12}{5} s_t - \sigma_{nt},\label{eq_bcnew_rnt} \\ \frac{1}{2} \frac{1}{\tilde{\chi}} s_n + \theta^{\mathrm{w}} &= \theta + \frac{1}{4} \sigma_{nn} + \frac{1}{5} R_{nn} + \frac{1}{15} \Delta,\label{eq_bcnew_sn} \\ \frac{3}{4} m_{nnn} &= \tilde{\chi} \frac{9}{8} \sigma_{nn} - \frac{3}{20} s_n,\label{eq_bcnew_mnnn} \\ \left( \frac{1}{2} m_{nnn} + m_{ntt} \right) &= \tilde{\chi} \left( \frac{1}{2} \sigma_{nn} + \sigma_{tt} \right). \label{eq_bcnew_05mnnnmnnt} \end{align} \subsection{Heat Flux Balance}\label{ss_derviationHeatflux} We start by testing the heat flux balance \cref{eq_balance_heatflux} with \(\te{r}\) and apply integration by parts to the terms \begin{align} \int_\Omega \left(\te{\nabla} \te{\cdot} \tee{\sigma}\right) \te{\cdot} \te{r} \dd \te{x} &= - \int_\Omega \tee{\sigma} : \nabla \te{r} \dd \te{x} + \int_{\Gamma} \left( \tee{\sigma} \cdot \te{n} \right) \cdot \te{r} \dd l , \\ \frac{1}{2} \int_\Omega \left(\te{\nabla} \te{\cdot}\tee{R}\right) \te{\cdot} \te{r} \dd \te{x} &= - \frac{1}{2} \int_\Omega \tee{R} : \nabla \te{r} \dd \te{x} + \frac{1}{2} \int_{\Gamma} \left( \tee{R} \cdot \te{n} \right)
\cdot \te{r} \dd l , \\ \frac{5}{2} \int_\Omega \left(\te{\nabla} \theta\right) \te{\cdot} \te{r} \dd \te{x} &= - \frac{5}{2} \int_\Omega \theta \left( \nabla \cdot \te{r} \right) \dd \te{x} + \frac{5}{2} \int_{\Gamma} \theta \left( \te{r} \cdot \te{n} \right) \dd l , \\ \frac{1}{6} \int_\Omega \left(\te{\nabla} \Delta\right) \te{\cdot} \te{r} \dd \te{x} &= - \frac{1}{6} \int_\Omega \Delta \left( \nabla \cdot \te{r} \right) \dd \te{x} + \frac{1}{6} \int_{\Gamma} \Delta \left( \te{r} \cdot \te{n} \right) \dd l . \end{align} To proceed further, we expand all boundary integrals. In fact, for two spatial dimensions and regarding a local normal/tangential (characterized through \(\te{n},\te{t}\)) coordinate system along the boundary path, it holds that \begin{align} \left( \tee{\sigma} \cdot \te{n} \right) \cdot \te{r} &= \sigma_{nn} r_n + \sigma_{nt} r_t, & \left( \tee{R} \cdot \te{n} \right) \cdot \te{r} &= R_{nn} r_n + R_{nt} r_t, \\ \theta \left( \te{r} \cdot \te{n} \right) &= \theta r_n, & \Delta \left( \te{r} \cdot \te{n} \right) &= \Delta r_n. \end{align} The resulting equation for the left-hand side is normalized with the factor \((2/5)\) and reads \begin{align} & - \frac{2}{5} \int_\Omega \tee{\sigma} : \nabla \te{r} \dd \te{x} - \frac{1}{5} \int_\Omega \tee{R} : \nabla \te{r} \dd \te{x} - \int_\Omega \theta \left( \nabla \cdot \te{r} \right) \dd \te{x} - \frac{1}{15} \int_\Omega \Delta \left( \nabla \cdot \te{r} \right) \dd \te{x} \nonumber \\ & + \frac{2}{5} \int_{\Gamma} \left( \sigma_{nn} r_n + \sigma_{nt} r_t \right) \dd l + \frac{1}{5} \int_{\Gamma} \left( R_{nn} r_n + R_{nt} r_t \right) \dd l + \int_{\Gamma} \theta r_n \dd l + \frac{1}{15} \int_{\Gamma} \Delta r_n \dd l . \end{align} Into this expression, we insert the boundary conditions \cref{eq_bcnew_sn,eq_bcnew_rnt} together with the closure expressions \cref{eq_closure_r,eq_closure_delta}.
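For completeness, the bookkeeping of this insertion can be spelled out (an intermediate step added here for readability, which follows directly from \cref{eq_bcnew_sn,eq_bcnew_rnt}): the \(r_n\)-terms combine as
\begin{align}
\frac{2}{5} \sigma_{nn} + \frac{1}{5} R_{nn} + \theta + \frac{1}{15} \Delta &= \left( \theta + \frac{1}{4} \sigma_{nn} + \frac{1}{5} R_{nn} + \frac{1}{15} \Delta \right) + \frac{3}{20} \sigma_{nn} \nonumber \\ &= \frac{1}{2} \frac{1}{\tilde{\chi}} s_n + \theta^{\mathrm{w}} + \frac{3}{20} \sigma_{nn} ,
\end{align}
while \cref{eq_bcnew_rnt} turns the \(r_t\)-terms into \( \frac{2}{5} \sigma_{nt} + \frac{1}{5} R_{nt} = \frac{1}{5} \sigma_{nt} + \frac{12}{25} \tilde{\chi} s_t \).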
The final weak form of the heat balance therefore reads \begin{align} 0= & - \frac{2}{5} \int_\Omega \tee{\sigma} : \nabla \te{r} \dd \te{x} + \frac{24}{25} \operatorname{Kn} \int_\Omega {(\te{\nabla} \te{s})}_{\text{stf}} : \nabla \te{r} \dd \te{x} - \int_\Omega \theta \left( \nabla \cdot \te{r} \right) \dd \te{x} \nonumber \\ & + \frac{4}{5} \operatorname{Kn} \int_\Omega \left( \nabla \cdot \te{s} \right) \left( \nabla \cdot \te{r} \right) \dd \te{x} + \frac{3}{20} \int_{\Gamma} \sigma_{nn} r_n \dd l + \frac{1}{5} \int_{\Gamma} \sigma_{nt} r_t \dd l + \frac{12}{25} \tilde{\chi} \int_{\Gamma} s_t r_t \dd l \nonumber \\ & + \frac{1}{2} \frac{1}{\tilde{\chi}} \int_\Gamma s_n r_n \dd l + \int_\Gamma \theta^{\text{w}} r_n \dd l + \frac{4}{15} \frac{1}{\operatorname{Kn}} \int_\Omega \te{s} \te{\cdot} \te{r} \dd \te{x} , \end{align} which equivalently, using the sub-functionals \cref{eq_subf_a,eq_subf_b,eq_subf_c,eq_subf_l1}, reads as \begin{equation} a(\te{s},\te{r}) - b(\theta, \te{r}) - c(\te{r},\tee{\sigma}) = l_1(\te{r}) . \label{eq_subf_line_heatflux} \end{equation} Note that \cref{eq_subf_line_heatflux} results from defining the symmetric and trace-free operator in three dimensions as in \cref{eq:heatStfModification}, together with the orthogonality principle of the additive tensor decomposition into symmetric and skew-symmetric parts, i.e., \begin{align} \int_\Omega {(\te{\nabla} \te{s})}_{\text{stf}} : \nabla \te{r} \dd \te{x} &= \int_\Omega \text{sym}(\te{\nabla} \te{s}) : \text{sym}(\te{\nabla} \te{r}) \dd \te{x} - \frac{1}{3} \int_\Omega \left( \trace{(\te{\nabla} \te{s})} \tee{I} \right) \te{:} \te{\nabla} \te{r} \dd \te{x} \nonumber \\ &= \int_\Omega \text{sym}(\te{\nabla} \te{s}) : \text{sym}(\te{\nabla} \te{r}) \dd \te{x} - \frac{1}{3} \int_\Omega \left( \te{\nabla} \te{\cdot} \te{s} \right) \left( \te{\nabla} \te{\cdot} \te{r} \right) \dd \te{x} .
\end{align} \subsection{Energy Balance}\label{ss_derviationEnergy} The energy equation \cref{eq_balance_energy} needs special treatment already in the strong form. To obtain an (anti-) symmetric system, as we see later on, we eliminate the velocity divergence utilizing the continuity equation \cref{eq_balance_mass}. This step is not necessary when using the variables \((\rho,\theta)\) instead of \((p, \theta)\). A subsequent testing with the scalar test function \(\kappa\) yields \begin{equation} \int_\Omega \kappa \left( \te{\nabla} \te{\cdot} \te{s} \right) \dd \te{x} = \int_\Omega \left( r - \dot{m} \right) \kappa \dd \te{x} , \end{equation} which equivalently, using the sub-functionals \cref{eq_subf_b,eq_subf_l2}, reads as \begin{equation} b(\kappa,\te{s}) = l_2(\kappa) . \label{eq_subf_line_energy} \end{equation} \subsection{Stress Balance}\label{ss_derviationStress} The stress balance \cref{eq_balance_stress} has a tensorial rank of two and, therefore, needs the corresponding 2-tensor test function \(\tee{\psi}\). Normalization with the factor \((1/2)\) yields \begin{equation} \frac{2}{5} \int_\Omega {(\te{\nabla} \te{s})}_{\text{stf}} \tee{:} \tee{\psi} \dd \te{x} + \int_\Omega {(\te{\nabla} \te{u})}_{\text{stf}} \tee{:} \tee{\psi} \dd \te{x} + \frac{1}{2} \int_\Omega \left(\te{\nabla} \te{\cdot} \teee{m} \right) \tee{:} \tee{\psi} \dd \te{x} + \frac{1}{2} \frac{1}{\operatorname{Kn}} \int_\Omega \tee{\sigma} \tee{:} \tee{\psi} \dd \te{x} = 0 . \end{equation} We recall that the stress tensor \(\tee{\sigma}\) is symmetric and trace-free and choose the same properties also for \(\tee{\psi}\), such that \begin{equation} \tee{\sigma} = \begin{pmatrix} \sigma_{xx} & \sigma_{xy} & 0 \\ \sigma_{xy} & \sigma_{yy} & 0 \\ 0 & 0 & -\left(\sigma_{xx} + \sigma_{yy}\right) \end{pmatrix}, \quad \tee{\psi} = \begin{pmatrix} \psi_{xx} & \psi_{xy} & 0 \\ \psi_{xy} & \psi_{yy} & 0 \\ 0 & 0 & -\left(\psi_{xx} + \psi_{yy}\right) \end{pmatrix}.
\label{eq:stressTensorShapes} \end{equation} The symmetric and trace-free velocity gradient, therefore, expands to \begin{equation} \int_\Omega {\left( \te{\nabla} \te{u} \right)}_{\text{stf}} \tee{:} \tee{\psi} \dd \te{x} = \int_\Omega {\left( \te{\nabla} \te{u} \right)}_{\mathrm{sym}} \tee{:} \tee{\psi} \dd \te{x} - \frac{1}{3} \int_\Omega \left( \te{\nabla} \te{\cdot} \te{u} \right) \trace({\tee{\psi}}) \dd \te{x}, \end{equation} where the trace of \(\tee{\psi}\) directly vanishes due to the present setup. Using the additive tensor decomposition's orthogonality of the symmetric and skew-symmetric components, integration by parts becomes possible as \begin{equation}\label{eq:stressIpU} \int_\Omega {\left( \te{\nabla} \te{u} \right)}_{\text{stf}} \tee{:} \tee{\psi} \dd \te{x} = \int_\Omega \te{\nabla} \te{u} \tee{:} \tee{\psi} \dd \te{x} = - \int_\Omega \te{u} \tee{\cdot} \left( \te{\nabla} \te{\cdot} \tee{\psi} \right) \dd \te{x} + \int_\Gamma \te{u} \te{\cdot} \left( \tee{\psi} \te{\cdot} \te{n} \right) \dd l . \end{equation} The integral with the divergence of the 3-tensor \(\teee{m}\) utilizes integration by parts, such that a Frobenius inner product of degree three and a boundary expression result as \begin{equation}\label{eq:stressIp1} \int_\Omega (\nabla \cdot \teee{m}) \tee{:} \tee{\psi} \dd \te{x} = - \int_\Omega \teee{m} \teee{\because} \nabla \tee{\psi} \dd \te{x} + \int_{\Gamma} (\teee{m} \cdot \te{n}) : \tee{\psi} \dd l. 
\end{equation} To insert the boundary conditions, the terms of \cref{eq:stressIpU,eq:stressIp1} again use the local coordinate system with \begin{align} (\teee{m} \cdot \te{n}) : \tee{\psi} &= m_{nnn}\psi_{nn} + 2 m_{nnt}\psi_{nt} + m_{ntt}\psi_{tt} + (-m_{ntt}-m_{nnn})(-\psi_{tt}-\psi_{nn}) \nonumber \\ &= \frac{3}{2} m_{nnn}\psi_{nn} + 2 m_{nnt}\psi_{nt} + 2\left(m_{ntt}+\frac{1}{2}m_{nnn}\right)\left(\psi_{tt}+\frac{1}{2}\psi_{nn}\right), \\ \te{u} \cdot (\tee{\psi} \cdot \te{n}) &= u_n \psi_{nn} + u_t \psi_{nt}, \end{align} where \(m_{nnt} = m_{ntn}\) was utilized. All boundary terms can now be collected as \begin{multline} \frac{1}{2} \int_{\Gamma} (\teee{m} \cdot \te{n}) : \tee{\psi} \dd l + \int_{\Gamma} \te{u} \cdot (\tee{\psi} \cdot \te{n}) \dd l = \int_{\Gamma} \left( \frac{3}{4} m_{nnn} + u_n \right) \psi_{nn} \dd l \\ + \int_{\Gamma} \left( m_{nnt} + u_t \right) \psi_{nt} \dd l + \int_{\Gamma} \left(\frac{1}{2}m_{nnn}+m_{ntt}\right)\left(\frac{1}{2}\psi_{nn}+\psi_{tt}\right) \dd l, \end{multline} where the reordered boundary conditions \cref{eq_bcnew_un,eq_bcnew_sigmant,eq_bcnew_mnnn,eq_bcnew_05mnnnmnnt} fit naturally. 
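Spelled out term by term (an intermediate step added here for completeness), the reordered boundary conditions give
\begin{align}
\frac{3}{4} m_{nnn} + u_n &= \tilde{\chi} \frac{9}{8} \sigma_{nn} - \frac{3}{20} s_n + \epsilon^{\mathrm{w}} \tilde{\chi} \left( (p - p^{\mathrm{w}}) + \sigma_{nn} \right) + u_n^{\mathrm{w}} , \\
m_{nnt} + u_t &= \frac{1}{\tilde{\chi}} \sigma_{nt} + u_t^{\mathrm{w}} - \frac{1}{5} s_t , \\
\frac{1}{2} m_{nnn} + m_{ntt} &= \tilde{\chi} \left( \frac{1}{2} \sigma_{nn} + \sigma_{tt} \right) ,
\end{align}
using \cref{eq_bcnew_mnnn,eq_bcnew_un}, \cref{eq_bcnew_sigmant}, and \cref{eq_bcnew_05mnnnmnnt}, respectively.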
Eliminating the highest-order moment \(\teee{m}\) with the closure relation \cref{eq_closure_m} yields the resulting equation as \begin{align} & \int_\Gamma \left( u_t^{\text{w}} \psi_{nt} + \left( u_n^{\text{w}} - \epsilon^{\text{w}} \tilde{\chi} p^{\text{w}} \right) \psi_{nn} \right) \dd l \nonumber \\ & = \operatorname{Kn} \int_\Omega \text{stf}(\te{\nabla}\tee{\sigma}) \teee{\because} \text{stf}(\te{\nabla}\tee{\psi}) \dd \te{x} + \frac{1}{2} \frac{1}{\operatorname{Kn}} \int_\Omega \tee{\sigma} \tee{:} \tee{\psi} \dd \te{x} + \frac{9}{8} \tilde{\chi} \int_\Gamma \sigma_{nn} \psi_{nn} \dd l \nonumber \\ &+ \tilde{\chi} \int_\Gamma \left( \sigma_{tt} + \frac{1}{2} \sigma_{nn} \right) \left( \psi_{tt} + \frac{1}{2} \psi_{nn} \right) \dd l + \frac{1}{\tilde{\chi}} \int_\Gamma \sigma_{nt} \psi_{nt} \dd l + \epsilon^{\text{w}} \tilde{\chi} \int_\Gamma \sigma_{nn} \psi_{nn} \dd l \nonumber \\ &+ \frac{2}{5} \int_\Omega \tee{\psi} \tee{:} \te{\nabla} \te{s} \dd \te{x} - \frac{3}{20} \int_\Gamma \psi_{nn} s_n \dd l - \frac{1}{5} \int_\Gamma \psi_{nt} s_t \dd l - \int_\Omega \text{div}(\tee{\psi}) \te{\cdot} \te{u} \dd \te{x} + \epsilon^{\text{w}} \tilde{\chi} \int_\Gamma p \psi_{nn} \dd l , \end{align} and using the sub-functionals defined in \cref{eq_subf_c,eq_subf_d,eq_subf_e,eq_subf_f,eq_subf_l3}, we have \begin{equation} c(\te{s},\tee{\psi}) + d(\tee{\sigma},\tee{\psi}) - e(\te{u},\tee{\psi}) + f(p,\tee{\psi}) = l_3(\tee{\psi}) . \label{eq_subf_line_stress} \end{equation} \subsection{Momentum Balance}\label{ss_derviationMomentum} In contrast to the extensive derivation for the stress balance, we obtain the weak momentum balance trivially by multiplying \cref{eq_balance_momentum} with \(\te{v}\).
The absence of higher-order moments results in no need for integration by parts in \begin{equation} \int_\Omega \te{\nabla} p \te{\cdot} \te{v} \dd \te{x} + \int_\Omega \left( \te{\nabla} \te{\cdot} \tee{\sigma} \right) \te{\cdot} \te{v} \dd \te{x} = \int_\Omega \te{b} \te{\cdot} \te{v} \dd \te{x} , \end{equation} such that the resulting weak form \textendash\ using the sub-functionals \cref{eq_subf_e,eq_subf_g,eq_subf_l4} \textendash\ reads \begin{equation} e(\te{v},\tee{\sigma}) + g(p,\te{v}) = l_4(\te{v}) . \label{eq_subf_line_momentum} \end{equation} \subsection{Mass Balance}\label{ss_derviationMass} Application of integration by parts for the mass balance from \cref{eq_balance_mass} allows enforcing the velocity boundary condition for \(u_n\) as \begin{align} \int_\Omega \left( \te{\nabla} \te{\cdot} \te{u} \right) q \dd \te{x} &= - \int_\Omega \te{u} \te{\cdot} \te{\nabla}q \dd \te{x} + \int_\Gamma u_n q \dd l \\ &= - \int_\Omega \te{u} \te{\cdot} \te{\nabla}q \dd \te{x} + \int_\Gamma \left( \epsilon^\mathrm{w} \tilde{\chi} \left( (p-p^\mathrm{w}) + \sigma_{nn} \right) + u_n^{\mathrm{w}} \right) q \dd l . \end{align} A reordering for \(\te{\mathcal{U}}\)- and \(\te{\mathcal{V}}\)-variables yields the weak formulation as \begin{equation} - \int_\Omega \te{u} \te{\cdot} \te{\nabla}q \dd \te{x} + \int_\Gamma \epsilon^\mathrm{w} \tilde{\chi} \left( p + \sigma_{nn}\right) q \dd l = \int_\Omega \dot{m} q \dd \te{x} - \int_\Gamma \left( u_n^{\mathrm{w}} - \epsilon^\mathrm{w} \tilde{\chi} p^\mathrm{w} \right) q \dd l, \label{eq_weakmass} \end{equation} which has to hold for all \(q \in \mathbb{V}_p\). The equation \cref{eq_weakmass} reads, in terms of sub-functionals \cref{eq_subf_f,eq_subf_g,eq_subf_h,eq_subf_l5}, as \begin{equation} f(q,\tee{\sigma}) - g(q,\te{u}) + h(p,q) = l_5(q) . \label{eq_subf_line_mass} \end{equation} \begin{acks} The authors acknowledge financial support by the German Research Foundation (DFG) under grant IRTG 2379.
The authors thank the anonymous reviewers for their helpful comments. \end{acks} \bibliographystyle{ACM-Reference-Format} \section{Introduction} Nowadays, scientists often consider computational resources as the limiting factor in numerical simulations. However, this is not true in general. For non-standard gas flow conditions, the modeling error can dominate the discretization error \textendash\ both quantitatively and qualitatively. In particular, the lack of essential rarefaction effects in the numerical solution affects the quality of the computational predictions. These non-standard conditions often occur for high Knudsen numbers, e.g., in dilute gases. For rarefied gas applications, classical Navier--Stokes and Fourier (NSF) models are not sufficiently accurate to predict all occurring non-equilibrium effects. Therefore, we will consider the regularized 13-moment (R13) equations as a natural extension to the classical gas flow models. This set of equations is derived using the moment method for the Boltzmann equation (compare, e.g., with~\cite{grad1958principles}), resulting in additional evolution equations for the heat flux vector and the stress tensor (in contrast to the NSF system). Based on a pseudo-equilibrium approach, regularization terms are added in~\cite{struchtrup2003regularization} to transform \citeauthor{grad1958principles}'s 13-moment equations into the R13 equations. \par Several numerical methods were applied to solve the resulting set of equations. In~\cite{rana2013robust}, \citeauthor{rana2013robust} applied a finite difference scheme to obtain steady-state approximations of the R13 equations. Only recently, \citeauthor{torrilhon2017hierarchical} used a discontinuous Galerkin approach in~\cite{torrilhon2017hierarchical} for a hierarchical simulation context.
During the same period, \citeauthor{westerkamp2019finite} proposed the first finite element approaches for the steady-state linearized R13 system in~\cite{westerkamp2019finite,westerkamp2017continous}, following advances regarding instability issues and stabilization techniques in~\cite{westerkamp2012finite,westerkamp2014stabilization,westerkamp2017curvature}. Earlier FEM approaches in~\cite{mueller2010computing} already used the FEniCS simulation framework for a simplified set of equations. We will focus on a Galerkin finite element approach and note that previous work did not provide a tensor-based formulation, which is very common in the context of mixed Galerkin methods~\cite{auricchio2004mixed}. \paragraph{The FEniCS Project} The FEniCS framework~\cite{alnaesEtAl2015fenics,loggMardalWellsEtAl2012} serves as an optimal computing platform for implementing our method. It allows us to avoid the component-wise derivation of scalar variational forms. FEniCS is an LGPLv3-licensed~\cite{web2007lgplv3} collection of interoperable components for automated scientific computing. The main focus is on solving partial differential equations by the finite element method~\cite{alnaesEtAl2015fenics}.
In the optimal case, a user should only write the model problem's mathematical statements while the simulation framework executes all the extra work automatically. \par The main component of FEniCS is DOLFIN~\cite{logg2010dolfin,hoffman2002dolfin}. This library implements the high-performance C++ backend, consisting of the relevant data structures such as meshes, function spaces, and function expressions. The main algorithms in the context of the finite element method (i.e., element assembly, mesh handling, or connection to linear algebra solvers) are also part of DOLFIN\@. From a user perspective, DOLFIN is the main connection layer between the high-level Python interface and the core layer because it handles the communication between all modules and extends them with external software. The most important internal low-level components of FEniCS are: \begin{itemize} \item The UFL~\cite{alnaes2014unified} (Unified Form Language) is a domain-specific language to formulate finite element problems in their weak variational form. With UFL, the user can express a model problem in its natural and mathematical way. It also serves as the input for the form compiler used by both the C++ and the Python interface. \item The FFC~\cite{kirby2006compiler} (FEniCS Form Compiler) automatically generates DOLFIN code from a given variational form. The goal of FFC is to provide a validated compiler, performing automated code generation tasks to improve the correctness of the resulting code. \item The FIAT~\cite{kirby2004algorithm} (FInite element Automatic Tabulator) enables the automatic computation of basis functions for nearly arbitrary finite elements. \end{itemize} Other simulation frameworks building upon or utilizing similar concepts as FEniCS are the Firedrake project~\cite{rathgeber2017firedrake} or FEniCS-HPC~\cite{hoffman2015fenics}. The presented work uses the FEniCS version 2019.1.0.r3. 
\paragraph{Outline} This work's organization is as follows: In \cref{s_modelFormulation}, we present the tensorial R13 model equations, discretized in \cref{s_galerkinFiniteElementApproach} using a Galerkin approach. \cref{s_implementationAndValidation,s_validation} are devoted to implementing and validating the proposed method, focusing on auxiliary implementations for tensor differential operators. Furthermore, application cases in \cref{s_applications} present the solver capabilities to predict the critical flow phenomena. Finally, we discuss limitations and future work before adding some concluding remarks in \cref{s_conclusion}. \section{Formulation of the Model Equations}\label{s_modelFormulation} We first consider the general case of a closed and time-independent gas domain \(\tilde{\Omega} \subset \mathbb{R}^{3}\) in three spatial dimensions. The main quantities are the time evolutions (with \(t \in [0,\con{T}]\)) of field quantities, such as the gas density \(\rho : \tilde{\Omega} \times [0,\con{T}] \rightarrow \mathbb{R}_+\), the gas velocity \(\te{u} : \tilde{\Omega} \times [0,\con{T}] \rightarrow \mathbb{R}^d\), and the gas temperature \(T : \tilde{\Omega} \times [0,\con{T}] \rightarrow \mathbb{R}_+\) inside the domain \(\tilde{\Omega}\). These are the three fundamental quantities. We will further encounter the pressure \(p : \tilde{\Omega} \times [0,\con{T}] \rightarrow \mathbb{R}_+\), the heat flux \(\te{s} : \tilde{\Omega} \times [0,\con{T}] \rightarrow \mathbb{R}^d\), and the deviatoric stress tensor \(\tee{\sigma} : \tilde{\Omega} \times [0,\con{T}] \rightarrow \mathbb{R}_{\text{stf}}^{d \times d}\). The deviatoric part means the symmetric and trace-free part of a tensor. Note that the traditional symbol \(\te{q}\) for the heat flux is replaced by \(\te{s}\) to have an intuitive set of test functions in the weak formulations later on in \cref{s_galerkinFiniteElementApproach}. 
The overall goal is to determine these evolutions for all points \(\te{x} \in \tilde{\Omega}\) from given initial conditions (e.g.,~\(\rho_0(\te{x},0), \te{u}_0(\te{x},0), T(\te{x},0), \ldots\)) together with a set of given boundary conditions. The latter describes the outer environment and boundary behavior of the fields on the domain boundary \(\partial \tilde{\Omega}\). \subsection{Modeling Non-Equilibrium Gas Flows Using the R13 Equations} Three fundamental laws of physics describe the behavior of general gas flows: The point-wise conservation of mass in~\cref{eq_consMass}\index{balance laws!mass}, the balance of momentum in~\cref{eq_consMomentum}\index{balance laws!momentum} (Newton's second law of motion), and the energy conservation in~\cref{eq_consEnergy}\index{balance laws!energy} (first law of thermodynamics), expressed in component form as \begin{subequations}\label{eqsec:balanceLawsLagrangian} \begin{align} \mdv{\rho}{t} + \rho \pdv{u_k}{x_k} &= \dot{m}, \label{eq_consMass} \\ \rho \mdv{u_i}{t} + \pdv{p}{x_i} + \pdv{\sigma_{ij}}{x_j} &= \rho b_i, \label{eq_consMomentum} \\ \rho \mdv{\epsilon}{t} + p \pdv{u_i}{x_i} + \sigma_{ij} \pdv{u_i}{x_j} + \pdv{s_i}{x_i} &= r. \label{eq_consEnergy} \end{align} \end{subequations} In~\eqref{eqsec:balanceLawsLagrangian}, \(\{\dot{m}, b_i, r\}\) denotes a mass source, a body force, and an energy source, respectively. \( (\text{D}\star/\text{D}t) = (\partial\star/\partial t) + u_j (\partial\star/\partial x_j)\) defines the material derivative using Einstein's summation convention \textendash\ compare, e.g., with~\cite{schade2009tensoranalysis}. When assuming a monoatomic ideal gas, the pressure is related to the density and the temperature as \(p=\rho \theta\). In contrast to \(T\), \(\theta\) is the temperature in energy units as \(\theta = (k/m) T\). 
The classical constitutive theory considers Fourier's law (\(s_i=-\kappa (\partial T)/(\partial x_i)\)) and the law of Navier--Stokes (\(\sigma_{ij} = -2 \mu (\partial u_{\langle i})/(\partial x_{j\rangle})\) in notation~\eqref{eq_rank2stf}) as closure relations to solve the system~\eqref{eqsec:balanceLawsLagrangian} \textendash\ altogether forming the well-known NSF system for compressible gas flows. \par However, the NSF model is only accurate near the thermodynamic equilibrium with \(\operatorname{Kn} \ll 1\), where the dimensionless Knudsen number \begin{equation} \operatorname{Kn}=\frac{\lambda}{L}, \end{equation} describes the ratio between the mean free path \(\lambda\) of the gas particles and the relevant characteristic length scale \(L\)~\cite{struchtrup2005macroscopic}. One observes many interesting rarefaction effects or micro-scale phenomena in a non-equilibrium gas flow at about \(\operatorname{Kn} \gtrsim 0.05 \). Experiments and the underlying Boltzmann equation's analysis confirmed these observations~\cite{struchtrup2011macroscopic}. We find a comprehensive list of rarefaction effects in~\cite{struchtrup2011macroscopic,torrilhon2016modeling}. 
Some of them are: \begin{itemize} \item Heat flux parallel to the walls in a channel flow, contradicting Fourier's law as there is no temperature difference in this direction (see \cref{sec_channelFlow}); \item A non-constant pressure behavior in Couette and Poiseuille channel flows although no flow across the channel is present; \item A minimum of the mass flow rate (\textit{Knudsen minimum}) in a force-driven Poiseuille flow, also known as the \textit{Knudsen paradox} (see \cref{sec_channelFlow}); \item A non-convex temperature profile in such microchannels while NSF predicts a strictly convex shape for the same setup; \item Temperature-induced flow situations in channels (see \cref{sec_knudsenPump}); \item Temperature jump and velocity slip at walls (\textit{Knudsen boundary layers}); \item Temperature-induced edge flow (see \cref{sec_thermalEdgeFlow}). \end{itemize} \par Therefore, the goal of a rarefied gas solver is to predict all of the above-listed effects accurately. Using Boltzmann's transport equation is an option for all flow situations with \(\operatorname{Kn} \in \mathbb{R}_+\). However, due to its high dimensionality, its numerical simulation is costly compared to classical continuum approaches. For this reason, there exists a variety of models that extend the classical NSF system. We use the R13 equations proposed in~\cite{struchtrup2003regularization} (summarized in~\cite{struchtrup2011macroscopic,torrilhon2016modeling}). This prominent example of extended macroscopic gas flow models is suitable in the transition regime away from the thermodynamic equilibrium. Accordingly, one might consider the full set of non-linear R13 equations as the natural extension of the Navier--Stokes and Fourier models to more field equations.
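To make the regime discussion concrete, the following plain-Python sketch computes \(\operatorname{Kn} = \lambda / L\) and classifies the flow; note that the thresholds \(0.01\), \(0.1\), and \(10\) are common textbook conventions and are an assumption here, not values taken from this work (the text only states that rarefaction effects appear at about \(\operatorname{Kn} \gtrsim 0.05\)).

```python
# Kn = lambda / L: ratio of mean free path to characteristic length scale.
def knudsen_number(mean_free_path, characteristic_length):
    return mean_free_path / characteristic_length

# Regime thresholds below are common textbook conventions (an assumption,
# not taken from this work).
def flow_regime(kn):
    if kn < 0.01:
        return "continuum"        # NSF accurate
    if kn < 0.1:
        return "slip"             # NSF with slip/jump boundary conditions
    if kn < 10.0:
        return "transition"       # target regime of moment models like R13
    return "free molecular"

# Example: mean free path of roughly 65 nm (air at ambient conditions)
# in a channel of width 1 micrometer.
kn = knudsen_number(65e-9, 1e-6)   # Kn = 0.065
regime = flow_regime(kn)           # "slip"
```

In this conventional classification, the R13 equations target precisely the transition regime, where the NSF system loses accuracy but a full Boltzmann simulation is still comparatively expensive.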
\subsection{Steady-State Linearization and Simplifications}\label{ss_linearization} Throughout this work, we will consider the steady-state and linearized R13 equations using pressure \(p\) and temperature \(\theta\) instead of density \(\rho\) and classical temperature \(T\). This set of variables is a common choice for engineering applications. The full set of non-linear equations consists of the three conservation equations \cref{eq_consMass,eq_consMomentum,eq_consEnergy} and balance laws for \(\te{s}\) and \(\tee{\sigma}\) and is given, e.g., in~\cite{struchtrup2003regularization,struchtrup2011macroscopic,torrilhon2016modeling}. The balance laws for \(\te{s}\) and \(\tee{\sigma}\) contain collision terms with the collision frequency \(\nu\). A reformulation expands the material derivatives. It uses \(p=\rho \theta\) and \(\epsilon = 3 \theta/2\), accounting for monoatomic ideal gases. We further neglect all temporal operators and only consider deviation fields \(\delta \te{U}\) with \(\te{U}=\te{U}_0 + \delta \te{U}\) around a ground state \(\te{U}_0 = (\rho^{(0)},{u_i}^{(0)},\theta^{(0)},{\sigma_{ij}}^{(0)},{s_i}^{(0)}) = (\rho_0,0,\theta_0,0,0)\). This procedure linearizes the whole system together with the closures. \par We end up with the only remaining parameter \(\tau\), which is the mean free-flight time defined as \(\tau = 1/\nu\). A subsequent scaling of the equations relates every field \(\star\) to its reference \(\hat{\star}\). We therefore define \begin{equation} \hat{x}_i \coloneqq \frac{x_i}{L}, \; \hat{\theta} \coloneqq \frac{\theta}{\theta_0}, \; \hat{p} \coloneqq \frac{p}{p_0}, \; \hat{\tau} \coloneqq \frac{\tau}{\tau_0}, \; \hat{u}_i \coloneqq \frac{u_i}{\sqrt{\theta_0}}, \end{equation} which leads to the references \begin{equation} \hat{\sigma}_{ij} = \frac{\sigma_{ij}}{p_0}, \; \hat{s}_i = \frac{s_i}{p_0\sqrt{\theta_0}}, \; \hat{m}_{ijk} = \frac{m_{ijk}}{p_0\sqrt{\theta_0}}, \; \hat{\Delta} = \frac{\Delta}{p_0 \theta_0}.
\end{equation} We then replace all quantities with their dimensionless counterpart multiplied with the reference value. For example, we insert \(x_i \rightarrow L \hat{x}_i\) into the linearized model. The resulting equations, then, allow for the identification of the Knudsen number as \begin{equation} \operatorname{Kn} = \frac{\tau_0 \sqrt{\theta_0}}{L}. \end{equation} The resulting system of interest is linear, steady-state, and dimensionless. We switch to a tensorial notation and drop the dimensionless indicator \(\hat{\star}\) for better readability of the differential operators. The resulting balance laws read \begin{align} \te{\nabla} \te{\cdot} \te{u} &= \dot{m},\label{eq_balance_mass} \\ \te{\nabla} p + \te{\nabla} \te{\cdot} \tee{\sigma} &= \te{b},\label{eq_balance_momentum} \\ \te{\nabla} \te{\cdot} \te{u} + \te{\nabla} \te{\cdot} \te{s} &= r,\label{eq_balance_energy} \end{align} with additional evolution equations for the deviatoric stress tensor \(\tee{\sigma}\) and the heat flux vector \(\te{s}\) as \begin{align} \frac{4}{5} {(\te{\nabla} \te{s})}_{\text{stf}} + 2 {(\te{\nabla} \te{u})}_{\text{stf}} + \te{\nabla} \te{\cdot} \teee{m} &= - \frac{1}{\operatorname{Kn}} \tee{\sigma},\label{eq_balance_heatflux} \\ \frac{5}{2} \te{\nabla} \theta + \te{\nabla} \te{\cdot} \tee{\sigma} + \frac{1}{2} \te{\nabla} \te{\cdot} \tee{R} + \frac{1}{6} \te{\nabla} \Delta &= - \frac{1}{\operatorname{Kn}} \frac{2}{3} \te{s}.\label{eq_balance_stress} \end{align} To obtain a closed system, we also require the linearized closure relations for the highest-order moments as \begin{align} \teee{m} &= - 2 \operatorname{Kn} {(\te{\nabla} \tee{\sigma})}_{\text{stf}},\label{eq_closure_m} \\ \tee{R} &= - \frac{24}{5} \operatorname{Kn} {(\te{\nabla} \te{s})}_{\text{stf}},\label{eq_closure_r} \\ \Delta &= - 12 \operatorname{Kn} \left( \te{\nabla} \te{\cdot} \te{s} \right).\label{eq_closure_delta} \end{align} Here, the symmetric and trace-free (deviatoric) part of 
a 2-tensor is defined component-wise \({(\star)}_{ij} \mapsto {({(\star)}_{\text{stf}})}_{ij} = {(\star)}_{\langle ij \rangle}\), with the trace subtracted from the symmetric part \({(\star)}_{(ij)}\). For a tensor \(\tee{A} \in \mathbb{R}^{3 \times 3}\) and following~\cite{struchtrup2005macroscopic}, this translates to \begin{equation}\label{eq_rank2stf} A_{\langle ij \rangle} = A_{(ij)} - \frac{1}{3} A_{kk} \delta_{ij} = \frac{1}{2} (A_{ij} + A_{ji}) - \frac{1}{3} A_{kk} \delta_{ij}, \end{equation} using Kronecker's delta function \(\delta_{ij}\). The symmetric and trace-free part~\cite{struchtrup2005macroscopic} of a 3-tensor \(\teee{B} \in \mathbb{R}^{3 \times 3 \times 3}\) analogously reads \begin{equation}\label{eq_rank3stf} B_{\langle ijk \rangle} = B_{(ijk)} - \frac{1}{5} \left( B_{(ill)} \delta_{jk} + B_{(ljl)} \delta_{ik} + B_{(llk)} \delta_{ij}\right). \end{equation} Here, the symmetric part of a 3-tensor is the average of all possible transpositions and is given by \begin{equation}\label{eq_rank3sym} B_{(ijk)} = \frac{1}{6} \left( B_{ijk} + B_{ikj} + B_{jik} + B_{jki} + B_{kij} + B_{kji} \right). \end{equation} \par We restrict the set of computational domains \(\tilde{\Omega} \subset \mathbb{R}^3\) to geometries \(\tilde{\Omega} \equiv \Omega \times \mathbb{R}\) with complete homogeneity in the third spatial direction \(z\), such that \(\partial_{x_3} = \partial_z \equiv 0\). This assumption simplifies the calculation of all variables, although they formally remain three-dimensional. Therefore, in the remainder of this paper, we only have to consider the computational domain \(\Omega \subset \mathbb{R}^2\). One can think of \(\Omega\) as a cross-section of a domain extending infinitely in the \(z\)-direction. \subsection{Linearized Boundary Conditions} To formulate boundary value problems, we require a set of linearized boundary conditions. 
In~\cite{rana2016thermodynamically}, \citeauthor{rana2016thermodynamically} proposed the most recent version based on Maxwell's accommodation model~\cite{torrilhon2008boundary}; we use the notation of~\cite{westerkamp2019finite,torrilhon2017hierarchical} as \begin{align} u_n &= 0,\label{eq_bc_un} \\ \sigma_{nt} &= \tilde{\chi} \left( (u_t-u_t^{\mathrm{w}}) + \frac{1}{5} s_t + m_{nnt} \right),\label{eq_bc_sigmant} \\ R_{nt} &= \tilde{\chi} \left( -(u_t-u_t^{\mathrm{w}}) + \frac{11}{5} s_t - m_{nnt} \right),\label{eq_bc_rnt} \\ s_n &= \tilde{\chi} \left( 2(\theta-\theta^{\mathrm{w}}) + \frac{1}{2} \sigma_{nn} + \frac{2}{5} R_{nn} + \frac{2}{15} \Delta \right),\label{eq_bc_sn} \\ m_{nnn} &= \tilde{\chi} \left( - \frac{2}{5} (\theta-\theta^{\mathrm{w}}) + \frac{7}{5} \sigma_{nn} - \frac{2}{25} R_{nn} - \frac{2}{75} \Delta \right),\label{eq_bc_mnnn} \\ \left( \frac{1}{2} m_{nnn} + m_{ntt} \right) &= \tilde{\chi} \left( \frac{1}{2} \sigma_{nn} + \sigma_{tt} \right).\label{eq_bc_05mnnnmnnt} \end{align} A two-dimensional local boundary-aligned coordinate system in terms of outer normal and tangential components \((\te{n},\te{t})\) generates the required projections. The modified accommodation factor is given by \(\tilde{\chi} = \sqrt{2/(\pi \theta_0)} \chi/(2-\chi)\)~\cite{westerkamp2019finite}. The boundary conditions are equivalent to the Onsager boundary conditions of~\cite{rana2016thermodynamically}, adjusted as described in~\cite{torrilhon2017hierarchical} to ensure thermodynamic admissibility. \par For real-life applications, it is often necessary to prescribe inflow or outflow conditions. The trivial velocity boundary condition in the \(\te{n}\)-direction (\(u_n=0\)) is therefore replaced by an inflow model, following the idea of~\cite{torrilhon2017hierarchical} as \begin{equation} \epsilon^{\mathrm{w}} \tilde{\chi} \left( (p-p^{\mathrm{w}}) + \sigma_{nn} \right) = \left(u_n - u_n^{\mathrm{w}}\right). 
\label{eq_inflowBC} \end{equation} The artificial in- and outflow interfaces require a pressure \(p^{\mathrm{w}}\), a velocity \(u_n^{\mathrm{w}}\), and a velocity prescription coefficient \(\epsilon^\mathrm{w}\). Although these interfaces are not physical walls, we still use the notation \(\star^{\textrm{w}}\) to indicate prescribed boundary values, in line with the other conditions \cref{eq_bc_sigmant,eq_bc_rnt,eq_bc_sn,eq_bc_mnnn,eq_bc_05mnnnmnnt}. Setting \(\epsilon^\mathrm{w}=0\) together with \(u_n^{\mathrm{w}}=0\) reduces the inflow model back to the standard boundary condition \cref{eq_bc_un}. In the limit \(\epsilon^\mathrm{w} \rightarrow \infty\), the condition enforces the prescribed total pressure \(p + \sigma_{nn} = p^{\mathrm{w}}\) at the boundary. \section{Galerkin Finite Element Approach}\label{s_galerkinFiniteElementApproach} The R13 equations of \cref{ss_linearization} are solved numerically. Before utilizing the Galerkin finite element method in \cref{ss_femDiscretization}, \cref{ss_weakform} presents the weak formulation. A stabilization approach using a continuous interior penalty (CIP) method is proposed in \cref{ss_stabilization} and allows for a broader spectrum of stable element combinations. \subsection{Derivation of the Variational Formulation}\label{ss_weakform} The derivation of the weak variational form follows the usual strategy. The first step consists of integration over the computational domain \(\Omega\) while multiplying (testing) with a corresponding set of test functions. Due to the different tensorial degrees of all five equations, the trial- and test-function vectors read \begin{equation} \te{\mathcal{U}} \coloneqq (\te{s},\theta,\tee{\sigma},\te{u},p), \; \te{\mathcal{V}} \coloneqq (\te{r},\kappa,\tee{\psi},\te{v},q). \end{equation} Each vector comprises one 2-tensorial, two vectorial, and two scalar functions from suitable Sobolev function spaces \(\mathbb{V}_\star\). 
We choose the trial and test functions from the same product spaces for the Galerkin method as \(\te{\mathcal{U}},\te{\mathcal{V}} \in \mathbb{H} \coloneqq \bigtimes_{i \in \te{\mathcal{U}}} \mathbb{V}_i = \mathbb{V}_{\te{s}} \times \mathbb{V}_{\theta} \times \mathbb{V}_{\tee{\sigma}} \times \mathbb{V}_{\te{u}} \times \mathbb{V}_{p}\). In \cref{ss_derviationHeatflux,ss_derviationEnergy,ss_derviationStress,ss_derviationMomentum,ss_derviationMass}, we transform all evolution equations using the following strategy: \begin{enumerate} \item Integrate over \(\Omega\) while testing with the corresponding test function \(\star \in \mathcal{V}\). \item Apply integration by parts to all \(\te{u}\)-, \(\te{r}\)-, and \(\tee{m}\)-terms. \item Insert the corresponding conditions on the boundary \(\Gamma \coloneqq \partial \Omega\) using a normal/tangential aligned coordinate system with \((\te{n},\te{t})\). \item Use the closure relations \cref{eq_closure_m,eq_closure_r,eq_closure_delta} to eliminate the highest-order moments \(\teee{m},\tee{R}\), and \(\Delta\). \end{enumerate} The addition of all five weak equations \cref{eq_subf_line_heatflux,eq_subf_line_energy,eq_subf_line_stress,eq_subf_line_momentum,eq_subf_line_mass} yields the continuous compound formulation: \textit{Find} \(\te{\mathcal{U}} \in \mathbb{H}\) \textit{, such that} for all \(\te{\mathcal{V}} \in \mathbb{H}\): \begin{equation} \mathcal{A} \left(\te{\mathcal{U}},\te{\mathcal{V}}\right) = \mathcal{L}\left(\te{\mathcal{V}}\right) . \end{equation} A collection and structuring step produces the bilinear form \(\mathcal{A}: \mathbb{H} \times \mathbb{H} \rightarrow \mathbb{R}\) on the product space over \(\mathbb{H}\). 
By defining the sub-functionals \(a(\te{s},\te{r}),\ldots,h(p,q)\), the combined weak form reads \begin{align} \mathcal{A}\left(\te{\mathcal{U}},\te{\mathcal{V}}\right) &= a(\te{s},\te{r}) + b(\kappa, \te{s}) - b(\theta, \te{r}) + c(\te{s},\tee{\psi}) - c(\te{r},\tee{\sigma}) + d(\tee{\sigma},\tee{\psi}) \nonumber \\ &+ e(\te{u},\tee{\psi}) - e(\te{v},\tee{\sigma}) + f(p,\tee{\psi}) + f(q,\tee{\sigma}) + g(p,\te{v}) - g(q,\te{u}) + h(p,q) , \label{eq_compundWeakForm} \end{align} while the linear functional \(\mathcal{L}: \mathbb{H} \rightarrow \mathbb{R}\) on the right-hand side reads \begin{align} \mathcal{L}\left(\mathcal{V}\right) &= l_1(\te{r}) + l_2(\kappa) + l_3(\tee{\psi}) + l_4(\te{v}) + l_5(q) . \label{eq_compoundRHS} \end{align} \par The bilinear sub-functionals used in~\eqref{eq_compundWeakForm} contain \(a(\te{s},\te{r})\), \(d(\tee{\sigma},\tee{\psi})\), \(h(p,q)\) as symmetric diagonal terms. Considering the physical interpretations, \(b(\star,\star)\) is an intra-heat coupling, \(e(\star,\star)\), \(f(\star,\star)\), \(g(\star,\star)\) are intra-stress couplings, and the contribution \(c(\star,\star)\) is the inter-heat-stress coupling. 
Altogether, they read: \begin{align} a(\te{s},\te{r}) &= \frac{24}{25} \operatorname{Kn} \int_\Omega \text{sym}(\te{\nabla}\te{s}) \tee{:} \text{sym}(\te{\nabla}\te{r}) \dd \te{x} + \frac{12}{25} \operatorname{Kn} \int_\Omega \text{div}(\te{s}) \text{div}(\te{r}) \dd \te{x} \nonumber \\ & + \frac{4}{15} \frac{1}{\operatorname{Kn}} \int_\Omega \te{s} \cdot \te{r} \dd \te{x} + \frac{1}{2} \frac{1}{\tilde{\chi}} \int_\Gamma s_n r_n \dd l + \frac{12}{25} \tilde{\chi} \int_\Gamma s_t r_t \dd l \label{eq_subf_a} , \\ b(\theta, \te{r}) &= \int_\Omega \theta \, \text{div}(\te{r}) \dd \te{x} \label{eq_subf_b} , \\ c(\te{r},\tee{\sigma}) &= \frac{2}{5} \int_\Omega \tee{\sigma} \tee{:} \te{\nabla} \te{r} \dd \te{x} - \frac{3}{20} \int_\Gamma \sigma_{nn} r_n \dd l - \frac{1}{5} \int_\Gamma \sigma_{nt} r_t \dd l \label{eq_subf_c} , \\ d(\tee{\sigma},\tee{\psi}) &= \operatorname{Kn} \int_\Omega \text{stf}(\te{\nabla}\tee{\sigma}) \teee{\because} \text{stf}(\te{\nabla}\tee{\psi}) \dd \te{x} + \frac{1}{2} \frac{1}{\operatorname{Kn}} \int_\Omega \tee{\sigma} \tee{:} \tee{\psi} \dd \te{x} \nonumber \\ &+ \frac{9}{8} \tilde{\chi} \int_\Gamma \sigma_{nn} \psi_{nn} \dd l + \tilde{\chi} \int_\Gamma \left( \sigma_{tt} + \frac{1}{2} \sigma_{nn} \right) \left( \psi_{tt} + \frac{1}{2} \psi_{nn} \right) \dd l \nonumber \\ &+ \frac{1}{\tilde{\chi}} \int_\Gamma \sigma_{nt} \psi_{nt} \dd l + \epsilon^{\text{w}} \tilde{\chi} \int_\Gamma \sigma_{nn} \psi_{nn} \dd l \label{eq_subf_d} , \\ e(\te{u},\tee{\psi}) &= \int_\Omega \text{div}(\tee{\psi}) \te{\cdot} \te{u} \dd \te{x} \label{eq_subf_e} , \\ f(p,\tee{\psi}) &= \epsilon^{\text{w}} \tilde{\chi} \int_\Gamma p \psi_{nn} \dd l \label{eq_subf_f} , \\ g(p,\te{v}) &= \int_\Omega \te{v} \te{\cdot} \te{\nabla} p \dd \te{x} \label{eq_subf_g} , \\ h(p,q) &= \epsilon^{\text{w}} \tilde{\chi} \int_\Gamma p q \dd l \label{eq_subf_h} . 
\end{align} The linear functionals of~\eqref{eq_compoundRHS} contain the corresponding source terms, forces, and given boundary expressions: \begin{align} l_1(\te{r}) &= - \int_\Gamma \theta^{\text{w}} r_n \dd l \label{eq_subf_l1} , \\ l_2(\kappa) &= \int_\Omega \left( r - \dot{m} \right) \kappa \dd \te{x} \label{eq_subf_l2} , \\ l_3(\tee{\psi}) &= - \int_\Gamma \left( u_t^{\text{w}} \psi_{nt} + \left( u_n^{\text{w}} - \epsilon^{\text{w}} \tilde{\chi} p^{\text{w}} \right) \psi_{nn} \right) \dd l \label{eq_subf_l3} , \\ l_4(\te{v}) &= \int_\Omega \te{b} \te{\cdot} \te{v} \dd \te{x} \label{eq_subf_l4} , \\ l_5(q) &= \int_\Omega \dot{m} q \dd \te{x} - \int_\Gamma \left( u_n^{\text{w}} - \epsilon^{\text{w}} \tilde{\chi} p^{\text{w}} \right) q \dd l \label{eq_subf_l5} . \end{align} Identification of the total pressure \(p + \sigma_{nn}\) allows for an alternative notation of \(\mathcal{A}(\te{\mathcal{U}},\te{\mathcal{V}})\) in \cref{eq_compundWeakForm} as \begin{align} \mathcal{A}\left(\te{\mathcal{U}},\te{\mathcal{V}}\right) &= a(\te{s},\te{r}) + b(\kappa, \te{s}) - b(\theta, \te{r}) + c(\te{s},\tee{\psi}) - c(\te{r},\tee{\sigma}) + \bar{d}((\tee{\sigma}, p), (\tee{\psi}, q)) \nonumber \\ &+ e(\te{u},\tee{\psi}) - e(\te{v},\tee{\sigma}) + g(p,\te{v}) - g(q,\te{u}) , \label{eq_compundWeakFormAlternative} \end{align} where we add the terms \(d(\tee{\sigma},\tee{\psi})\), \(f(p,\tee{\psi})\), \(f(q,\tee{\sigma})\), and \(h(p,q)\) to form \(\bar{d} : (\mathbb{V}_{\tee{\sigma}} \times \mathbb{V}_{p}) \times (\mathbb{V}_{\tee{\sigma}} \times \mathbb{V}_{p}) \rightarrow \mathbb{R}\) as \begin{align} \bar{d}((\tee{\sigma}, p), (\tee{\psi}, q)) &= \operatorname{Kn} \int_\Omega \text{stf}(\te{\nabla}\tee{\sigma}) \teee{\because} \text{stf}(\te{\nabla}\tee{\psi}) \dd \te{x} + \frac{1}{2} \frac{1}{\operatorname{Kn}} \int_\Omega \tee{\sigma} \tee{:} \tee{\psi} \dd \te{x} \nonumber \\ &+ \frac{9}{8} \tilde{\chi} \int_\Gamma \sigma_{nn} \psi_{nn} \dd l + \tilde{\chi} \int_\Gamma 
\left( \sigma_{tt} + \frac{1}{2} \sigma_{nn} \right) \left( \psi_{tt} + \frac{1}{2} \psi_{nn} \right) \dd l \nonumber \\ &+ \frac{1}{\tilde{\chi}} \int_\Gamma \sigma_{nt} \psi_{nt} \dd l + \epsilon^{\text{w}} \tilde{\chi} \int_\Gamma (p + \sigma_{nn}) (q + \psi_{nn}) \dd l \label{eq_subf_dTilde} . \end{align} The total pressure term \(\epsilon^{\text{w}} \tilde{\chi} \int_\Gamma (p + \sigma_{nn}) (q + \psi_{nn}) \dd l\) replaces the last term \(\epsilon^{\text{w}} \tilde{\chi} \int_\Gamma \sigma_{nn} \psi_{nn} \dd l\) of \(d(\sigma,\psi)\) in \cref{eq_subf_d}. We will use the formulation \cref{eq_compundWeakFormAlternative} in \cref{thm_positiveDefiniteness} to simplify the proof. The actual implementation, however, uses the equivalent weak form \cref{eq_compundWeakForm} to set up the system \cref{eq_discreteSystem}. \subsection{Finite Element Discretization}\label{ss_femDiscretization} We consider a conforming and shape-regular partition \(\mathcal{T}_h\) of the computational domain \(\Omega \subset \mathbb{R}^2\) into triangular elements \(\tau\) as \(\mathcal{T}_h = {\left\{ \tau \right\}}_{\tau \in \mathcal{T}_h}\). For a given polynomial degree \(m \in \mathbb{N}\), the space of piecewise polynomials with maximal degree \(m\) on every \(\tau \in \mathcal{T}_h\) reads \begin{equation} \mathbb{V}_{\star,h} = \left\{ u \in \mathbb{V}_{\star} : u|_\tau \in \mathbb{P}_m(\tau) \, \forall \tau \in \mathcal{T}_h \right\}. \end{equation} Following the usual conforming finite element approach, we restrict the function space to a finite-dimensional subspace \(\mathbb{H}_h \subset \mathbb{H}\) by choosing polynomial ansatz functions for all fields. 
This discretization procedure, then, leads to the discrete algebraic system: \begin{equation} \renewcommand{\arraystretch}{1.2} \left[{ \begin{array}{cc|ccc} A_h & -B_h^T & -C_h^T & 0 & 0 \\ B_h & 0 & 0 & 0 & 0 \\ \hline C_h & 0 & D_h & -E_h^T & F_h^T \\ 0 & 0 & E_h & 0 & G_h^T \\ 0 & 0 & F_h & -G_h & H_h \\ \end{array} }\right] \left[{ \begin{array}{c} \te{s}_h \\ \theta_h \\ \hline \tee{\sigma}_h \\ \te{u}_h \\ p_h \\ \end{array} }\right] = \left[{ \begin{array}{c} L_{1,h} \\ L_{2,h} \\ \hline L_{3,h} \\ L_{4,h} \\ L_{5,h} \\ \end{array} }\right] , \label{eq_discreteSystem} \end{equation} where each matrix \(A_h,\ldots,H_h\) corresponds to its respective bilinear sub-functional \(a(\te{s},\te{r}),\ldots,h(p,q)\). The system's formulation \cref{eq_discreteSystem} reveals the physical coupling between the heat variables \((\te{s}_h,\theta_h)\) and the stress variables \((\tee{\sigma}_h,\te{u}_h,p_h)\) only through the \(C_h\) and \(C_h^T\) matrices. \par To show the need for stabilization of \cref{eq_discreteSystem}, we reorder the rows with \(\te{x}={(\tee{\sigma}_h ,\te{s}_h ,p_h)}^T\) and \(\te{y}={(\te{u}_h ,\theta_h)}^T\) such that \begin{equation} \left[{ \begin{array}{cc} \mathbb{A} & -\mathbb{B}^T \\ \mathbb{B} & \tee{0} \\ \end{array} }\right] \left[{ \begin{array}{c} \te{x} \\ \te{y} \end{array} }\right] = \left[{ \begin{array}{c} \te{f} \\ \te{g} \end{array} }\right] , \label{eq_saddleSystem} \end{equation} with \begin{equation} \mathbb{A} = \left[{ \begin{array}{ccc} D_h & C_h & F_h^T \\ -C_h^T & A_h & 0 \\ F_h & 0 & H_h \end{array} }\right] , \; \mathbb{B} = \left[{ \begin{array}{ccc} E_h & 0 & G_h^T \\ 0 & B_h & 0 \\ \end{array} }\right] , \; \te{f} = \left[{ \begin{array}{c} L_{3,h} \\ L_{1,h} \\ L_{5,h} \\ \end{array} }\right] , \; \te{g} = \left[{ \begin{array}{c} L_{4,h} \\ L_{2,h} \\ \end{array} }\right] . 
\end{equation} The notation used in \cref{eq_saddleSystem} reveals the saddle point structure (compare, e.g., with~\cite{auricchio2004mixed}). We directly observe the need for \((\te{u},\theta)\)-stabilization due to the zero diagonal entries. For an impermeable wall condition \(u_n = 0\) resulting from \(\epsilon^{\text{w}}=0\), the \(H_h\)-block also vanishes. The \(p\)-diagonal, therefore, needs stabilization as well in order to cover all possible boundary conditions. \subsection{Continuous Interior Penalty (CIP) Stabilization}\label{ss_stabilization} In general, mixed finite element problems require a compatible set of finite elements. For example, for the Stokes problem, a suitable choice satisfying the LBB condition is the Taylor--Hood element \(\mathbb{P}_2\mathbb{P}_1\). Here, the velocity function space uses a higher polynomial order than the pressure function space. For application cases, we do not want to favor a particular field and desire an equal-order discretization. This holds especially for the higher-order moment fields, for which little physical intuition is available. The argument becomes even stronger for more complex models beyond the 13-field case. \par One approach to overcome the compatibility condition on the discrete function spaces is stabilization. In general, stabilization techniques modify the weak form's left-hand side to stabilize the discrete system, i.e., they add entries to the zero sub-matrices of the discrete system. Residual-based stabilization techniques are widespread for flow problems~\cite{donea2003finite} and add a stabilization term based on the current residual. \par We will use the continuous interior penalty (CIP) method, as proposed in~\cite{westerkamp2019finite} for the R13 system. This technique adds stabilization terms based on edge inner products (for two dimensions). 
The modified bilinear form \(\tilde{\mathcal{A}}\) then reads \begin{align} \tilde{\mathcal{A}}\left(\te{\mathcal{U}_h},\te{\mathcal{V}_h}\right) &= \mathcal{A}\left(\te{\mathcal{U}_h},\te{\mathcal{V}_h}\right) + j_\theta(\theta_h,\kappa_h) + j_{\te{u}}(\te{u}_h,\te{v}_h) + j_p(p_h,q_h) , \label{eq_weakFormStabilized} \end{align} where the stabilization terms are given by \begin{align} j_\theta(\theta_h,\kappa_h) &= \delta_\theta \sum_\mathcal{E} \int_\mathcal{E} h^3 [\te{\nabla}\theta_h \te{\cdot} \te{n}] [\te{\nabla}\kappa_h \te{\cdot} \te{n}] \dd l , \\ j_{\te{u}}(\te{u}_h,\te{v}_h) &= \delta_{\te{u}} \sum_\mathcal{E} \int_\mathcal{E} h^3 [\te{\nabla}\te{u}_h \te{\cdot} \te{n}] \te{\cdot} [\te{\nabla}\te{v}_h \te{\cdot} \te{n}] \dd l , \\ j_p(p_h,q_h) &= \delta_p \sum_\mathcal{E} \int_\mathcal{E} h [\te{\nabla}p_h \te{\cdot} \te{n}] [\te{\nabla}q_h \te{\cdot} \te{n}] \dd l . \end{align} Here, \(\mathcal{E}\) is the index set of all interior element faces (i.e., edges) with \(\mathcal{E} \cap \partial \Omega = \emptyset\) and \begin{equation} [\te{f} \cdot \te{n}] = \te{f}^+ \cdot \te{n}^+ + \te{f}^- \cdot \te{n}^- , \end{equation} denotes the \(\te{f}\)-jump across the element boundary, weighted with the oppositely directed edge normals \(\te{n}^+\) and \(\te{n}^-\). For exact solutions in \(C^1(\Omega)\), the stabilization terms vanish; the method is therefore consistent~\cite{westerkamp2017curvature}. The different mesh size scalings result from an analysis in~\cite{westerkamp2017continous}, such that the order of stabilization does not change due to mesh refinement. The method is not very sensitive to the remaining parameters \(\delta_\theta,\delta_{\te{u}},\delta_p\) and robustly produces low errors for a wide range of \(\delta_\star\)-values. 
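To build intuition for these jump terms, a one-dimensional analogue can be written down in a few lines. The following NumPy sketch (illustrative only; the function name and setup are not part of the solver) evaluates a \(j_\theta\)-like penalty \(\delta \sum_\mathcal{E} h^3 {[\partial_x u]}^2\) for a piecewise-linear function: it vanishes for globally linear data and is positive in the presence of kinks.
\begin{lstlisting}[ style=pythonstyle, caption={One-dimensional analogue of the CIP jump penalty (illustrative sketch).}, commentstyle=\color{comment_c}\rmfamily\itshape, ]
import numpy as np

def cip_penalty_1d(x, u, delta=1.0):
    # Element-wise constant gradients of the P1 function u on the mesh x
    h = np.diff(x)
    du = np.diff(u) / h
    # Jumps of the gradient across interior nodes, averaged mesh size h
    jumps = du[1:] - du[:-1]
    h_avg = 0.5 * (h[1:] + h[:-1])
    return delta * np.sum(h_avg**3 * jumps**2)

x = np.linspace(0.0, 1.0, 11)
print(cip_penalty_1d(x, 2.0 * x + 1.0))    # ~0: no gradient jumps
print(cip_penalty_1d(x, np.abs(x - 0.5)))  # > 0: kink at x = 0.5
\end{lstlisting}
For smooth exact solutions, these jumps vanish under mesh refinement, which is consistent with the scaling argument above.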
Compare, for example, with~\cite{burman2006edgeStabilization}, where \citeauthor{burman2006edgeStabilization} presented a discussion of the CIP method applied to the generalized Stokes problem. \par With the presented stabilization, we have the following property of the system: \begin{theorem}\label{thm_positiveDefiniteness} Consider a set of admissible system, stabilization, and boundary parameters, i.e., \begin{itemize} \item \(\operatorname{Kn} > 0\) to avoid division by zero, \item \(\delta_\theta,\delta_{\te{u}},\delta_p > 0 \) to avoid zero diagonals using stabilization, \item \(\tilde{\chi} > 0\) to have positive boundary terms in the diagonal sub-functionals, \item \(\epsilon^{\text{w}} \ge 0\) to guarantee non-negativity of all inflow boundary terms. \end{itemize} Then, the discrete stabilized weak form \(\tilde{\mathcal{A}}: \mathbb{H}_h \times \mathbb{H}_h \rightarrow \mathbb{R}, \left(\te{\mathcal{U}_h},\te{\mathcal{V}_h}\right) \mapsto \tilde{\mathcal{A}}\left(\te{\mathcal{U}_h},\te{\mathcal{V}_h}\right)\) is positive-definite for non-constant discrete fields \(\theta_h, \te{u}_h, p_h\). \end{theorem} \begin{proof} We use the formulation \cref{eq_compundWeakFormAlternative} for \(\tilde{\mathcal{A}}\left(\te{\mathcal{U}_h},\te{\mathcal{V}_h}\right)\) and exploit the antisymmetry of all non-diagonal sub-functionals to obtain \begin{equation} \tilde{\mathcal{A}}\left(\te{\mathcal{U}_h},\te{\mathcal{U}_h}\right) = a(\te{s}_h,\te{s}_h) + \bar{d}((\tee{\sigma}_h, p_h), (\tee{\sigma}_h, p_h)) + j_\theta(\theta_h,\theta_h) + j_{\te{u}}(\te{u}_h,\te{u}_h) + j_p(p_h,p_h) . \end{equation} For the stabilization terms, it holds by construction that \begin{equation} j_\theta(\theta_h,\theta_h) >0 , \quad j_{\te{u}}(\te{u}_h,\te{u}_h) >0 , \quad j_p(p_h,p_h) > 0 , \end{equation} for non-constant discrete fields and \(\delta_\star > 0\). 
The quadratic nature of the diagonal terms in \cref{eq_subf_a,eq_subf_dTilde} ensures positivity with \begin{equation} a(\te{s}_h,\te{s}_h) > 0 \; \forall \te{s}_h \ne \te{0} , \quad \bar{d}((\tee{\sigma}_h, p_h), (\tee{\sigma}_h, p_h)) > 0 \; \forall \tee{\sigma}_h \ne \te{0} . \end{equation} Combining these estimates yields \(\tilde{\mathcal{A}}\left(\te{\mathcal{U}_h},\te{\mathcal{U}_h}\right) > 0\) for all admissible \(\te{\mathcal{U}_h} \neq \te{0}\). \end{proof} \begin{remark} An analysis (similar to, e.g.,~\cite{burman2010interior}) would utilize the triple norm \begin{align} {||| \te{\mathcal{U}} |||}^2 &= \frac{24}{25} \operatorname{Kn} \norm*{\mathrm{sym}(\te{\nabla}\te{s})}_{L^2,\Omega}^2 + \frac{12}{25} \operatorname{Kn} \norm*{\mathrm{div}(\te{s})}_{L^2,\Omega}^2 + \frac{4}{15} \frac{1}{\operatorname{Kn}} \norm*{\te{s}}_{L^2,\Omega}^2 + \frac{1}{2} \frac{1}{\tilde{\chi}} \norm*{s_{n}}_{L^2,\Gamma}^2 \nonumber \\ & + \frac{12}{25} \tilde{\chi} \norm*{s_{t}}_{L^2,\Gamma}^2 + \operatorname{Kn} \norm*{\text{stf}(\te{\nabla}\tee{\sigma})}_{L^2,\Omega}^2 + \frac{1}{2} \frac{1}{\operatorname{Kn}} \norm*{\tee{\sigma}}_{L^2,\Omega}^2 + \frac{9}{8} \tilde{\chi} \norm*{\sigma_{nn}}_{L^2,\Gamma}^2 \nonumber \\ & + \tilde{\chi} \norm*{ \sigma_{tt} + \frac{1}{2} \sigma_{nn} }_{L^2,\Gamma}^2 + \frac{1}{\tilde{\chi}} \norm*{\sigma_{nt}}_{L^2,\Gamma}^2 + \epsilon^{\text{w}} \tilde{\chi} \norm*{p + \sigma_{nn}}_{L^2,\Gamma}^2 \nonumber \\ & + j_\theta(\theta,\theta) + j_{\te{u}}(\te{u},\te{u}) + j_p(p,p) , \label{eq_tripleNorm} \end{align} where the \(L^2\)-scalar product \({(\star,\star)}_{D}\) over a domain \(D \subseteq \Omega\) defines the associated norm \({\lVert \star \rVert}_{L^2,D} = \sqrt{{(\star,\star)}_{D}}\). 
The stabilized weak form \cref{eq_weakFormStabilized} is coercive, using the norm \cref{eq_tripleNorm} with \begin{equation} \tilde{\mathcal{A}}\left(\te{\mathcal{U}_h},\te{\mathcal{U}_h}\right) \ge 1 \cdot {||| \te{\mathcal{U}}_h |||}^2 \quad \forall ~ \te{\mathcal{U}_h} \in \mathbb{H}_h . \end{equation} \end{remark} \section{Implementation}\label{s_implementationAndValidation} The implementation of the compound weak form \cref{eq_compundWeakForm} uses the structured formulation of \cref{eq_discreteSystem} and schematically reads: \begin{lstlisting}[ style=pythonstyle, caption={Implementation of the stabilized compound weak form \(\tilde{\mathcal{A}}(\te{\mathcal{U}}_h,\te{\mathcal{V}}_h)\).}, commentstyle=\color{comment_c}\rmfamily\itshape, ] # 1) Left-hand sides, bilinear form A: A[0] = a(s, r) - b(theta, r) - c(r, sigma) + 0 + 0 A[1] = b(kappa, s) + 0 + 0 + 0 + 0 A[2] = c(s, psi) + 0 + d(sigma, psi) - e(u, psi) + f(p, psi) A[3] = 0 + 0 + e(v, sigma) + 0 + g(p, v) A[4] = 0 + 0 + f(q, sigma) - g(q, u) + h(p, q) # 2) Right-hand sides, linear functional L: L[0] = - sum([ n(r) * bcs[bc]["theta_w"] * df.ds(bc) for bc in bcs.keys() ]) # [...] self.form_lhs = sum(A) + cip * (j_theta(theta, kappa) + j_u(u, v) + j_p(p, q)) self.form_rhs = sum(L) # [...] df.solve(self.form_lhs == self.form_rhs, sol, []) \end{lstlisting} Here, we see the one-to-one correspondence between the underlying mathematics and the resulting source code. There is also no need to supply Dirichlet boundary conditions (observe ``\texttt{[]}'') to the ``\texttt{df.solve}''-routine. The weak formulation includes all boundary equations naturally in a weak sense. However, the sub-functionals still contain higher-order differential operators, e.g., the symmetric and trace-free part of a 2-tensor Jacobian \({(\te{\nabla}\tee{\sigma})}_{\text{stf}}\). 
For such higher-order tensors, not all required operators are available in the UFL language 2019.1.0 (e.g., ``\texttt{Deviatoric}'' and ``\texttt{Trace}'' in ``\texttt{tensoralgebra.py}'' of~\cite{fenics2020uflRepo}). The same also applies to the DOLFIN C++ interface that uses the same underlying UFL to compile weak forms with the FFC\@. \par Therefore, we use the extension capabilities of the UFL and define the required operators using Einstein's index notation. This section first presents the important implementation aspects, focusing mainly on additional differential operators in \cref{ss_tensorFEMinFenics} and CIP stabilization in \cref{ss_cipStabilizationInFenics}. \subsection{Tensorial Mixed FEM in FEniCS}\label{ss_tensorFEMinFenics} An important implementation detail is the treatment of the symmetric and trace-free operator for tensors of rank two. For strictly two-dimensional problems, it is possible to use the default built-in ``\texttt{sym}''-, ``\texttt{tr}''- and ``\texttt{dev}''-operators of FEniCS/UFL successively. However, assuming a three-dimensional and \(z\)-homogeneous problem (as stated in~\cref{ss_linearization}) requires a change in the operator. Following~\cite{fenics2019uflDocumentation}, UFL defines the deviatoric part of a 2-tensor as \begin{equation} {(\tee{A})}_{\mathrm{dev}} = \tee{A} - \frac{A_{ii}}{d} \tee{I}, \end{equation} using the dimension \(d\) and Einstein's summation notation for \(A_{ii}\). Performing computations on a two-dimensional mesh \(\Omega \subset \mathbb{R}^2\) leads to \(d=2\). However, in our work, we assume homogeneity in the third spatial dimension (such that none of the relevant fields depends on the \(z\)-coordinate). A modified STF-operator for 2-tensors is, thus, used to obtain the definition of \cref{eq_rank2stf}. 
For \(\tee{A} \in \mathbb{R}^{2 \times 2}\), the modified operator is, for our purposes, defined as \begin{equation}\label{eq:heatStfModification} {(\tee{A})}_{\text{stf}} = \frac{1}{2} \left( \tee{A} + \tee{A}^{\mathrm{T}} \right) - \frac{A_{ii}}{3} \tee{I}. \end{equation} The corresponding implementation of \({(\star)}_{\text{stf}}\), therefore, artificially assumes \(d=3\) and reads: \begin{lstlisting}[ style=pythonstyle, caption={3D STF-operator for 2D 2-tensors.}, commentstyle=\color{comment_c}\rmfamily\itshape, ] def stf3d2(rank2_2d): symm = 1/2 * (rank2_2d + ufl.transpose(rank2_2d)) return symm - (1/3) * ufl.tr(symm) * ufl.Identity(2) \end{lstlisting} \par The construction of the Frobenius inner product of two 3-tensors (as in \(\text{stf}(\te{\nabla}\tee{\sigma}) \teee{\because} \text{stf}(\te{\nabla}\tee{\psi})\)) needs more auxiliary functions. It is possible to implement the system component-wise and solve for \(p,u_x,u_y,\sigma_{xx},\sigma_{xy},\sigma_{yy}\) after expanding all operators in the weak form \cref{eq_subf_d}. However, this would increase the complexity even more and is error-prone. Using a computer algebra system to calculate \({(\nabla \tee{\sigma})}_{\text{stf}}\) reveals 18 different terms for only one tensorial expression. Therefore, we will again make extensive use of the tensor capabilities provided by FEniCS and UFL to avoid computing such expressions and to keep the corresponding source code compact. In fact, up to second-order tensors, all UFL operators are intuitive, except for the already discussed 3D information encoding. However, dealing with 3-tensors is not straightforward because FEniCS lacks implementations of the required operators in its current version. \par First of all, a two-dimensional mesh provides only two spatial variables on which the differential operators act. 
To respect the highest-order moments' shape assumptions \cref{eq:stressTensorShapes}, we define a lifting operator \(L\), mapping a 2D 2-tensor artificially to a trace-free 3D 2-tensor. The definition of \begin{equation} L : \mathbb{R}^{2 \times 2} \rightarrow \mathbb{R}_{\mathrm{TF}}^{3 \times 3} , \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & -(a+d) \end{pmatrix} , \end{equation} is implemented as: \begin{lstlisting}[ style=pythonstyle, caption={Custom operator to lift a 2D 2-tensor to a 3D STF 2-tensor.}, commentstyle=\color{comment_c}\rmfamily\itshape, ] def gen3dTF2(rank2_2d): return df.as_tensor([ [rank2_2d[0, 0], rank2_2d[0, 1], 0], [rank2_2d[1, 0], rank2_2d[1, 1], 0], [0, 0, -rank2_2d[0, 0]-rank2_2d[1, 1]] ]) \end{lstlisting} The gradient operator is extended in a similar fashion, accounting for the third dimension, as: \begin{lstlisting}[ style=pythonstyle, caption={Custom gradient operator to account for three dimensions.}, commentstyle=\color{comment_c}\rmfamily\itshape, ] def grad3dOf2(rank2_3d): grad2d = df.grad(rank2_3d) dim3 = df.as_tensor([[0, 0, 0], [0, 0, 0], [0, 0, 0]]) grad3d = df.as_tensor([grad2d[:, :, 0], grad2d[:, :, 1], dim3[:, :]]) return grad3d \end{lstlisting} With these operators at hand, it is now possible to evaluate \({(\nabla \tee{\sigma})}_{\text{stf}}\). We use the definition \cref{eq_rank3stf} of the symmetric and trace-free part of a 3-tensor directly in FEniCS, including all Einstein summation conventions. 
The implementation of the corresponding function then reads:
\begin{lstlisting}[ style=pythonstyle, caption={Custom operator to obtain the STF-part of a 3-tensor.}, commentstyle=\color{comment_c}\rmfamily\itshape, ]
def stf3d3(rank3_3d):
    i, j, k, l = ufl.indices(4)
    delta = df.Identity(3)
    sym_ijk = sym3d3(rank3_3d)[i, j, k]
    traces_ijk = 1/5 * (
        + sym3d3(rank3_3d)[i, l, l] * delta[j, k]
        + sym3d3(rank3_3d)[l, j, l] * delta[i, k]
        + sym3d3(rank3_3d)[l, l, k] * delta[i, j]
    )
    tracefree_ijk = sym_ijk - traces_ijk
    return ufl.as_tensor(tracefree_ijk, (i, j, k))
\end{lstlisting}
Here, the symmetric part of a 3-tensor is the average of all possible transpositions, as defined in \cref{eq_rank3sym}, and is translated into UFL code using:
\begin{lstlisting}[ style=pythonstyle, caption={Custom operator to obtain the symmetric part of a 3-tensor.}, commentstyle=\color{comment_c}\rmfamily\itshape, ]
def sym3d3(rank3_3d):
    i, j, k = ufl.indices(3)
    symm_ijk = 1/6 * (
        # All permutations
        + rank3_3d[i, j, k] + rank3_3d[i, k, j]
        + rank3_3d[j, i, k] + rank3_3d[j, k, i]
        + rank3_3d[k, i, j] + rank3_3d[k, j, i]
    )
    return ufl.as_tensor(symm_ijk, (i, j, k))
\end{lstlisting}
These auxiliary functions are implemented in a separate ``\texttt{tensoroperations}''-module and provide the desired one-to-one correspondence between the mathematical formulation and the corresponding implementation of the weak formulation. In general, the summation convention capabilities of FEniCS would also allow tackling even higher-order moment equations, such as the R26 equations~\cite{gu2009high}, if auxiliary \(n\)-tensor operators are defined.
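As a plausibility check outside of FEniCS, the same symmetrization and trace removal can be mirrored with plain NumPy. The following sketch (the function names are ours and not part of the solver) verifies that the resulting 3-tensor is fully symmetric and that every contraction with the identity vanishes:

```python
import itertools
import numpy as np

def sym3d3_np(T):
    # Symmetric part of a 3-tensor: average over all 6 index permutations
    return sum(np.transpose(T, p) for p in itertools.permutations(range(3))) / 6.0

def stf3d3_np(T):
    # Trace-free part: subtract the three partial traces (factor 1/5 in 3D)
    S = sym3d3_np(T)
    delta = np.eye(3)
    tr = np.einsum("ill->i", S)  # all partial traces coincide for symmetric S
    traces = (np.einsum("i,jk->ijk", tr, delta)
              + np.einsum("j,ik->ijk", tr, delta)
              + np.einsum("k,ij->ijk", tr, delta)) / 5.0
    return S - traces

T = np.random.default_rng(42).normal(size=(3, 3, 3))
A = stf3d3_np(T)
assert np.allclose(A, np.transpose(A, (1, 0, 2)))  # symmetric
assert np.allclose(np.einsum("ijj->i", A), 0.0)    # trace-free
```

The assertions confirm that the \(1/5\) factor in the trace removal is the correct choice in three dimensions.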
The implementation of \(d(\sigma,\psi)\), as the most complex bilinear form, is then schematically obtained as:
\begin{lstlisting}[ style=pythonstyle, caption={Schematic implementation of the bilinear form \(d(\sigma,\psi)\).}, commentstyle=\color{comment_c}\rmfamily\itshape, ]
def d(sigma_, psi_):
    return (
        kn * df.inner(
            to.stf3d3(to.grad3dOf2(to.gen3dTF2(sigma_))),
            to.stf3d3(to.grad3dOf2(to.gen3dTF2(psi_)))
        )
        + (1/(2*kn)) * df.inner(
            to.gen3dTF2(sigma_), to.gen3dTF2(psi_)
        )
    ) * df.dx  # + [...]
\end{lstlisting}
\subsection{CIP Stabilization in FEniCS}\label{ss_cipStabilizationInFenics} The CIP stabilization method's implementation uses the support for discontinuous Galerkin (DG) operators in the UFL\@. Integration over all interior edges, as a subset of all edges, can be achieved by using ``\texttt{dS}'' instead of ``\texttt{ds}'' (which only acts on boundary edges). The resulting implementation then intuitively reads, e.g., for the scalar and vector stabilization functionals \(j_\theta\), \(j_{\te{u}}\):
\begin{lstlisting}[ style=pythonstyle, caption={Implementation of CIP stabilization.}, commentstyle=\color{comment_c}\rmfamily\itshape, ]
# Define custom measures for boundary edges and interior edges
df.ds = df.Measure("ds", domain=mesh, subdomain_data=boundaries)
df.dS = df.Measure("dS", domain=mesh, subdomain_data=boundaries)
# Define mesh measures
h_msh = df.CellDiameter(mesh)
h_avg = (h_msh("+") + h_msh("-"))/2.0
# 3) CIP Stabilization:
def j_theta(theta, kappa):
    return (
        + delta_theta * h_avg**3
        * df.jump(df.grad(theta), n_vec)
        * df.jump(df.grad(kappa), n_vec)
    ) * df.dS
def j_u(u, v):
    return (
        + delta_u * h_avg**3
        * df.dot(df.jump(df.grad(u), n_vec),
                 df.jump(df.grad(v), n_vec))
    ) * df.dS
\end{lstlisting}
\section{Convergence Study Based on Mesh Refinement}\label{s_validation} We perform a convergence study to validate the numerical method, comparing the discrete solutions to their exact solutions.
The solver repository~\cite{theisen2020fenicsr13Zenodo} includes all exact solutions for reproducibility. Computations on a series of refined meshes reveal the method's numerical convergence properties and show convergence with increased mesh resolution. \subsection{Computational Domain and Test Case Parameters}\label{ss_convergenceStudy} We consider the ring domain \(\Omega \subset \mathbb{R}^2\) as the area between two coaxial circles with radii \(R_1\) and \(R_2\) as \begin{equation} \Omega = \left\{ \te{x} = {(x,y)}^{\mathrm{T}} : R_1 \le {\lVert \te{x} \rVert}_2 \le R_2 \right\} . \end{equation} We choose \(R_1=0.5\) and \(R_2=2\), which follows previous works~\cite{torrilhon2017hierarchical,westerkamp2019finite}. The inner boundary corresponds to \(\Gamma_1\) and the outer circle to \(\Gamma_2\). The particular domain \(\Omega\) avoids sharp corners, and a prescription of boundary fluxes does not produce any problems because the circle origin \((0,0)\) is not part of the domain. The computation of exact solutions is, therefore, possible. As already mentioned, we assume 2D problems as simplifications for 3D problems with full symmetry and homogeneity in the third spatial direction. \par To test the numerical method, we consider a series of general and unstructured triangular meshes without spatial refinement. As a result, the discretized domain contains cells of approximately uniform size. This very general setup does not take any properties of curved domains into account. The mesh resolution at the inner boundary, for example, is equal to the resolution at the outer boundary, although the curvatures are not equal. This general approach allows us to test the principal correctness of the numerical method using the most general mesh types. \par We use the mesh generator Gmsh~\cite{geuzaine2009gmsh} to create a series of ring meshes.
Note that no element split is applied to obtain the refined meshes, and finer meshes result from a complete re-meshing procedure with a lower cell size factor. The maximum cell size \(h_{\max}\) characterizes the mesh resolution. The \cref{fig:heatMeshes} presents exemplary meshes with their corresponding \(h_{\max}\). In contrast to~\cite{westerkamp2019finite}, FEniCS cannot utilize isoparametric higher-order boundary representations in the current implementation. \(L^2\)-convergence rates beyond second-order are therefore not expected. \par We perform all calculations in a Docker container on an iMac 2017 with a 3.4 GHz Intel Core i5--7500 CPU and 48 GiB of memory. The resulting discrete systems were solved with the direct solver MUMPS~\cite{MUMPS:1,MUMPS:2}, shipped with FEniCS\@. \begin{figure}[t] \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\linewidth]{figs/png/mesh/0.png} \subcaption{ \(h_{\max}=0.99\). }\label{sfig:mesh0} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\linewidth]{figs/png/mesh/1.png} \subcaption{ \(h_{\max}=0.63\). }\label{sfig:mesh1} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\linewidth]{figs/png/mesh/2.png} \subcaption{ \(h_{\max}=0.33\). }\label{sfig:mesh2} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\linewidth]{figs/png/mesh/3.png} \subcaption{ \(h_{\max}=0.17\). }\label{sfig:mesh3} \end{subfigure} \caption{ Series of unstructured triangular meshes used for the convergence study: Note that the coarsest mesh is not uniform in order to sufficiently resolve the inner boundary. With finer meshes, this effect vanishes. The bounding box of the mesh slightly varies due to the curved boundary and the node placement. }\label{fig:heatMeshes} \Description{ Series of unstructured triangular meshes used for the convergence study: Note that the coarsest mesh is not uniform in order to sufficiently resolve the inner boundary.
With finer meshes, this effect vanishes. The bounding box of the mesh slightly varies due to the curved boundary and the node placement. } \end{figure} \paragraph{Error Measures} To rate the success of the numerical method, we introduce relevant error measures. The standard relative \(L^2\)-function error is \begin{equation} e_{L^2} = \frac{ {\lVert f_{\mathrm{ex}}-f_h\rVert}_{L^2(\Omega)} }{\max{\left\{ f_{\mathrm{ex}}|_n\right\}}_{n \in \eta}}, \end{equation} where \(f_{\mathrm{ex}}\) denotes the exact solution, and \(\eta\) is the set of all mesh nodes. \par The typical way to obtain a discrete solution is in terms of node values. Therefore, particular interest should also be given to these points, using a relative error \(e_{l^\infty}\). We define this vector-based error as \begin{equation} e_{l^\infty} = \frac{ {\lVert {\left\{ f_{\mathrm{ex}}|_n-f_h|_n \right\}}_{n \in \eta} \rVert}_{l^\infty(\eta)} }{\max{\left\{ f_{\mathrm{ex}}|_n\right\}}_{n \in \eta}}, \end{equation} where \(l^\infty\) indicates that this error only considers point-wise errors based on mesh node values. If \(e_{l^\infty}\) decays to zero for refined meshes, we have ensured that for all points of interest \textendash\ i.e., the mesh nodes \textendash\ the solution converges towards the exact solution. \subsection{Homogenous Flow Around a Cylinder} The test case considers a flow scenario with inflow and outflow boundary conditions, similar to~\cite{torrilhon2017hierarchical}. The velocity prescription coefficient, therefore, is \(\epsilon^{\mathrm{w}} \ne 0\). The outer wall is not impermeable but only acts as a cut-off from a larger homogenous velocity field. The inner cylinder wall is modeled as a non-rotating impermeable wall with zero velocity in normal direction \(u_n^{\mathrm{w}}|_{\Gamma_1}=0\) and zero tangential velocity \(u_t^{\mathrm{w}}|_{\Gamma_1}=0\) prescribed.
A temperature difference is applied between the inner and the outer cylinder walls with \(\theta^\mathrm{w}|_{\Gamma_1} = 1\) and \(\theta^\mathrm{w}|_{\Gamma_2} = 2\) to render the case more complicated. \par A pressure difference drives the flow at the outer cylinder wall with \(p^{\mathrm{w}}|_{\Gamma_2} = - p_0 n_x\), in which the background pressure is set to \(p_0=0.27\), and \(n_x\) is equal to \(\cos(\phi)\) for the considered geometry. The velocity components at the outer boundary are set to \(u_n^{\mathrm{w}}|_{\Gamma_2} = u_0 n_x\) and \(u_t^{\mathrm{w}}|_{\Gamma_2} = - u_0 n_y\) with background velocity \(u_0 = 1\) and \(n_y = \sin(\phi)\). The remaining parameters read \(\epsilon^{\mathrm{w}}|_{\Gamma_1} = 10^{-3}\) to focus on velocity prescription, \(\epsilon^{\mathrm{w}}|_{\Gamma_2} = 10^{3}\) to focus on pressure prescription, \(\operatorname{Kn} = 1\) as flow characterization, and \(\tilde{\chi}=1\) as wall parameter. \par We will first consider \(\mathbb{P}_2\mathbb{P}_1\mathbb{P}_1\mathbb{P}_2\mathbb{P}_1\) elements (corresponding to the fields \(\tee{\sigma},\te{u},p,\te{s},\theta\) in order) without CIP stabilization. One could see this element combination as a generalization of classical Taylor--Hood elements with two hierarchies of moment sets: the heat-system and the stress-system variables. We choose \(k+1\) for both systems as the polynomial order for the highest-order moments (\(\tee{\sigma}\) and \(\te{s}\)) and \(k\) for all other fields. The resulting errors in \cref{fig:r13_1_coeffs_nosources_norot_inflow_p1p2p1p1p2_nostab} show an almost optimal convergence in the \(L^2\)-error measure. The node values in the \(l^\infty\)-error also converge, but with a decreased rate for velocity and temperature fields.
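As a side note, the node-based measure \(e_{l^\infty}\) reported here reduces to elementary array operations once the node values of the exact and discrete solutions are extracted; a minimal NumPy sketch with hypothetical node values:

```python
import numpy as np

def rel_linf_error(f_ex_nodes, f_h_nodes):
    # Relative l-infinity error over the mesh node values, cf. e_linf above
    return np.max(np.abs(f_ex_nodes - f_h_nodes)) / np.max(f_ex_nodes)

# Hypothetical exact and discrete node values for illustration
f_ex = np.array([1.0, 2.0, 4.0])
f_h = np.array([1.25, 2.0, 4.0])
print(rel_linf_error(f_ex, f_h))  # 0.0625
```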
\par \begin{figure}[t] \newcommand{\datapath}{./data}% \newcommand{\firstOrderFactor}{1}% \newcommand{\secondOrderFactor}{0.1}% \newcommand{\errorfile}{convergence/article/r13_1_coeffs_nosources_norot_inflow_p1p2p1p1p2_nostab/errors.csv}% \newcommand{\fieldList}{theta/\(\theta\),sx/\(s_x\),sy/\(s_y\),p/\(p\),ux/\(u_x\),uy/\(u_y\),sigmaxx/\(\sigma_{xx}\),sigmaxy/\(\sigma_{xy}\),sigmayy/\(\sigma_{yy}\)}% \centering \input{figs/pgfplots/convergence/convergence_groupplot_article.tex} \caption{ Relative errors using unstabilized \(\mathbb{P}_2\mathbb{P}_1\mathbb{P}_1\mathbb{P}_2\mathbb{P}_1\) elements for the homogenous flow around a cylinder: Almost all fields have second-order convergence rates in the \(L^2\)-norm. In the \(l^\infty\)-norm, reduced rates are observed, but at least first-order convergence is guaranteed. }\label{fig:r13_1_coeffs_nosources_norot_inflow_p1p2p1p1p2_nostab} \Description{ Relative errors using unstabilized \(\mathbb{P}_2\mathbb{P}_1\mathbb{P}_1\mathbb{P}_2\mathbb{P}_1\) elements for the homogenous flow around a cylinder: Almost all fields have second-order convergence rates in the \(L^2\)-norm. In the \(l^\infty\)-norm, reduced rates are observed, but at least first-order convergence is guaranteed. } \end{figure} \par However, an increased discretization order for the highest-order fields is often not desired from an engineering perspective. For example, knowing the flow field's velocity gives more practical insight than knowing its stress tensor. We, therefore, aim also to use equal-order \(\mathbb{P}_1\) elements using the proposed CIP stabilization. The resulting errors for the same test case, using the stabilized setup, are presented in \cref{fig:r13_1_coeffs_nosources_norot_inflow_p1p1p1p1p1_stab}. The set of stabilization parameters \(\delta_\theta=1,\delta_{\te{u}}=1, \delta_p=0.01\) stems from~\cite{westerkamp2019finite}. 
Compared to the unstabilized setup, we observe a decrease in relative accuracy and convergence order for the \(\theta\)-field. Parameter tuning for the \(\delta_\star\)-values might further improve the numerical properties. \begin{figure}[t] \newcommand{\datapath}{./data}% \newcommand{\firstOrderFactor}{1}% \newcommand{\secondOrderFactor}{0.1}% \newcommand{\errorfile}{convergence/article/r13_1_coeffs_nosources_norot_inflow_p1p1p1p1p1_stab/errors.csv}% \newcommand{\fieldList}{theta/\(\theta\),sx/\(s_x\),sy/\(s_y\),p/\(p\),ux/\(u_x\),uy/\(u_y\),sigmaxx/\(\sigma_{xx}\),sigmaxy/\(\sigma_{xy}\),sigmayy/\(\sigma_{yy}\)}% \centering \input{figs/pgfplots/convergence/convergence_groupplot_article.tex} \caption{ Relative errors using stabilized \(\mathbb{P}_1\) equal-order elements for the homogenous flow around a cylinder: All fields except for \(\theta\) have second-order convergence rates in the \(L^2\)-norm. In the \(l^\infty\)-norm, reduced rates are observed for the stabilized fields \(\theta,\te{u}\). }\label{fig:r13_1_coeffs_nosources_norot_inflow_p1p1p1p1p1_stab} \Description{ Relative errors using stabilized \(\mathbb{P}_1\) equal-order elements for the homogenous flow around a cylinder: All fields except for \(\theta\) have second-order convergence rates in the \(L^2\)-norm. In the \(l^\infty\)-norm, reduced rates are observed for the stabilized fields \(\theta,\te{u}\). } \end{figure} \par The convergence behavior is improved using \(\mathbb{P}_2\) equal-order elements in \cref{fig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab}. We now have second-order convergence for all fields in \(L^2\). However, using only a first-order boundary approximation for a curved domain limits the convergence rate, as discussed in~\cite{westerkamp2019finite}. An inspection of the discrete solution reveals the dominant error at the inner curved boundary, confirming the above considerations.
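The convergence orders quoted throughout this section can be estimated from any two successive meshes via \(p = \log(e_1/e_2)/\log(h_1/h_2)\). A generic sketch (the function name and error values are hypothetical, for illustration only):

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    # Observed convergence order p, assuming e ~ C * h^p
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical error values: halving h_max quarters the error => second order
print(observed_order(4.0e-2, 1.0e-2, 0.2, 0.1))  # 2.0
```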
\begin{figure}[t] \newcommand{\datapath}{./data}% \newcommand{\firstOrderFactor}{1}% \newcommand{\secondOrderFactor}{0.1}% \newcommand{\errorfile}{convergence/article/r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab/errors.csv}% \newcommand{\fieldList}{theta/\(\theta\),sx/\(s_x\),sy/\(s_y\),p/\(p\),ux/\(u_x\),uy/\(u_y\),sigmaxx/\(\sigma_{xx}\),sigmaxy/\(\sigma_{xy}\),sigmayy/\(\sigma_{yy}\)}% \centering \input{figs/pgfplots/convergence/convergence_groupplot_article.tex} \caption{ Relative errors using stabilized \(\mathbb{P}_2\) equal-order elements for the homogenous flow around a cylinder: All fields have second-order convergence rates in the \(L^2\)-norm. In the \(l^\infty\)-norm, reduced rates are observed for the \(\theta\)-field, but first-order convergence is guaranteed. Note the different \(h_{\max}\)-axis compared to \cref{fig:r13_1_coeffs_nosources_norot_inflow_p1p1p1p1p1_stab}. }\label{fig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab} \Description{ Relative errors using stabilized \(\mathbb{P}_2\) equal-order elements for the homogenous flow around a cylinder: All fields have second-order convergence rates in the \(L^2\)-norm. In the \(l^\infty\)-norm, reduced rates are observed for the \(\theta\)-field, but first-order convergence is guaranteed. Note the different \(h_{\max}\)-axis compared to \cref{fig:r13_1_coeffs_nosources_norot_inflow_p1p1p1p1p1_stab}. } \end{figure} \par The \cref{fig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_schematic} shows schematic results for this test case using \(h_{\max}=0.09\). The homogenous outer flow field enters the computational domain in \cref{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_stress_schematic} in parallel. The shear-stress component \(\sigma_{xy}\) has a greater magnitude in areas where the flow field is parallel to the inner cylinder walls.
As expected with the given set of heat boundary conditions, heat flux from the warm outer cylinder wall to the cold inner wall is present in \cref{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_heat_schematic}. However, the flow advects the temperature field in the flow direction, leading to a cold gas region only behind the cylinder. In contrast, the region in front of the cylinder is warm. \begin{figure}[t] \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{figs/png/r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab/plot1.png} \subcaption{ Shear stress \(\sigma_{xy}\), velocity streamlines \(u_{i}\). }\label{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_stress_schematic} \end{subfigure} \hfill \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{figs/png/r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab/plot2.png} \subcaption{ Temperature \(\theta\), heat flux streamlines \(s_{i}\). }\label{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_heat_schematic} \end{subfigure} \caption{ Schematic results of the homogenous flow around a cylinder for \(h_{\max}=0.047\): The flow past the cylinder in \cref{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_stress_schematic} induces higher \(\abs{\sigma_{xy}}\) values above and below the cylinder. The flow direction is tangential to the inner cylinder walls. The temperature distribution \cref{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_heat_schematic} reveals a colder region behind the cylinder due to the present flow field. }\label{fig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_schematic} \Description{ Schematic results of the homogenous flow around a cylinder for \(h_{\max}=0.047\): The flow past the cylinder in \cref{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_stress_schematic} induces higher \(\abs{\sigma_{xy}}\) values above and below the cylinder. The flow direction is tangential to the inner cylinder walls.
The temperature distribution \cref{sfig:r13_1_coeffs_nosources_norot_inflow_p2p2p2p2p2_stab_heat_schematic} reveals a colder region behind the cylinder due to the present flow field. } \end{figure} \section{Application Cases}\label{s_applications} To test the solver's non-equilibrium capabilities, we now discuss three application cases with typical rarefaction effects. The channel flow example in \cref{sec_channelFlow} shows the expected Knudsen paradox behavior. In \cref{sec_knudsenPump}, a temperature gradient at the domain walls induces a thermal transpiration flow without the presence of gravity. In \cref{sec_thermalEdgeFlow}, we compare to the existing literature by considering a thermally-induced edge flow. These three application cases would not be possible using a classical NSF solver and justify using the R13 equations to predict rarefaction effects in non-standard flow situations. \subsection{Knudsen Paradox in a Channel Flow}\label{sec_channelFlow} A classic example is a microchannel flow similar to the one-dimensional case discussed in~\cite{torrilhon2016modeling}. In our artificial two-dimensional setting, the flow domain \(\Omega \subset \mathbb{R}^2\) is between two infinitely large plates. The domain length is \(L=4\), and the plate distance reads \(H=1\). Inside the domain \(\Omega\), a body force \(\te{b}={(1,0)}^{\mathrm{T}}\) induces a flow in positive \(x\)-direction. We prescribe no pressure gradients with \(p^{\mathrm{w}}|_{\Gamma_i}=0\). No additional inflow or outflow velocities are assumed with \(u_n^{\mathrm{w}}|_{\Gamma_i}=u_t^{\mathrm{w}}|_{\Gamma_i}=0\). A uniform temperature \(\theta^{\mathrm{w}}|_{\Gamma_i}=1\) is applied to all boundaries. The upper and lower impermeable walls have \(\epsilon^{\mathrm{w}}|_{\Gamma_1,\Gamma_3}=10^{-3}\) while the in- and outflow walls are modeled with the parameter \(\epsilon^{\mathrm{w}}|_{\Gamma_2,\Gamma_4}=10^{3}\) to allow a velocity through these boundaries.
The \cref{sfig:applicationsDomainChannel} presents the overall setup. \begin{figure}[t] \begin{subfigure}[c]{0.54\textwidth} \newcommand{figs/tikz/sketches/style.tex}{figs/tikz/sketches/style.tex} \input{figs/tikz/sketches/channel_article.tex} \subcaption{ Geometry. }\label{sfig:applicationsDomainChannel} \end{subfigure} \hfill \begin{subfigure}[c]{0.44\textwidth} \vspace{0.3cm} \input{figs/pgfplots/knudsen_paradox/knudsen_paradox_article.tex} \subcaption{ Knudsen paradox. }\label{sfig:applicationsKnudsenParadox} \end{subfigure} \caption{ Geometry and Knudsen paradox in a channel flow. \cref{sfig:applicationsDomainChannel}: A dimensionless body force drives the flow in positive \(x\)-direction. The upper and lower boundaries are modeled as impermeable walls, while the inflow and outflow boundaries allow a mass flow. \cref{sfig:applicationsKnudsenParadox}: With increasing Knudsen number, the channel's mass flow first decreases to a minimum before increasing. Increasing the Knudsen number \(\operatorname{Kn} = \frac{\lambda}{H}\) either means diluting the gas or decreasing the channel width \(H\) for this particular context. } \Description{ Geometry and Knudsen paradox in a channel flow. \cref{sfig:applicationsDomainChannel}: A dimensionless body force drives the flow in positive \(x\)-direction. The upper and lower boundaries are modeled as impermeable walls, while the inflow and outflow boundaries allow a mass flow. \cref{sfig:applicationsKnudsenParadox}: With increasing Knudsen number, the channel's mass flow first decreases to a minimum before increasing. Increasing the Knudsen number \(\operatorname{Kn} = \frac{\lambda}{H}\) either means diluting the gas or decreasing the channel width \(H\) for this particular context. } \end{figure} \par The resulting computational mesh consists of 10712 uniform but unstructured triangles with 5517 nodes. 
We discretize the domain using \(\mathbb{P}_1\) equal-order finite elements and CIP stabilization with \(\delta_\theta=1,\delta_{\te{u}}=1, \delta_p=0.1\). The \cref{fig:applicationsResultsChannel} corresponds to a Knudsen number of \(\operatorname{Kn}=0.1\). In \cref{sfig:channelStress}, the flow field is almost parallel to the outer walls. The deviatoric stress component \(\sigma_{xy}\) has its maxima at both outer channel walls. In \cref{sfig:channelHeat}, the heat flux is nonzero, although no temperature gradient is applied. \begin{figure}[t] \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{figs/png/channel_flow_force/plot1.png} \subcaption{ Shear stress \(\sigma_{xy}\), velocity streamlines \(u_{i}\). }\label{sfig:channelStress} \end{subfigure} \hfill \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{figs/png/channel_flow_force/plot2.png} \subcaption{ Temperature \(\theta\), heat flux streamlines \(s_{i}\). }\label{sfig:channelHeat} \end{subfigure} \caption{ Schematic results of the channel flow application case for \(\operatorname{Kn}=0.1\): In \cref{sfig:channelStress}, the velocity field is almost parallel to the outer walls. The \cref{sfig:channelHeat} reveals a heat flux in the inflow direction. }\label{fig:applicationsResultsChannel} \Description{ Schematic results of the channel flow application case for \(\operatorname{Kn}=0.1\): In \cref{sfig:channelStress}, the velocity field is almost parallel to the outer walls. The \cref{sfig:channelHeat} reveals a heat flux in the inflow direction. } \end{figure} \par A parameter study for the Knudsen number further validates the solver capability to predict rarefaction effects using a range of \(\operatorname{Kn} \in [0.03,2.0]\) and the same overall problem setup. 
We define the dimensionless mass flow rate through the outflow boundary as \begin{equation} \hat{J}_{\Gamma_2} = \int_{\Gamma_2} \te{u} \cdot \te{n} \dd l, \end{equation} similar to the one-dimensional considerations in~\cite{torrilhon2016modeling}. The mass flow rate decreases for increasing Knudsen numbers. However, the dimensionless mass flow rate has a minimum at about \(\operatorname{Kn} \approx 0.3\), followed by an increase. This phenomenon is known as the \textit{Knudsen paradox}, following~\cite{torrilhon2016modeling}, and this effect has also been observed in measurements, e.g., in~\cite{dongari2009pressure}. The \cref{sfig:applicationsKnudsenParadox} presents the relation between the dimensionless mass flow and the Knudsen number. \subsection{Thermal Transpiration Flow in a Knudsen Pump}\label{sec_knudsenPump} We observe another rarefaction effect in a \textit{Knudsen pump} test case, which is inspired by~\cite{westerkamp2014stabilization,aoki2007numerical,leontidis2014numerical}. Without a body force acting, a flow is solely induced by a temperature gradient at the outer walls. For the test case, we consider a racetrack-shaped geometry created by slicing a ring with inner and outer radii \(R_1\) and \(R_2\) into two halves and placing two rectangular connections between these elements. The two connection elements have side lengths \(2L\) and \(R_2-R_1\) with \(L=1\), \(R_1=1/2\), and \(R_2=2\). We prescribe a linear temperature profile at both boundaries with four control points, as presented in \cref{sfig:applicationsDomainPump}. The temperatures \(\theta_0=0.5\) and \(\theta_1=1.5\) define the initial temperature difference of \(\Delta \theta = 1\).
\begin{figure} \subcaptionbox{ Knudsen pump geometry.\label{sfig:applicationsDomainPump} }[0.49\textwidth]{ \newcommand{figs/tikz/sketches/style.tex}{figs/tikz/sketches/style.tex} \input{figs/tikz/sketches/knudsenpump_article.tex} } \hfill \subcaptionbox{ Knudsen pump discretization with \(h_{\max}=0.25\).\label{sfig:applicationsMeshPump} }[0.49\textwidth]{ \includegraphics[width=\linewidth]{figs/png/knudsen_pump/knudsen_pump2.png} } \caption{ Computational domain and spatial discretization for the Knudsen pump application case. \cref{sfig:applicationsDomainPump}: A temperature gradient at the outer walls drives the flow. All walls act as impermeable. The starting points of both half rings act as control points for the prescribed temperatures \(\theta_0\) and \(\theta_1\). In between these points, the temperature value follows a linear profile. \cref{sfig:applicationsMeshPump}: The maximum cell size \(h_{\max}\) characterizes the discretization with unstructured triangles. } \Description{ Computational domain and spatial discretization for the Knudsen pump application case. \cref{sfig:applicationsDomainPump}: A temperature gradient at the outer walls drives the flow. All walls act as impermeable. The starting points of both half rings act as control points for the prescribed temperatures \(\theta_0\) and \(\theta_1\). In between these points, the temperature value follows a linear profile. \cref{sfig:applicationsMeshPump}: The maximum cell size \(h_{\max}\) characterizes the discretization with unstructured triangles. } \end{figure} \par We want to prescribe a linear temperature profile for all the boundary paths between \(\theta_0\) and \(\theta_1\). To derive the corresponding expressions for \(\theta^{\mathrm{w}}\), we use the polar angle function \(\operatorname{atan2}(x,y): \mathbb{R}^2 \rightarrow (-\pi,\pi]\). This function returns the polar angle, such that, e.g., \(\operatorname{atan2}(1,1) = \frac{\pi}{4}\). 
This function is available in most programming languages, in Python with switched arguments as ``\texttt{atan2(y,x)}''. The \cref{tab:applicationsKnudsenPumpTemperatureExpressions} shows the corresponding boundary expressions for the temperature at the inner wall. For the outer wall, the same expressions as for the inner wall are used (\(\theta^{\mathrm{w}}|_{\Gamma_5} = \theta^{\mathrm{w}}|_{\Gamma_1},\ldots\)). The remaining parameters \(u_n^{\mathrm{w}}\), \(u_t^{\mathrm{w}}\), and \(\epsilon^{\mathrm{w}}\) are all set to zero to model impermeable walls. We use \(\mathbb{P}_1\) equal-order elements and CIP stabilization with \(\delta_\theta=1, \delta_{\te{u}}=1,\delta_p=0.1\) for the numerical computation. \begin{table}[t] \centering \caption{Temperature boundary conditions for the Knudsen pump: The four boundary paths define the linear and continuous temperature profile for the outer walls.}\label{tab:applicationsKnudsenPumpTemperatureExpressions} \begin{tabular}{rcccc} \toprule \(\Gamma_i\) & \(\Gamma_1\) & \(\Gamma_2\) & \(\Gamma_3\) & \(\Gamma_4\) \\ \midrule \(\theta(x,y)|_{\Gamma_i}\) & \(\frac{1}{2} \frac{2}{\pi} \operatorname{atan2}(x-1,y) + 1\) & \(\frac{1}{2} x + 1\) & \(- \frac{1}{2} \frac{2}{\pi} \operatorname{atan2}(1-x,y) + 1\) & \(-\frac{1}{2} x + 1\) \\ \bottomrule \end{tabular}% \end{table} \par To study the mesh sensitivity, we perform a series of computations on uniform unstructured grids characterized by \(h_{\max}=1/2^i\) for \(i=4,\dots,7\). All meshes result from a complete re-meshing procedure implying additional refinement at curved boundaries. The \cref{sfig:applicationsMeshPump} presents a schematic mesh for \(h_{\max}=1/2^2\) and \cref{tab:applicationsPumpConvergenceTable} reports the convergence study results. The absence of further change in the mean cross-section \(x\)-velocity \(3/2 \int_{-2}^{-1/2} (|u_x(y)|)|_{x=0} \dd y\) under refinement indicates the results' accuracy.
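For illustration, the \(\Gamma_1\) expression of \cref{tab:applicationsKnudsenPumpTemperatureExpressions} translates to plain Python as follows; note the swapped argument order of \texttt{math.atan2}, and that the sample points are only chosen to recover the two control temperatures:

```python
import math

def theta_gamma1(x, y):
    # Paper convention atan2(x-1, y) maps to Python's math.atan2(y, x-1)
    return 0.5 * (2.0 / math.pi) * math.atan2(y, x - 1.0) + 1.0

# Control points: polar angle pi/2 recovers theta_1, angle -pi/2 recovers theta_0
assert abs(theta_gamma1(1.0, 2.0) - 1.5) < 1e-12
assert abs(theta_gamma1(1.0, -2.0) - 0.5) < 1e-12
```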
\par The \cref{fig:applicationsResultsPump} presents the discrete solution for a Knudsen number of \(\operatorname{Kn}=0.1\) on the most refined mesh. We observe the counter-clockwise gas flow in \cref{sfig:PumpStress}, and the stress component \(\sigma_{xy}\) has higher values at the more curved parts of the geometry near the inner wall. The linearly applied temperature profile at the wall is visible in \cref{sfig:PumpHeat}. However, this temperature profile changes throughout the pump width due to diffusion. Intuitively, heat flux occurs in between warm and cold regions. \begin{figure} \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{figs/png/knudsen_pump/plot1.png} \subcaption{ Shear stress \(\sigma_{xy}\), velocity streamlines \(u_{i}\). }\label{sfig:PumpStress} \end{subfigure} \hfill \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{figs/png/knudsen_pump/plot2.png} \subcaption{ Temperature \(\theta\), heat flux streamlines \(s_{i}\). }\label{sfig:PumpHeat} \end{subfigure} \caption{ Schematic results of the Knudsen pump application case for \(h_{\max}=1/2^{7}\): The gas flow rotates in the counter-clockwise direction, as observed in \cref{sfig:PumpStress}, similar to the results obtained in~\cite{westerkamp2014stabilization,aoki2007numerical}. The \cref{sfig:PumpHeat} shows the linear temperature gradient at both boundary walls, resulting in heat flux from warm to cold regions. }\label{fig:applicationsResultsPump} \Description{ Schematic results of the Knudsen pump application case for \(h_{\max}=1/2^{7}\): The gas flow rotates in the counter-clockwise direction, as observed. The \cref{sfig:PumpHeat} shows the linear temperature gradient at both boundary walls, resulting in heat flux from warm to cold regions. 
} \end{figure} \begin{table}[t] \centering \caption{Summary of computations for the Knudsen pump, including: Number of triangles \(N_t\), number of nodes \(N_n\), maximum cell size \(h_{\max}\), mean cross-section \(x\)-velocity \(3/2 \int_{-2}^{-1/2} (|u_x(y)|)|_{x=0} \dd y\), wall time for FFC \(t_{\text{FFC}}\), wall time for system assembly \(t_{\text{a}}\), wall time for solution routine \(t_{\text{s}}\). The wall times were measured on an Amazon EC2 R5a instance using 64 vCPUs, 512 GiB memory, and MPI parallelization.}\label{tab:applicationsPumpConvergenceTable} \pgfplotstabletypeset[ col sep = comma, every head row/.style={before row=\toprule,after row=\midrule}, every last row/.style={after row=\bottomrule}, columns/0/.style={column name=\(N_{\text{t}}\)}, columns/1/.style={column name=\(N_{\text{n}}\)}, columns/2/.style={column name=\(h_{\max}\)}, columns/3/.style={column name=\(3/2 \int_{-2}^{-1/2} (|u_x(y)|)|_{x=0} \dd y\)}, columns/4/.style={column name=\(t_{\text{FFC}}\) [s]}, columns/5/.style={column name=\(t_{\text{a}}\) [s]}, columns/6/.style={column name=\(t_{\text{s}}\) [s]}, ] {./data/knudsen_pump/convergence.csv} \end{table} \par To further validate the obtained results, we also present velocity and temperature profiles at characteristic positions in \cref{fig:applicationsPumpPlotLines} for all considered meshes. The velocity profile at \(x=0\) shows a dominant flow in the middle of the domain. The temperature at \(x=1\) has its maximum not in the middle of the domain but around \(y \approx -1.1\). \begin{figure}[t] \newcommand{\datapath}{./data}% \centering \input{figs/pgfplots/knudsen_pump/knudsen_pump_fields.tex} \caption{ Cross-section flow velocity and temperature profiles in a Knudsen pump for different spatial discretizations: The velocity profile shows a dominant counter-clockwise flow indicated by the positive mean velocity. We observe a significant stream in the domain's center and two opposite-directed streams near the boundaries. 
The temperature at \(x=1\) is maximal around \(y \approx -1.1\), i.e., not at the center of the cross section. }\label{fig:applicationsPumpPlotLines} \Description{ Cross-section flow velocity and temperature profiles in a Knudsen pump for different spatial discretizations: The velocity profile shows a dominant counter-clockwise flow indicated by the positive mean velocity. We observe a significant stream in the domain's center and two opposite-directed streams near the boundaries. The temperature at \(x=1\) is maximal around \(y \approx -1.1\), i.e., not at the center of the cross section. } \end{figure} \subsection{Thermally-Induced Edge Flow}\label{sec_thermalEdgeFlow} We finish the applications section by considering a geometry with a hot beam in a cold chamber, summarized in~\cref{fig:thermalEdgeflow,sfig:thermalEdgeflowGeometry}. We model both boundaries with \(\tilde{\chi}=1\) and apply a temperature difference with \(\theta|_{\Gamma_2}=1\) and \(\theta|_{\Gamma_1}=0\). This difference induces a flow field with \(\operatorname{Kn}=0.001\). Test cases like this are common for rarefied gas applications~\cite{su2020fast,su2020implicit}. For example, \citeauthor{su2020fast} used a different numerical method for the same geometry and a similar rarefied flow situation. \begin{figure} \subcaptionbox{ Geometry.\label{sfig:thermalEdgeflowGeometry} }[0.32\textwidth]{ \newcommand{figs/tikz/sketches/style.tex}{figs/tikz/sketches/style.tex} \input{figs/tikz/sketches/thermaledgeflow_article.tex} } \hfill \subcaptionbox{ Base mesh for \(s=0\).\label{sfig:thermalEdgeflowMesh} }[0.32\textwidth]{ \includegraphics[width=\linewidth]{figs/png/thermal_edge_flow/mesh.png} } \hfill \subcaptionbox{ Schematic heat distribution.\label{sfig:thermalEdgeflowTemperatureHeatflux} }[0.32\textwidth]{ \includegraphics[width=\linewidth]{figs/png/thermal_edge_flow/plot2.png} } \caption{ Thermal edge flow for \(\operatorname{Kn} = 0.001\). \cref{sfig:thermalEdgeflowGeometry}: The geometry with \(L=8\), \(l=2\), \(d=1\) models a hot beam in a surrounding cold chamber.
\cref{sfig:thermalEdgeflowMesh}: The base mesh with \(s=0\) has spatial refinement near the beam corners and near the outer walls. \cref{sfig:thermalEdgeflowTemperatureHeatflux}: The \(\theta\)-field and the \(\te{s}\)-streamlines show heat flux from the hot beam to the cold chamber walls. The temperature plot color consists of 10 uniform intervals with \(\theta \in [0,1]\). } \Description{ Thermal edge flow for \(\operatorname{Kn} = 0.001\). \cref{sfig:thermalEdgeflowGeometry}: The geometry with \(L=8\), \(l=2\), \(d=1\) models a hot beam in a surrounding cold chamber. \cref{sfig:thermalEdgeflowMesh}: The base mesh with \(s=0\) has spatial refinement near the beam corners and near the outer walls. \cref{sfig:thermalEdgeflowTemperatureHeatflux}: The \(\theta\)-field and the \(\te{s}\)-streamlines show heat flux from the hot beam to the cold chamber walls. The temperature plot color consists of 10 uniform intervals with \(\theta \in [0,1]\). }\label{fig:thermalEdgeflow} \end{figure} \par We perform a series of computations on unstructured triangle meshes with \(\mathbb{P}_1\) equal-order elements and CIP stabilization (\(\delta_\theta=1, \delta_{\te{u}}=1,\delta_p=0.1\)). The base mesh with \(s=0\) in~\cref{sfig:thermalEdgeflowMesh} applies refinement at the inner corners and the outer walls. All meshes result from a complete re-meshing procedure where we multiply the overall mesh resolution with the split factor \(2^{-s}\). In \cref{tab:applicationsEdgeflowConvergenceTable}, we summarize the corresponding mesh characterizations together with other computational statistics. 
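The tabulated mean cross-section velocity \(1/8 \int_{0}^{8} (|u_y(x)|)|_{y=0.5} \dd x\) is a scalar validation functional obtained by integrating a sampled velocity profile along a horizontal line. A minimal sketch of how such a functional could be evaluated, assuming a composite trapezoidal rule and a synthetic profile in place of the actual finite element solution (all function and variable names are illustrative and not part of the solver):

```python
import numpy as np

def mean_abs_profile(x, u, a, b):
    """Approximate (1/(b-a)) * integral_a^b |u(x)| dx with the
    composite trapezoidal rule on a sampled profile (x, u)."""
    x = np.asarray(x, dtype=float)
    u = np.abs(np.asarray(u, dtype=float))
    dx = np.diff(x)
    integral = np.sum(0.5 * (u[:-1] + u[1:]) * dx)
    return integral / (b - a)

# Synthetic stand-in for the (y = 0.5) cross-section velocity profile.
x = np.linspace(0.0, 8.0, 801)
u_y = np.sin(np.pi * x / 8.0) ** 2  # hypothetical profile shape
print(mean_abs_profile(x, u_y, 0.0, 8.0))  # ≈ 0.5 for this profile
```

In practice, the profile values would come from point evaluations of the discrete velocity field along the cross-section line; the quadrature step is unchanged.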
\begin{table}[t] \centering \caption{Summary of computations for the thermal edge flow, including: Split number \(s\), number of triangles \(N_t\), number of nodes \(N_n\), minimum cell size \(h_{\min}\), mean cross-section \(y\)-velocity \(1/8 \int_{0}^{8} (|u_y(x)|)|_{y=0.5} \dd x\), wall time for FFC \(t_{\text{FFC}}\), wall time for system assembly \(t_{\text{a}}\), wall time for solution routine \(t_{\text{s}}\). The wall times were measured on an Amazon EC2 R5a instance using 64 vCPUs, 512 GiB memory, and MPI parallelization.}\label{tab:applicationsEdgeflowConvergenceTable} \pgfplotstabletypeset[ col sep = comma, every head row/.style={before row=\toprule,after row=\midrule}, every last row/.style={after row=\bottomrule}, columns/0/.style={column name=\(s\)}, columns/1/.style={column name=\(N_{\text{t}}\)}, columns/2/.style={column name=\(N_{\text{n}}\)}, columns/3/.style={column name=\(h_{\min}\)}, columns/4/.style={column name=\(1/8 \int_{0}^{8} (|u_y(x)|)|_{y=0.5} \dd x\)}, columns/5/.style={column name=\(t_{\text{FFC}}\) [s]}, columns/6/.style={column name=\(t_{\text{a}}\) [s]}, columns/7/.style={column name=\(t_{\text{s}}\) [s]}, ] {./data/thermal_edge_flow/convergence.csv} \end{table} \par \Cref{sfig:thermalEdgeflowTemperatureHeatflux} shows the resulting heat distribution following Fourier's law. In contrast, the flow field develops an edge flow rarefaction effect. Along the horizontal lines \(y=0.5\) and \(y=4.5\) in~\cref{fig:applicationsThermalEdgeFlowPlotLines}, we observe the same characteristic extremal points in the vertical velocity profile for all mesh resolutions. To further validate the results, we inspect the velocity's streamlines and magnitude in \cref{fig:edgeflowDetailedPlot}. We observe no significant change in the flow field with further refinement in \cref{sfig:uStreamlines2,sfig:uStreamlines3,sfig:uMagnitude2,sfig:uMagnitude3}. For all meshes, the flow field develops the characteristic eight-vortex pattern.
Altogether, the obtained results are in notable agreement with the high-resolution results of~\cite{su2020fast}. Together with the comparison to exact solutions in \cref{ss_convergenceStudy}, this fact underlines the method's correctness and validity. \begin{figure}[t] \newcommand{\datapath}{./data}% \centering \input{figs/pgfplots/thermal_edge_flow/thermal_edge_flow_fields.tex} \caption{ Flow velocity profiles for the thermal edge flow case on the different spatial discretizations. The (\(y=0.5\))-profile reveals a local maximum at \(x=2\) surrounded by global minima at \(x \approx 1.3\) and \(x \approx 2.7\). Global maxima are at \(x \approx 0.8\) and \(x \approx 3.2\). The (\(y=4.5\))-profile reveals a characteristic maximum at the position \(x=2\), surrounded by minima at \(x \approx 0.55\) and \(x \approx 3.75\). }\label{fig:applicationsThermalEdgeFlowPlotLines} \Description{ Flow velocity profiles for the thermal edge flow case on the different spatial discretizations. The (\(y=0.5\))-profile reveals a local maximum at \(x=2\) surrounded by global minima at \(x \approx 1.3\) and \(x \approx 2.7\). Global maxima are at \(x \approx 0.8\) and \(x \approx 3.2\). The (\(y=4.5\))-profile reveals a characteristic maximum at the position \(x=2\), surrounded by minima at \(x \approx 0.55\) and \(x \approx 3.75\). } \end{figure} \begin{figure} \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/stream0.png} \subcaption{ \(\te{u}\)-streamlines, \(s=0\). }\label{sfig:uStreamlines0} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/stream1.png} \subcaption{ \(\te{u}\)-streamlines, \(s=1\). }\label{sfig:uStreamlines1} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/stream2.png} \subcaption{ \(\te{u}\)-streamlines, \(s=2\).
}\label{sfig:uStreamlines2} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/stream3.png} \subcaption{ \(\te{u}\)-streamlines, \(s=3\). }\label{sfig:uStreamlines3} \end{subfigure} \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/vel0.png} \subcaption{ \(|\te{u}|\)-magnitude, \(s=0\). }\label{sfig:uMagnitude0} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/vel1.png} \subcaption{ \(|\te{u}|\)-magnitude, \(s=1\). }\label{sfig:uMagnitude1} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/vel2.png} \subcaption{ \(|\te{u}|\)-magnitude, \(s=2\). }\label{sfig:uMagnitude2} \end{subfigure} \hfill \begin{subfigure}[c]{0.24\textwidth} \includegraphics[width=\textwidth]{figs/png/thermal_edge_flow/vel3.png} \subcaption{ \(|\te{u}|\)-magnitude, \(s=3\). }\label{sfig:uMagnitude3} \end{subfigure} \caption{ Detailed visualization of the flow field for the thermal edge flow case on the different spatial discretizations: We observe no significant changes in the flow field between the two most refined meshes. \cref{sfig:uStreamlines0,sfig:uStreamlines1,sfig:uStreamlines2,sfig:uStreamlines3}: The velocity streamlines show the characteristic eight-vortex flow pattern in agreement with~\cite{su2020fast}. \cref{sfig:uMagnitude0,sfig:uMagnitude1,sfig:uMagnitude2,sfig:uMagnitude3}: Significant peaks in the velocity develop in the vicinity of the beam corners. For better visualization, the color scheme of the \(|\te{u}|\)-magnitude is clipped at \(|\te{u}|=10^{-5}\). }\label{fig:edgeflowDetailedPlot} \Description{ Detailed visualization of the flow field for the thermal edge flow case on the different spatial discretizations: We observe no significant changes in the flow field between the two most refined meshes.
\cref{sfig:uStreamlines0,sfig:uStreamlines1,sfig:uStreamlines2,sfig:uStreamlines3}: The velocity streamlines show the characteristic eight-vortex flow pattern in agreement with~\cite{su2020fast}. \cref{sfig:uMagnitude0,sfig:uMagnitude1,sfig:uMagnitude2,sfig:uMagnitude3}: Significant peaks in the velocity develop in the vicinity of the beam corners. For better visualization, the color scheme of the \(|\te{u}|\)-magnitude is clipped at \(|\te{u}|=10^{-5}\). } \end{figure} \section{Conclusion and Outlook}\label{s_conclusion} In this work, we presented a mixed finite element solver for the linear R13 equations. The solver implementation, provided publicly in~\cite{theisen2020fenicsr13Zenodo}, uses the tensor-valued description of the model equations. The weak form's derivation revealed the need for differential operators of variables with a tensor rank above two. Using the generality of FEniCS (and its underlying form language) allowed us to implement these operators conveniently in index notation. This abstraction level leads to an almost one-to-one correspondence between the mathematical formulation and its implementation, improving the source code's readability and maintainability. \par Furthermore, a convergence study showed the validity of the proposed method for a stable 5-tuple of elements and a stabilized equal-order combination. The application cases justified using the R13 equations instead of the traditional Navier--Stokes--Fourier models, owing to their capability to predict rarefaction effects in non-equilibrium gas flows. \paragraph{UFL Discussion and Outlook} This work showed the successful application of FEniCS's UFL to tensor-valued model equations. More general use cases involving equations with tensor rank \(r>2\) are possible using the ``\texttt{TensorElement}'' class of~\cite{fenics2020uflRepo} with the ``\texttt{shape=(d1,..,dr)}'' option.
In this case, one must implement the required operators using the summation convention analogously to \cref{s_implementationAndValidation}. While we used the CIP surface stabilization, UFL also allows for residual-based volume stabilization, demonstrated, e.g., for the GLS method, in~\cite{helanow2018stabilized}. \par Limitations of UFL include the FFC time \(t_{\text{FFC}}\) of order \(\mathcal{O}(1)\) (as reported in \cref{tab:applicationsPumpConvergenceTable,tab:applicationsEdgeflowConvergenceTable}), which is required once to compile the weak form. This compilation process also introduces another layer in the overall simulation pipeline, which might increase the debugging complexity. Although UFL is part of the FEniCS framework, it is unclear whether it can be used separately in other contexts, without a connection to FEniCS or the finite element method. \par However, for our particular use case, the generalization capabilities outweigh the possible limitations. Especially for quick prototyping of new model equations and their corresponding discretization, the UFL seems to be a practical choice. This holds in particular for other moment models, e.g.,~\cite{berghoff2020massively,koellermeier2017numerical,koellermeier2020spline}. Therefore, future work in rarefied gas applications could consider more complicated gas configurations~\cite{sarna2020moment,pavic2013maximum}, allow for more general gas molecules~\cite{cai2020regularized}, or use different closures~\cite{mcdonald2013affordable}.
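The operators needed for tensors of rank \(r>2\) are plain summation-convention expressions. The following NumPy sketch is a stand-in for UFL's index notation (it uses \texttt{einsum}, not the FEniCS/UFL API; the function names are illustrative) and shows two such rank-3 operators, full symmetrization and divergence:

```python
import numpy as np
from itertools import permutations

def sym3(S):
    """Full symmetrization of a rank-3 tensor in index notation:
    S_(ijk) = (1/3!) * sum over permutations pi of S_{pi(ijk)}."""
    perms = ["".join(p) for p in permutations("ijk")]
    return sum(np.einsum("ijk->" + p, S) for p in perms) / len(perms)

def div3(dS):
    """Divergence of a rank-3 tensor field at a point:
    (div S)_{jk} = d_i S_{ijk}, where dS[l, i, j, k] holds the
    partial derivative of S_{ijk} with respect to x_l."""
    return np.einsum("iijk->jk", dS)
```

The `einsum` index strings mirror the repeated-index contractions one would spell out in UFL, which is what makes the translation between the mathematical formulation and the code nearly one-to-one.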
\section{Introduction} \subsection{Statement of results} Sutured manifolds and sutured manifold hierarchies were introduced by Gabai in 1983 in \cite{gabai1983foliations} and subsequent papers. They are powerful tools in the study of $3$-dimensional topology. A sutured manifold is a compact oriented $3$-manifold with boundary, together with an oriented closed $1$-submanifold $\gamma$ on $\partial{M}$, which is called the suture. If $S\subset M$ is a properly embedded surface inside $M$, which satisfies some mild conditions, then we can perform a decomposition of $(M,\gamma)$ along $S$ and obtain a new sutured manifold $(M',\gamma')$. We call this process a sutured manifold decomposition and write $$(M,\gamma)\stackrel{S}{\leadsto}(M',\gamma').$$ A balanced sutured manifold $(M,\gamma)$ is a sutured manifold with some further restrictions on $M$ and $\gamma$. It was introduced by Juh\'asz in \cite{juhasz2006holomorphic} to accommodate the construction of Heegaard Floer homology on them. Later, when Kronheimer and Mrowka introduced sutured monopole and instanton Floer homologies, they also used the settings of balanced sutured manifolds. So, in this paper, we will only work with balanced sutured manifolds, though it should be understood that Gabai's results were initially proved for general sutured manifolds. A celebrated theorem proved by Gabai is the following. 
\begin{thm}[Gabai \cite{gabai1983foliations}]\label{thm: hierarchy} Suppose $(M,\gamma)$ is a taut balanced sutured manifold. Then there exists a finite sequence of sutured manifold decompositions \begin{equation}\label{eq: hierary} (M,\gamma)\stackrel{S_1}{\leadsto}(M_1,\gamma_1)\stackrel{S_2}{\leadsto}...\stackrel{S_n}{\leadsto}(M_n,\gamma_n), \end{equation} where $(M_i,\gamma_i)$ is taut for all $i$ and $(M_n,\gamma_n)$ is a product sutured manifold, meaning that there is an oriented surface $F$ with non-trivial boundary so that $$(M_n,\gamma_n)=([-1,1]\times F,\{0\}\times \partial{F}).$$ \end{thm} One original motivation for Gabai to establish Theorem \ref{thm: hierarchy} was to construct taut foliations on $3$-manifolds. In particular, he proved the following theorem. \begin{thm}[Gabai \cite{gabai1983foliations}]\label{thm: existence of finite depth taut foliations} Suppose $(M,\gamma)$ is a taut balanced sutured manifold. Then $(M,\gamma)$ admits a finite depth taut foliation. \end{thm} However, Gabai only proved the existence of a taut foliation with finite depth; he did not offer any bounds on how small the depth could be. In \cite{juhasz2010polytope}, Juh\'asz made the following conjecture. \begin{conj}[Juh\'asz \cite{juhasz2010polytope}]\label{conj: juhasz} Suppose $(M,\gamma)$ is a taut balanced sutured manifold with $H_2(M)=0$, and $${\rm rk}_{\intg_2}(SFH(M,\gamma))<2^{k+1}.$$ Then $(M,\gamma)$ admits a taut foliation of depth at most $2k$. \end{conj} Here, $SFH(M,\gamma)$ is the sutured (Heegaard) Floer homology of $(M,\gamma)$, introduced by Juh\'asz in \cite{juhasz2006holomorphic}. It is a finite-dimensional vector space over the field $\intg_2$ and is a topological invariant associated to $(M,\gamma)$. Following this line, Kronheimer and Mrowka further made the following conjecture.
\begin{conj}[Kronheimer and Mrowka \cite{kronheimer2010knots}]\label{conj: KM} Suppose $K\subset S^3$ is a knot, and consider the irreducible homomorphisms $$\rho:\pi_1(S^3(K))\ra SU(2)$$ which map a chosen meridian $m$ to the element $\mathbf{i}\in SU(2)$. Suppose that these homomorphisms are non-degenerate and that the number of conjugacy classes of such homomorphisms is less than $2^{k+1}$. Then, the knot complement $S^3(K)$ admits a taut foliation of depth at most $2k$, transverse to the boundary $\partial{S^3(K)}$. \end{conj} In this paper, we prove the following result, constructing a taut foliation whose depth is bounded in terms of the dimension of sutured instanton Floer homology. The sutured instanton Floer homology, denoted by $SHI$, is another type of Floer theory associated to $(M,\gamma)$, which was introduced by Kronheimer and Mrowka in \cite{kronheimer2010knots}. It is a finite-dimensional vector space over $\mathbb{C}$. Though the bound on the depth of taut foliations is not as sharp as the ones in Conjecture \ref{conj: juhasz} and Conjecture \ref{conj: KM}, to the author's knowledge, it is the first bound of this kind. \begin{thm}\label{thm: small depth taut foliations} Suppose $(M,\gamma)$ is a taut balanced sutured manifold with $H_2(M)=0$, and $${\rm dim}_{\mathbb{C}}SHI(M,\gamma)<2^{k+1}.$$ Then, $(M,\gamma)$ admits a taut foliation of depth at most $2^{k+6}$. \end{thm} \begin{cor}\label{cor: small depth taut foliations} Conjectures \ref{conj: juhasz} and \ref{conj: KM} hold if we replace the depth $2k$ of the taut foliation in the statements of the conjectures by $2^{k+6}$. \end{cor} \begin{rem} Theorem \ref{thm: small depth taut foliations} can be reformulated as saying that the minimal depth of all taut foliations is bounded by a multiple of the dimension of the sutured instanton Floer homology. However, the original statement in Theorem \ref{thm: small depth taut foliations} is more convenient for carrying out the proof.
\end{rem} \subsection{Strategy of the proof} In this subsection, we roughly explain the technical difficulty of attacking Conjecture \ref{conj: juhasz} and the idea behind the main result of the current paper. Though the original conjecture was stated in the Heegaard Floer setup, we present everything in the instanton setting, consistent with the main proof below. Suppose we have a taut balanced sutured manifold $(M,\gamma)$ with $H_2(M)=0$ whose instanton Floer homology has dimension smaller than $2^{k+1}$. Then, one can construct a sutured manifold hierarchy in $k$ stages so that in each stage, we decompose the sutured manifold twice, and the dimension of the Floer homology of the resulting balanced sutured manifold after the two decompositions is at most half of the previous one. See Juh\'asz \cite{juhasz2010polytope} or Ghosh and Li \cite{li2019decomposition}. Suppose, for simplicity, that all balanced sutured manifolds involved are horizontally prime. Then, in each stage, the first of the two decompositions is to make the balanced sutured manifold free of essential product annuli and essential product disks so that one can apply Proposition \ref{prop: decomposition drops the dimension by half} to perform the second decomposition. Thus, after $k$ stages and altogether $2k$ decompositions, we obtain a product sutured manifold, which admits a taut foliation of depth $0$. By Theorem \ref{thm: well groomed decomposition increase depth by 1}, if each one of the $2k$ decompositions is well-groomed, then we can glue the taut foliation on the product sutured manifold through the hierarchy and obtain a depth-$2k$ taut foliation on $(M,\gamma)$, thus proving Conjecture \ref{conj: juhasz}. In each stage, the second of the two decompositions comes from Proposition \ref{prop: decomposition drops the dimension by half} and is already well-groomed.
However, the first of the two decompositions is to decompose along a maximal disjoint union of essential product annuli, which is not necessarily well-groomed. So a naive attempt to attack Conjecture \ref{conj: juhasz} fails at this point. In this paper, instead of showing that the first of the two decompositions can be made well-groomed, we show that it can be decomposed into a sequence of decompositions where each decomposing surface is either well-groomed or has its boundary contained in an annular neighborhood of the suture. Furthermore, each such decomposition increases the depth of the taut foliation we glue back by at most one, and the total number of such decompositions can be bounded above by the dimension of the Floer homology of the sutured manifold we start with. To obtain this bound, recall that in each stage, the first of the two decompositions is to decompose along a maximal disjoint union of essential product annuli. The decomposition along one essential product annulus can be replaced by a sequence of decompositions where each decomposing surface is either well-groomed or has its boundary contained in an annular neighborhood of the suture. Furthermore, the total number of such decompositions is a fixed constant. So we only need to bound the number of product annuli that are involved. This is done through Proposition \ref{prop: bounding dimension by genus of the boundary}: if the balanced sutured manifold is free of essential product annuli and product disks (and we have assumed it to be horizontally prime), then the dimension of Floer homology bounds the genus of the boundary of the $3$-manifold and hence the number of components of the suture, which further gives a bound on the number of product annuli we need to decompose along, thus completing the proof. {\bf Acknowledgement}. This material is based upon work supported by the National Science Foundation under Grant No. 1808794.
The author would like to thank his advisor Tomasz S. Mrowka for his enormous help and valuable suggestions. The author would also like to thank the referee for helpful comments. \section{Preliminaries} In this paper, all the notations will be kept the same as in Ghosh and Li \cite{li2019decomposition}. So if a term has already been defined in that paper, we will not define it again. We have the following new definitions. \begin{defn}[Gabai \cite{gabai1983foliations}]\label{defn: well groomed surface} Suppose $(M,\gamma)$ is a balanced sutured manifold. A surface $S$ is called {\it well-groomed} if $\partial S$ is essential in $H_1(\partial M)$, and the following is true. (1) For each component $A$ of the annular neighborhood $A(\gamma)$, $S\cap A$ consists of either a collection of parallel and coherently oriented non-separating simple arcs or a collection of parallel simple closed curves each oriented in the same way as $\gamma\cap A$. (2) For each component $V$ of $R(\gamma)$, $S\cap V$ consists of either a collection of parallel and coherently oriented non-separating simple arcs or a collection of parallel and coherently oriented non-separating simple closed curves. \end{defn} \begin{defn}[Gabai \cite{gabai1983foliations}]\label{defn: taut foliations} A transversely oriented co-dimension-one foliation $\mathfrak{F}$ on a balanced sutured manifold $(M,\gamma)$ is called {\it taut} if $\mathfrak{F}$ is transverse to $A(\gamma)$, tangent to $R(\gamma)$ with normal direction pointing inward on $R_-(\gamma)$ and outward on $R_+(\gamma)$, $\mathfrak{F}|_{A(\gamma)}$ has no Reeb components, and there exists a (not necessarily connected and not necessarily closed) curve $c$ so that each leaf of $\mathfrak{F}$ intersects $c$ transversely and non-trivially. \end{defn} \begin{defn}[Gabai \cite{gabai1983foliations}]\label{defn: depth of foliations} Let $M$ be a compact oriented $3$-manifold, and $\mathfrak{F}$ a co-dimension-one foliation. 
We say a leaf $L$ of $\mathfrak{F}$ has {\it depth} $0$ if $L$ is compact. Suppose we have defined depth $j$ for all $j\leq k$; then we say a leaf $L$ of $\mathfrak{F}$ has {\it depth} $k+1$ if $\widebar{L}\backslash L$ is a union of leaves of depth at most $k$ and contains at least one leaf of depth $k$. The foliation $\mathfrak{F}$ is said to have {\it depth} $k$ if all its leaves have depth at most $k$, and it admits at least one leaf of depth $k$. If such a $k$ does not exist, then we say $\mathfrak{F}$ has infinite depth. \end{defn} The following two propositions from Ghosh and Li \cite{li2019decomposition} are key ingredients of the proof of Theorem \ref{thm: small depth taut foliations}. \begin{prop}\label{prop: decomposition drops the dimension by half} Suppose $(M,\gamma)$ is a connected balanced sutured manifold so that $H_2(M)=0$ and it is horizontally prime, taut, and free of essential product annuli and product disks. If $(M,\gamma)$ is not a product sutured manifold, then there exists a well-groomed surface $S\subset M$ and a sutured manifold decomposition $$(M,\gamma)\stackrel{S}{\leadsto}(M',\gamma')$$ so that $(M',\gamma')$ is taut, and $${\rm dim}_{\mathbb{C}}SHI(M',\gamma')\leq\frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M,\gamma).$$ \end{prop} \bpf This is essentially \cite[Proposition 5.8]{li2019decomposition}. In the original statement of the proposition, the homology class $\alpha\in H_2(M,\partial M)$ can be chosen freely, so we can choose a well-groomed class as guaranteed by \cite[Lemma 2.8]{juhasz2010polytope}. Then, Proposition \ref{prop: decomposition drops the dimension by half} follows. \end{proof} \begin{prop}\label{prop: bounding dimension by genus of the boundary} Suppose $(M,\gamma)$ is a connected balanced sutured manifold so that $H_2(M)=0$ and it is horizontally prime, taut, and free of essential product annuli and product disks.
Then, $${\rm dim}_{\mathbb{C}}SHI(M,\gamma)\geq g(\partial{M})+1.$$ \end{prop} \bpf By Corollary 5.12 in Ghosh and Li \cite{li2019decomposition}, we know that $${\rm dim}_{\mathbb{C}}SHI(M,\gamma)\geq {\rm dim}_{\mathbb{R}}H^2(M,\partial M;\mathbb{R})+1.$$ Since $H_2(M)=0$ and $M$ is connected, we know that $${\rm dim}_{\mathbb{R}}H^2(M,\partial M;\mathbb{R})=g(\partial{M}).$$ \end{proof} To construct a taut foliation with controlled depth, we also need the following theorem from Gabai \cite{gabai1983foliations}. \begin{thm}\label{thm: well groomed decomposition increase depth by 1} Suppose $(M,\gamma)$ is a balanced sutured manifold. Suppose $S\subset M$ is a well-groomed surface so that we have a decomposition $$(M,\gamma)\stackrel{S}{\leadsto}(M',\gamma').$$ Suppose further that $(M',\gamma')$ admits a taut foliation of depth $k$; then $(M,\gamma)$ admits a taut foliation of depth at most $k+1$. \end{thm} The following proposition is essentially \cite[Proposition 8.10]{juhasz2008floer} and is adapted to the instanton theory by Kronheimer and Mrowka. \begin{prop}[Kronheimer and Mrowka {\cite[Proposition 6.7]{kronheimer2010knots}}]\label{prop: decompose along product annuli do not increase SHI} Suppose $(M,\gamma)$ is a taut balanced sutured manifold so that $H_2(M)=0$. Suppose $A\subset (M,\gamma)$ is a non-trivial product annulus. Suppose further that $(M',\gamma')$ is obtained from $(M,\gamma)$ by decomposing along $A$. Then we have $${\rm dim}_{\mathbb{C}}SHI(M',\gamma')\leq {\rm dim}_{\mathbb{C}}SHI(M,\gamma).$$ \end{prop} In the next section, the following definition is also useful. \begin{defn}\label{defn: quasi-horizontal surfaces} Suppose $(M,\gamma)$ is a balanced sutured manifold. A properly embedded surface $S$ is called {\it quasi-horizontal} if $\partial S\cap R(\gamma)=\emptyset$ and, for any component $A$ of $A(\gamma)$, $S\cap A$ is either empty or consists of parallel simple closed curves, each parallel to $\gamma$ as oriented curves.
\end{defn} Suppose $(M,\gamma)$ is a balanced sutured manifold and $S\subset (M,\gamma)$ is a quasi-horizontal surface. Suppose further that $(M',\gamma')$ is obtained from $(M,\gamma)$ by decomposing along $S$. Then we know that $R_+(\gamma')$ has a component $V_+$ and $R_-(\gamma')$ has a component $V_-$, so that $V_+$ and $V_-$ are both diffeomorphic to $S$ and $(M,\gamma)$ is obtained from $(M',\gamma')$ by identifying $V_+$ with $V_-$ via the diffeomorphisms to $S$. The following lemma is straightforward. \begin{lem}\label{lem: glue taut foliation through quasi-horizontal surfaces} Suppose $(M',\gamma')$ is a taut balanced sutured manifold admitting a finite depth taut foliation $\mathfrak{F}'$. Suppose $V_+$ is a component of $R_+(\gamma')$ and $V_-$ is a component of $R_-(\gamma')$. Suppose further that there is an orientation-preserving diffeomorphism $f:V_+\ra V_-$. We can glue $V_+$ to $V_-$ via $f$ to obtain a new sutured manifold $(M,\gamma)$. Then there exists a taut foliation $\mathfrak{F}$ on $(M,\gamma)$ whose depth is at most that of $\mathfrak{F}'$. \end{lem} \section{Constructing taut foliations of bounded depth} \begin{lem}\label{lem: curves on surface} Suppose $\Sigma$ is a closed connected oriented surface of genus $g\geq 1$. Then there are at most $3g-2$ connected simple closed curves on $\Sigma$ that are each non-separating, pairwise disjoint, and pairwise non-parallel. \end{lem} \bpf Suppose $\gamma$ is a collection of connected simple closed curves on $\Sigma$ that are each non-separating, pairwise disjoint, and pairwise non-parallel. Write $|\gamma|$ for the number of components of $\gamma$. We want to show that $|\gamma|\leq 3g-2$. First assume $g>1$. Write $S=\Sigma\backslash N(\gamma)$, where $N(\gamma)$ is an annular neighborhood of $\gamma$. By assumption, each component of $S$ has negative Euler characteristic.
Since $\chi(S)=\chi(\Sigma)=2-2g$, we know that $|S|$, i.e., the number of components of $S$, is at most $2g-2$. Also, we know that $$|\partial{S}|-2\cdot |S|\leq -\chi(S)=2g-2,$$ so $$|\gamma|=\frac{1}{2}|\partial S|\leq 3g-3.$$ When $g=1$, clearly $|\gamma|\leq 1$ and we are done. \end{proof} Now we are ready to prove the main result of the paper. \bpf[Proof of Theorem \ref{thm: small depth taut foliations}] We prove the theorem by induction on $k$. When $k=0$, by \cite[Theorem 1.2]{li2019decomposition}, we know that $(M,\gamma)$ is a product sutured manifold, and hence it admits a taut foliation of depth $0$. Suppose the theorem holds for $k<k_0$. Now we argue for the case $k=k_0$. (We will keep writing $k$ instead of $k_0$.) {\bf Case 1}. The balanced sutured manifold $(M,\gamma)$ is not horizontally prime. Then, we can find a non-boundary-parallel horizontal surface $S$ and perform a decomposition: $$(M,\gamma)\stackrel{S}{\leadsto}(M_{1},\gamma_{1}),$$ with $(M_{1},\gamma_{1})$ being a disjoint union $$(M_{1},\gamma_{1})= (M_{2},\gamma_{2})\sqcup(M_{3},\gamma_{3}).$$ Since $S$ is not boundary parallel, we conclude that $${\rm dim}_{\mathbb{C}}SHI(M_{2},\gamma_2)\leq \frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M,\gamma)$$ and $${\rm dim}_{\mathbb{C}}SHI(M_3,\gamma_3)\leq \frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M,\gamma).$$ Hence, by the inductive hypothesis, $(M_2,\gamma_2)$ and $(M_3,\gamma_3)$ both admit a taut foliation of depth at most $2^{k+5}$. Since $S$ is quasi-horizontal, we are done by Lemma \ref{lem: glue taut foliation through quasi-horizontal surfaces}. {\bf Case 2}. The balanced sutured manifold $(M,\gamma)$ is horizontally prime. According to the proof of \cite[Proposition 7.6]{juhasz2010polytope} (also cf.
\cite[Proposition 2.16]{juhasz2010polytope} and \cite[Lemma 4.2]{scharlemann2006three}), we know that there is a disjoint union $A$ of non-trivial product annuli, and a sutured manifold decomposition $$(M,\gamma)\stackrel{A}{\leadsto}(M',\gamma'),$$ so that $(M',\gamma')$ is taut, horizontally prime, reduced, and with $H_2(M')=0$. Let $(M_1,\gamma_1)$ be the union of components of $(M',\gamma')$ that are not product sutured manifolds. There is a minimal union of product annuli $A'\subset A$ so that the decomposition along $A'$ results in $$(M,\gamma)\stackrel{A'}{\leadsto}(M_{1},\gamma_{1})\sqcup(M_2,\gamma_2),$$ where $(M_2,\gamma_2)$ is a product sutured manifold. Writing $$(M_3,\gamma_3)=(M_1,\gamma_1)\cup (M_2,\gamma_2),$$ we know that $(M_1,\gamma_1)$ is also taut, horizontally prime, reduced, and with $H_2(M_1)=0$. Suppose the components of $A'$ are \begin{equation}\label{eq: definition of n} A'=A_1\cup...\cup A_n. \end{equation} {\bf Claim 1}. We have $$n<6\cdot 2^{k+1}.$$ To prove this claim, by Proposition \ref{prop: decompose along product annuli do not increase SHI}, it suffices to show that $$n<6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1)).$$ First assume that $(M_1,\gamma_1)$ is connected. By Proposition \ref{prop: bounding dimension by genus of the boundary}, we know that $$g(\partial M_1)< {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1)).$$ Assume that $|\gamma_1|>6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1))-4$; then, by Lemma \ref{lem: curves on surface} and the pigeonhole principle, we know that there are three components of $\gamma_1$ that are parallel to each other. Hence, there is a non-trivial product annulus separating the three parallel sutures from the rest, which contradicts the fact that $(M_1,\gamma_1)$ is reduced.
Thus, we conclude that $$n<|\gamma_1|\leq 6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1))-4<6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1)).$$ In general, if $(M_1,\gamma_1)$ is disconnected, then we can proceed by a second induction. Suppose we have proved that $$n<6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1))$$ if $(M_1,\gamma_1)$ has $m$ components. If $(M_1,\gamma_1)$ has $m+1$ components, then assume $$(M_1,\gamma_1)=(M_{1,1},\gamma_{1,1})\cup(M_{1,2},\gamma_{1,2}),$$ where $(M_{1,1},\gamma_{1,1})$ has $m$ components and $(M_{1,2},\gamma_{1,2})$ is connected. Then by the inductive hypothesis, we know that \begin{equation*}\begin{aligned} |\gamma_1|<&6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_{1,1},\gamma_{1,1}))+6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_{1,2},\gamma_{1,2}))\\ \leq&6\cdot {\rm dim}_{\mathbb{C}}(\underline{\rm SHI}(M_1,\gamma_1)). \end{aligned}\end{equation*} The last inequality holds, since by \cite[Proposition 6.5]{kronheimer2010knots} we know that $$\underline{\rm SHI}(M_1,\gamma_1)\cong\underline{\rm SHI}(M_{1,1},\gamma_{1,1})\otimes\underline{\rm SHI}(M_{1,2},\gamma_{1,2}),$$ and, by construction, neither $(M_{1,1},\gamma_{1,1})$ nor $(M_{1,2},\gamma_{1,2})$ is a product, so each has Floer homology of dimension at least two. {\bf Claim 2}. There is a sequence of sutured manifold decompositions $$(M,\gamma)\stackrel{S_1}{\leadsto}(N_{1},\delta_{1})...\stackrel{S_n}{\leadsto}(N_{n},\delta_{n}),$$ where $n$ is defined as in Formula (\ref{eq: definition of n}), so that the following is true. (1) Each $S_i$ is either well-groomed or quasi-horizontal. (2) Each $(N_i,\delta_i)$ is taut. (3) Suppose the components of $\gamma_3$ are $$\gamma_3=\theta_1\cup...\cup\theta_m.$$ Then, for each $i=1,...,m$, there is a compact connected oriented surface-with-boundary $F_i$ satisfying the following properties.
(a) For $i=1,...,m$, there is an orientation reversing embedding $$f_i:\theta_i\hookrightarrow \partial{F}_i.$$ (b) Write $$F=F_1\cup...\cup F_m ~{\rm and}~ f=f_1\cup...\cup f_m,$$ then we have $$N_n=M_3\mathop{\cup}_{f}[-1,1]\times F~{\rm and}~\delta_n=(\gamma_3\cup\{0\}\times\partial{F})\cap\partial (M_3\mathop{\cup}_{f}[-1,1]\times F).$$ To prove this claim, we focus on the case when $n=1$. The general case follows immediately by induction. When $n=1$, we have a sutured manifold decomposition $$(M,\gamma)\stackrel{A_1}{\leadsto}(M_{3},\gamma_{3})=(M_{1},\gamma_{1})\cup (M_{2},\gamma_{2}).$$ Write $\partial{A}_1=\alpha_+\cup\alpha_-$ so that $\alpha_{\pm}\subset R_{\pm}(\gamma)$. Write $V_{\pm}$ for the component of $R_{\pm}(\gamma)$ that contains $\alpha_{\pm}$. We distinguish several cases. When $\alpha_{+}$ and $\alpha_{-}$ are non-separating in $V_+$ and $V_-$, respectively, we know that $A_1$ is already well-groomed, and we just take $S_1=A_1$. Then, we have $(N_1,\delta_1)=(M_3,\gamma_3)$. So, for $i=1,...,m$, we simply pick $F_i$ to be an annulus and identify $\theta_i$ with any component of $\partial{F}_i$ but with orientation reversed. When $\alpha_+$ is separating, and $\alpha_-$ is non-separating, we know that $-\alpha_+$ co-bounds a sub-surface $F_1$ in $V_+$ with part of $\partial V_+$. We can glue $F_1$ to $A_1$ and push it into the interior of $M$. Write the resulting surface as $S_1$; then $\partial S_1=\alpha_-\cup (\partial F_1\backslash\alpha_+)$, and by Definition \ref{defn: well groomed surface}, $S_1$ is well-groomed. After the decomposition along $A_1$, there is a component of $\gamma_3$ corresponding to $\alpha_+$, which we denote by $\theta_1$. Then, via $\alpha_+$, $\theta_1$ is identified with a component of $\partial F_1$ with orientation reversed.
It is straightforward to check that $$N_1=M_3\mathop{\cup}_{\theta_1} [-1,1]\times F_1~{\rm and}~\delta_1=(\gamma_3\cup\{0\}\times\partial{F}_1)\cap\partial (M_3\mathop{\cup}_{\theta_1} [-1,1]\times F_1).$$ We can take $F_2,...,F_{m}$ to be annuli just as in the previous case. When $\alpha_+$ and $\alpha_-$ are both separating, assume that $-\alpha_+$ co-bounds a subsurface $F_+\subset V_+$ together with part of $\partial V_+$, and $-\alpha_-$ co-bounds a subsurface $F_-\subset V_-$ together with part of $\partial V_-$. We distinguish two cases. {\bf Case 2.1}. At least one of $F_+$ and $F_-$ has a disconnected boundary. We can glue $F_+$ and $F_-$ to $A_1$ to form the new decomposing surface $S_1$, which is quasi-horizontal in the sense of Definition \ref{defn: quasi-horizontal surfaces}. The decomposition of $(M,\gamma)$ along $A_1$ yields a sutured manifold $(M_3,\gamma_3)$, and two components of $\gamma_3$, which we call $\theta_+$ and $\theta_-$, correspond to $\alpha_+$ and $\alpha_-$ respectively. As in the above paragraph, $\theta_+$ and $\theta_-$ are identified with a component of $\partial F_+$ and a component of $\partial F_-$, respectively, via $\alpha_+$ or $\alpha_-$. Furthermore, if $(N_1,\delta_1)$ is the result of the decomposition of $(M,\gamma)$ along $A_1$, then we know that $$N_1=M_3\cup [-1,1]\times F_+ \cup [-1,1]\times F_-.$$ Note that $(M_3,\gamma_3)$ is taut by construction, and $(N_1,\delta_1)$ is obtained from $(M_3,\gamma_3)$ by gluing the product regions $[-1,1]\times F_{\pm}$. By Claim 3, which we will prove later, we know that neither $F_+$ nor $F_-$ is a disk, and hence $(N_1,\delta_1)$ is also taut. \begin{rem} Note that in this case, the surface $S_1$ might be a horizontal surface instead of just quasi-horizontal (cf. \cite[Theorem 5.1]{ni2007knot} and \cite[Proposition 6.7]{kronheimer2010knots}). If that happens, then since $(M,\gamma)$ is horizontally prime, $S_1$ must be parallel to the boundary.
Thus decomposing along $S_1$ peels off a product sutured manifold from $(M,\gamma)$. However, the construction of $(N_1,\delta_1)$ is left unchanged, so this subtlety causes no trouble. \end{rem} {\bf Case 2.2}. If both $F_+$ and $F_-$ have connected boundary, then we can simply replace the surface $A_1$ by $-A_1$ to perform the sutured manifold decomposition, and then it falls into Case 2.1. This concludes the proof of Claim 2. Recall that we have $$(M,\gamma)\stackrel{A'}{\leadsto}(M_{3},\gamma_{3})=(M_{1},\gamma_{1})\cup (M_{2},\gamma_{2}).$$ By construction, $(M_1,\gamma_1)$ is reduced, and each component of it is not a product sutured manifold. By \cite[Lemma 2.13]{juhasz2010polytope}, it is also free of essential product disks. Thus, Proposition \ref{prop: decomposition drops the dimension by half} applies and there is a well-groomed surface $S\subset (M_1,\gamma_1)$ and a sutured manifold decomposition $$(M_1,\gamma_1)\stackrel{S}{\leadsto}(M_{4},\gamma_{4})$$ so that $${\rm dim}_{\mathbb{C}}SHI(M_4,\gamma_4)\leq\frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M_1,\gamma_1).$$ Note that $(M_2,\gamma_2)$ is a product sutured manifold, and $A'$ consists of product annuli, so we know from \cite[Proposition 6.5]{kronheimer2010knots} and Proposition \ref{prop: decompose along product annuli do not increase SHI} that $${\rm dim}_{\mathbb{C}}SHI(M_4,\gamma_4)\leq\frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M,\gamma).$$ From now on, we will still write $(M_4,\gamma_4)$ for the disjoint union $(M_4,\gamma_4)\sqcup (M_2,\gamma_2)$, so we have a well-groomed surface $S\subset (M_3,\gamma_3)$ and a taut sutured manifold decomposition $$(M_3,\gamma_3)\stackrel{S}{\leadsto}(M_{4},\gamma_{4})$$ with $${\rm dim}_{\mathbb{C}}SHI(M_4,\gamma_4)\leq\frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M,\gamma).$$ Next, we need to modify $(N_n,\delta_n)$ for the purpose of further discussions. Suppose for some $i\in\{1,...,m\}$, the surface $F_i$ has a connected boundary.
Then, $\partial{F}_i$ is necessarily identified with $\theta_i$, with orientation reversed. {\bf Claim 3}. In the proof of Claim 2, if the surface $F_i$ has a connected boundary, then it is not a disk. Suppose, on the contrary, that $F_i$ is a disk. Recall that, by construction, the surface $F_i$ is a subsurface of $R(\gamma)\subset \partial M$, and there is a component $A_j$ of $A'$ and a boundary component $\alpha$ of $A_j$ so that $\partial{F_i}=-\alpha$. Let $\alpha'$ be the other boundary component of $A_j$; then we know that $A_j\cup F_i$ is a disk whose boundary is $\alpha'$. Since $(M,\gamma)$ is taut, we know that $\alpha'$ also bounds a disk $D\subset R(\gamma)$. Then, $A_j\cup F_i\cup D$ is a $2$-sphere. Since $(M,\gamma)$ is taut, this $2$-sphere bounds a $3$-ball, and hence $A_j$ is a trivial product annulus, which contradicts the way we chose $A_j$. Now we explain how to modify $(N_{n},\delta_n)$: Suppose for some $i\in\{1,...,m\}$, $F_i$ has a connected boundary and is glued to a component of $\gamma_1$. By Claim 3, $F_i$ is not a disk, so we can pick a non-separating curve $\beta_i$ on $F_i$. Then, $$[-1,1]\times\beta_i\subset [-1,1]\times F_i\subset (N_n,\delta_n)$$ is a product annulus, which is also a well-groomed surface. Let $A''$ be the union of all such product annuli, one for each $F_i$ that has a connected boundary glued to a component of $\gamma_1$. Since $(M_1,\gamma_1)$ is reduced, we can argue in the same way as in the proof of Claim 1 and conclude the following. {\bf Claim 4}. We have $$|A''|\leq|\gamma_1|< 6\cdot 2^{k+1}.$$ We have a sutured manifold decomposition $$(N_n,\delta_n)\stackrel{A''}{\leadsto}(N_n',\delta_n').$$ By construction, we know that there are connected compact oriented surfaces $F_i'$, which are either $F_i$ or $F_i$ cut open along $\beta_i$, so that the following is true. (Recall that $\theta_i$ are the components of $\gamma_3$.) (a') Each $F_i'$ has at least two boundary components.
(b') For $i=1,...,m$, there is an orientation reversing embedding $$f'_i:\theta_i\hookrightarrow \partial{F}'_i.$$ (c') Write $$F'=F'_1\cup...\cup F'_m ~{\rm and}~ f'=f'_1\cup...\cup f'_m,$$ then we have $$N'_n=M_3\mathop{\cup}_{f'}[-1,1]\times F'~{\rm and}~\delta'_n=(\gamma_3\cup\{0\}\times\partial{F'})\cap\partial (M_3\mathop{\cup}_{f'}[-1,1]\times F').$$ Next, we want to extend the well-groomed surface $S\subset (M_3,\gamma_3)$ to a well-groomed surface $S'$ on $(N_n',\delta_n')$. To do this, we extend $S$ across all $[-1,1]\times F_i'$ as follows: For $i\in\{1,...,m\}$, let $A(\theta_i)$ be the annular neighborhood of $\theta_i\subset \gamma_3$. If $S\cap A(\theta_i)=\emptyset$, then we are already done. If $S\cap A(\theta_i)$ consists of parallel copies of $\theta_i$, then we simply glue the same number of copies of $F_i'$ to $S$. If $S\cap A(\theta_i)$ is neither empty nor a collection of simple closed curves, then $S$ being well-groomed implies that $S\cap \theta_i$ is a finite set of points of the same sign. Write $\sigma_i'$ for the component of $\partial{F}_i'$ such that $\sigma_i'=f'(\theta_i)$. Let $\tau_i'\subset F_i'$ be a disjoint union of parallel arcs so that each component of $\tau_i'$ has one end point on $\sigma_i'$ and the other end point on $\partial F_i'\backslash \sigma_i'$, which is non-empty by condition (a'). We further require that $$\tau_i'\cap \sigma_i'=f'(S\cap \theta_i).$$ Then, we can glue $[-1,1]\times \tau_i'$ to $S$ along $[-1,1]\times (\tau_i'\cap \sigma_i')$. Performing this operation for all $i$, we obtain a surface $S'\subset (N_n',\delta_n')$. To show that $S'$ is well-groomed, first it is straightforward to check that $\partial S'$ is essential in $H_1(\partial N_n')$, and condition (1) in Definition \ref{defn: well groomed surface} is satisfied by the construction of $S'$. To show that condition (2) also holds, suppose $V'_n$ is a component of $R(\delta_n')$.
By construction, there exists a component $V_3$ of $R(\gamma_3)$, and surfaces $F_{i_1}$,..., $F_{i_{l}}$ so that $$V'_n=V_3\cup F_{i_1}\cup...\cup F_{i_l}.$$ It is crucial that our construction of $(N_n',\delta_n')$ ensures that each component of $R(\delta_n')$ contains only one component of $R(\gamma_3)$. If $\partial S\cap V_3$ consists of parallel and coherently oriented non-separating simple closed curves, then $$\partial S'\cap V_n'=\partial S\cap V_3.$$ If $\partial S\cap V_3$ consists of parallel and coherently oriented properly embedded simple arcs, then each arc intersects two components of $\partial V_3$, say $\theta_{i_1}$ and $\theta_{i_2}$. Then we know that each component of $\partial S'\cap V_n'$ is an arc obtained by gluing three pieces together: a component of $\tau_{i_1}'$, a component of $\tau_{i_2}'$, and a component of $\partial S\cap V_3$. Hence by Definition \ref{defn: well groomed surface}, $S'$ is indeed well-groomed. Now there are two decompositions: $$(M_3,\gamma_3)\stackrel{S}{\leadsto}(M_{4},\gamma_{4}){\rm~and~}(N_n',\delta_n')\stackrel{S'}{\leadsto}(N_{n+1},\delta_{n+1}).$$ {\bf Claim 5}. We have $$SHI(M_4,\gamma_4)\cong SHI(N_{n+1},\delta_{n+1}).$$ To prove this claim, for any $i\in\{1,...,m\}$, let $A(\theta_i)$ be the annular neighborhood of $\theta_i\subset \gamma_3$. If $S\cap A(\theta_i)=\emptyset$, then $A(\theta_i)$ survives in $M_4$, and $\theta_i$ is a component of $\gamma_4$. To obtain $N_{n+1}$ from $M_4$, we need to glue $[-1,1]\times F_i'$ to $M_4$ along $\theta_i$. If $S\cap A(\theta_i)$ consists of parallel copies of $\theta_i$, then $(N_{n+1},\delta_{n+1})$ is obtained from $(M_4,\gamma_4)$ by gluing a few copies of $[-1,1]\times F_i'$. By \cite[Proposition 6.7]{kronheimer2010knots}, gluing product regions will not change the sutured instanton Floer homology. Hence, it remains to deal with the case when $S\cap \theta_i\neq\emptyset$.
Suppose $$S\cap \theta_i=\{p_1,...,p_{s_i}\},$$ where $p_1,...,p_{s_i}$ are labeled according to the orientation of $\theta_i$. Let $\theta_i''$ be the part of $\theta_i$ from $p_{s_i}$ to $p_1$. Then, $\theta_i''$ does not contain any other $p_j$. Recall that there is a collection of arcs $\tau_i'\subset F_i'$. Note that $F_i'\backslash \tau_i'$ consists of a few disks and a large piece $F_{i}''$ that contains most of $F_i'$. Furthermore, there is a component $\sigma_i''$ of $\partial F_i'\backslash \tau_i'\subset \partial F_{i}''$ so that $\sigma_i''=f'(\theta_i'')\subset \partial F_i''$. It is straightforward to check that, to obtain $N_{n+1}$ from $M_4$, we need to glue $[-1,1]\times F_i''$ to $M_4$ along $[-1,1]\times \sigma_i''$. Note that $\sigma_i''$ is an arc, so topologically $[-1,1]\times \sigma_i''$ is a disk which intersects the suture $\gamma_4$ along an arc $\theta_i''$, and intersects the suture $\{0\}\times\partial{F}_i''$ of the product sutured manifold $([-1,1]\times F_{i}'',\{0\}\times \partial F_i'')$ along another arc $\{0\}\times\sigma_i''$. Hence, this gluing coincides with the setting of attaching a contact $1$-handle to the disjoint union $$(M_4,\gamma_4)\sqcup([-1,1]\times F_{i}'',\{0\}\times \partial F_i''),$$ in the sense of Baldwin and Sivek \cite{baldwin2016instanton}. Since neither taking a disjoint union with a product sutured manifold nor attaching a contact $1$-handle changes sutured instanton Floer homology, we conclude that $$SHI(M_4,\gamma_4)\cong SHI(N_{n+1},\delta_{n+1}).$$ Finally, we are ready to finish the induction. We have a sequence of decompositions: \begin{equation}\label{eq: sequence of decompositions} (M,\gamma)\stackrel{S_1}{\leadsto}(N_{1},\delta_{1})\stackrel{S_2}{\leadsto}...\stackrel{S_n}{\leadsto}(N_{n},\delta_{n})\stackrel{A''}{\leadsto}(N_n',\delta_n')\stackrel{S'}{\leadsto}(N_{n+1},\delta_{n+1}).
\end{equation} We know from Claim 5 that \begin{equation*}\begin{aligned}{\rm dim}_{\mathbb{C}}SHI(N_{n+1},\delta_{n+1})&={\rm dim}_{\mathbb{C}}SHI(M_4,\gamma_4)\\ &\leq \frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M_3,\gamma_3)\\ &\leq \frac{1}{2}{\rm dim}_{\mathbb{C}}SHI(M,\gamma)\\ &<2^{k}. \end{aligned}\end{equation*} Thus, the inductive hypothesis applies to $(N_{n+1},\delta_{n+1})$, and there is a taut foliation $\mathfrak{F}'$ of depth at most $2^{k+5}$ on $(N_{n+1},\delta_{n+1})$. We now go through decomposition (\ref{eq: sequence of decompositions}) to construct a taut foliation on $(M,\gamma)$. First, each $S_i$ is either well-groomed or quasi-horizontal by Claim 2, and $n<6\cdot 2^{k+1}$ by Claim 1. Second, each component of $A''$ is well-groomed and, since the components of $A''$ are contained in pairwise disjoint product regions of $N_n$, decomposing along any subset of $A''$ keeps each of the remaining components of $A''$ well-groomed. By Claim 4, $|A''|< 6\cdot 2^{k+1}$. Finally, there is one last decomposition along the well-groomed surface $S'$, so by Theorem \ref{thm: well groomed decomposition increase depth by 1} and Lemma \ref{lem: glue taut foliation through quasi-horizontal surfaces}, there is a taut foliation $\mathfrak{F}$ on $(M,\gamma)$ of depth at most $$6\cdot 2^{k+1}+6\cdot 2^{k+1}+1+2^{k+5}=7\cdot 2^{k+3}+1<8\cdot 2^{k+3}=2^{k+6}.$$ Hence, the inductive step is completed, and we finish the proof of Theorem \ref{thm: small depth taut foliations}. \end{proof} \begin{cor}\label{cor: SFH} Suppose $(M,\gamma)$ is a taut balanced sutured manifold with $H_2(M)=0$, and $${\rm rk}_{\intg_2}(SFH(M,\gamma))<2^{k+1},$$ then $(M,\gamma)$ admits a taut foliation of depth at most $2^{k+6}$. \end{cor} \bpf The proof of Theorem \ref{thm: small depth taut foliations} applies verbatim. \end{proof} The above corollary can be used to prove the following, which gives a partial answer to \cite[Question 9.14]{juhasz2008floer}.
\begin{cor}\label{cor: HFK} Let $K$ be a knot in a rational homology $3$-sphere $Y$ so that the knot complement $Y(K)$ is irreducible. Suppose further that $k$ is a positive integer so that $${\rm rk}_{\intg_2}\widehat{HFK}(Y,K,g(K))<2^{k}.$$ Then, $Y\backslash N(K)$ admits a taut foliation of depth at most $2^{k+5}$ transverse to the boundary of $N(K)$. \end{cor} \bpf We have a sutured manifold $Y\backslash N(K)$, with toroidal suture. Take $S$ to be a minimal genus rational Seifert surface of $K$; then we have a sutured manifold decomposition \begin{equation}\label{eq: decomposing knot complement in rational homology sphere} (Y\backslash N(K))\stackrel{S}{\leadsto}(M,\gamma), \end{equation} and we know that $$SFH(M,\gamma)\cong \widehat{HFK}(Y,K,g(K)).$$ Thus, Corollary \ref{cor: SFH} applies and we obtain a taut foliation on $(M,\gamma)$ of depth at most $2^{k+5}$. We can further glue it along the decomposition (\ref{eq: decomposing knot complement in rational homology sphere}), which will not increase the depth of the taut foliation. Hence, we are done. \end{proof} \bpf[Proof of Corollary \ref{cor: small depth taut foliations}] On the knot complement $S^3(K)=S^3\backslash N(K)$, we can pick $\Gamma_{\mu}$ to be a suture consisting of two meridians. From Corollary 4.2 in Kronheimer and Mrowka \cite{kronheimer2010instanton}, we know that $${\rm dim}_{\mathbb{C}}SHI(S^3(K),\Gamma_{\mu})< 1+2^{k+2}.$$ Picking a minimal genus Seifert surface $S$ of $K$, we know that there is a decomposition $$(S^3(K),\Gamma_{\mu})\stackrel{S}{\leadsto}(M,\gamma),$$ and by \cite[Lemma 5.7 and Proposition 5.11]{kronheimer2010knots}, we know that $${\rm dim}_{\mathbb{C}}SHI(M,\gamma)<2^{k+1}.$$ Hence, we can apply Theorem \ref{thm: small depth taut foliations}, and there is a taut foliation on $(M,\gamma)$ of depth at most $2^{k+6}$. Note that we can also regard $S^3(K)$ as a sutured manifold with toroidal sutures, and decomposing $S^3(K)$ along $S$ also gives rise to $(M,\gamma)$.
So, we can glue the just obtained taut foliation on $(M,\gamma)$ along this latter decomposition to conclude the proof of Corollary \ref{cor: small depth taut foliations}. \end{proof}
\section{Introduction} The light meson is an important topic in hadron physics. The large nonperturbative effect in the light meson makes it relatively difficult to explore its internal structure. For the same reason, it is a wonderful place to study nonperturbative QCD. Though great progress has been achieved in the study of light meson spectroscopy during the last few decades, the internal structure of the light meson is still unclear, such as the debates about the $\sigma (500)$, $a_{0}/f_{0}(980)$, $\kappa $, and $f_{1}(1285)$~\cite{Guo:2017jvc}. Hence, many large experimental facilities will be working in this research area, such as LHCb, Belle II, and the 12 GeV CEBAF. In particular, a new detector, GlueX, has been installed at CEBAF after the 12 GeV upgrade, which will focus on light meson studies with electron or photon beams~\cite{Dudek:2012vr}. Like light meson photoproduction off the nucleon, pion-induced light meson production is also an important way to study the internal structure of the light meson. This process is accessible at J-PARC~\cite{Kumano:2015gna} and COMPASS~\cite{Nerling:2012er} with high-energy secondary pion beams, which provide a good opportunity to study the light meson, combined with the high-luminosity experiments at CEBAF with an electromagnetic probe. Among the light mesons, $f_{1}(1285)$ attracts much attention. Its internal structure has been studied for many years and is a long-standing problem. The Particle Data Group (PDG) lists $f_{1}(1285)$ as an axial-vector state with quantum numbers $I^{G}(J^{PC})=0^{+}(1^{++})$~\cite{Olive:2016xmw}. It has been suggested as a dynamically generated state produced from the $K\bar{K}^{\ast }$ interaction in the literature \cite{Roca:2005nm,Geng:2015yta,Zhou:2014ila}.
In recent years, many XYZ particles have been observed in the charmed and bottomed sectors, such as $X(3872)$, $Z_{c}(3900)$, $Z_{c}(4025)$, $Z_{b}(10610)$, and $Z_{b}(10650)$~\cite{Choi:2003ue,Ablikim:2013mio,Ablikim:2013emm,Belle:2011aa}. $f_{1}(1285)$ and these XYZ particles are close to the $K\bar{K}^{\ast }/D\bar{D}^{\ast }/B\bar{B}^{\ast }$ thresholds, respectively. The similarity across the three flavor sectors suggests that these particles arise from the corresponding hadron-hadron interactions, which is supported by explicit calculations in the one-boson-exchange model~\cite{Lu:2016nlp,He:2014nya,He:2015mja,He:2013nwa,Sun:2011uh,Sun:2012zzd}. In particular, $f_{1}(1285)$ is the strange partner of the $X(3872)$, the two being S-wave hadronic molecular states from the $K\bar{K}^{\ast }$ and $D\bar{D}^{\ast }$ interactions, respectively~\cite{Lu:2016nlp}. Compared with the XYZ particles in the charmed and bottomed sectors, $f_{1}(1285)$ lies quite far below the $K\bar{K}^{\ast }$ threshold. Hence, further investigation of $f_{1}(1285)$ in different production processes may provide more helpful information to confirm the molecular-state interpretation of $f_{1}(1285)$. Recently, the $f_{1}(1285)$ meson was studied at CLAS in photoproduction from a proton target, and its decay pattern was extracted from high-precision data~\cite{Dickson:2016gwc}. A nucleon resonance with a mass of about 2300 MeV was suggested in the analyses~\cite{Dickson:2016gwc,Wang:2017hug}. However, a calculation with an interpolated Reggeized treatment suggests that the experimental cross section can be well reproduced without any nucleon resonance included~\cite{Wang:2017plf}. To discriminate between the different models, an experimental study of the pion-induced production process will be helpful. Until now, only some old experimental data exist, and, to our knowledge, no explicit theoretical study of those data can be found in the literature~\cite{Dahl:1967pg,Corden:1978cz,Dionisi:1980hi,Bityukov:1983cw,Bityukov:1987bj}.
Furthermore, it is promising to launch new measurements of pion-induced $f_{1}(1285)$ production at J-PARC and COMPASS. Hence, it is interesting to analyze pion-induced $f_{1}(1285)$ production based on the old data in an effective Lagrangian approach, to provide helpful predictions for future experiments. Because the existing data are scattered from near threshold to several tens of GeV, we will introduce the interpolating Reggeized treatment in the $t$ channel, as in $f_{1}(1285)$ photoproduction, to reproduce the data at both low and high beam momenta~\cite{Nam:2010au}. The $t$ channel and $u$ channel usually correspond to the enhancements at forward and backward angles, respectively~\cite{He:2013ksa,He:2014gga}. The only existing data on the differential cross section are at very forward angles~\cite{Corden:1978cz}, which can be used to determine the $t$-channel contribution. From previous studies, the $u$-channel contributions become more important at higher beam momenta~\cite{He:2013ksa,He:2014gga}. The $u$-channel contribution was found to be essential for interpreting the behavior of the differential cross section in photoproduction~\cite{Wang:2017plf}. Hence, in this work, we will consider the $u$ channel as well as the $t$ and $s$ channels to calculate the behavior of pion-induced $f_1(1285)$ production over a large range of beam momenta. It can be expected that the Born $s$-channel contribution is negligible. Since the experimental data are very crude and information about the coupling constant is lacking, the $s$-channel nucleon resonance is not included in the current work, as in $f_{1}(1285)$ photoproduction~\cite{Wang:2017plf}, to keep the model simple. This paper is organized as follows. After the Introduction, we present the formalism, including the Lagrangians and amplitudes of pion-induced $f_1(1285)$ production, in Sec. II. The numerical results for the cross sections are presented in Sec. III and compared with the existing data.
Finally, the paper ends with a summary. \section{Formalism} \subsection{Lagrangians} The basic tree-level Feynman diagrams for the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction are depicted in Fig.~\ref{Fig: Feynman}. These include the $t$-channel $a_{0}(980)$ ($\equiv a_{0}$) exchange, and the $s$ and $u$ channels with an intermediate nucleon. As shown by the PDG~\cite{Olive:2016xmw}, the main two-body decay channel of $f_1(1285)$ ($\equiv f_1$) is $a_0\pi$. Hence, only the $a_0$ exchange is included in the $t$ channel. \begin{figure}[tbph] \begin{center} \includegraphics[scale=0.52]{Feynman.eps} \end{center} \caption{Feynman diagrams for the $\protect\pi^{-}p\rightarrow f_{1}(1285)n$ reaction.} \label{Fig: Feynman} \end{figure} For the $t$-channel $a_{0}$ exchange, one needs the following Lagrangians \cite{Liu:2008qx,Penner:2002ma,Colangelo:2010te}: \begin{eqnarray} \mathcal{L}_{a_{0}NN} &=&g_{a_{0}NN}\bar{N}(\bm{\tau}\cdot \bm{a}_{0}){N}, \\ \mathcal{L}_{f_{1}a_{0}\pi } &=&-g_{f_{1}a_{0}\pi }f_{1}^{\mu }\bm{a}_{0}\cdot\partial _{\mu }{\bm \pi}, \end{eqnarray} where ${N}$, ${f_{1}}$, $a_{0}$, and $\pi $ are the nucleon, $f_{1}(1285)$, $a_{0}(980)$, and $\pi $ meson fields, respectively. The coupling constant $g_{f_{1}a_{0}\pi }$ is determined from the decay width \begin{equation} \Gamma _{f_{1}a_{0}\pi }=g_{f_{1}a_{0}\pi }^{2}\frac{(m_{f_{1}}^{2}-m_{a_{0}}^{2}+m_{\pi }^{2})^{2}-4m_{f_{1}}^{2}m_{\pi }^{2}}{24\pi m_{f_{1}}^{4}}\left\vert \bm{p}_{\pi }^{~\mathrm{c.m.}}\right\vert , \end{equation} where $\bm{p}_{\pi }^{~\mathrm{c.m.}}$ is the three-momentum of the pion in the rest frame of the $f_{1}$ meson. Taking the PDG value $\Gamma _{f_{1}\rightarrow a_{0}\pi }\simeq 8.71$ MeV~\cite{Olive:2016xmw}, one gets the coupling constant $g_{f_{1}a_{0}\pi }\simeq 4.53$. The coupling constant $g_{a_0NN}$ is not well determined in the literature~\cite{Liu:2008qx,Penner:2002ma}. In the current work, we will take $g_{a_0NN}$ as a free parameter.
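As a quick numerical cross-check of the decay-width relation above, the following sketch inverts it for $g_{f_{1}a_{0}\pi}$, using $\Gamma_{f_{1}\rightarrow a_{0}\pi}\simeq 8.71$ MeV; the masses used (in GeV) are approximate PDG values inserted here for illustration, not quoted from the text.

```python
import math

# Assumed masses in GeV (approximate PDG values; illustrative only)
M_F1, M_A0, M_PI = 1.2819, 0.980, 0.13957

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def coupling_from_width(width):
    """Invert Gamma = g^2 [(m1^2 - m2^2 + m3^2)^2 - 4 m1^2 m3^2]
    / (24 pi m1^4) |p_pi^cm| for the coupling g."""
    s1, s2, s3 = M_F1**2, M_A0**2, M_PI**2
    # pion three-momentum in the f1 rest frame
    p_cm = math.sqrt(kallen(s1, s2, s3)) / (2.0 * M_F1)
    factor = ((s1 - s2 + s3)**2 - 4.0 * s1 * s3) / (24.0 * math.pi * M_F1**4) * p_cm
    return math.sqrt(width / factor)

g = coupling_from_width(8.71e-3)   # width converted to GeV
print(f"g_f1a0pi ~ {g:.2f}")       # close to the 4.53 quoted in the text
```

With these inputs the inversion reproduces $g_{f_{1}a_{0}\pi}\approx 4.5$, consistent with the value 4.53 used in the paper.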
For the $t$-channel $a_0$ meson exchange, the general form factors $F_{f_{1}a_0\pi }=(\Lambda _{t}^{2}-m_{a_0}^{2})/(\Lambda _{t}^{2}-q_{a_0}^{2})$ and $F_{a_0NN}=(\Lambda _{t}^{2}-m_{a_0}^{2})/(\Lambda _{t}^{2}-q_{a_0}^{2})$ are taken into account in this work, and the cutoffs are taken to be equal for simplicity. Here, $q_{a_0}$ and $m_{a_0}$ are the four-momentum and mass of the exchanged $a_0$ meson, respectively. To calculate the amplitude of the $s$-channel nucleon exchange, we need the relevant Lagrangians. For the $\pi NN$ interaction vertex we take the effective pseudoscalar coupling \cite{Tsushima:1998jz} \begin{equation} \mathcal{L}_{\pi NN}=-ig_{\pi NN}\bar{N}\gamma _{5}\bm{\tau}\cdot \bm{\pi}{N}\text{ }, \end{equation} where $\bm{\tau}$ denotes the Pauli matrices, and $g_{\pi NN}^{2}/4\pi =12.96$ is adopted \cite{Lin:1999ve,Baru:2011bw}. The Lagrangian of the $f_{1}NN$ coupling reads~\cite{Domokos:2009cq} \begin{equation} \mathcal{L}_{f_{1}NN}=g_{f_{1}NN}\bar{N}\left( {f_{1}}^{\mu }-i\frac{\kappa _{f_{1}}}{2m_{N}}\gamma ^{\nu }\partial _{\nu } {f_{1}}^{\mu }\right) \gamma _{\mu }\gamma ^{5}{N}+\text{H.c.}, \end{equation} where $g_{f_{1}NN}=2.5$ is taken, as discussed in Ref.~\cite{Birkel:1995ct}. Since the value of $\kappa _{f_{1}}$ was determined by fitting the CLAS data in our previous work \cite{Wang:2017plf}, $\kappa _{f_{1}}=1.94$ is adopted in this paper. For the $s$ and $u$ channels with intermediate nucleons, we adopt the general form factor describing the size of the hadrons \cite{Kochelev:2009xz}, \begin{equation} F_{s/u}(q_{N})=\frac{\Lambda _{s/u}^{4}}{\Lambda _{s/u}^{4}+(q_{N}^{2}-m_{N}^{2})^{2}}~, \end{equation} where $q_{N}$ and $m_{N}$ are the four-momentum and mass of the exchanged nucleon, respectively. Since the $s$-channel contribution is usually very small, we take $\Lambda _{s}=\Lambda _{u}$. The values of the cutoffs $\Lambda _{s}$, $\Lambda _{u}$, and $\Lambda _{t}$ will be determined by fitting the experimental data.
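The two form-factor shapes above can be sketched as follows; the cutoff value here is an illustrative placeholder (in the paper the cutoffs are fitted), and the only property checked is the on-shell normalization $F=1$ at $q^2=m^2$.

```python
def monopole_ff(q2, m, cutoff):
    """t-channel monopole form factor (Lambda^2 - m^2)/(Lambda^2 - q^2)."""
    return (cutoff**2 - m**2) / (cutoff**2 - q2)

def dipole_ff(q2, m, cutoff):
    """s/u-channel form factor Lambda^4 / (Lambda^4 + (q^2 - m^2)^2)."""
    return cutoff**4 / (cutoff**4 + (q2 - m**2)**2)

M_A0, M_N = 0.980, 0.93827   # masses in GeV (approximate)
LAMBDA = 1.5                 # illustrative cutoff in GeV, not a fitted value

# Both form factors reduce to 1 when the exchanged particle is on shell,
# and fall off as the exchanged momentum moves away from the mass shell.
print(monopole_ff(M_A0**2, M_A0, LAMBDA))  # 1.0
print(dipole_ff(M_N**2, M_N, LAMBDA))      # 1.0
```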
\subsection{Amplitude for the $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction} The scattering amplitude of the $\pi ^{-}p\rightarrow f_{1}(1285)n$ process can be written in the general form \begin{equation} -i\mathcal{M}_{i}=\epsilon _{f_{1}}^{\mu \ast }(k_{2})\bar{u}(p_{2})\mathcal{A}_{i,\mu }u(p_{1}), \end{equation} where $\epsilon _{f_{1}}^{\mu \ast }$ is the polarization vector of the $f_{1}$ meson and $\bar{u}$ or $u$ is the Dirac spinor of the nucleon. The reduced amplitudes $\mathcal{A}_{i,\mu }$ for the $s$-, $t$-, and $u$-channel contributions read \begin{eqnarray} \mathcal{A}_{s,\mu }^{(N)} &=&-\sqrt{2}g_{\pi NN}g_{f_{1}NN}F_{s}(q_{N})\left( 1-i\frac{\kappa _{f_{1}}}{2m_{N}}\rlap{$\slash$}k_{2}\right) \notag \\ &\cdot &\gamma _{\mu }\gamma ^{5}\frac{(\rlap{$\slash$}q_{N}+m_{N})}{s-m_{N}^{2}}\gamma _{5}, \\ \mathcal{A}_{t,\mu }^{(a_{0})} &=&i\sqrt{2}g_{a_{0}NN}g_{f_{1}a_{0}\pi }F_{t}(q_{a_{0}})\frac{1}{t-m_{a_{0}}^{2}}k_{1\mu }, \label{Texchange} \\ \mathcal{A}_{u,\mu }^{(N)} &=&-\sqrt{2}g_{\pi NN}g_{f_{1}NN}F_{u}(q_{N})\gamma _{5}\frac{(\rlap{$\slash$}q_{N}+m_{N})}{u-m_{N}^{2}} \notag \\ &\cdot &\left( 1-i\frac{\kappa _{f_{1}}}{2m_{N}}\rlap{$\slash$}k_{2}\right) \gamma _{\mu }\gamma ^{5}, \label{AmpT2} \end{eqnarray} where $s=(k_{1}+p_{1})^{2}$, $t=(k_{1}-k_{2})^{2}$, and $u=(p_{2}-k_{1})^{2}$ are the Mandelstam variables. \subsection{Interpolating Reggeized $t$ channel} In this work, we will consider a large beam-momentum range from threshold to several tens of GeV. To describe the behavior of hadron production at high momentum, the Reggeized treatment should be introduced in the $t$ channel~\cite{He:2010ii,Galata:2011bi,Haberzettl:2015exa,Wang:2015hfm,Wan:2015gsl}.
The Reggeized treatment for $t$-channel meson exchange consists of replacing the form factor in Eq.~(\ref{Texchange}) as \begin{equation} F_{t}(t)\rightarrow \mathcal{F}_{t}(t)=\left(\frac{s}{s_{scale}}\right)^{\alpha _{a_{0}}(t)}\frac{\pi \alpha _{a_{0}}^{\prime }(t-m_{a_{0}}^{2})}{\Gamma \lbrack 1+\alpha _{a_{0}}(t)]\sin [\pi \alpha _{a_{0}}(t)]}. \end{equation} The scale factor $s_{scale}$ is fixed at 1 GeV$^{2}$. In addition, the Regge trajectory $\alpha _{a_{0}}(t)$ reads $\alpha _{a_{0}}(t)=-0.5+0.6\,t/{\rm GeV}^{2}$~\cite{Kochelev:2009xz,Galata:2011bi}. To describe the behavior of the cross sections at both low and high beam momenta, an interpolating Reggeized treatment will be adopted to interpolate smoothly between the Regge case and the Feynman case, which has been successfully applied to several photoproduction processes~\cite{Nam:2010au,Haberzettl:2015exa,Wang:2015hfm,He:2012ud,He:2013ksa,He:2014gga}. The interpolated Reggeized form factor can then be written as \begin{equation} F_{t}\rightarrow \mathcal{F}_{R,t}=\mathcal{F}_{t}R\left( s,t\right) +F_{t}\left[ 1-R\left( s,t\right) \right],\label{Eq: interpolating} \end{equation} where $R\left( s,t\right) =R_{s}\left( s\right) R_{t}\left( t\right) $, with \begin{equation} R_{s}\left( s\right) =\frac{1}{1+e^{-(s-s_{R})/s_{0}}},\ \ R_{t}\left( t\right) =\frac{1}{1+e^{-(t+t_{R})/t_{0}}}, \end{equation} where $s_{R}$ and $t_{R}$ are the centroid values for the transition from the non-Regge to the Regge regime, while $s_{0}$ and $t_{0}$ describe the respective widths of the transition regions. The four parameters will be fitted to the experimental data.
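The smooth Feynman-to-Regge interpolation of Eq.~(\ref{Eq: interpolating}) can be sketched numerically as follows; the transition parameters $s_R$, $s_0$, $t_R$, $t_0$ below are placeholders, since in the paper they are determined by the fit.

```python
import math

def r_s(s, s_R, s0):
    """Sigmoid switch in s: -> 0 well below s_R, -> 1 well above it."""
    return 1.0 / (1.0 + math.exp(-(s - s_R) / s0))

def r_t(t, t_R, t0):
    """Sigmoid switch in t (t <= 0 in the physical region)."""
    return 1.0 / (1.0 + math.exp(-(t + t_R) / t0))

def interpolated_ff(F_regge, F_feyn, s, t, s_R=10.0, s0=1.0, t_R=0.5, t0=0.1):
    """F_{R,t} = F_Regge * R + F_Feynman * (1 - R), with R = R_s(s) R_t(t)."""
    R = r_s(s, s_R, s0) * r_t(t, t_R, t0)
    return F_regge * R + F_feyn * (1.0 - R)

# Near threshold the Feynman form dominates; at high s the Regge form does.
low = interpolated_ff(F_regge=0.0, F_feyn=1.0, s=5.0, t=-0.1)
high = interpolated_ff(F_regge=0.0, F_feyn=1.0, s=40.0, t=-0.1)
print(low, high)   # low close to 1 (Feynman-like), high close to 0 (Regge-like)
```

Because both switches are smooth sigmoids, the amplitude changes continuously between the two regimes instead of jumping at a hard matching point.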
The Feynman-type $u$ channel will be adopted first in the fitting procedure, and the Reggeized treatment of the $u$ channel will be discussed in Sec.~\ref{Regu}. \section{Numerical results} With the preparations in the above section, the differential cross section of the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction will be calculated and compared with the experimental data~\cite{Dahl:1967pg,Corden:1978cz,Dionisi:1980hi,Bityukov:1983cw,Bityukov:1987bj}. The differential cross section in the center-of-mass (c.m.) frame is written as \begin{equation} \frac{d\sigma }{d\cos \theta }=\frac{1}{32\pi s}\frac{\left\vert \vec{k}_{2}^{{~\mathrm{c.m.}}}\right\vert }{\left\vert \vec{k}_{1}^{{~\mathrm{c.m.}}}\right\vert }\left( \frac{1}{2}\sum\limits_{\lambda }\left\vert \mathcal{M}\right\vert ^{2}\right), \end{equation} where $s=(k_{1}+p_{1})^{2}$, and $\theta $ denotes the angle of the outgoing $f_{1}(1285)$ meson relative to the $\pi $ beam direction in the c.m. frame. $\vec{k}_{1}^{{~\mathrm{c.m.}}}$ and $\vec{k}_{2}^{{~\mathrm{c.m.}}}$ are the three-momenta of the initial $\pi $ beam and the final $f_{1}(1285)$, respectively. The experimental data for the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction will be fitted with the help of the \textsc{minuit} code in \textsc{cernlib}. \subsection{\protect\boldmath$t$ distribution for the $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction} The interpolating Reggeized treatment is adopted to reproduce the cross section in the beam-momentum region from threshold to several tens of GeV considered in the current work. However, four additional free parameters are introduced by this treatment. Recalling that the Reggeized $t$-channel contribution is dominant at higher beam momenta, the two parameters $t_R$ and $t_0$ can be determined from the $t$ distribution at a certain beam momentum.
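The kinematic prefactor in the formula above can be computed from the Källén triangle function. The sketch below is a minimal illustration, not the analysis code; the default masses are approximate PDG values inserted for the example.

```python
import math

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def cm_momentum(sqrt_s, m_a, m_b):
    """Magnitude of the c.m. three-momentum of a two-particle state a + b."""
    s = sqrt_s * sqrt_s
    return math.sqrt(kallen(s, m_a * m_a, m_b * m_b)) / (2.0 * sqrt_s)

def dsigma_dcos_prefactor(sqrt_s, m_pi=0.140, m_p=0.938, m_f1=1.285, m_n=0.940):
    """Prefactor |k2|/(32 pi s |k1|) of the dsigma/dcos(theta) formula."""
    s = sqrt_s * sqrt_s
    k1 = cm_momentum(sqrt_s, m_pi, m_p)   # initial pi p momentum
    k2 = cm_momentum(sqrt_s, m_f1, m_n)   # final f1 n momentum
    return k2 / (32.0 * math.pi * s * k1)
```

For equal masses, $|\vec k| = \sqrt{s/4 - m^2}$, which provides a quick sanity check of the Källén expression.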
Fortunately, there exist experimental data of the $t$ distribution at a beam momentum in the laboratory frame of $P_{Lab}=12-15$ GeV~\cite{Corden:1978cz}. Hence, we first study the $t$ distribution and determine $t_R$ and $t_0$ before making a full fit of all the data points that we collected. $t_R$ and $t_0$ can be determined from the $t$ distribution because the other parameters only affect the $s$ dependence at high beam momenta. Because the experimental data in Ref.~\cite{Corden:1978cz} are at a very high beam momentum, one can safely assume that $R_{s}\left( s\right) \approx 1$. Hence, one minimizes the $\chi ^{2}$ per degree of freedom ($d.o.f.$) for the total cross section and the $t$ distribution of the experimental data at $P_{Lab}=12-15$ GeV by fitting the parameters, which include the two interpolation parameters $t_{0}$ and $t_{R}$. In Ref.~\cite{Corden:1978cz}, the $t$ distribution is given in terms of event counts rather than the differential cross section, so a scale parameter should be introduced; it can be related to the coupling constant $g_{a_{0}NN}$ through the total cross section given in the same Ref.~\cite{Corden:1978cz} (the total cross section is obtained only by continuation of the $t$-channel contribution at very forward angles). The cutoff $\Lambda_t$ is also involved through Eq.~(\ref{Eq: interpolating}). Hence, in the calculation we have four parameters, as listed in Table~\ref{tab:fit}.
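The $\chi^2/d.o.f.$ bookkeeping used here can be illustrated schematically. The snippet below is only a toy: the actual fit uses \textsc{minuit}, whereas here a closed-form log-space least-squares fit of a hypothetical exponential $t$ distribution with synthetic data points stands in.

```python
import math

def chi2_per_dof(model, params, xs, ys, yerrs, n_free):
    """Reduced chi^2: sum of squared normalized residuals over (N - n_free)."""
    chi2 = sum(((model(x, *params) - y) / e) ** 2
               for x, y, e in zip(xs, ys, yerrs))
    return chi2 / (len(xs) - n_free)

def fit_exponential(ts, ys):
    """Closed-form least-squares fit of y = A * exp(b * t) in log space."""
    n = len(ts)
    ls = [math.log(y) for y in ys]
    t_bar = sum(ts) / n
    l_bar = sum(ls) / n
    b = (sum((t - t_bar) * (l - l_bar) for t, l in zip(ts, ls))
         / sum((t - t_bar) ** 2 for t in ts))
    A = math.exp(l_bar - b * t_bar)
    return A, b

# synthetic stand-in for a measured t distribution (t in GeV^2), 10% errors
ts = [-0.05, -0.15, -0.30, -0.50]
ys = [90.0, 42.0, 12.0, 2.5]
errs = [0.1 * y for y in ys]

A, b = fit_exponential(ts, ys)
red_chi2 = chi2_per_dof(lambda t, A, b: A * math.exp(b * t),
                        (A, b), ts, ys, errs, n_free=2)
```

A reduced $\chi^2$ near 1 indicates residuals compatible with the quoted errors, which is the criterion applied to the fits reported below.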
\renewcommand\tabcolsep{0.3cm} \renewcommand{\arraystretch}{1.5} \begin{table}[h] \caption{Fitted values of the free parameters obtained by fitting the $t$ distribution in Ref.~\protect\cite{Corden:1978cz}, with a reduced value $\protect\chi ^{2}/d.o.f.=0.89$.} \label{tab:fit} \begin{tabular}{|c|c|c|c|} \hline\hline $\Lambda _{t}$ (GeV) & $g_{a_{0}NN}$ & $t_{0}$ (GeV$^{2}$) & $t_{R}$ (GeV$^{2}$) \\ \hline $1.26\pm 0.05$ & $28.27\pm 2.49$ & $0.41\pm 0.16$ & $1.90\pm 1.62$ \\ \hline \end{tabular} \end{table} The fitted values of the free parameters are listed in Table \ref{tab:fit}, with a reduced value of $\chi ^{2}/d.o.f.=0.89$. The best-fit results are presented in Fig.~\ref{fig:data}. It is found that the experimental data of the $t$ distribution for the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction are well reproduced in our model. Here, we also present the best-fit results with a pure Feynman model. They confirm that at high beam momentum the results of the Feynman model clearly deviate from the experimental data, and the Reggeized treatment is essential to reproduce the $t$ slope. \begin{figure}[h] \centering \includegraphics[scale=0.32]{t-slope.eps} \caption{The $t$ distribution for the reaction $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$. The data are from Ref.~\protect\cite{Corden:1978cz}. The full (red) and dashed (blue) lines are for the full model and the Feynman model, respectively. } \label{fig:data} \end{figure} To show the effect of the interpolating switching function $R_{t}\left( t\right) $ more clearly, in Fig. \ref{fig:Rt} we present it with the values of the parameters in Table \ref{tab:fit}. One can see that $R_{t}\left( t\right) $ is close to 1 at small values of $-t$, which indicates that the pure Reggeized contribution plays the dominant role at high beam momentum in the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction.
\begin{figure}[h] \centering \includegraphics[scale=0.34]{Rt.eps} \caption{Interpolating switching function $R_{t}\left( t\right) $ with the values of the parameters in Table \protect\ref{tab:fit}.} \label{fig:Rt} \end{figure} Now we discuss the above results. At low beam momentum, $R_{s}(s)$ is very small, which leads to a very small $R(s,t)$. Hence, the effect of $R_{t}(t)$ is small at low beam momentum and becomes more important at high momentum, where $R(s,t)\rightarrow R_{t}(t)$. At high beam momentum, the $t$-channel contribution is usually dominant at very forward angles, while at medium and backward angles the $u$-channel contribution becomes more important. The value $t_{R}=1.9$ GeV$^{2}$ obtained for the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction suggests that at very forward angles $R_{t}(t)$ is close to 1. Considering that only a few data points are available, we will assume $R_{t}(t)=1$ in the following calculation to reduce the number of free parameters. This is reasonable because only the results at medium angles, where the differential cross section is usually very small, will be slightly affected. \subsection{Cross section of the $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction} In this subsection, we fit all the data we collected, as shown in Figs.~\ref{fig:data} and~\ref{Fig:total1}, which include four data points of the total cross section at low beam momentum, three data points of the total cross section at high beam momentum, and four data points of the $t$ distribution at 12-15 GeV~\cite{Dahl:1967pg,Corden:1978cz,Dionisi:1980hi,Bityukov:1983cw,Bityukov:1987bj}.
It should be mentioned that the three data points of the total cross section at high beam momentum are obtained by continuation of the $t$-channel contribution at very forward angles to all angles, so we fit these three data points only with the $t$-channel contribution, because the $u$-channel contribution is negligible at forward angles and important only at backward angles. For the four data points at low beam momentum, both the $t$ and $u$ channels will be included. It will be found later that the $s$-channel contribution is negligible, as usual. We minimize $\chi ^{2}$ per degree of freedom by fitting five parameters $s_{0}$, $s_{R}$, $\Lambda _{t}$, $\Lambda _{u}$ and $g_{a_{0}NN}$ using a total of 11 data points at beam momenta $P_{Lab}$ from 2 to 40 GeV, as displayed in Fig. \ref{Fig:total1}. Here, $R_{t}(t)$ has been assumed to be 1, as discussed above. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.305]{tcs1.eps} \end{center} \caption{Total cross section for the $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction. The full (black), dashed (blue), dotted (green), dashed-dotted (dark yellow), and dashed-dot-dotted (violet) lines are for the full model (Fit I), full model (Fit II), full model (Fit III), Feynman case [$R(s,t)=0$], and Regge case [$R(s,t)=1$], respectively.
The experimental data are from Refs.~\protect\cite{Dahl:1967pg,Dionisi:1980hi,Bityukov:1983cw,Bityukov:1987bj}.} \label{Fig:total1} \end{figure} \renewcommand\tabcolsep{0.54cm} \renewcommand{\arraystretch}{1.2} \begin{table*}[htbp] \caption{Fitted values of the free parameters with all 11 data points in Refs.~\protect\cite{Dahl:1967pg,Corden:1978cz,Dionisi:1980hi,Bityukov:1983cw,Bityukov:1987bj}.} \label{Tab: parameterfull} \begin{tabular}{|c|c|c|c|c|c|c|} \hline\hline & $\Lambda _{t}$ (GeV) & $\Lambda _{s}=\Lambda _{u}$ (GeV) & $s_{0}$ (GeV$^{2}$) & $s_{R}$ (GeV$^{2}$) & $g_{a_{0}NN}$ & $\chi ^{2}/d.o.f.$ \\ \hline Fit I & $1.26\pm 0.02$ & $0.50\pm 0.78$ & $1.53\pm 0.55$ & $14.99\pm 1.47$ & $28.44\pm 0.08$ & 1.21 \\ Fit II & $1.27\pm 0.01$ & $0.50\pm 0.77$ & $1.47\pm 0.38$ & $13.76\pm 1.38$ & $28.44\pm 0.11$ & 1.18 \\ Fit III & $1.25\pm 0.01$ & $0.50\pm 0.77$ & $1.35\pm 0.32$ & $15.46\pm 1.36$ & $28.44\pm 0.07$ & 1.16 \\ \hline \end{tabular} \end{table*} As observed in Fig.~\ref{Fig:total1}, at $P_{Lab}=3.95$ and 4 GeV there exist two data points that differ considerably from each other. Because the beam momenta of these two data points are very close, it is difficult to interpret them as a physical structure. We present the results of fitting with both data points near 4 GeV (Fit I), with only the data point at the higher momentum (Fit II), and with only the data point at the lower momentum (Fit III). The results suggest that the higher data point is difficult to reproduce: all three fits give results that are close to each other and favor the lower data point. The fitted parameters are listed in Table~\ref{Tab: parameterfull}, and the values of the coupling constant $g_{a_0NN}$ and the cutoff $\Lambda_t$ are close to those in Table~\ref{tab:fit}. We also present the results of the usual Feynman case [$R(s,t)=0$] and Regge case [$R(s,t)=1$] in Fig.
\ref{Fig:total1}, which show that the experimental data of the total cross section of the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction cannot be reproduced by the Feynman model alone, nor by the traditional Reggeized treatment alone. The interpolating Reggeized treatment is essential to reproduce the total cross section at both low and high beam momenta. In Fig.~\ref{Fig:total2}, we present the explicit results of Fit I. The results show that the experimental data of both the total cross section and the $t$ distribution can be well reproduced in our model. The $t$-channel contribution is dominant at $P_{Lab}$ up to about 20 GeV. The $u$-channel contribution is negligible compared with the $t$-channel contribution at low beam momenta, but becomes more important and exceeds the $t$-channel contribution at a beam momentum of about 30 GeV. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.305]{tcs2.eps} \end{center} \caption{Total cross section for the $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction. The full (black), dashed (red), dotted (blue), short-dotted (violet), and dashed-dotted (green) lines are for the full model (Fit I), $t$ channel, $u$ channel, $u$ channel with interpolated Reggeized treatment, and $s$ channel, respectively. The experimental data are from Refs.~\protect\cite{Dahl:1967pg,Corden:1978cz,Dionisi:1980hi,Bityukov:1983cw,Bityukov:1987bj}.} \label{Fig:total2} \end{figure} The $u$-channel contribution can be seen more clearly in the differential cross section, as shown in Fig.~\ref{Fig: dcs}. \begin{figure}[h!] \centering \includegraphics[scale=0.305]{dcs.eps} \caption{The differential cross section $d\protect\sigma /d\cos \protect\theta $ of the $\protect\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction as a function of $\cos \protect\theta $.
The full (black), dashed (blue), dotted (green), dashed-dotted (red), dashed-dot-dotted (dark yellow), and short-dotted (violet) lines are for the full model (Fit I), full model (Fit II), full model (Fit III), $t$ channel (Fit I), $u$ channel (Fit I), and $u$ channel with interpolated Reggeized treatment, respectively. } \label{Fig: dcs} \end{figure} The $t$- and $u$-channel contributions appear at forward and backward angles, respectively, as expected. At low beam momentum, the differential cross section is dominated by the $t$ channel over a large range of angles, and its contribution decreases with decreasing $\cos\theta $. At momenta lower than about 3 GeV, the $t$ channel is more important than the $u$ channel even at extreme backward angles. At beam momenta higher than about 3 GeV, the $u$ channel becomes more and more important with increasing beam momentum. At a beam momentum of 20 GeV, though the total cross section still comes mainly from the $t$-channel contribution, the $u$ channel is dominant even at medium angles, while the $t$ channel is only dominant at very forward angles. The results of the three fits are also presented in Fig.~\ref{Fig: dcs}. The discrepancy among the differential cross sections of the three fits is small at most beam momenta. \subsection{Reggeized $u$-channel contribution}\label{Regu} In the above calculation, the Reggeized treatment is applied to the $t$ channel, but not to the $u$ channel. Physically, the $u$ channel can be seen as a $t$ channel with the final particles interchanged. Hence, the Reggeized treatment should also be adopted in the $u$ channel, and as in the $t$ channel the interpolated treatment is needed to connect the Regge case at high beam momentum smoothly to the Feynman case at low beam momentum~\cite{shk2015,bgy2011}.
Since the experimental data at high beam momenta are only obtained from the $t$-channel contribution, the fitting procedure above is not affected by the inclusion of the interpolated Reggeized $u$-channel contribution, which at low beam momentum reduces to the Feynman type adopted in the above fitting procedure. However, the different treatment of the $u$ channel will affect the prediction of the cross section at high beam momentum, which is discussed in this subsection. The Reggeized treatment for $u$-channel baryon exchange consists of replacing the form factor $F_{u}(u)$ in Eq.~(\ref{AmpT2}) as \begin{equation} \mathcal{F}_{u}(u)=\left(\frac{s}{s_{scale}}\right)^{\alpha _{N}(u)-\frac{1}{2}}\frac{\pi \alpha _{N}^{\prime }(u-m_{N}^{2})}{\Gamma \lbrack \frac{1}{2}+\alpha _{N}(u)]\sin [\pi (\alpha _{N}(u)-\frac{1}{2})]}. \end{equation} The scale factor $s_{scale}$ is fixed at 1 GeV. In addition, the Regge trajectory $\alpha _{N}(u)$ reads \cite{PR1984} \begin{equation} \alpha _{N}(u)=-0.34~{\rm GeV}^2 +0.99u. \end{equation} The interpolating treatment can be applied to the $u$ channel analogously to the $t$ channel by replacing $t$ with $u$. It is reasonable to assume that the Reggeized treatment begins to exhibit its effect at the same value of beam momentum for both the $t$ and $u$ channels, so we adopt the same parameters in the interpolating treatment for the $u$ channel as those for the $t$ channel. As noted above, the fitting procedure is not affected by the inclusion of the interpolated Reggeized treatment in the $u$ channel. In this work, the coupling constants involved in the $u$ channel are fixed at the values in our previous work~\cite{Wang:2017plf}. The $u$-channel contribution at high beam momentum is determined after the cutoff $\Lambda_{u}$ is fixed in the fit to the experimental data. In Fig.~\ref{Fig:total2}, the numerical results for the total cross section of the $u$ channel with interpolated Reggeized treatment are presented.
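The baryonic Regge factor just defined can be sketched in the same spirit as the $t$-channel one. The snippet below is an illustration only: the interpolation with the Feynman form and the overall couplings are omitted, and only the trajectory parameters quoted above are used.

```python
import math

def alpha_N(u):
    """Nucleon Regge trajectory alpha_N(u) = -0.34 + 0.99 u (u in GeV^2)."""
    return -0.34 + 0.99 * u

def regge_ff_u(s, u, m_N=0.938, alpha_p=0.99, s_scale=1.0):
    """Reggeized u-channel replacement calF_u(u) for nucleon exchange.

    Note the half-integer shifts in the exponent, Gamma, and sine,
    characteristic of a baryonic (half-integer spin) trajectory."""
    a = alpha_N(u)
    return ((s / s_scale) ** (a - 0.5) * math.pi * alpha_p * (u - m_N ** 2)
            / (math.gamma(0.5 + a) * math.sin(math.pi * (a - 0.5))))
```

At fixed $u<0$ the exponent $\alpha_N(u)-1/2$ is negative, so the factor falls as a power of $s$, reproducing the suppression of the Reggeized $u$ channel at high beam momentum described in the text.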
As expected, it decreases exponentially at high beam momentum, as the $t$-channel contribution does, and is much smaller than the result without the Reggeized treatment. However, the small total contribution of the Reggeized $u$ channel does not mean that it is negligible in the differential cross section. As shown in Fig.~\ref{Fig: dcs}, the $u$ channel plays an important role in shaping the differential cross section at backward angles at high beam momentum. The results of the full model are almost the sum of the $u$- and $t$-channel contributions, which are not given explicitly in the figures. Most of the $f_1(1285)$ events are at extreme forward and backward angles, which correspond to the Reggeized $t$ and $u$ channels, respectively. At low beam momentum, the results with the interpolated Reggeized treatment are almost the same as those with the Feynman type. \section{Summary and discussion} In this work, based on the existing experimental data, we analyze the $\pi ^{-}p\rightarrow f_{1}(1285)n$ reaction with an interpolating Reggeized approach and predict its total and differential cross sections over a large beam-momentum range from threshold up to several tens of GeV. It is found that neither a pure Feynman nor a pure Regge type of $t$-channel contribution can reproduce the existing experimental data, though there are only 11 data points against 5 free parameters. The interpolating Reggeized treatment is essential to reproduce the cross sections at both low and high beam momenta. At low momenta, both the total and differential cross sections are dominated by the Feynman-type $t$ channel. At high beam momenta, the Reggeized $t$-channel contribution is only dominant at extreme forward angles and decays rapidly with decreasing $\cos\theta$. The $u$-channel contributions with and without the Reggeized treatment exhibit quite different behaviors at high beam momentum.
Without the Reggeized treatment, the $u$ channel becomes important over a larger range of angles as the beam momentum increases, while the $t$ channel plays its role only at very forward angles at high beam momenta. With the Reggeized treatment, the $u$ and $t$ channels provide a sharp increase and a sharp decrease at extreme backward and extreme forward angles, respectively. Low- and high-momentum pion beams are accessible at J-PARC and COMPASS. Our results are helpful for possible experimental studies of $f_1(1285)$ production at these two facilities. Based on the results, a measurement at forward angles is favored, while a measurement at extreme backward angles would be helpful to understand the interaction mechanism of pion-induced $f_1(1285)$ production. \section{Acknowledgments} This project is supported by the National Natural Science Foundation of China under Grant No. 11675228 and the Major State Basic Research Development Program in China under Grant No. 2014CB845405.
\section{Introduction}\label{section_introduction} The initial value problem for the incompressible Euler equation in $\R^n$, $n \geq 2$, reads as: \begin{eqnarray} \nonumber \partial_t u + (u \cdot \nabla) u &=& -\nabla p \\ \label{E} \operatorname{div} u &=& 0 \\ \nonumber u(0)&=& u_0 \end{eqnarray} where $u(t,x)=\big(u_1(t,x),\ldots,u_n(t,x)\big)$ is the velocity of the fluid at time $t \in \R$ and position $x \in \R^n$, $u \cdot \nabla = \sum_{k=1}^n u_k \partial_k$ acts componentwise on $u$, $\nabla p$ is the gradient of the pressure $p(t,x)$, $\operatorname{div} u=\sum_{k=1}^n \partial_k u_k$ is the divergence of $u$ and $u_0$ is the value of $u$ at time $t=0$ (with assumption $\operatorname{div} u_0 = 0$). The system \eqref{E} (going back to Euler \cite{euler}) describes a fluid motion without friction. The first equation in \eqref{E} reflects the conservation of momentum. The second equation in \eqref{E} says that the fluid motion is incompressible, i.e. that the volume of any fluid portion remains constant during the flow.\\ The unknowns in \eqref{E} are $u$ and $p$. But as we will see later one can express $\nabla p$ in terms of $u$. Thus the evolution of system \eqref{E} is completely described by $u$. Therefore we will speak in the sequel of the solution $u$ instead of the solution $(u,p)$.\\ Consider now a fluid motion determined by $u$. If one fixes a fluid particle which at time $t=0$ is located at $x \in \R^n$ and whose position at time $t \geq 0$ we denote by $\varphi(t,x) \in \R^n$, we get the following relation between $u$ and $\varphi$ \[ \partial_t \varphi(t,x) = u\big(t,\varphi(t,x)), \] i.e. $\varphi$ is the flow-map of the vectorfield $u$. The second equation in \eqref{E} translates to the well-known relation $\det(d\varphi) \equiv 1$, where $d\varphi$ is the Jacobian of $\varphi$ -- see Majda, Bertozzi \cite{majda}. In this way we get a description of system \eqref{E} in terms of $\varphi$. 
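The flow-map relation $\partial_t \varphi(t,x) = u\big(t,\varphi(t,x)\big)$ can be integrated numerically for any given velocity field. The following Python sketch (not taken from the text) uses a classical fourth-order Runge--Kutta step; the rigid-rotation field in the example is divergence-free, consistent with the incompressibility constraint.

```python
import math

def flow_map(u, x0, t_final, n_steps=1000):
    """Integrate d/dt phi(t) = u(t, phi(t)), phi(0) = x0, with classical RK4."""
    phi = list(x0)
    h = t_final / n_steps
    t = 0.0

    def axpy(p, a, k):                      # p + a * k, componentwise
        return [pi + a * ki for pi, ki in zip(p, k)]

    for _ in range(n_steps):
        k1 = u(t, phi)
        k2 = u(t + h / 2, axpy(phi, h / 2, k1))
        k3 = u(t + h / 2, axpy(phi, h / 2, k2))
        k4 = u(t + h, axpy(phi, h, k3))
        phi = [p + h / 6 * (a + 2 * b + 2 * c + d)
               for p, a, b, c, d in zip(phi, k1, k2, k3, k4)]
        t += h
    return phi

# divergence-free rigid rotation in the plane: u(x1, x2) = (-x2, x1)
rotation = lambda t, x: [-x[1], x[0]]
```

For the rotation field the exact flow is rotation by angle $t$, so integrating a quarter turn maps $(1,0)$ to $(0,1)$, which serves as a check of the integrator.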
The description of \eqref{E} in the $\varphi$-variable is called the Lagrangian description of \eqref{E}, whereas the description in the $u$-variable is called the Eulerian description of \eqref{E}. One advantage of the Lagrangian description of \eqref{E} is that it leads to an ODE formulation of \eqref{E}. This was already used in Lichtenstein \cite{lichtenstein} and Gunter \cite{gunter} to get local well-posedness of \eqref{E}.\\ \\ To state the result of this paper we have to introduce some notation. For $s \in \R_{\geq 0}$ we denote by $H^s(\R^n)$ the Hilbert space of real valued functions on $\R^n$ of Sobolev class $s$ and by $H^s(\R^n;\R^n)$ the vector fields on $\R^n$ of Sobolev class $s$ -- see Adams \cite{adams} or Inci, Topalov, Kappeler \cite{composition} for details on Sobolev spaces. We will often need the fact that for $n \geq 1$, $s > n/2$ and $0 \leq s' \leq s$ multiplication \begin{equation}\label{multiplication} H^s(\R^n) \times H^{s'}(\R^n) \to H^{s'}(\R^n),\quad (f,g) \mapsto f \cdot g \end{equation} is a continuous bilinear map.\\ The notion of solution for \eqref{E} we are interested in are solutions which lie in $C^0\big([0,T];H^s(\R^n;\R^n)\big)$ for some $T > 0$ and $s > n/2+1$. This is the space of continuous curves on $[0,T]$ with values in $H^s(\R^n;\R^n)$. To be precise we say that $u,\nabla p \in C^0\big([0,T];H^s(\R^n;\R^n)\big)$ is a solution to \eqref{E} if \begin{equation}\label{RE} u(t) = u_0 + \int_0^t -(u(\tau) \cdot \nabla) u(\tau) - \nabla p(\tau) \;d\tau \quad \forall 0 \leq t \leq T \end{equation} and $\operatorname{div} u(t)=0$ for all $0 \leq t \leq T$ holds. As $s-1>n/2$ we know by the Banach algebra property of $H^{s-1}(\R^n)$ that the integrand in \eqref{RE} lies in $C^0\big([0,T];H^{s-1}(\R^n;\R^n)\big)$. 
Due to the Sobolev imbedding and the fact $s > n/2+1$ the solutions considered here are $C^1$ (in the $x$-variable slightly better than $C^1$) and are thus solutions for which the derivatives appearing in \eqref{E} are classical derivatives. \\ The discussion above shows that in this paper the state-space of \eqref{E} in the Eulerian description is $H^s(\R^n;\R^n)$, $s > n/2+1$. The state-space of \eqref{E} in the Lagrangian description is given by \[ \Ds^s(\R^n) = \big\{ \varphi:\R^n \to \R^n \;\big|\; \varphi - \operatorname{id} \in H^s(\R^n;\R^n) \mbox{ and } \det d_x\varphi > 0, \;\forall x \in \R^n\big\} \] where $\operatorname{id}:\R^n \to \R^n$ is the identity map. Due to the Sobolev imbedding and the condition $s > n/2+1$ the space of maps $\Ds^s(\R^n)$ consists of $C^1$-diffeomorphisms -- see Palais \cite{palais} -- and can be identified via $\Ds^s(\R^n) - \operatorname{id} \subseteq H^s(\R^n;\R^n)$ with an open subset of $H^s(\R^n;\R^n)$. Thus $\Ds^s(\R^n)$ has naturally a real analytic differential structure (for real analyticity we refer to Whittlesey \cite{analyticity}) with the natural identification of the tangent space \[ T\Ds^s(\R^n) \simeq \Ds^s(\R^n) \times H^s(\R^n;\R^n). \] Moreover it is known that $\Ds^s(\R^n)$ is a topological group under composition and that for $0 \leq s' \leq s$ the composition map \begin{equation}\label{composition} H^{s'}(\R^n) \times \Ds^s(\R^n) \to H^{s'}(\R^n), \quad (f,\varphi) \mapsto f \circ \varphi \end{equation} is continuous -- see Cantor \cite{cantor_thesis} and Inci, Topalov, Kappeler \cite{composition}. 
That $\Ds^s(\R^n)$ is the right choice as configuration space for \eqref{E} in Lagrangian coordinates is justified by the fact that every $u \in C^0\big([0,T];H^s(\R^n;\R^n)\big)$, $s > n/2+1$, integrates uniquely to a $\varphi \in C^1\big([0,T];\Ds^s(\R^n)\big)$ fulfilling \[ \partial_t \varphi(t) = u(t) \circ \varphi(t) \quad \mbox{for all } 0 \leq t \leq T \] -- see Fischer, Marsden \cite{fischer} or Inci \cite{thesis} for an alternative proof. \\ \\ For the rest of this section we assume $n \geq 2$, $s > n/2+1$ and for $X,Y$ real Banach spaces we use the notation $L^2(X;Y)$ for the real Banach space of continuous bilinear maps from $X \times X$ to $Y$. With this we can state the main result of this paper: \begin{Th}\label{th_main} Let $n \geq 2$ and $s > n/2+1$. Then there is a real analytic map \[ \Gamma:\Ds^s(\R^n) \to L^2\big(H^s(\R^n;\R^n);H^s(\R^n;\R^n)\big), \quad \varphi \mapsto [(v,w) \mapsto \Gamma_\varphi(v,w)] \] called the Christoffel map for which the geodesic equation \begin{equation}\label{geodesic_eq} \partial_t^2 \varphi = \Gamma_\varphi(\partial_t \varphi,\partial_t \varphi);\quad \varphi(0)=\operatorname{id},\partial_t \varphi(0)=u_0 \in H^s(\R^n;\R^n) \end{equation} is a description of \eqref{E} in Lagrangian coordinates. More precisely, any $\varphi$ solving \eqref{geodesic_eq} on $[0,T]$, $T > 0$, with $\operatorname{div} u_0 = 0$ generates a solution to \eqref{E} by the formula $u:=\partial_t \varphi \circ \varphi^{-1}$ and on the other hand any $u$ solving \eqref{E} on $[0,T]$ integrates to a $\varphi$ solving \eqref{geodesic_eq} on $[0,T]$. \end{Th} By ODE theory -- see Dieudonn\'e \cite{dieudonne} -- and the continuity of the composition map \eqref{composition} we immediately get the following corollary (this result, using a different method, goes back to Kato \cite{kato}): \begin{Coro}\label{coro_kato} Let $n \geq 2$ and $s > n/2+1$. Then \eqref{E} is locally well-posed in $H^s(\R^n)$.
\end{Coro} Connected to a geodesic equation like \eqref{geodesic_eq} is the notion of an exponential map -- see Lang \cite{lang}. The domain of definition for the exponential map is the set $U \subseteq H^s(\R^n;\R^n)$ consisting of initial values $u_0 \in H^s(\R^n;\R^n)$ for which the geodesic equation \eqref{geodesic_eq} has a solution on the interval $[0,1]$. It turns out that $U$ is star-shaped with respect to $0$ and is an open neighborhood of $0$. With this we define \begin{Def}\label{def_exp} The exponential map is defined as \[ \exp:U \to \Ds^s(\R^n),\quad u_0 \mapsto \varphi(1;u_0) \] where $\varphi(1;u_0)$ denotes the value of the solution $\varphi$ of \eqref{geodesic_eq} at time $t=1$ for the initial condition $\partial_t \varphi(0)=u_0$. \end{Def} By ODE theory we know that $\exp$ is a real analytic map. Moreover we can describe every solution of \eqref{geodesic_eq} by considering the curves $t \mapsto \exp(t u_0)$, as is usual for geodesic equations. A further corollary of Theorem \ref{th_main} is \begin{Coro}\label{coro_analytic_trajectory} The trajectories of the fluid particles moving according to \eqref{E} are analytic. \end{Coro} \begin{proof}[Proof of Corollary \ref{coro_analytic_trajectory}] Fix $x \in \R^n$ and define $\varphi(t):=\exp(t u_0)$. Then the trajectory of the fluid particle which starts at time $t=0$ at $x$ is given by $t \mapsto \varphi(t,x)$. By Theorem \ref{th_main} we know that \[ [0,T] \to H^s(\R^n;\R^n),\quad t \mapsto \varphi(t)-\operatorname{id} \] is analytic. Here $T > 0$ is any time up to which the fluid motion exists. As $s > n/2+1$ we know by the Sobolev imbedding that the evaluation map at $x \in \R^n$ \[ H^s(\R^n) \to \R,\quad f \mapsto f(x) \] is a continuous linear map. Thus $t \mapsto \varphi(t,x)-x$ is analytic. Hence the claim.
\end{proof} \emph{Related work}: An ODE-type approach to \eqref{E} via a Lagrangian formulation is already present in the works of Lichtenstein \cite{lichtenstein} and Gunter \cite{gunter}. One can also obtain analyticity in Lagrangian coordinates by using their successive approximation procedure.\\ The idea to express \eqref{E} as a geodesic equation on the ''Lie group'' $\Ds$, the group of diffeomorphisms, goes back to Arnold \cite{arnold}. In Ebin, Marsden \cite{ebin_marsden}, Ebin and Marsden worked out Arnold's idea by proving the analog of Theorem \ref{th_main} for the Sobolev spaces $H^s(M)$, where $M$ is a compact, smooth and oriented manifold of dimension $n$ and $s > n/2+1$, with the difference that they proved the Christoffel map $\Gamma$ to be smooth and not analytic (it is not so clear to us whether $\Gamma$ is analytic for all these $M$). Later Cantor \cite{cantor} showed the analog of Theorem \ref{th_main} for weighted Sobolev spaces $H^s_w(\R^n)$ on the whole space, $s > n/2+1$ (Cantor stated it with $\Gamma$ smooth, but one can show that his $\Gamma$ is analytic). In Serfati \cite{serfati} the analog of Theorem \ref{th_main} was shown for $C^{k,\alpha}$-spaces over $\R^n$, $k \geq 1$ and $0 < \alpha < 1$. Most recently, analytic dependence in Lagrangian coordinates was shown in the case of Sobolev spaces $H^s(\mathbb T^n)$, $s > n/2+1$, in Shnirelman \cite{shnirelman} and in the case of H\"older spaces $C^{1,\alpha}(\mathbb T^n)$, $0 < \alpha < 1$, in Frisch, Zheligovsky \cite{frisch} for fluid motion on the $n$-dimensional torus $\mathbb T^n=\R^n/\mathbb Z^n$.\\ \\ This paper is more or less an excerpt from the thesis Inci \cite{thesis}; for omitted proofs, and for references to where they can be found, we refer to Inci \cite{thesis}. \section{Alternative Eulerian description}\label{section_alternative} The goal of this section is to give an alternative formulation of \eqref{E} by replacing $\nabla p$ with an expression in $u$.
For this we will use an idea of Chemin \cite{chemin}. Throughout this section we assume $n \geq 2$ and $s > n/2+1$.\\ To motivate the approach, we apply $\operatorname{div}$ to the first equation in \eqref{E} and use $\operatorname{div} u=0$ to get \begin{equation}\label{laplace_p} -\Delta p = \sum_{j,k=1}^n \partial_j u_k \partial_k u_j = \sum_{j,k=1}^n \partial_j \partial_k (u_j u_k). \end{equation} In order to invert the Laplacian $\Delta$ we will use a cut-off in Fourier space. For this we denote by $\chi$ the characteristic function of the closed unit ball in $\R^n$, i.e. $\chi(\xi)=1$ for $|\xi| \leq 1$ and $\chi(\xi)=0$ otherwise. The continuous linear operator $\chi(D)$ on $L^2(\R^n):=L^2_\R(\R^n)$ is defined by \[ \chi(D):L^2(\R^n) \to L^2(\R^n),\quad f \mapsto \mathcal F^{-1}\left[\chi(\xi)\mathcal F[f](\xi) \right] \] where $\mathcal F$ is the Fourier transform and $\mathcal F^{-1}$ its inverse. We define the Fourier transform of $g \in L^1(\R^n)$ as the following complex-valued function $\mathcal F[g]:\R^n \to \mathbb C$ (with the usual extension to $L^2(\R^n)$) \[ \mathcal F[g](\xi) := \frac{1}{(2\pi)^{n/2}} \int_{\R^n} e^{-ix \cdot \xi} g(x) \;dx,\quad \xi \in \R^n \] where $x \cdot \xi=x_1 \xi_1 + \ldots + x_n \xi_n$ is the Euclidean inner product in $\R^n$. We have for $s_1,s_2 \geq 0$ \begin{equation}\label{chi_smoothing} ||\chi(D)f||_{s_1+s_2} \leq 2^{s_2/2} ||f||_{s_1},\quad \forall f \in H^{s_1}(\R^n) \end{equation} where $||g||_{s'}:=||\,(1+|\xi|^2)^{s'/2} \, \mathcal F[g](\xi) \;||_{L^2}$ for $g \in H^{s'}(\R^n)$, $s' \geq 0$. We use \eqref{laplace_p} to rewrite $-\nabla p$ \[ -\nabla p = \nabla \left(\Delta^{-1} \big(1-\chi(D)\big) \sum_{j,k=1}^n \partial_j u_k \partial_k u_j + \Delta^{-1} \chi(D) \sum_{j,k=1}^n \partial_j \partial_k (u_j u_k)\right).
\] Using this expression we replace \eqref{E} by \begin{equation}\label{alternative_E} \partial_t u + (u \cdot \nabla) u = \nabla B(u,u),\quad u(0)=u_0 \in H^s(\R^n;\R^n) \end{equation} where $B(v,w)=B_1(v,w)+B_2(v,w)$ for $v,w \in H^s(\R^n;\R^n)$ with \[ B_1(v,w)=\Delta^{-1} \big(1-\chi(D)\big) \sum_{j,k=1}^n \partial_j v_k \partial_k w_j \] and \[ B_2(v,w)=\Delta^{-1} \chi(D) \sum_{j,k=1}^n \partial_j \partial_k(v_j w_k). \] As $\Delta^{-1}\big(1-\chi(D)\big):H^{s-1}(\R^n) \to H^{s+1}(\R^n)$ is a continuous linear map we get by the Banach algebra property of $H^{s-1}(\R^n)$ that \begin{eqnarray*} B_1:H^s(\R^n;\R^n) \times H^s(\R^n;\R^n) &\to& H^{s+1}(\R^n)\\ (v,w) &\mapsto& \Delta^{-1}\big(1-\chi(D)\big) \sum_{j,k=1}^n \partial_j v_k \partial_k w_j \end{eqnarray*} is a continuous bilinear map. And as $\Delta^{-1} \chi(D) \partial_j \partial_k:H^s(\R^n) \to H^{s+1}(\R^n)$ is a continuous linear map for any $1 \leq j,k \leq n$ we get by the Banach algebra property of $H^s(\R^n)$ that \begin{eqnarray*} B_2:H^s(\R^n;\R^n) \times H^s(\R^n;\R^n) &\to& H^{s+1}(\R^n)\\ (v,w) &\mapsto& \Delta^{-1} \chi(D) \sum_{j,k=1}^n \partial_j \partial_k (v_j w_k) \end{eqnarray*} is a continuous bilinear map. Altogether we see that \[ \nabla B:H^s(\R^n;\R^n) \times H^s(\R^n;\R^n) \to H^s(\R^n;\R^n) \] is a continuous bilinear map. Equation \eqref{alternative_E} is to be understood in the sense that $u$ is a solution to \eqref{alternative_E} on $[0,T]$ for some $T>0$ if $u \in C^0\big([0,T];H^s(\R^n;\R^n)\big)$ with \begin{equation}\label{alternative_RE} u(t) = u_0 + \int_0^t \nabla B\big(u(\tau),u(\tau)\big) - (u(\tau) \cdot \nabla) u(\tau) \;d\tau \end{equation} for any $0 \leq t \leq T$. By the Banach algebra property of $H^{s-1}(\R^n)$ the integrand in \eqref{alternative_RE} lies in $C^0\big([0,T];H^{s-1}(\R^n;\R^n)\big)$. 
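To make the role of the cutoff $\chi(D)$ concrete, the following Python sketch applies the analogous sharp Fourier cutoff on a $2\pi$-periodic grid -- a periodic stand-in for the whole-space operator defined via the $L^2(\R^n)$ Fourier transform above. On such a grid the wavenumbers are integers, so $|\xi|\leq 1$ keeps exactly the modes $k \in \{-1,0,1\}$, and the operator is, like $\chi(D)$, a projector.

```python
import cmath
import math

def chi_D(samples):
    """Sharp Fourier cutoff on a 2*pi-periodic grid: keep modes |k| <= 1.

    A periodic toy version of the whole-space operator chi(D); the DFT
    coefficients are computed by direct summation (fine for small grids)."""
    n = len(samples)
    coeffs = {}
    for k in (-1, 0, 1):
        coeffs[k] = sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                        for j in range(n)) / n
    return [sum(coeffs[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in (-1, 0, 1)).real
            for j in range(n)]
```

Applied to $\sin(x) + \sin(5x)$, the cutoff removes the $|k|=5$ modes and returns $\sin(x)$; applying it twice changes nothing, reflecting the projector property $\chi(D)^2 = \chi(D)$.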
\\ To consider \eqref{alternative_E} instead of \eqref{E} is justified by the following proposition \begin{Prop}\label{prop_alternative} Let $n \geq 2$ and $s > n/2+1$. If $u \in C^0\big([0,T];H^s(\R^n;\R^n)\big)$, $T > 0$, is a solution to \eqref{E} then it is also a solution to \eqref{alternative_E}. Conversely, let $u_0 \in H^s(\R^n;\R^n)$ with $\operatorname{div} u_0 = 0$. Then if $u \in C^0\big([0,T];H^s(\R^n;\R^n)\big)$ is a solution to \eqref{alternative_E} then it is also a solution to \eqref{E} with $p=-B(u,u)$. \end{Prop} Proposition \ref{prop_alternative} shows in particular that for solutions of \eqref{alternative_E} the condition $\operatorname{div} u(t)=0$ is preserved if it is true for $t=0$. The proof of Proposition \ref{prop_alternative} can be found in Inci \cite{thesis}. \section{Proof of Theorem \ref{th_main}}\label{section_proof} The goal of this section is to prove Theorem \ref{th_main}. To do that we will formulate the alternative equation \eqref{alternative_E} in Lagrangian coordinates. As usual we assume $n \geq 2$ and $s > n/2+1$. To motivate the approach consider $u$ solving \eqref{alternative_E} and $\varphi$ its flow, i.e. $\varphi$ is determined by the relation $\partial_t \varphi = u \circ \varphi$. Taking the $t$-derivative in this relation we get \[ \partial_t^2 \varphi = \big(\partial_t u + (u \cdot \nabla) u \big) \circ \varphi = \nabla B(u,u) \circ \varphi. \] Replacing $u$ by $u=\partial_t \varphi \circ \varphi^{-1}$ we get \[ \partial_t^2 \varphi = \nabla B(\partial_t \varphi \circ \varphi^{-1},\partial_t \varphi \circ \varphi^{-1}) \circ \varphi. \] So our candidate for the $\Gamma$ in Theorem \ref{th_main} is \begin{equation}\label{def_gamma} \Gamma_\varphi(v,w) := \nabla B(v \circ \varphi^{-1},w \circ \varphi^{-1}) \circ \varphi. 
\end{equation} The key ingredient for the proof of Theorem \ref{th_main} is the following proposition \begin{Prop}\label{prop_analytic_gamma} The map \[ \Gamma:\Ds^s(\R^n) \to L^2\big(H^s(\R^n;\R^n);H^s(\R^n;\R^n)\big),\quad \varphi \mapsto [ (v,w) \mapsto \Gamma_\varphi(v,w)] \] with $\Gamma_\varphi(v,w)$ as in \eqref{def_gamma} is real analytic. \end{Prop} Before we prove Proposition \ref{prop_analytic_gamma} we have to make some preparations. We introduce the following subspace of $L^2(\R^n)$ \[ H^\infty_\Xi(\R^n) := \big\{ g \in L^2(\R^n) \;\big|\; \operatorname{supp} \mathcal F[g] \subseteq \Xi \big\} \] where $\Xi \subseteq \R^n$ is the closed unit ball and $\operatorname{supp} f$ denotes the support of a function $f$. The space $H^\infty_\Xi(\R^n)$ is a closed subspace of $L^2(\R^n)$, lies in $\cap_{s' \geq 0} H^{s'}(\R^n)$ and consists of entire functions (i.e. analytic functions on $\R^n$ with convergence radius $R=\infty$). Note that $\chi(D)$ maps $H^{s'}(\R^n)$, $s' \geq 0$, into $H^\infty_\Xi(\R^n)$. In the sequel we will also use the vector-valued analog $H^\infty_\Xi(\R^n;\R^n)=\{(f_1,\ldots,f_n) \;|\; f_k \in H^\infty_\Xi(\R^n), \; \forall 1 \leq k \leq n\}$. The space $H^\infty_\Xi$ has good properties with regard to the composition map (in contrast to its bad behaviour in the $H^s$ space -- see Inci \cite{thesis}):\\ Denoting by $L(X;Y)$, $X,Y$ real Banach spaces, the real Banach space of continuous linear maps from $X$ to $Y$ we have \begin{Lemma}\label{lemma_composition1} Let $n \geq 2$ and $s > n/2+1$. Then \[ \Ds^s(\R^n) \to L\big(H^\infty_\Xi(\R^n);H^s(\R^n)\big),\quad \varphi \mapsto [f \mapsto f \circ \varphi] \] is real analytic. \end{Lemma} \noindent We also have \begin{Lemma}\label{lemma_composition2} Let $n \geq 2$, $s > n/2+1$ and $0 \leq s' \leq s$. Then \[ \Ds^s(\R^n) \to L\big(H^{s'}(\R^n);H^\infty_\Xi(\R^n)\big),\quad \varphi \mapsto [f \mapsto \chi(D)(f \circ \varphi^{-1})] \] is real analytic.
\end{Lemma} The proofs of Lemma \ref{lemma_composition1} and Lemma \ref{lemma_composition2} can be found in Inci \cite{thesis}. We split the proof of Proposition \ref{prop_analytic_gamma} according to $B=B_1+B_2$ into two lemmas. In the sequel we will use the notation $R_\varphi$ for the right-composition, i.e. $R_\varphi f:=f \circ \varphi$. Note that $R_\varphi^{-1}=R_{\varphi^{-1}}$. \begin{Lemma}\label{lemma_b1} Let $n \geq 2$ and $s > n/2+1$. Then \begin{eqnarray*} \Ds^s(\R^n) &\to& L^2\big(H^s(\R^n;\R^n);H^s(\R^n;\R^n)\big)\\ \varphi &\mapsto& [(v,w) \mapsto \nabla B_1(v \circ \varphi^{-1},w \circ \varphi^{-1}) \circ \varphi] \end{eqnarray*} is real analytic. \end{Lemma} \begin{proof}[Proof of Lemma \ref{lemma_b1}] Recall that $\nabla B_1(v,w)$ is given by \[ \nabla B_1(v,w)= \nabla \left( \Delta^{-1} \big(1- \chi(D)\big) \sum_{j,k=1}^n \partial_j v_k \partial_k w_j \right). \] It will be convenient to write $\Delta^{-1} \big(1-\chi(D)\big)$ as \begin{equation}\label{convenient} \Delta^{-1} \big(1-\chi(D)\big) = \left(\chi(D) + \Delta \big(1-\chi(D)\big)\right)^{-1} - \chi(D). \end{equation} In a first step we will prove that for $A:=\chi(D) + \Delta \big(1-\chi(D)\big)$ \[ \Ds^s(\R^n) \to L\big(H^s(\R^n);H^{s-2}(\R^n)\big),\quad \varphi \mapsto [f \mapsto R_\varphi A R_\varphi^{-1} f] \] is real analytic. From Lemma \ref{lemma_composition1} and \ref{lemma_composition2} we know that \[ \Ds^s(\R^n) \to L\big(H^s(\R^n);H^s(\R^n)\big),\quad \varphi \mapsto [f \mapsto R_\varphi \chi(D) R_\varphi^{-1} f] \] is real analytic. The same is of course true if we replace above $\chi(D)$ by $1-\chi(D)$. To proceed we prove that for any $1 \leq s' \leq s$ and $1 \leq k \leq n$ \begin{equation}\label{conjugation} \Ds^s(\R^n) \to L\big(H^{s'}(\R^n);H^{s'-1}(\R^n)\big),\quad \varphi \mapsto [f \mapsto R_\varphi \partial_k R_\varphi^{-1} f] \end{equation} is real analytic. 
We clearly have \[ R_\varphi \partial_k R_\varphi^{-1} f = \sum_{j=1}^n \partial_j f C_{jk} \] where $(C_{jk})_{1 \leq j,k \leq n} = [d\varphi]^{-1}$, i.e.\ the inverse matrix of the Jacobian of $\varphi$. Note that the entries of $(C_{jk})_{1 \leq j,k \leq n}$ are polynomial expressions in the entries of $[d\varphi]$ divided by $\det(d\varphi)$. As $H^{s-1}$ is a Banach algebra and division by $\det(d\varphi)$ an analytic operation -- see Inci \cite{thesis} -- we get by \eqref{multiplication} that $\varphi \mapsto R_\varphi \partial_k R_\varphi^{-1}$ is real analytic as claimed. Writing \[ R_\varphi \Delta R_\varphi^{-1} = \sum_{k=1}^n R_\varphi \partial_k R_\varphi^{-1} R_\varphi \partial_k R_\varphi^{-1} \] we thus see that $\varphi \mapsto R_\varphi \Delta R_\varphi^{-1}$ is also real analytic. Finally, writing \begin{equation}\label{analytic_A} R_\varphi A R_\varphi^{-1} = R_\varphi \chi(D) R_\varphi^{-1} + R_\varphi \Delta R_\varphi^{-1} R_\varphi \big(1-\chi(D)\big) R_\varphi^{-1} \end{equation} we get that $\varphi \mapsto R_\varphi A R_\varphi^{-1}$ is real analytic. Denoting by $X,Y$ real Banach spaces and by $GL(X;Y) \subseteq L(X;Y)$ the open subset of invertible continuous linear operators from $X$ to $Y$ we know by the Neumann series -- see Dieudonn\'e \cite{dieudonne} -- that \[ \operatorname{inv}:GL(X;Y) \to GL(Y;X),\quad T \mapsto T^{-1} \] is real analytic. Therefore we get from the analyticity of \eqref{analytic_A} that \[ \Ds^s(\R^n) \to L\big(H^{s-2}(\R^n);H^s(\R^n)\big),\quad \varphi \mapsto R_\varphi A^{-1} R_\varphi^{-1} = \left(R_\varphi A R_\varphi^{-1}\right)^{-1} \] is real analytic. This implies by \eqref{convenient} that \[ \Ds^s(\R^n) \to L\big(H^{s-2}(\R^n);H^s(\R^n)\big),\quad \varphi \mapsto R_\varphi \Delta^{-1} \big(1-\chi(D)\big) R_\varphi^{-1} \] is real analytic.
By letting $\Delta^{-1}\big(1-\chi(D)\big)$ act componentwise we write \begin{multline*} \nabla B_1(v \circ \varphi^{-1},w \circ \varphi^{-1}) \circ \varphi = \\ \big(R_\varphi \Delta^{-1} \big(1-\chi(D)\big) R_\varphi^{-1}\big) \big( R_\varphi \nabla R_\varphi^{-1} \big) \sum_{j,k=1}^n \big(R_\varphi \partial_j R_\varphi^{-1} v_k \big) \big(R_\varphi \partial_k R_\varphi^{-1} w_j\big) \end{multline*} and we get from the considerations above that \begin{eqnarray*} \Ds^s(\R^n) &\to& L^2\big(H^s(\R^n;\R^n);H^s(\R^n;\R^n)\big)\\ \varphi &\mapsto& [(v,w) \mapsto \nabla B_1(v \circ \varphi^{-1},w \circ \varphi^{-1}) \circ \varphi] \end{eqnarray*} is real analytic. \end{proof} \begin{Lemma}\label{lemma_b2} Let $n \geq 2$ and $s > n/2+1$. Then \begin{eqnarray*} \Ds^s(\R^n) &\to& L^2\big(H^s(\R^n;\R^n);H^s(\R^n;\R^n)\big)\\ \varphi &\mapsto& [(v,w) \mapsto \nabla B_2(v \circ \varphi^{-1},w \circ \varphi^{-1}) \circ \varphi] \end{eqnarray*} is real analytic. \end{Lemma} \begin{proof}[Proof of Lemma \ref{lemma_b2}] We write \begin{equation}\label{b2_expression} \nabla B_2(v \circ \varphi^{-1},w \circ \varphi^{-1}) \circ \varphi = \sum_{j,k=1}^n R_\varphi \nabla \Delta^{-1} \partial_j \partial_k \chi(D) R_\varphi^{-1}(v_j w_k). \end{equation} By Lemma \ref{lemma_composition2} we know that $\varphi \mapsto \chi(D) R_\varphi^{-1}$ is real analytic with values in $L\big(H^s(\R^n);H^\infty_\Xi(\R^n)\big)$. Moreover for any $1 \leq j,k \leq n$ \[ \nabla \Delta^{-1} \partial_j \partial_k:H^\infty_\Xi(\R^n) \to H^\infty_\Xi(\R^n;\R^n) \] is a continuous linear map. By Lemma \ref{lemma_composition1} we then see that the expression \eqref{b2_expression} is real analytic in $\varphi$ showing the claim. \end{proof} \begin{proof}[Proof of Proposition \ref{prop_analytic_gamma}] As $B=B_1+B_2$ the proof follows from Lemma \ref{lemma_b1} and Lemma \ref{lemma_b2}. \end{proof} Now we can prove the main theorem. 
\begin{proof}[Proof of Theorem \ref{th_main}] The analyticity statement for $\Gamma$ follows from Proposition \ref{prop_analytic_gamma}. To prove the first part of the second statement consider $\varphi \in C^2\big([0,T];\Ds^s(\R^n)\big)$, $T > 0$, solving \begin{equation}\label{geodesic_eq2} \partial_t^2 \varphi= \Gamma_\varphi(\partial_t \varphi,\partial_t \varphi),\quad \varphi(0)=\operatorname{id},\quad \partial_t \varphi(0)=u_0 \in H^s(\R^n;\R^n). \end{equation} We define $u:=\partial_t \varphi \circ \varphi^{-1}$. By the continuity of the group operations in $\Ds^s(\R^n)$ and by \eqref{composition} we know that $u \in C^0\big([0,T];H^s(\R^n;\R^n)\big)$. By the Sobolev imbedding we have $\varphi,\partial_t \varphi \in C^1\big([0,T]\times \R^n;\R^n\big)$. By the inverse function theorem we also have $\varphi^{-1} \in C^1\big([0,T] \times \R^n;\R^n\big)$. Hence $u \in C^1\big([0,T]\times \R^n;\R^n\big)$. Taking the pointwise $t$-derivative in the relation $\partial_t \varphi(t,x) = u(t,\varphi(t,x))$ leads to \begin{equation}\label{t_derivative} \partial_t^2 \varphi = \big(\partial_t u + (u \cdot \nabla)u\big) \circ \varphi. \end{equation} Using the expression \eqref{def_gamma} for $\Gamma_\varphi(\partial_t \varphi,\partial_t \varphi)$ and using $u=\partial_t \varphi \circ \varphi^{-1}$ we get pointwise (for any $(t,x) \in [0,T] \times \R^n$ without writing the argument explicitly) \[ \nabla B(u,u) \circ \varphi = (\partial_t u + (u \cdot \nabla) u) \circ \varphi. \] Cancelling the composition with $\varphi$ on both sides, we get by the fundamental lemma of calculus for any $(t,x) \in [0,T] \times \R^n$ (without writing the $x$-argument) \begin{equation}\label{fundamental_lemma} u(t)=u_0 + \int_0^t \nabla B\big(u(\tau),u(\tau)\big) - \big(u(\tau) \cdot \nabla \big) u(\tau) \;d\tau.
\end{equation} The integrand in \eqref{fundamental_lemma} lies in $C^0\big([0,T];H^{s-1}(\R^n;\R^n)\big)$ so that \eqref{fundamental_lemma} is actually an identity in $H^{s-1}$, which shows that $u$ is a solution to the alternative formulation \eqref{alternative_E}.\\ Now it remains to prove the other direction. We take $u$ solving the alternative formulation \eqref{alternative_E}. We know that there is a unique $\varphi \in C^1\big([0,T];\Ds^s(\R^n)\big)$ solving \[ \partial_t \varphi = u \circ \varphi,\quad \varphi(0)=\operatorname{id}. \] The claim is that $\varphi$ solves the geodesic equation \eqref{geodesic_eq2}. First note that by the fact that $u$ is a solution to the alternative formulation \eqref{alternative_E} and by the Sobolev imbedding we have $u,\varphi \in C^1([0,T] \times \R^n;\R^n)$. Thus we also have $\partial_t \varphi \in C^1([0,T] \times \R^n;\R^n)$. Taking the $t$-derivative in $\partial_t \varphi=u \circ \varphi$ we get the same expression as in \eqref{t_derivative}. Using that $u$ is a solution to the alternative formulation \eqref{alternative_E} we get by the fundamental lemma of calculus pointwise for any $(t,x) \in [0,T] \times \R^n$ (dropping the $x$-argument) \begin{eqnarray*} \partial_t \varphi(t) &=& u_0 + \int_0^t \nabla B\big(\partial_t \varphi(\tau) \circ \varphi(\tau)^{-1},\partial_t \varphi(\tau) \circ \varphi(\tau)^{-1}\big) \circ \varphi(\tau) \;d\tau\\ &=& u_0 + \int_0^t \Gamma_{\varphi(\tau)}\big(\partial_t \varphi(\tau),\partial_t \varphi(\tau)\big) \;d\tau. \end{eqnarray*} But as the integrand is a continuous curve in $H^s(\R^n;\R^n)$ we see that $t \mapsto \varphi(t)$ solves the geodesic equation \eqref{geodesic_eq2}. This completes the proof. \end{proof} In view of the condition $\operatorname{div} u=0$, the state space of \eqref{E} in Lagrangian coordinates is actually $\Ds^s_\mu(\R^n) \subseteq \Ds^s(\R^n)$, the subgroup of volume-preserving diffeomorphisms, i.e.
\[ \Ds^s_\mu(\R^n) := \big\{ \varphi \in \Ds^s(\R^n) \;\big| \; \det(d\varphi) \equiv 1 \big\}. \] One has -- see Inci \cite{thesis} for the proof \begin{Th}\label{th_submanifold} Let $n \geq 2$ and $s > n/2+1$. Then $\Ds^s_\mu(\R^n)$ is a closed real analytic submanifold of $\Ds^s(\R^n)$. \end{Th} So the dynamics of \eqref{E} in Lagrangian coordinates is real analytic on $\Ds^s_\mu(\R^n)$, or, expressed in terms of the exponential map \begin{Coro} Let $n \geq 2$ and $s > n/2+1$. Then \[ \exp:U \cap H^s_\sigma(\R^n;\R^n) \to \Ds^s_\mu(\R^n) \] is real analytic. \end{Coro} \bibliographystyle{plain}
\section{Introduction} Coherent structures at different spatial and temporal scales are a prominent feature of many turbulent fluid flows occurring in nature and in engineering applications \citep{Yaglom1967,Holmes1996,FazleHussain1986}. Examples include large-scale vortices, wakes, convection rolls and thermal plumes in Rayleigh--B\'enard convection (RBC) \citep{ahlers2009,Lohse2010}, Taylor rolls in Taylor--Couette flow \citep{grossmann2016high}, jets, travelling waves, very-large-scale motions \citep{Hutchins2007a,Smits2011} and low-momentum zones \citep{Meinhart1995} that develop in wall-bounded turbulent boundary layers (BLs). These structures are known to have manifold significant effects in turbulent flows, for instance influence on heat and mass transport, the occurrence of extreme fluctuations or enhanced drag due to their interaction with near-wall dynamics in turbulent BLs \citep{Monty2007, Marusic2010a, Katul2019}. Improving our knowledge of multi-scale spatio-temporal coherence and the underlying physics is of paramount importance as it would lead to a better fundamental understanding of turbulence, specifically in terms of model-building and turbulence control. However, the co-existence of several coherent structures makes the identification and the extraction of particular spatio-temporal features difficult, which has led to a growing need for data-driven methods designed to identify and extract patterns. Modal decomposition, as an umbrella term for a variety of structurally similar methods, identifies structures by decomposing a given dataset in a suitable set of basis functions, or modes. Fourier analysis constitutes perhaps the most well-known and widely used example of a modal decomposition technique. A more sophisticated example is Proper Orthogonal Decomposition (POD) \citep{berkooz1993proper,podvin1998low,Rowley2004,bailon2012low}, where each mode describes a flow structure according to its energy content.
However, as the POD modes do not distinguish between different temporal signals, they usually contain more than one characteristic frequency and thus cannot yield information on temporal coherence. Dynamic Mode Decomposition (DMD), by contrast, decomposes a dataset into spatio-temporal coherent structures \citep{Schmid2010}, with {\em dynamic modes} obtained as eigenmodes of a high-dimensional linear approximation of the dynamics. Moreover, DMD has solid mathematical foundations in the context of nonlinear dynamical systems theory. Under certain conditions it represents a finite-dimensional approximation to the Koopman operator \citep{Rowley2009,Williams2015}, a linear but infinite-dimensional representation of a nonlinear dynamical system \citep{Koopman1931,Mezic2005,Mezic2019}. DMD results have an intuitive physical interpretation as each dynamic mode corresponds to a single frequency and growth or decay rate. Therefore, it is a well-suited data-driven method for the analysis of complex datasets and for model reduction. Since its introduction by Peter Schmid in 2010 \citep{Schmid2010}, DMD has had a history of successful applications in fluid dynamics, such as obtaining a low-dimensional dynamic model of the cylinder wake flow \citep{Tissot2014,bagheri2013}, generating good initial guesses for unstable periodic orbits in turbulent channel flow \citep{Page2020}, flow control \citep{Brunton2015,Proctor2016,Rowley2017}, aerodynamics \citep{Ghoreyshi2014}, and more generally pattern recognition \citep{Jovanovic2014,Brunton2016}. Most DMD applications consist of post-processing a time series of experimental or computational data, where most implementations require access to all data samples at the same instant in time. However, the size of highly resolved turbulent flow data usually precludes saving or loading the entire dataset into memory. Therefore, only a few studies so far have applied DMD to highly turbulent flows.
These constraints can be circumvented by a DMD implementation that allows for incremental data updates \citep{Hemati2014,Anantharamu2019,Zhang2019}, such that the DMD calculation proceeds alongside the main process, such as Direct Numerical Simulations (DNS) or real-time Particle Image Velocimetry (PIV). {\em Streaming} DMD (sDMD) \citep{Hemati2014} is such a method, which requires only two data samples at a given instant in time and converges to the same results as classical DMD. In what follows we focus on sDMD as a promising method for the analysis of turbulent flows. The present article is intended to serve two purposes: (a) to demonstrate the applicability of streaming DMD to large datasets of highly turbulent flows relevant to fundamental science and engineering applications, and (b) to analyse the spatio-temporal dynamics of the flow in sub-domains of particular interest. The batch-process streaming version of the DMD algorithm \citep{Hemati2014} is applied to three datasets consisting of time series obtained in DNS of three different turbulent flows: rapidly rotating RBC, horizontal convection (HC) and the asymptotic suction boundary layer (ASBL). Despite their physical differences, these three systems share a few features that render them interesting and suitable as test cases to demonstrate the advantages of sDMD for the analysis of turbulent flows. First, all three cases are paradigmatic examples of fluid-dynamic systems of interest in geophysical fluid dynamics and engineering applications. Rapidly rotating RBC is of relevance whenever rotation and thermal convection are the key physical processes \citep{ahlers2009, siggia1994}, such as in the dynamics of planetary cores. Horizontal convection \citep{Hughes2008,shishkina2016heat} occurs in the ocean, which is mostly heated and cooled by its upper surface being in contact with the atmosphere. The ASBL \citep{Jones1963,Schlichting1979} is a flat-plate BL with a constant BL thickness in the streamwise direction.
The latter is achieved by removing fluid through the pores in the bottom plate. The ASBL therefore allows the application of techniques developed for parallel wall-bounded shear flows to an open flow. Second, all three systems host spatio-temporally coherent structures. In rapidly rotating RBC, this is the boundary zonal flow, a large-scale travelling-wave structure confined to the lateral BLs \citep{Zhang2020,favier2020robust, Shishkina2020}. HC features two characteristic processes that operate on very different time scales, i.~e., plume emission and slow oscillatory dynamics in the bulk \citep{Reiter2020}, with the former being an order of magnitude faster than the latter. The ASBL shows coherent low-momentum zones in the free stream, as do many wall-bounded shear flows and freely evolving BLs \citep{Meinhart1995}, in the present dataset with a slow spanwise drift. Third, the size of the datasets presents challenges in all three cases that can be mitigated by the incremental nature of DMD. For rapidly rotating RBC and HC, the fine grids required to properly resolve the small-scale turbulent dynamics result in large datasets, as is usual for DNS of turbulent flows. In the case of the ASBL, a further difficulty lies in the slow dynamics of the low-momentum zone, as an analysis thereof requires very long time series. This article is organised as follows. Section \ref{sec:dmd} provides a summary of both classical DMD \citep{Schmid2010} and streaming DMD \citep{Hemati2014}, where we highlight a few subtle differences concerning technical steps and compare our implementations of DMD and sDMD using a standard publicly available dataset -- DNS of a developing von-K\'arm\'an vortex street.
The main results of our analysis concerning turbulent flows are contained in Sec.~\ref{sec:results}, beginning with rapidly rotating Rayleigh--B\'enard convection in Sec.~\ref{sec:rbc}, followed by horizontal convection in Sec.~\ref{sec:hc} and the asymptotic suction BL in Sec.~\ref{sec:asbl}. The paper ends with conclusions and an outlook. \section{Dynamic mode decomposition} \label{sec:dmd} Before describing the specific features and advantages of streaming DMD (sDMD) \citep{Hemati2014}, we briefly summarise the basic ideas and the classical singular value decomposition (SVD) based DMD algorithm \citep{Schmid2010}. For simplicity we restrict ourselves here to the case of equidistant data sequences; for a more general discussion see \cite{kutz2016dynamic}. Consider a time series of spatially resolved measurement results recorded at a fixed sampling rate $1/\Delta t$ resulting in, say, $N$ equidistant snapshots. Let us further assume that the possibly multidimensional data in each snapshot is flattened into a corresponding $M$-dimensional real vector, such that the time series can be represented by an ordered sequence $(\bm{x}_k)_{\{k=1, \hdots, N\}}$ of column vectors $\bm{x}_k \in \mathbb{R}^M$ for $k \in \{1, \hdots, N\}$. In the present context $\bm{x}_k$ would represent the $k^{\rm th}$ velocity field in a series of $N$ measurements; hence, in particular for highly resolved three-dimensional flow fields, $M \sim (\text{number of grid points per direction})^3$ can quickly become very large. We will come back to this point in due course. The assumption DMD relies upon is the existence of a linear operator $\bm{A} \in \mathbb{R}^{M \times M}$ which approximates the nonlinear dynamics across the interval $\Delta t$, that is \begin{equation} \label{eq:arnoldi} \bm{x}_{k + 1} = \bm{A} \bm{x}_{k} + \bm{\varepsilon}_k \quad \text{for all} \quad k \in \{1, \hdots, N-1\} \ . \end{equation} Here, crucially, $\bm{A}$ does not depend on $k$.
Finally, $\bm{\varepsilon}_k$ denotes an error term that is assumed to be small. The validity of this assumption depends on the ratio of the characteristic time scale of the observed nonlinear dynamics and the sampling interval $\Delta t$. In practice, $\bm{A}$ is chosen by regression over the available data by least-squares minimisation of the $\bm{\varepsilon}_k$ \citep{kutz2016dynamic}. Since the operator $\bm{A}$ describes the spatio-temporal dynamics of the system, its eigenvectors, known as {\em dynamic modes} or somewhat tautologically DMD modes, may be used to disentangle complex spatio-temporal dynamics and to construct low-dimensional models. In what follows we summarize how the dynamic modes may be determined from the data sequence $(\bm{x}_k)_{\{k=1, \hdots, N\}}$, following an SVD-based approach, as this is what is mostly used in practice owing to numerical stability concerns with the more fundamental Krylov-subspace-type approach and for reasons of computational cost reduction. Further details can be found in the original work by \cite{Schmid2010} and the textbook by \cite{kutz2016dynamic}. \subsection{SVD-based DMD} \label{sec:svd-dmd} For what follows it is convenient to combine data sequences that consist of $N-1$ samples and are shifted forwards in time by $\Delta t$, that is $(\bm{x}_k)_{\{k=1, \hdots, N-1\}}$ and $(\bm{x}_k)_{\{k=2, \hdots, N\}}$, into $M \times (N-1)$-dimensional matrices \begin{align} \label{eq:X1} \bm{X} = \bm{X}_1^{N-1} = (x_{jk}) & : = \left( \bm{x}_1 \ \bm{x}_2 \ \cdots \ \bm{x}_{N-1} \right) \ , \\ \label{eq:X2} \bm{Y} = \bm{X}_2^{N} = (y_{jk}) & : = \left( \bm{x}_2 \ \bm{x}_3 \ \cdots \ \bm{x}_{N} \right) \ , \end{align} where $j \in \{1, \hdots, M\}$ is the spatial index and $k$ the temporal index.
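As a sanity check of this least-squares construction, consider noise-free synthetic data generated by a known linear map: the regression then recovers the map exactly. The following NumPy sketch is our own illustration (all names and sizes are arbitrary choices), not code from the present study:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 20                       # spatial dimension, number of snapshots
A_true = rng.standard_normal((M, M))
A_true /= 1.1 * np.abs(np.linalg.eigvals(A_true)).max()  # keep trajectories bounded

# Generate a snapshot sequence x_{k+1} = A_true x_k (all epsilon_k = 0).
snaps = [rng.standard_normal(M)]
for _ in range(N - 1):
    snaps.append(A_true @ snaps[-1])

X = np.column_stack(snaps[:-1])    # x_1 ... x_{N-1}
Y = np.column_stack(snaps[1:])     # x_2 ... x_N

# Best-fit linear operator: A = Y X^+ (least squares over the epsilon_k).
A_fit = Y @ np.linalg.pinv(X)
```

For noise-free data whose snapshots span the full spatial dimension, `A_fit` agrees with `A_true` up to round-off; with noisy data the fit instead minimises the residuals.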
Then Eq.~\eqref{eq:arnoldi} implies \begin{equation} \label{eq:data-evol} y_{jk} = x_{jk+1} = \sum_{l=1}^M a_{jl} x_{lk} + \varepsilon_{jk}, \quad \text{or} \quad \bm{Y} = \bm{A} \bm{X} + \bm{R} \ , \end{equation} where $a_{jl}$ are the entries of the linear operator $\bm{A}$ and $\bm{R} = (\varepsilon_{jk})$ is the matrix of residuals. The best-fit solution for $\bm{A}$ with respect to least-squares minimization of $\bm{R}$ is given by $\bm{A} = \bm{Y}\bm{X}^+$, where $\bm{X}^+$ is the pseudo-inverse of $\bm{X}$. In practice, and in particular in fluid dynamics, $M \gg N$ as the dimension $M$ of the spatial samples usually exceeds the number of temporal samples $N$ by far. Hence, $\bm{A} \in \mathbb{R}^{M \times M}$ is at most of rank $N-1$, which calls for a lower-dimensional approximation of $\bm{A}$, for instance, by projecting $\bm{A}$ on a subspace spanned by, say, $r$ POD modes obtained by calculating the compact SVD of $\bm{X}$, \begin{equation} \label{eq:svd} \bm{X} =\bm{U}_{\bm{X}} \bm{\Sigma}_{\bm{X}} \bm{W}_{\bm{X}}^{T}, \end{equation} where the superscript $T$ denotes the transpose. The truncation number $r$ is bounded from above by the rank of the data matrix $\bm{X}$, which is at most $N-1$. The columns of $\bm{U}_{\bm{X}} \in \mathbb{R}^{M \times r}$ and of $\bm{W}_{\bm{X}} \in \mathbb{R}^{(N-1) \times r}$ are orthonormal, and $\bm{\Sigma}_{\bm{X}} \in \mathbb{R}^{r \times r}$ is a diagonal matrix containing the nonzero singular values of $\bm{X}$. The matrix $\bm{U}_{\bm{X}}$ contains the spatial structures of the data sequence, that is, the POD modes are given by the columns of $\bm{U}_{\bm{X}}$. By substitution of Eq.~\eqref{eq:svd} into Eq.~\eqref{eq:data-evol} and subsequent rearrangement, one obtains \begin{equation} \label{eq:projection} \bm{S} := \bm{U}_{\bm{X}}^{T} \bm{A} \bm{U}_{\bm{X}} \approx \bm{U}_{\bm{X}}^{T} \bm{Y} \bm{W}_{\bm{X}} \bm{\Sigma}_{\bm{X}}^{-1} \in \mathbb{R}^{r \times r} \ .
\end{equation} This equation is to be interpreted in a least-squares optimal sense (hence the absence of the residual), as it can also be obtained by calculating $\bm{A}$ through the pseudo-inverse of $\bm{X}$, which is calculated via SVD, and subsequently projecting $\bm{A}$ onto the $r$-dimensional subspace spanned by the POD modes. Since $\bm{A}$ and $\bm{S}$ are related by a similarity transform, the eigenvalues of $\bm{S}$ correspond to the non-zero eigenvalues of $\bm{A}$. For practical purposes we summarize the SVD-based DMD algorithm \citep{Schmid2010} as follows: \begin{itemize} \item Collect $N$ temporally equidistant samples $\left\{\bm{x}_{1}, \bm{x}_{2}, \bm{x}_{3}, \ldots, \bm{x}_{N}\right\}$, $\bm{x}_{j}\in\mathbb{R}^{M}$, $j \in \{1,\ldots,N \}$. \item Build a matrix $\bm{X} \in \mathbb{R}^{M \times(N-1)}$ out of the first $(N-1)$ snapshots, according to Eq.~\eqref{eq:X1}. \item Calculate the compact SVD of $\bm{X}$ according to Eq.~\eqref{eq:svd}. \item Build a matrix $\bm{Y} \in \mathbb{R}^{M \times(N-1)}$ out of the last $(N-1)$ snapshots, according to Eq.~\eqref{eq:X2}, and combine it with the matrices $\bm{U}_{\bm{X}}$ and $\bm{W}_{\bm{X}}$ to calculate the optimal representation $\bm{S}$ of the linear mapping $\bm{A}$ in the orthogonal basis given by the POD modes according to Eq.~\eqref{eq:projection}. \item Calculate the eigenvectors $\bm{v}_k$ and eigenvalues $\lambda_k$ of $\bm{S}$ for $k \in \{1, \hdots , r\}$. \item Calculate the (projected) dynamic modes $\psi_{k}$ \begin{equation} \psi_{k}=\bm{U}_{\bm{X}} \bm{v}_{k} \ .
\end{equation} \end{itemize} The data vector $\bm{x}$ (in the present context, the velocity field) at time $t$ can then be approximated using $N' \leqslant r$ dynamic modes and their corresponding DMD eigenvalues \begin{equation} \label{eq:approx} \bm{x}(t) \approx \sum_{k=1}^{N'} a_{k} \mathrm{e}^{\left(\sigma_{k}+\mathrm{i} \omega_{k}\right) t} \psi_{k} \ , \end{equation} where $a_k$ are the diagonal elements of $\bm{\Sigma} = \text{diag}(a_1, \hdots, a_r)$, representing the amplitude of each mode, and \begin{equation} \label{eq:eigenvalues} \omega_{k}=\frac{\operatorname{Im}\left(\ln\left(\lambda_{k}\right)\right)}{\Delta t}, \quad \sigma_{k}=\frac{\operatorname{Re}\left(\ln\left(\lambda_{k}\right)\right)}{\Delta t} \end{equation} are the frequency and the temporal growth or decay rate of the $k^{\rm th}$ dynamic mode for $k \in \{1, \hdots , r\}$, respectively. The accuracy of the approximation does not only depend on the number of dynamic modes used to reconstruct the data; it also depends on the truncation number $r$, which determines the accuracy with which the projected dynamic modes have been calculated. Several truncation criteria have been developed to determine a suitable value for $r$, such as the sparsity-promoting algorithm \citep{Jovanovic2014}, the Optimal Singular Value Hard Threshold \citep{Gavish2014} and an improved rank estimate based on spatial sampling \citep{Kou2017}. \subsection{Streaming DMD} \label{sec:sdmd} Classical DMD requires access to the entire data sequence at once, which precludes the analysis of large datasets due to memory constraints. This applies to data of either a high degree of spatial or temporal complexity, where the former results in high spatial dimensionality (large $M$) and the latter requires long time series (large $N$) to capture the temporal features of the dynamics.
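The SVD-based algorithm summarised above translates almost line by line into NumPy. The sketch below is our own minimal illustration (function and variable names are our choices, and no truncation criterion beyond a fixed $r$ is applied), not the implementation used for the results in this paper. Applied to a synthetic travelling wave it returns a single complex-conjugate eigenvalue pair with frequencies $\omega_k = \pm 2$ and growth rates $\sigma_k \approx 0$:

```python
import numpy as np

def svd_dmd(snapshots, r, dt):
    """Minimal SVD-based DMD of an (M, N) snapshot matrix."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Wt = np.linalg.svd(X, full_matrices=False)
    U, s, W = U[:, :r], s[:r], Wt[:r].T              # rank-r truncation
    S = U.T @ Y @ W @ np.diag(1.0 / s)               # projected operator
    lam, V = np.linalg.eig(S)
    modes = U @ V                                    # projected dynamic modes
    omega = np.log(lam).imag / dt                    # frequencies
    sigma = np.log(lam).real / dt                    # growth/decay rates
    return modes, lam, omega, sigma

# Synthetic example: the travelling wave cos(x - 2t) has exactly rank 2.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
t = 0.1 * np.arange(100)
data = np.cos(x[:, None] - 2.0 * t[None, :])
modes, lam, omega, sigma = svd_dmd(data, r=2, dt=0.1)
```

Since the wave advances its phase by $2\Delta t = 0.2$ per step, the DMD eigenvalues are $\lambda = \mathrm{e}^{\pm 0.2 \mathrm{i}}$, i.e.\ neutral modes on the unit circle.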
Streaming DMD is a method for the calculation of the POD-projected linear operator $\bm{S}$ based on incremental data updates that addresses this challenge by only requiring two data samples to be held in memory at a given time \citep{Hemati2014}. In what follows we summarize this procedure; further details, including processing steps that reduce the effects of data contamination by noise, can be found in the original work by \cite{Hemati2014}. Streaming DMD consists of two conceptual parts, a low-storage calculation of $\bm{S}$, and a scheme to update $\bm{S}$ using new data samples based on iterative Gram--Schmidt orthogonalization. Let us re-consider the data matrices $\bm{X}$ and $\bm{Y}$ defined in Eqs.~\eqref{eq:X1} and \eqref{eq:X2} and write Eq.~\eqref{eq:projection} as \begin{equation} \label{eq:sdmd} \bm{S} = \bm{U}_{\bm{X}}^T \bm{Y} (\bm{U}_{\bm{X}}^T \bm{X})^+ = \bm{U}_{\bm{X}}^T \bm{U}_{\bm{Y}} \tilde{\bm{Y}} \tilde{\bm{X}}^+ = \bm{U}_{\bm{X}}^T \bm{U}_{\bm{Y}} \tilde{\bm{Y}} \tilde{\bm{X}}^T(\tilde{\bm{X}}\tilde{\bm{X}}^T)^+ = \bm{U}_{\bm{X}}^T \bm{U}_{\bm{Y}} \bm{H} \bm{G}_{\bm{X}}^+ \ , \end{equation} where $\tilde{\bm{Y}} := \bm{U}_{\bm{Y}}^T \bm{Y} \in \mathbb{R}^{r_{\bm{Y}} \times (N-1)}$ and $\tilde{\bm{X}} := \bm{U}_{\bm{X}}^T \bm{X} \in \mathbb{R}^{r_{\bm{X}} \times (N-1)}$ and the identity $\tilde{\bm{X}}^+ = \tilde{\bm{X}}^T(\tilde{\bm{X}} \tilde{\bm{X}}^T)^+$, which can be readily verified via SVD, was used in the penultimate step. That is, now both data matrices $\bm{X}$ and $\bm{Y}$ are projected onto orthogonal bases consisting of their respective left singular vectors, the POD modes, with truncation numbers $r_{\bm{X}} \leqslant \text{rank } \bm{X}$ and $r_{\bm{Y}} \leqslant \text{rank } \bm{Y}$.
The rearrangement carried out in the penultimate step has the advantage that $\bm{H} := \tilde{\bm{Y}} \tilde{\bm{X}}^T \in \mathbb{R}^{r_{\bm{Y}} \times r_{\bm{X}}}$ and $\bm{G}_{\bm{X}} = \tilde{\bm{X}}\tilde{\bm{X}}^T \in \mathbb{R}^{r_{\bm{X}} \times r_{\bm{X}}}$, which in itself is an improvement over classical DMD in terms of memory usage as long as $r_{\bm{X}} < M$ and $r_{\bm{Y}} < M$. Especially in fluid dynamics, this is often the case unless the data is very noisy. We will come back to this issue in due course. However, the main advantage of the formulation in Eq.~\eqref{eq:sdmd} lies in the fact that all matrices on the right-hand side of Eq.~\eqref{eq:sdmd} can be obtained incrementally from a data stream using only two samples at a time. The matrices $\bm{U}_{\bm{X}}$ and $\bm{U}_{\bm{Y}}$ can be calculated incrementally from the data stream by iterative Gram--Schmidt orthogonalization. After each orthogonalization step the updated orthogonal matrices are then used to project the sample vectors onto the respective bases, and the matrices $\bm{H}$ and $\bm{G}_{\bm{X}}$ are subsequently constructed from the projected sample vectors. More precisely, consider for instance the $k^{\rm th}$ pair of sample vectors $\bm{x}_k$ and $\bm{y}_k = \bm{x}_{k+1}$. The matrices $\bm{U}_{\bm{X}}$ and $\bm{U}_{\bm{Y}}$, which have been constructed incrementally from the previous data samples, are now updated using $\bm{x}_k$ and the newly available $\bm{y}_k$. Then, $\tilde{\bm{x}}_k = \bm{U}_{\bm{X}}^T \bm{x}_k$ and $\tilde{\bm{y}}_k = \bm{U}_{\bm{Y}}^T \bm{y}_k$ are calculated and we can update the remaining matrices according to \begin{equation} \bm{H} = \sum_{l = 1}^k \tilde{\bm{y}}_l \tilde{\bm{x}}_l^T \qquad \text{and} \qquad \bm{G}_{\bm{X}} = \sum_{l = 1}^k \tilde{\bm{x}}_l \tilde{\bm{x}}_l^T \ . \end{equation} Before proceeding to the calculations, a few comments are in order.
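The accumulation of $\bm{H}$ and $\bm{G}_{\bm{X}}$ from projected sample pairs amounts to rank-one updates, which can be sketched as follows. This is a minimal sketch; the iterative Gram--Schmidt update of $\bm{U}_{\bm{X}}$ and $\bm{U}_{\bm{Y}}$ is assumed to have been performed beforehand and is omitted, and the function name is illustrative.

```python
import numpy as np

def update_streaming_matrices(H, Gx, Ux, Uy, xk, yk):
    """One sDMD data-stream step: project the sample pair and accumulate H, G_X."""
    xt = Ux.T @ xk                 # x~_k = U_X^T x_k
    yt = Uy.T @ yk                 # y~_k = U_Y^T y_k
    H += np.outer(yt, xt)          # H    = sum_l y~_l x~_l^T
    Gx += np.outer(xt, xt)         # G_X  = sum_l x~_l x~_l^T
    return H, Gx
```

Accumulating over all sample pairs reproduces the batch products $\tilde{\bm{Y}}\tilde{\bm{X}}^T$ and $\tilde{\bm{X}}\tilde{\bm{X}}^T$, while only one sample pair is held in memory at a time.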
First, the incremental nature of the method precludes the application of numerically more stable orthogonalization methods such as Householder reflections, and this may affect the convergence properties of the method. Second, experimental noise may result in a drastic decrease in computational efficiency as noise usually results in the data matrices being of high rank. In practice, this can be mitigated through an intermediate processing step, as explained in detail in \cite{Hemati2014}. Third, we note that $\bm{G}_{\bm{X}} = \bm{\Sigma}_{\bm{X}} \bm{\Sigma}_{\bm{X}}$ contains the squares of the nonzero singular values of $\bm{X}$, as can be verified via SVD. \subsection{Validation} \label{sec:validation} Before applying sDMD to the three aforementioned datasets, we first validate and compare the classical and streaming DMD implementations using the publicly available dataset provided by \cite{kutz2016dynamic}. Subsequently, we use this dataset to test a coarsening interpolation scheme designed to reduce the computational effort when analysing data of high spatial dimension $M$, as is the case for turbulence. Since the focus of the present work lies in the identification of the most dominant structures of the system, usually represented by one or two of the most important modes, we do not apply any specific algorithm to determine the truncation number $r$. Instead, different values of $r$ were tested to ensure convergence, resulting in $r = 30$ as a sufficient truncation number. \subsubsection{Comparison between sDMD and DMD} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Fig1.pdf} \caption{ Streaming DMD results for two-dimensional vortex shedding behind a cylinder at $\Rey=100$ using in total 150 vorticity-field samples and a truncation mode number of $r = 30$. (a) Original and reconstructed vorticity fields. Cyclonic vortices are shown in blue and anticyclonic vortices in red.
Panel (i) presents the original field and (ii)-(v) the reconstructed fields using two to five dynamic modes. (b) The first five dynamic modes used in the reconstructions. (c) Comparison of DMD eigenvalues obtained from SVD-based DMD \citep{Schmid2010} and streaming DMD \citep{Hemati2014}.} \label{fig1} \end{figure} The dataset provided in \cite{kutz2016dynamic} consists of a time series of two-dimensional vorticity fields obtained by computer simulation of the wake flow behind a cylinder for Reynolds number $Re=UD/\nu = 100$, where $U$, $D$ and $\nu$ denote the free-stream velocity, the diameter of the cylinder and the kinematic viscosity of the fluid, respectively. The dominant dynamics are governed by periodic vortex shedding, making the dataset very well suited for DMD validation. For details on the numerical method used to generate the data we refer to the original reference \citep{kutz2016dynamic}. In total, 151 vorticity-field samples, separated by a time interval $\Delta t=0.2$, were analysed. A typical sample is shown in figure~\ref{fig1}a(i), where the flow evolves from left to right shedding cyclonic vortices in blue (dark grey) and anticyclonic vortices in red (light grey) at a $Re$-dependent frequency $f$. The vortex-shedding frequency can be expressed in non-dimensional form through the Strouhal number $St = fD/U$. For the present dataset at $Re = 100$, the Strouhal number is around $St = 0.16$. Figures \ref{fig1}a(ii)-(v) present reconstructions of the vorticity field with an increasing number of dynamic modes (2-5) calculated via sDMD using a projection onto 30 POD modes. The first mode shown in figure~\ref{fig1}b(i) has an eigenvalue of 1 and represents the mean vorticity field. The second dominant DMD eigenmode shown in figure~\ref{fig1}b(ii) corresponds to the vortex shedding frequency. The frequency of the second mode is given by $Im(\log(\lambda_2))/(2\pi\Delta t)$, where $\lambda_2$ is the corresponding eigenvalue.
We obtain $\lambda_2=0.9875 - 0.2063i$, resulting in a dimensionless frequency, or Strouhal number, of $St = 0.165$, which is in good agreement with the expected Strouhal number at $Re=100$. The following modes visualised in figures~\ref{fig1}b(iii,iv,v) show higher-order coherent structures in the vorticity field. As can be seen from the visualisations of the reconstructed vorticity field in figures~\ref{fig1}a(i) and (ii), and from the measured frequency of the second dynamic mode, vortex shedding is well described by the first two modes. With only four modes, the reconstructed vorticity field (figure~\ref{fig1}a(v)) is already visually indistinguishable from the original shown in figure~\ref{fig1}a(i). This demonstrates the efficiency of the DMD for periodic flows. The same calculations were carried out using the classical DMD algorithm. In figure~\ref{fig1}c we compare the spectra of the linear operator $\bm{S}$ obtained from standard DMD and streaming DMD. As the dynamics are nonlinear and statistically stationary, for sufficiently long time series all transiently growing processes will undergo nonlinear saturation while the amplitude of decaying processes will essentially vanish, and the eigenvalues are expected to be neutrally stable \citep{horn2017}, i.e.\ to lie on the unit circle. This is indeed the case. Furthermore, the eigenvalues obtained with the two methods coincide. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Fig2.pdf} \caption{ Memory consumption of the classical SVD-based DMD method and the streaming DMD method as a function of the data matrix size given by the sample size times the number of samples in the time series. (a) Fixed sample size and varying number of samples in the time series. (b) Fixed number of samples with varying sample size.
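The conversion from the eigenvalue $\lambda_2$ to the Strouhal number can be reproduced directly from Eq.~\eqref{eq:eigenvalues}, with $St = |\omega_2|/(2\pi)$ in the present non-dimensionalisation; a quick check with the values quoted above:

```python
import numpy as np

lam2 = 0.9875 - 0.2063j    # second DMD eigenvalue quoted above
dt = 0.2                   # sampling interval of the cylinder dataset
St = abs(np.log(lam2).imag) / (2 * np.pi * dt)
# St comes out close to the expected vortex-shedding value of about 0.16
```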
} \label{fig2} \end{figure} The memory consumption of both methods as a function of the data matrix size, defined as the product of the sample size and the number of samples in the time series, is presented in figures~\ref{fig2}(a,b). In order to ensure consistency, all tests were carried out on the same computer with an Intel i5-8250U CPU at 1.60GHz and 8GB RAM. In figure~\ref{fig2}(a), the individual sample size was held fixed such that an increase in the data matrix size is achieved through a larger number of samples in the time series. As expected, the memory consumption of classical DMD grows with increasing data matrix size while the memory consumption of sDMD remains almost constant. In contrast, in figure~\ref{fig2}(b), the number of samples in the time series is constant and the data matrix size increases with sample size. In this case, the increase in memory consumption is qualitatively similar between the two methods, albeit at an offset in favour of sDMD. \subsubsection{Coarse interpolation for the analysis of high-dimensional data} Numerical simulations of highly turbulent flows require fine computational grids to accurately resolve the dynamics at the small scales. This is not only due to the need to precisely measure small-scale quantities such as dissipation, but also a matter of numerical stability. The required large number of grid points results in a high memory load even for a single sample, which quickly becomes prohibitive even for sDMD. This calls for a reliable downsampling strategy to interpolate the data on coarser grids. In what follows we analyse the robustness of sDMD with respect to different degrees of spatial downsampling, using the same vortex-shedding dataset as in the previous subsection. The downsampling was carried out by successively decreasing the original number of grid points, beginning with the original 90000 grid points down to a minimum of 5.
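A coarsening interpolation of this kind can be sketched, for instance, as block averaging over patches of the original grid. This is a simple stand-in and not necessarily the exact interpolation scheme employed here:

```python
import numpy as np

def coarsen(field, factor):
    """Downsample a 2D field by averaging over factor-by-factor blocks."""
    ny, nx = (s // factor * factor for s in field.shape)
    f = field[:ny, :nx]                       # crop to a multiple of the block size
    return f.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))
```

Block averaging removes small-scale spatial structure while preserving the large-scale coherence, which is the property exploited in the tests below.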
The effect of downsampling is assessed by considering two observables, the DMD eigenvalues and the time-averaged reconstruction error, defined as \begin{equation} \label{eq:recerr} \varepsilon_2 := \langle \|\bm{v}(t_0) - \sum_{k=1}^{N'} a_{k} \mathrm{e}^{\left(\sigma_{k}+\mathrm{i} \omega_{k}\right) t_0} \psi_{k}\|_2\rangle_t \ , \end{equation} where $\bm{v}$ denotes the vorticity field here, and the angled brackets denote a time average. The results are summarized in figure~\ref{fig3}, with figures~\ref{fig3}(a) and (b) showing the streaming DMD eigenvalues for the original data and after different degrees of downsampling, figure~\ref{fig3}(c) the error of the Strouhal number as a function of the data matrix size for classical DMD and sDMD, figure~\ref{fig3}(d) the time-averaged reconstruction error, and figure~\ref{fig3}(e) presenting visualisations of a sample of the reconstructed vorticity fields after different degrees of downsampling. A number of observations can be made from figure~\ref{fig3}. The DMD eigenvalues are remarkably robust under the downsampling procedure. As can be seen from the data shown in figures~\ref{fig3}(a) and (b), a reduction by three orders of magnitude in the data matrix size results in almost the same values for the DMD eigenvalues. Significant qualitative differences in the eigenvalues occur only after drastic downsampling from 90000 to less than 10 data points. A more quantitative comparison is achieved by considering the difference $\varepsilon_1$ between the Strouhal number and the dimensionless frequency of the second dynamic mode as a function of the data matrix size presented in figure~\ref{fig3}(c). As can be seen from the figure, the Strouhal number is reproduced very accurately using only 25 data points. This is particularly striking in view of the unsurprisingly large reconstruction error $\varepsilon_2$ of order $10^{-3}$ to $10^{-2}$ for the corresponding downsampled data, as shown in figure~\ref{fig3}(d).
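Computing the time-averaged reconstruction error of Eq.~\eqref{eq:recerr} amounts to averaging the $L_2$ norm of the residual over all snapshots; a minimal sketch, assuming the original and reconstructed snapshots are stored column-wise:

```python
import numpy as np

def reconstruction_error(original, reconstructed):
    """Eq. (recerr): time average of ||v(t_0) - v_rec(t_0)||_2 over all snapshots."""
    residuals = np.linalg.norm(original - reconstructed, axis=0)  # one norm per snapshot
    return residuals.mean()
```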
According to the data presented in the figure, converged results for the reconstruction of the full vorticity field require a data matrix size of at least 9000 points. The finite residual for more highly resolved data is then due to truncation in the DMD algorithm. The visualisations of the reconstructed vorticity fields in figure~\ref{fig3}(e) give a visual impression of the effect the downsampling has on the reconstructed data. As expected, the large-scale spatial coherence is still present in the downsampled data. Since the focus is on the detection of large-scale coherent structures, like the vortex street in this case, and since the coarsening interpolation results in the removal of small-scale spatial structures, the downsampling has very little effect on the results. \begin{figure} \centering \includegraphics[width=1\columnwidth]{Fig3.pdf} \caption{ (a) Streaming DMD eigenvalues for the original data (green circles) and after different degrees of downsampling. (b) Magnification of the blue region in (a). (c) Error of the Strouhal number as a function of the downsampled data matrix size for classical DMD and sDMD. (d) The time-averaged reconstruction error as defined in Eq.~\eqref{eq:recerr} as a function of the downsampled data matrix size for classical DMD and sDMD. (e) The instantaneous flow field on 90000, 900 and 100 grid points, respectively, from top to bottom. } \label{fig3} \end{figure} \section{Results} \label{sec:results} Having validated our implementation of sDMD on publicly available data, we now apply the method to three different flows, rapidly rotating Rayleigh--B\'enard convection (RBC), horizontal convection (HC), and the asymptotic suction boundary layer (ASBL). We chose these three examples in order to demonstrate that sDMD is a useful tool for the analysis of different turbulent flows in terms of their main spatio-temporal structure.
In rapidly rotating RBC, the anticyclonic circulation in the bulk is surrounded by a cyclonic layer close to the sidewalls, and the aim is to identify this large-scale flow pattern. Horizontal convection lends itself well as a test case for the distinction of different spatio-temporal structures, as the dynamics is largely governed by two instabilities that operate on different time scales. The Rayleigh--Taylor instability leads to fast periodic plume generation close to the boundary, while an oscillatory instability results in much slower periodic dynamics in the bulk. The respective frequencies associated with these two processes differ by an order of magnitude. Similar to canonical wall-bounded parallel shear flows and spatially developing boundary layers (BLs), the ASBL features long-lived large-scale coherent motion. Here the aim is to identify the corresponding spatio-temporal structure. The slow dynamics requires very long time series, which makes this example particularly suitable for the application of sDMD. All datasets were obtained by direct numerical simulation at parameter values corresponding to turbulent flow. Further details on the numerical methods and parameter values will be given in the following subsections. \subsection{Rapidly Rotating Rayleigh--B\'enard Convection} \label{sec:rbc} \subsubsection{Fluid flow} In rotating Rayleigh--B\'enard convection, a fluid is confined between a heated bottom plate and a cooled top plate and is rotated around a vertical axis. It is a paradigmatic problem to study many geophysical and astrophysical phenomena in the laboratory, e.g. convective motion occurring in the oceans, the atmosphere, in the outer layer of stars, or in the metallic core of planets. In rotating RBC laboratory experiments, the fluid is laterally confined. The centrifugal force can be neglected, provided the Froude number is small, and then only the Coriolis force is considered.
The interplay of buoyancy and Coriolis forces, however, may yield highly complex flows with very distinct flow structures whose nature strongly depends on the control parameters. Without rotation or with slow rotation, a distinct feature of turbulent RBC is the emergence of the Large-Scale Circulation (LSC) of fluid. For rapid rotation, however, a cyclonic azimuthal velocity boundary-layer flow, the Boundary Zonal Flow (BZF), develops close to the side walls, surrounding a core region of anticyclonic circulation. The viscous Ekman BLs near the plates induce an anticyclonic circulation with radial outflow in horizontal planes, which is balanced by the vertical velocity in a thin annular region near the sidewall, where cyclonic vorticity is concentrated. The Taylor--Proudman effect induced by rapid rotation tends to homogenize the flow in the vertical direction. The temperature pattern near the vertical wall, however, moves anticyclonically within the BZF and is connected to the thin anticyclonic Ekman layers at the top and bottom plates. The aim here is to recover the BZF via sDMD. \subsubsection{Dynamic equations \& control parameters} We consider RBC in a vertical cylinder rotating with uniform angular velocity $\Omega$ about the vertical axis. The governing equations of the problem are the incompressible Navier--Stokes equations in the Oberbeck--Boussinesq approximation, coupled with the temperature equation, given here in dimensionless form \begin{align} \label{eq:ns} \partial_t \bm{u} + (\bm{u}\cdot\nabla)\bm{u} + \nabla p & = \sqrt{\Pran/\Ra}\nabla^2\bm{u} -\Ro^{-1} \hat{z} \times \bm{u} + T\hat{z}, \\ \label{eq:energy} \partial_t T+ (\bm{u}\cdot\nabla) T &= \sqrt{1/(\Pran\Ra)}\nabla^2 T,\\ \label{eq:incomp} \nabla\cdot\bm{u}&=0.
\end{align} The Rayleigh number, $\Ra$, describes the strength of the thermal buoyancy force, the Prandtl number, $\Pran$, the ratio of viscosity and diffusivity, and the convective Rossby number, $\Ro$, is a measure for the rotation rate. They are defined as \begin{equation} \Ra \equiv \alpha g \Delta H^3/(\kappa\nu),\qquad \Pran \equiv \nu/\kappa, \qquad \Ro \equiv \sqrt{\alpha g\Delta H}/(2\Omega H), \end{equation} where $\alpha$ denotes the isobaric expansion coefficient, $g$ the acceleration due to gravity, $H$ the fluid layer height, ${\Delta=T_+-T_-}$ the imposed adverse temperature difference with $T_+$ ($T_-$) being the temperature of the heated bottom (cooled top) plate, $\kappa$ the thermal diffusivity, $\nu$ the kinematic viscosity, and $\Omega$ the angular rotation speed. Equations \eqref{eq:ns}-\eqref{eq:incomp} were stepped forward in time using the finite-volume code {\sc goldfish} \citep{shishkina2015,Kooij2018,Zhang2020}. For the temperature we impose Dirichlet boundary conditions (isothermal) on the top and bottom plates and Neumann conditions (adiabatic) on the lateral walls. All boundaries are assumed to be impenetrable and no-slip, i.e.\ the velocity field vanishes at all boundaries. \begin{figure} \unitlength1truecm \begin{picture}(17, 6) \put(0,0){\includegraphics[width=14cm]{Fig4.pdf}} \put(0, 5.5){$(a)$} \put(4.5, 5.5){$(b)$} \put(9, 5.5){$(c)$} \end{picture} \caption{The first two dynamic modes for $\Pran=0.8$, $\Ra = 10^8$ and $\Ro=0.1$. $(a)$ The first mode of the anticyclonic drifting temperature field in the full cell (left) and in the bottom half of the cell (right). $(b)$ The first mode of the azimuthal velocity field in the full cell (left) and in the bottom half of the cell (right).
$(c)$ The second mode of the temperature (left) and the azimuthal velocity field (right) in the bottom half of the cell.} \label{fig4} \end{figure} \subsubsection{Numerical details} We consider datasets for two different Rayleigh numbers, $\Ra=10^8$ and $\Ra=10^9$; the remaining control parameters are $\Pran=0.8, \Ro=0.1$. The resolution of the original datasets is $N_{r} \times N_{\phi} \times N_{z}=322 \times 82 \times 256$ for $\Ra = 10^8$ and $N_{r} \times N_{\phi} \times N_{z}=822 \times 194 \times 512$ for $\Ra = 10^9$, according to \cite{Shishkina2010}, where $N_r$, $N_\phi$ and $N_z$ denote the number of grid points in radial, azimuthal and vertical direction, respectively. The velocity fields from both datasets are sampled for a time period of 400 free-fall time units with a sampling interval of $\Delta t = 2.5$, resulting in 160 samples in total. Both datasets are spatially downsampled for the sDMD analysis, by a four-fold and a 16-fold reduction in the number of data points, respectively, resulting in a spatial resolution of $N_{r} \times N_{\phi} \times N_{z}=161 \times 82 \times 128$ for $\Ra = 10^8$ and $N_{r} \times N_{\phi} \times N_{z}=256 \times 96 \times 128$ for $\Ra = 10^9$. The truncation number is $r=40$ in both cases. In what follows we first describe the generic spatial features that can be extracted with the first few dynamic modes for the case $\Ra = 10^8$ and subsequently consider the temporal features for both $\Ra = 10^8$ and $\Ra = 10^9$. \subsubsection{Streaming DMD} The first two dynamic modes obtained from the $\Ra = 10^8$-dataset are visualised in figure~\ref{fig4} in terms of temperature and azimuthal velocity. The temperature field of the first dynamic mode shown in figure~\ref{fig4}(a) resembles the mean temperature profile, and the corresponding azimuthal velocity field (figure~\ref{fig4}b) consists of anticyclonic motion in the bulk and cyclonic motion close to the sidewall.
As expected, the first dominant mode corresponds to a base or mean flow. However, since it exhibits no temporal change, this mode is dynamically not important. The principal mode is the second one, presented in figure~\ref{fig4}(c), and the BZF \citep{Zhang2020} is clearly visible, both in the temperature (left) and the azimuthal velocity (right). Even though the flow is turbulent, its large-scale spatial structure can be reconstructed nicely with only a few modes, as demonstrated by the visualisation of the azimuthal velocity in the lower half of the RBC-cell presented in figure~\ref{fig6}. We point out that much of the small-scale dynamics, and thereby accuracy in the representation, is lost through the downsampling procedure, and applying streaming DMD without coarse interpolation but with larger memory consumption may be advisable if the focus is on a more detailed reconstruction of the flow. Here, we focus only on the large-scale structure. The comparison is carried out for two velocity fields which have been sampled about 30 free-fall times apart in order to guarantee sufficiently decorrelated samples. The originals are shown in figures~\ref{fig6}(a) and (d), respectively. Figures~\ref{fig6}(b) and (e) contain the reconstructions using the first two modes, and figures~\ref{fig6}(c) and (f) the reconstructions from the first five modes for the two samples, respectively. These examples demonstrate consistently that even though the main features of the flow can be captured by the mean flow and the BZF, a fair amount of detail is missing and its inclusion requires a few more modes. A much better reconstruction can be achieved with as few as five modes.
\begin{figure} \unitlength1truecm \begin{picture}(15, 6.7) \put(0.5, 0){\includegraphics[width=12cm]{Fig6.pdf}} \put(2, 5.8){$(a)$} \put(5.1, 5.8){$(b)$} \put(8.2, 5.8){$(c)$} \put(2, 2.7){$(d)$} \put(5.1, 2.7){$(e)$} \put(8.2, 2.7){$(f)$} \put(0.3, 4.8){\text{snapshot 1}} \put(0.3, 1.5){\text{snapshot 2}} \put(2.7, 6.2){\text{Original}} \put(5.8, 6.2){\text{2 Modes}} \put(8.9, 6.2){\text{5 Modes}} \end{picture} \caption{ Reconstructed azimuthal velocity field for two different velocity-field samples for $\Pran=0.8$, $\Ra = 10^8$ and $\Ro=0.1$. $(a),(d)$ original field, $(b),(e)$ reconstruction with two dynamic modes, $(c),(f)$ reconstruction with five dynamic modes. } \label{fig6} \end{figure} Having discussed the identification of the dominant spatial feature of the flow, the BZF, we now focus on its temporal structure. Figure~\ref{fig5} presents spatio-temporally resolved diagrams of the dynamics in a ring located at half-height $z = H/2$ and at radial location $r=r_{u_\phi^\text{max}}$, where the maximum azimuthal velocity is observed, as indicated by the red circle in the schematic drawing shown in figure~\ref{fig5}(a). The time evolution of the temperature and vertical velocity fields of the $\Ra = 10^8$-dataset are presented in figure~\ref{fig5}(b) and (c), respectively, while figure~\ref{fig5}(d) corresponds to the time-evolution of the temperature field at $\Ra = 10^9$. The original data is shown in the left panels of the respective visualisations and the data reconstructed from the first two dynamic modes is shown in the right panels. Visual comparison of the left and right panels confirms again that the zonal flow pattern can be clearly captured with only the first two dynamic modes, the mean flow and the BZF. Furthermore, the visualisations clearly identify the BZF as a travelling wave with strongly correlated temperature and vertical velocity fields, as can be seen by comparison of figures~\ref{fig5}(b) and (c).
The travelling wave structure of the BZF is also present at higher $\Ra$, as can be seen in figure~\ref{fig5}(d). As such, it seems to be a robust feature of the BZF in the Rayleigh-number range considered here. However, according to the visualisation the dynamics appear to be slightly more complex at $\Ra= 10^9$ than at $\Ra= 10^8$, hence it remains to be seen to what extent the travelling wave dynamics persist with increasing $\Ra$. In summary, the most prominent spatio-temporal features of rapidly rotating RBC can be identified through sDMD, with the BZF emerging as the dominant dynamic mode. The cyclonic motion of the fluid reflected in the azimuthal velocity and the anticyclonic motion of the flow pattern reflected in the temperature and vertical velocity as well as their frequencies are fully reproduced by only the first two dynamic modes. These results firmly establish sDMD as a powerful tool for the extraction of dominant coherent structures in turbulent rapidly rotating RBC. \begin{figure} \unitlength1truecm \begin{picture}(17, 5) \put(0,0){\includegraphics[width=13cm]{Fig5.pdf}} \put(-0.2, 4.4){$(a)$} \put(2.2, 4.4){$(b)$} \put(5.8, 4.4){$(c)$} \put(9.4, 4.4){$(d)$} \end{picture} \caption{ Time evolution of temperature and vertical velocity. $(a)$ Schematic setup. The red circle indicates the location where temperature and velocity were measured. $(b)$ temperature field and $(c)$ vertical velocity field for $\Ra = 10^8$, and $(d)$ temperature field for $\Ra = 10^9$. The original fields are shown in the left panels and the right panels correspond to the reconstructed field using two dynamic modes. 
The color scale varies from minimum values indicated in blue to maximum values indicated in magenta for the respective fields, given by the temperatures at the top and bottom plates in $(b)$ and $(d)$, and $[-u_{ff}/2, u_{ff}/2]$ with $u_{ff} \equiv \sqrt{\alpha \Delta g R}$ being the free-fall velocity in $(c)$.} \label{fig5} \end{figure} \subsection{Horizontal Convection} \label{sec:hc} \subsubsection{Fluid flow} Horizontal convection (HC), similarly to RBC, is driven by thermal buoyancy. However, in HC heating and cooling are applied to different parts of the same horizontal surface. In our case, the heated plate is located in the center and the cooled plates are placed at both ends, as shown in figure~\ref{fig7}(a). This setup is relevant for many geophysical and astrophysical flows \citep{Scott2001, Spiegel1971} and engineering applications \citep{Gramberg2007}, in particular concerning the large-scale overturning circulation of the ocean, as heat is supplied to and removed from the ocean predominantly through its upper surface, where the ocean contacts the atmosphere. The dimensionless control parameters are similar to RBC, that is the Rayleigh number, the Prandtl number and the aspect ratio $\Gamma$, \begin{eqnarray*} \qquad \quad \Ra \equiv \alpha g \Delta L^3/(\kappa\nu), \qquad \Pran \equiv \nu/\kappa, \qquad \Gamma \equiv L/H=10, \end{eqnarray*} where the characteristic length scale $L$ is the half-cell length. The governing equations are again the incompressible Navier--Stokes equations in the Oberbeck--Boussinesq approximation and the temperature equation stated in Eqs.~\eqref{eq:ns}-\eqref{eq:incomp}, but without the Coriolis term in the momentum equation. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{Fig7.pdf} \caption{ Sketch of (a) HC adapted from \cite{Reiter2020} and (b) front view of the setup. Only the shaded area in (b) is used for sDMD. The inset shows a snapshot of the temperature field for $\Ra = 10^{11}$ and $\Pran = 10$.
There, the grey arrows indicate the motion of the periodically detaching plumes; the dark arrow indicates the oscillatory motion inside the bulk region. } \label{fig7} \end{figure} For the parameters $\Ra = 10^{11}$ and $\Pran = 10$ it was observed that sheared plumes, originating from a Rayleigh--Taylor instability, periodically arise above the heated plate and travel towards the center \citep{Reiter2020}. Another time-dependent feature that emerges is an oscillatory instability that breaks the symmetry inside the bulk region, see figure~\ref{fig7}(b). Thus, there is a fast periodic emission of thermal plumes close to the boundary and a slow periodic oscillation in the bulk region. Streaming DMD is used to separate these two coexisting dynamics. \subsubsection{Dynamic equations \& numerical details} The dataset consists of velocity fields obtained in the DNS for a rectangular geometry, as shown in the schematic drawing in figure~\ref{fig7}(a). The temperature boundary conditions at the bottom plate are $\theta =0.5$ for $0 \leq x \leq 0.1$ and $\theta =-0.5$ for $0.9 \leq x \leq 1$; all the other walls are adiabatic. No-slip boundary conditions are imposed at all walls for the velocity field. The calculations were carried out using the {\sc goldfish} code, as in the previous section. Further details can be found in \cite{Reiter2020}. The original grid is $N_{x} \times N_{y} \times N_{z}=66 \times 1026\times 98$, where $N_x$, $N_y$, and $N_z$ denote the number of grid points in the mean-flow $x$-direction, the spanwise $y$-direction and the $z$-direction, which is normal to the heated and cooled bottom plates, respectively. However, since plumes and oscillations are concentrated above the heated plate, we extract only the data inside the shaded domain shown in figure~\ref{fig7}(b), with $N_{x} \times N_{y} \times N_{z}=66 \times 200\times 98$. The truncation number $r$ is set to 80 to ensure that the dominant modes can be captured properly.
Since the plume emission motion is more than ten times faster than the oscillatory flow, a small time interval is needed to capture the fast plume emission while a large number of velocity-field samples is required to simultaneously identify the slow oscillations. To save computational resources, we decouple the two tasks and use two datasets comprised of 200 snapshots each, sampled at different time intervals: $0.1$ free-fall time units for the fast plume emission and $0.5$ free-fall time units for the slow oscillatory flow. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{Fig8.pdf} \caption{ (a,~b) Time evolution of the temperature of the original flow at a horizontal slice located at $y=H/2$ and at the height (a) $z=0.1H$, to capture plume emission, and (b) $z=0.8H$, to capture oscillations. (c,~d) Time evolution of the temperature of the reconstructed field with the first $2$ dominant modes. It is noted that the reconstructions in (c) and (d) are based on different snapshot intervals. } \label{fig8} \end{figure} \subsubsection{Streaming DMD} The temporal structure of the original temperature field and the temperature field reconstructed from the first two dynamic modes is shown in figure~\ref{fig8} using horizontal slices located at the spanwise middle of the domain, $y = H/2$, and at different heights. Figures~\ref{fig8}(a) and (b) contain visualisations of the original field at $z=0.1H$, to capture fast plume emission, and at $z=0.8H$, to capture slow oscillations, respectively, and figures~\ref{fig8}(c) and (d) present the corresponding reconstructions.
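The mode periods discussed below follow from the DMD eigenvalues and the sampling interval via Eq.~\eqref{eq:eigenvalues}, $T = 2\pi/|\omega|$; a minimal sketch with a synthetic eigenvalue for illustration:

```python
import numpy as np

def mode_period(lam, dt):
    """Oscillation period of a DMD mode from its eigenvalue and the sampling interval."""
    omega = np.log(lam).imag / dt   # angular frequency, Eq. (eigenvalues)
    return 2 * np.pi / abs(omega)

# Synthetic eigenvalue whose phase corresponds to a 16.8 free-fall-time period at dt = 0.5
lam = np.exp(2j * np.pi * 0.5 / 16.8)
```

Calling `mode_period(lam, 0.5)` then recovers the imposed period of 16.8 free-fall time units, provided the phase advance per sample stays below $\pi$ (otherwise the frequency is aliased).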
A visual comparison of the original and the reconstructed fields qualitatively shows that sDMD can clearly distinguish the two dominant spatio-temporal structures, with the first two dynamic modes identifying the fast motion of the plume emission for the dataset sampled at $0.1$ free-fall time units (figures~\ref{fig8}(a) and (c)), and the first two dynamic modes capturing the slow oscillatory mode for the dataset sampled at $0.5$ free-fall time units (figures~\ref{fig8}(b) and (d)). The frequencies obtained from the DNS data are compared with those of the sDMD calculation, with the DMD frequencies calculated according to Eq.~\eqref{eq:eigenvalues}. The period of the first dynamic mode is 16.8 free-fall time units, which matches perfectly the period of the slow oscillation observed in the original dataset. The second dominant mode has a period of 1.6 free-fall time units, which fits the period of the fast plume emission determined from the original DNS data. The agreement between the sDMD results and the DNS data, and the distinct identification and separation of the two dominant spatio-temporal structures with frequencies that differ by an order of magnitude, gives further confidence in the capability of DMD to capture the relevant processes, be it in the temporal or spatial framework. \subsection{Asymptotic Suction Boundary Layer (ASBL)} \label{sec:asbl} \subsubsection{Fluid flows \& underlying dynamic equations} The ASBL is an open flow that develops over a flat bottom plate in the presence of suction through that plate. In consequence, the BL thickness remains constant in the streamwise direction, and the ASBL shares certain properties with parallel shear flows and spatially developing BLs. In the DNS, the ASBL is emulated by a plane Couette setup using a simulation domain with a large height. That is, we consider a fluid located in a wide gap between two parallel plates as shown schematically in figure~\ref{fig:ASBL}.
The bottom plate is stationary and the fluid is set in motion through the top plate moving in the $x$-direction with velocity $U_\infty$. The latter corresponds to the free-stream velocity of the open flow. The flow is assumed to be incompressible and the conditions isothermal such that the density can be regarded as constant. \begin{figure} \centering \includegraphics[width=.5\columnwidth]{Fig9.pdf} \caption{Schematic drawing of the asymptotic suction boundary layer in numerical simulations. The lower plate is stationary and the fluid is set in motion by the upper plate that moves in $x$-direction with velocity $U_\infty$, representing the free-stream velocity of the emulated open flow. Fluid is removed through a porous bottom plate with velocity $V_s$; to guarantee conservation of mass, fluid enters the system at the same speed through a porous top plate. In numerical simulations, this is realised uniformly through boundary conditions on the wall-normal component of the velocity field. } \label{fig:ASBL} \end{figure} Expressed in units of the free-stream velocity, the laminar flow is given by \begin{equation} \bm{U}= \begin{pmatrix} 1-e^{-yV_s/\nu} \\ -V_s/U_\infty\\ 0 \end{pmatrix} \ , \end{equation} where $V_s$ is the suction velocity and $\nu$ is the kinematic viscosity. The deviations $\bm{u}$ from the laminar flow are then described by the dimensionless equations \begin{align} \label{eq:asbl-nse} \partial_t \bm{u}+\bm{u}\cdot\nabla\bm{u} + \bm{U}\cdot\nabla\bm{u} + \bm{u}\cdot\nabla\bm{U} + \nabla p -\Rey^{-1}\Delta\bm{u} = 0 \ , \qquad \nabla\cdot\bm{u} = 0 \ , \end{align} where $p$ is the pressure divided by the constant density $\rho$ and $\Rey = U_\infty\delta/\nu$ the Reynolds number based on the free-stream velocity, the laminar displacement thickness $\delta = \nu/V_s$ and the kinematic viscosity of the fluid. \subsubsection{Numerical details} The DNS data was generated with the open-source code channelflow2.0 \citep{Gibson2014,chflow18}.
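As a quick consistency check of the laminar profile above, the displacement thickness $\int_0^\infty (1-U/U_\infty)\,\mathrm{d}y$ of the exponential profile equals $\delta=\nu/V_s$; a minimal numerical sketch (the parameter values are arbitrary illustrations, not simulation values):

```python
import numpy as np

# Illustrative parameters: with V_s = nu (arbitrary values), the
# laminar displacement thickness is delta = nu/V_s = 1, i.e. y is
# measured in units of delta.
V_s, nu = 1e-3, 1e-3
delta = nu / V_s

y = np.linspace(0.0, 40.0 * delta, 200_001)
U = 1.0 - np.exp(-y * V_s / nu)       # streamwise laminar profile

# Displacement thickness: integral of (1 - U/U_inf) dy, evaluated
# with a simple trapezoidal rule; analytically it equals delta.
f = 1.0 - U
delta_star = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))
assert abs(delta_star - delta) < 1e-6
```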
Equations \eqref{eq:asbl-nse} are solved numerically in a rectangular domain $\Omega = [-L_x/2,L_x/2] \times [0,H] \times [-L_z/2,L_z/2]$ as schematically shown in fig.~\ref{fig:ASBL}, with periodic boundary conditions in the streamwise $x$- and the spanwise $z$-directions and no-slip boundary conditions in the wall-normal $y$-direction. Channelflow2.0 uses the standard pseudospectral technique with $2/3$-rule dealiasing in the stream- and spanwise directions; the spatial discretisation uses Fourier expansions in the homogeneous directions and a Chebyshev expansion in the $y$-direction. The temporal discretisation is given by a third-order semi-implicit backward differentiation scheme (SBDF3). Details of the DNS dataset are summarised in table~\ref{table2}. The occurrence of large-scale persistent coherent flow structures of long streamwise extent is one of the striking features of turbulent BLs, and the ASBL is no exception. \begin{table} \centering \begin{tabular}{p{0.9cm}p{0.9cm}p{1.05cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.9cm}p{1.6cm}p{0.9cm}} \hline $\Rey$ & $\Rey_\tau$ & $\tau_w/{\rho U^2_\infty}$ & $L_x/\delta$ & $H/\delta$ & $L_z/\delta$ & $N_x$ & $N_y$ & $N_z$ & $\Delta t/(\delta/U_\infty)$ & $N$ \\ \hline 1000 & 320 & 0.0003 & $4\pi$ & 20 & $4.6\pi$ & 64 & 161 & 96 & 1 & 4068 \\ \hline \end{tabular} \caption{ Details of the ASBL simulations discussed in Sec.~\ref{sec:asbl}. The Reynolds number based on the free-stream velocity $U_\infty$ and the laminar displacement thickness $\delta$ is denoted by $\Rey$. $\Rey_\tau$ is the friction Reynolds number, $\tau_w$ the shear stress at the bottom wall, $\rho$ the density, $U_\infty$ the free-stream velocity, $L_x, H$ and $L_z$ the length, height and width of the simulation domain, $N_x,~N_y$ and $N_z$ the number of grid points in $x$, $y$ and $z$-directions, respectively, $\Delta t$ the sampling interval and $N$ the number of samples.
} \label{table2} \end{table} \subsubsection{Streaming DMD} We attempt to describe the dynamics of such a large-scale structure in a long time series using a small number of dynamic modes. To reduce the computational effort, the simulations were carried out at moderate Reynolds number using a short computational domain in the streamwise direction, and the sampled flow fields were averaged in the streamwise direction. The two-dimensional fields obtained by streamwise averaging thus adequately represent the three-dimensional fields, at least concerning the streamwise-coherent large-scale dynamics of interest here. Figure~\ref{fig9}(a) shows the streamwise-averaged deviations from the laminar flow for a typical sample. A large-scale coherent region that is localised in the spanwise direction and extends from about $2\delta$ to $7\delta$ in the wall-normal direction is clearly visible. This structure moves more slowly than the laminar flow and is accompanied by near-wall small-scale regions where the flow is faster than the laminar flow. The slow large-scale structure drifts through the simulation domain in the spanwise direction as indicated by the white arrow in figure~\ref{fig9}(a). The spanwise shift occurs with velocity $c = L_z/(2000\,\delta/U_\infty)\approx 0.007U_\infty$ as evidenced by the periodic pattern in the spatio-temporal evolution of the flow at a fixed distance $y/\delta = 6$ from the bottom plate shown in figure~\ref{fig9}(c). That is, it takes the large-scale coherent structure approximately $2000$ time units to cross the simulation domain once. During that time it varies in intensity, as can be seen from the diagonal pattern in the spatio-temporal evolution presented in fig.~\ref{fig9}(c); it does not seem to disappear completely, and it is difficult to discern a regular pattern in its evolution. The aim is to reconstruct the dominant spatio-temporal scales of the dynamics, i.e.
the spatial extent of the slow large-scale structure and the slow spanwise drift, with a few dynamic modes. Capturing the latter requires a very long time series and hence the application of streaming DMD as opposed to classical DMD, as not all data can be held in memory at the same time. As can be seen by comparing the reconstructed data shown in fig.~\ref{fig9}(b) with the original data shown in fig.~\ref{fig9}(a), the wall-normal and spanwise extent of the large-scale coherent structure and the small-scale fast flow regions are well reproduced by only the first two dynamic modes. Similarly, two dynamic modes suffice to reproduce the slow spanwise drift with velocity $c$, as demonstrated by comparison of the spatio-temporal evolution of the reconstructed flow shown in fig.~\ref{fig9}(d) with that of the original data shown in figure~\ref{fig9}(c). The distinctive maxima and minima that are present in the diagonal pattern of the reconstructed flow evolution in fig.~\ref{fig9}(d) reveal the presence of a slow periodicity in the structure's intensity, which is not clearly visible in the full dynamics. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig10.pdf} \caption{ Spatio-temporal structure and reconstruction of large-scale dynamics in the asymptotic suction boundary layer. (a) Representative original velocity-field sample. The colour coding indicates the streamwise-averaged deviation $\langle u \rangle_x$ of the streamwise velocity from the laminar flow. A slow large-scale coherent structure is clearly visible; its spanwise drift is indicated by the white arrow. (b) Reconstructed flow field using the first two dynamic modes. (c) Time evolution of the original flow at the centre of the coherent structure at $y/\delta \simeq 6$. (d) Time evolution of the reconstructed flow from the first two dynamic modes at $y/\delta \simeq 6$.
} \label{fig9} \end{figure} \section{Conclusion \& Outlooks} \label{sec:conclusions} In this paper we demonstrated the applicability of streaming DMD \citep{Hemati2014}, an efficient low-storage version of the classical SVD-based DMD \citep{Schmid2010}, for the analysis of turbulent flows that show a certain degree of spatio-temporal coherence. We first validated the proposed streaming DMD by comparing it to the classical SVD-based DMD \citep{Schmid2010}, based on the example of the flow past a cylinder at $Re=100$. The comparison shows that the obtained streaming dynamic modes and eigenvalues match well with those computed from a post-processing implementation of the SVD-based DMD, provided enough truncation modes are retained. However, streaming DMD can handle considerably larger datasets at lower computational cost than the SVD-based DMD, thanks to its incremental data updating, which requires only two data samples to be held in memory at any given time. The objective of this study was to extract the main dynamic features with an efficient data-driven method and use the resulting information for a low-dimensional reconstruction of the flow. We considered three examples, namely rapidly rotating turbulent Rayleigh--B\'enard convection, horizontal convection, and the asymptotic suction BL. For rapidly rotating turbulent RBC, a dominant zonal flow pattern, the boundary zonal flow, was identified through the first two dynamic modes. Similarly, for horizontal convection two processes that operate on different time scales could be clearly classified in terms of dynamic modes: the second dynamic mode captures the slow oscillatory dynamics in the bulk while the third dynamic mode describes the much faster process of thermal plume emission. Finally, for the ASBL a distinctive coherent low-momentum zone that travels through the simulation domain in the spanwise direction can be well described by only the first two dynamic modes.
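As a minimal self-contained illustration of such a two-mode reconstruction, the toy example below applies exact (SVD-based) DMD, truncated to rank two, to a synthetic travelling wave; it is a sketch of the generic procedure, not of the DNS data or of the streaming implementation.

```python
import numpy as np

# Toy data: a travelling wave sin(2*pi*x - omega*t), which is exactly
# captured by one complex-conjugate pair of DMD modes (rank 2).
nx, nt, dt = 64, 200, 0.1
x = np.linspace(0.0, 1.0, nx, endpoint=False)
omega = 2 * np.pi / 5.0                      # temporal angular frequency
t = dt * np.arange(nt)
D = np.sin(2 * np.pi * x[:, None] - omega * t[None, :])

# Exact DMD, truncated to rank r = 2.
X, Y = D[:, :-1], D[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2
U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
Atilde = U.conj().T @ Y @ V / s              # projected low-rank operator
lam, W = np.linalg.eig(Atilde)
Phi = Y @ V / s @ W                          # exact DMD modes
b = np.linalg.lstsq(Phi, D[:, 0], rcond=None)[0]

# Rank-2 reconstruction of the full time series.
Dhat = (Phi * b) @ np.vander(lam, nt, increasing=True)
assert np.allclose(Dhat.real, D, atol=1e-8)
# the eigenvalue phases recover the temporal frequency
assert np.allclose(np.sort(np.abs(np.angle(lam))) / dt, [omega, omega])
```

Because the toy dynamics are exactly rank two, the two-mode reconstruction agrees with the data to machine precision; for the turbulent flows discussed above the agreement is, of course, only approximate.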
These examples show that the incrementally updated DMD algorithm can successfully extract the dominant structures together with the corresponding frequencies and modes. This establishes sDMD as an accurate and efficient method to identify and capture dominant spatio-temporal features from large datasets of highly turbulent flows. As DMD decomposes datasets into coherent structures based on characteristic frequencies, it is especially useful for the analysis of flows featuring large-scale coherent structures and periodic motion. The advantages of the sDMD algorithm, both in terms of low storage and potential real-time implementation, will make DMD available in numerous contexts where it would have been infeasible previously. This includes in particular the analysis of massive datasets that cannot completely reside in memory. One such application, for instance, concerns the search for unstable periodic orbits in turbulent flows, where a classical DMD-based approach has been successfully applied at moderate Reynolds number \citep{Page2020}. Streaming DMD may constitute a step forward in extending the applicability of this method to higher Reynolds numbers. As a next step, streaming DMD can be applied to further turbulent flow datasets to investigate in detail its ability to decompose coherent flow structures. One disadvantage of streaming DMD is that the truncation number required to achieve a reconstruction accuracy similar to that of the SVD-based DMD is typically larger. Here we have not considered the effect of the truncation number but have only focused on the analysis of the first several dominant modes. The truncation number is, however, important and should be considered quantitatively when applying streaming DMD to flow-field reconstruction or the decomposition of complex turbulent flows with multi-frequency temporal structures.
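To illustrate the incremental idea in its simplest form, the sketch below accumulates the snapshot correlation matrices pair by pair, so that only the current snapshot pair is ever held in memory; the basis compression that makes the actual streaming DMD of \citet{Hemati2014} practical for high-dimensional data is deliberately omitted here.

```python
import numpy as np

# Simplified streaming sketch: accumulate A = sum_k y_k x_k^T and
# G = sum_k x_k x_k^T one snapshot pair at a time. The least-squares
# DMD operator is then K = A G^+, obtained without ever storing the
# full dataset. (Hemati et al. additionally project the snapshots
# onto growing orthonormal bases; that compression is omitted.)
rng = np.random.default_rng(0)
theta = 0.3
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # true one-step operator

A = np.zeros((2, 2))
G = np.zeros((2, 2))
xk = rng.standard_normal(2)
for _ in range(100):                # stream of snapshot pairs (x_k, y_k)
    yk = M @ xk
    A += np.outer(yk, xk)           # rank-1 updates, O(n^2) storage
    G += np.outer(xk, xk)
    xk = yk

K = A @ np.linalg.pinv(G)           # least-squares DMD operator
assert np.allclose(K, M, atol=1e-8)
lam = np.linalg.eigvals(K)
assert np.allclose(np.sort(np.angle(lam)), [-theta, theta])
```

For noise-free linear dynamics the streamed estimate recovers the one-step operator exactly, and its eigenvalue phases give the rotation frequency.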
Finally, we want to mention that not only DMD but also other approaches based on or related to DMD might be very efficient for the extraction and analysis of the dynamics of turbulent superstructures. While DMD applies linear transformations to obtain modes from snapshots and vice versa, a natural extension would be to employ non-linear transformations instead. This can be realised either via the application of hand-picked nonlinear functions (as in the so-called extended DMD (eDMD), see \citet{Williams2015}), or by training a deep neural network (e.g. deep Koopman models such as that of \citet{Morton2018DeepDM}). A multilayer convolutional neural network appears to be a good candidate for such a task. In general, (un)supervised deep learning seems very promising for the extraction and analysis of the global dynamics of turbulent-flow superstructures. A more detailed consideration of these alternative approaches is beyond the scope of this article, and the application of these advanced methods to turbulent superstructure analysis remains a challenge for future studies. \section*{Acknowledgments} This work is supported by the Max Planck Center for Complex Fluid Dynamics, the Priority Programme SPP 1881 ``Turbulent Superstructures'' of the Deutsche Forschungsgemeinschaft (DFG) under grants Sh405/7 and Li3694/1 and DFG grants Sh405/8 and Sh405/10. The authors acknowledge the Leibniz Supercomputing Centre (LRZ) and the Lichtenberg high performance computer of the TU Darmstadt for providing computing time.
\section{Introduction} Non-compact spin chains are quantum integrable models that appear in certain limits of four-dimensional quantum field theories \cite{Lipatov:1993yb,Beisert:2010jr,Nekrasov:2009rc,Dorey:2011pa}. In contrast to their compact counterparts, the physical or quantum space of non-compact spin chains is infinite-dimensional. Though physically these spin chains are of very different nature, they can be uniformly described algebraically in the framework of the quantum inverse scattering method, see e.g. \cite{Faddeev:1996iy}. Here a commuting family of transfer matrices is built from so-called R-matrices, solutions to the Yang-Baxter equation that are studied systematically in the theory of the Yangian and the universal R-matrix \cite{Jimbo:1985zk,Drinfeld:1985rx}. For a given quantum space, transfer matrices are constructed from the monodromy of R-operators by tracing over the auxiliary space. Due to the Yang-Baxter equation, the transfer matrices built in this way commute for different representations in the auxiliary space. They satisfy certain functional equations which arise from fusion in the auxiliary space, see e.g. \cite{Zabrodin:1996vm} for an overview. A distinguished role among the commuting operators is taken by the Q-operators \cite{Baxter:1972hz}. They are related to the transfer matrices via the quantum Wronskian and satisfy so-called QQ-relations. For the case of interest see \cite{Tsuboi:2009ud} and references therein, where these relations are discussed on the level of eigenvalues, i.e. Q-functions. In the pioneering work \cite{Bazhanov:1994ft,Bazhanov:1996dr,Bazhanov:1998dq}, Q-operators were constructed as traces over an infinite-dimensional oscillator space. We refer the reader to the more recent works \cite{hernandez2012asymptotic,frenkel2015baxter,boos2016oscillator,boos2017oscillator} for a mathematical discussion of these infinite-dimensional representations.
Similarly, Q-operators for spin chains with diagonal twist can be constructed in the framework of the quantum inverse scattering method. The corresponding Lax operators for certain compact q-deformed higher-rank spin chains were written down explicitly in \cite{Bazhanov:2008yc} and derived from the universal R-matrix in \cite{Boos:2010ss}, see also \cite{Antonov:1996ag,Rossi:2002ed,korff2004auxiliary} for earlier works. In the rational case the relevant Lax operators were obtained in a series of papers \cite{Bazhanov:2010ts,Bazhanov:2010jq,Frassek:2010ga,Frassek:2011aa,Frassek:xxx} for $\mathfrak{gl}(N|M)$. These solutions allow for the definition of Q-operators for more general representations, and in particular, as discussed in this article, for representations of the non-compact super algebras $\mathfrak{u}(p,q|r+s)$ acting on the quantum space. While for compact spin chains the Lax operators derived in \cite{Frassek:2011aa} can straightforwardly be used to evaluate the matrix elements of Q-operators, it is much more involved to extract their matrix elements in the case of non-compact representations. The first obvious reason is that the quantum space is infinite-dimensional; however, noting that the Q-operators are block diagonal, this problem can be overcome by considering the magnon blocks separately. The other, more serious issue arises because the Lax operators relevant for constructing Q-operators were derived in the form of a Gauss decomposition. Besides its beauty, this form is rather inconvenient for practical purposes: one has to sum over intermediate states to compute explicit matrix elements, and in the case of non-compact representations there are potentially infinitely many such states.
In this paper we overcome these difficulties which arise when evaluating Q-operators for non-compact spin chains of Jordan-Schwinger type and present an efficient method to determine the full operatorial Q-system% \footnote{ We use the term Q-system to denote the full set of Q-operators or Q-functions of a given integrable model together with their functional relations. The term is also used for the system of functional relations among characters of Kirillov-Reshetikhin modules, see \cite{Kirillov1990}. } for a fixed magnon block. Section~\ref{sec:qoperatorsjs} is a brief review where we introduce the operatorial Q-system and the corresponding Lax operators, and discuss the Jordan-Schwinger type representations in the quantum space. We then first discuss non-compact spin~$-s$ chains in Section~\ref{sec:sl2}. In this case there are two non-trivial Lax operators, one involving an infinite sum, whose matrix elements we evaluate in full detail. We generalise our approach to higher-rank super spin chains in Section~\ref{sec:finiterep}. Here we discuss which Q-operators involve infinite sums and provide a decomposition of the Lax operators on which our approach is based. For the lowest-level Lax operators we present an integral formula which allows them to be conveniently evaluated in terms of rational functions. The evaluation of higher-level Lax operators is discussed in Section~\ref{sec:higher}. In Section~\ref{sec:qsys}, we use the integral representation of the lowest-level Lax operators to calculate the matrix elements of the corresponding Q-operators as finite matrices in each magnon block. We furthermore show how the remaining Q-operators can be efficiently determined from this data. In Section~\ref{sec:vacuum} we show how to apply these methods to the $\mathcal{N}\! = \! 4$ SYM\xspace spin chain in the presence of a full diagonal twist, and calculate the Q-functions of the BMN vacuum of arbitrary length.
We conclude our work in Section~\ref{sec:conclusion} and speculate about the application of Q-operators to the Quantum Spectral Curve of $\mathcal{N}\! = \! 4$ SYM\xspace \cite{Gromov:2013pga,Gromov:2014caa}. We provide further information on the operatorial Q-systems in the appendix. In particular, to facilitate the application of our results, we collect all formulas which are needed for the calculation of the Q-systems in Appendix~\ref{sec:formulas}. These include formulas for the matrix elements of the Lax operators, and for the evaluation of the supertraces over the auxiliary Fock spaces. \section{Q-operators for representations of oscillator type} \label{sec:qoperatorsjs} In this section we present the derivation of the Lax operators ($\mathcal{R}$-operators) which allow the construction of the Q-operators of $\mathfrak{gl}(N|M)$ spin chains with representations realised via Jordan-Schwinger oscillators as traces of monodromy matrices. The general construction reviewed here was developed in a series of papers: The derivation of the $\mathcal{R}$-operators follows the bosonic case in~\cite{Frassek:2011aa} but incorporates the supersymmetric Lax matrices derived in~\cite{Frassek:2010ga}. For the Lax operators of bosonic models, Schwinger oscillators were discussed in \cite{Meneghelli:thesis}. The more general derivation of the Lax operators for generalised rectangular representations is unpublished~\cite{Frassek:xxx}, while expressions for the resulting operators can be found in~\cite{Frassek:thesis}. \subsection{Lax operators for Q-operators}\label{sec:laxe} The study of supersymmetric rational spin chains goes back to Kulish~\cite{Kulish:1985bj}, who introduced the $\mathfrak{gl}(N|M)$~invariant Lax operators \begin{equation} \mathcal{L}(z)=z+\sum_{a,b=1}^{N+M}(-1)^{\gr{b}} e_{ab}E_{ba}\,, \label{eq:slax} \end{equation} intertwining arbitrary representations of $\mathfrak{gl}(N|M)$ with the defining fundamental one.
Here the indices take the values $a,b=1,\ldots,N+M$, while $\gr{a}$ denotes the grading $\gr{\text{fermion}}=1$ and $\gr{\text{boson}}=0$. The $\mathfrak{gl}(N|M)$ generators $E_{ab}$ satisfy the commutation relations \begin{equation} [E_{ab},E_{cd}] = \delta_{bc}E_{ad}-(-1)^{(\gr{a}+\gr{b})(\gr{c}+\gr{d})}\delta_{da}E_{cb}\,, \label{eq:glnm} \end{equation} where we defined the graded commutator as $ [X,Y]=XY-(-1)^{\gr{X}\gr{Y}}YX$. The generators $e_{ab}$ in \eqref{eq:slax} denote the defining fundamental generators of $\mathfrak{gl}(N|M)$ satisfying $e_{ab}e_{cd}=\delta_{bc}e_{ad}$. In the following we restrict to the Schwinger oscillator realisation \begin{equation} E_{ab}=\dagg{\oscgreek{\chi}}_{a}\oscgreek{\chi}_{b}\,, \label{eq:schwinger} \end{equation} where $[\oscgreek{\chi}_{a},\dagg{\oscgreek{\chi}}_{b}] =\delta_{ab}$. The Lax operators for Q-operators with the defining representation of $\mathfrak{gl}(N|M)$ at each spin chain site (the so-called quantum space) were derived in \cite{Frassek:2010ga}, and are given by \begin{equation} L_I(z) = \left(\begin{array}{cc} (z-s_I)\delta_{\udt a \udt b}-(-1)^{\gr{b}}\dagg{\oscgreek{\xi}}_{\udt a \dt a}\oscgreek{\xi}_{\dt a \udt b} & \dagg{\oscgreek{\xi}}_{\udt a \dt b} \\ -(-1)^{\gr{b}}\oscgreek{\xi}_{\dt a \udt b} & \delta_{\dt a \dt b} \end{array}\right)\,. \label{eq:splax} \end{equation} There are $2^{N+M}$ such Lax operators labelled by the set $I\subseteq \{1,\ldots,M+N\}$. The notation here and in the rest of this article is as follows: we sum over repeated indices (appearing two or more times); unbarred indices take values $\udt a, \udt b\in I$ while barred ones take values in its complement, $\dt a,\dt b\in \bar I$. The $(N|M)\times(N|M)$ matrix in \eqref{eq:splax} is written in terms of the sub blocks under this decomposition.% \footnote{We remark that quantities labelled by the set $I$ depend on the partition $I\cup\bar I=\{1,\cdots,N+M\}$.
We leave the dependence on this full set implicit.} The shift $s_I$ in the spectral parameter $z$ is introduced for convenience and reads \begin{equation} s_I=\frac{\sum_{\dt a\in \bar I}(-1)^{\gr{\dt a}}}{2}\,. \label{eq:def_shift} \end{equation} The oscillators $(\oscgreek{\xi}_{\dt a \udt a} ,\dagg{\oscgreek{\xi}}_{\udt a \dt a})$ satisfy the graded Heisenberg algebra \begin{equation} [\oscgreek{\xi}_{\dt a \udt a},\dagg{\oscgreek{\xi}}_{\udt b \dt b}] = \oscgreek{\xi}_{\dt a \udt a}\dagg{\oscgreek{\xi}}_{\udt b \dt b} - (-1)^{(\gr{\udt a}+\gr{\dt a})(\gr{\udt b}+\gr{\dt b})} \dagg{\oscgreek{\xi}}_{\udt b \dt b}\oscgreek{\xi}_{\dt a \udt a} = \delta_{\udt a \udt b}\delta_{\dt a \dt b}\,. \label{eq:comosc} \end{equation} We can write down the defining Yang-Baxter equation for the $\mathcal{R}$-operators which are the building blocks for Q-operators when the sites of the quantum space are in a representation space different from the fundamental representation. As in the bosonic case \cite{Frassek:2011aa} this relation is given by \begin{equation} \mathcal{L}(x-y)L_{I}(x)\mathcal{R}_{I}(y) = \mathcal{R}_{I}(y)L_{I}(x)\mathcal{L}(x-y)\,. \label{eq:yberll} \end{equation} The form of $\mathcal{R}$-operators was obtained in \cite{Frassek:xxx} and spelled out in \cite{Frassek:thesis}. The derivation follows \cite{Frassek:2011aa} and, as we will discuss in the following, simplifies significantly in the case which we are interested in. As for $\mathfrak{gl}(N)$ one takes a factorised ansatz, \begin{equation} \mathcal{R}_{I}(z) = e^{(-1)^{\gr{\udt c}+\gr{\udt c}\gr{\dt c}} \dagg{\oscgreek{\xi}}_{\udt c \dt c}E_{\udt c \dt c}} \, \mathcal{R}_{0}^{I}(z) \, e^{-(-1)^{\gr{\udt d}\gr{\dt d}+\gr{\udt d}+\gr{\dt d}} \oscgreek{\xi}_{\dt d \udt d}E_{\dt d \udt d}}\,, \label{eq:laxus} \end{equation} and ends up with a difference equation for the middle part $ \mathcal{R}_{0}^{I}(z)$. 
The solution to the difference equation simplifies significantly for the choice of generators \eqref{eq:schwinger}. For representations of this type one finds that $\mathcal{R}_{0}^{I}(z)$ can be written in terms of a single Gamma~function \begin{equation} \mathcal{R}_{0}^{I}(z) = \rho_{I}(z)\, \Gamma\left(z+1-s_I-\textstyle\sum_{\dt a}E_{\dt a\dt a}\right) \; . \label{eq:r0} \end{equation} Here $\rho_I$ denotes a normalisation not fixed by the Yang-Baxter equation \eqref{eq:yberll}. As we will see, a good choice for it is given by \begin{equation} \rho_I(z)=\frac{1}{\Gamma(z+1-s_I-\mathbf{C})} \; , \label{eq:norm} \end{equation} which depends on the central charge $\mathbf{C}$ that can be expressed in terms of the number operators $\mathbf{N}_a=\dagg{\oscgreek{\chi}}_{a}\oscgreek{\chi}_{a}$ as \begin{equation} \mathbf{C}=\sum_{a=1}^{N+M} \mathbf{N}_a \; . \label{eq:def_central_charge} \end{equation} We conclude that \begin{equation} \mathcal{R}_{I}(z) = e^{(-1)^{\gr{\udt c}+\gr{\udt c}\gr{\dt c}} \dagg{\oscgreek{\xi}}_{\udt c \dt c}\dagg{\oscgreek{\chi}}_{\udt c}\oscgreek{\chi}_{\dt c}} \, \frac{ \Gamma(z+1-s_I-\dagg{\oscgreek{\chi}}_{\dt a}\oscgreek{\chi}_{\dt a}) }{ \Gamma(z+1-s_I-\mathbf{C}) } \, e^{-(-1)^{\gr{\udt d}\gr{\dt d}+\gr{\udt d}+\gr{\dt d}} \oscgreek{\xi}_{\dt d \udt d}\dagg{\oscgreek{\chi}}_{\dt d}\oscgreek{\chi}_{\udt d}} \label{eq:fullax} \end{equation} solves the Yang-Baxter equation in \eqref{eq:yberll}. The normalisation \eqref{eq:norm} ensures that for the empty set $ \mathcal{R}_{{\varnothing}}(z)=1$ and renders $ \mathcal{R}_{I}(z)$ a polynomial in $z$ for compact representations. Finally we note that the middle part of \eqref{eq:fullax} involves multiple Gamma functions for more general representations, see~\cite{Frassek:2011aa,Frassek:xxx,Frassek:thesis}. However, for Jordan-Schwinger type representations only one Gamma function appears, cf.~\cite{Meneghelli:thesis}. 
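Before turning to the Q-operators, the bosonic version of the Schwinger realisation \eqref{eq:schwinger} can be verified numerically on a truncated two-mode Fock space; the commutation relations \eqref{eq:glnm} (here without grading) hold exactly on states whose occupation numbers stay below the cutoff. A minimal sketch for $\mathfrak{gl}(2)$, with an arbitrary cutoff chosen for illustration:

```python
import numpy as np

D = 8                                        # Fock-space cutoff per mode
a = np.diag(np.sqrt(np.arange(1, D)), k=1)   # annihilation operator
ad = a.T                                     # creation operator
I = np.eye(D)

# Two-mode Schwinger generators E_ab = chi^dag_a chi_b for gl(2),
# acting on the tensor product basis |n1, n2> (index n1*D + n2).
E = {(1, 1): np.kron(ad @ a, I),
     (1, 2): np.kron(ad, a),
     (2, 1): np.kron(a, ad),
     (2, 2): np.kron(I, ad @ a)}

# Check [E_12, E_21] = E_11 - E_22 on the columns (states) whose
# occupation numbers both stay below D-1, so truncation never bites.
comm = E[1, 2] @ E[2, 1] - E[2, 1] @ E[1, 2]
target = E[1, 1] - E[2, 2]
safe = [n1 * D + n2 for n1 in range(D - 1) for n2 in range(D - 1)]
assert np.allclose(comm[:, safe], target[:, safe])
```

The restriction to the `safe` columns is the finite-dimensional stand-in for the genuinely infinite-dimensional Fock space on which the algebra holds exactly.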
\subsection{Definition of the Q-operators} \label{sec:qs} Using the Lax operators $L_I(z)$ defined in \eqref{eq:splax}, the Q-operators for $\mathfrak{gl}(N|M)$ rational spin chains were introduced in \cite{Frassek:2010ga} for the defining fundamental representations at each site of the quantum space. We are interested in more general representations of oscillator type, cf. \eqref{eq:schwinger}. However, as discussed e.g. in \cite{Frassek:2011aa} for $\mathfrak{gl}(N)$, the construction of the Q-operators and the functional relations among them should be independent of the quantum space. Thus, following \cite{Frassek:2010ga} we define the Q-operators as \begin{equation} \mathbf{Q}_I(z)=e^{iz\sum_{a\in I}(-1)^{\gr{a}}\phi_a}\,\widehat\str\, \mathcal{M}_I(z)\,. \label{eq:qop} \end{equation} Here the monodromy $\mathcal{M}_I$ is built from the tensor product of the $\mathcal{R}$-operators in \eqref{eq:fullax} in the space of oscillators $(\oscgreek{\chi},\dagg{\oscgreek{\chi}})$ and multiplication in the auxiliary space of oscillators $(\oscgreek{\xi},\dagg{\oscgreek{\xi}})$ as \begin{equation} \mathcal{M}_I(z)=\mathcal{R}_{I}^{[1]}(z)\otimes \mathcal{R}_{I}^{[2]}(z)\otimes\ldots\otimes \mathcal{R}_{I}^{[L]}(z)\,. \end{equation} The normalised supertrace $\widehat\str$ is defined by \begin{equation} \widehat\str\, X =\frac{\str e^{-i\sum_{a,b}(\phi_a-\phi_b)\mathbf{N}_{ab}} X}{\str e^{-i\sum_{a,b}(\phi_a-\phi_b)\mathbf{N}_{ab}}}\,, \label{eq:str} \end{equation} where $\str$ denotes the ordinary supertrace over the auxiliary Fock space spanned by the states generated from acting with the operators $\dagg{\oscgreek{\xi}}_{\udt a \dt a}$ on a Fock vacuum satisfying $\oscgreek{\xi}_{\dt a \udt a}\ket{0}=0$. These states are labelled by the values of the number operators \begin{equation} \mathbf{N}_{ab}=\dagg{\oscgreek{\xi}}_{ab}\oscgreek{\xi}_{ba}\,, \end{equation} where no sum is implied over the indices $a$ and $b$.
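The convergence of such twisted traces can be illustrated on a single bosonic oscillator mode, where the trace of the twist factor reduces to a geometric series; the sketch below uses an arbitrary illustrative twist value whose small negative imaginary part plays the role of the $i\varepsilon$ prescription discussed below.

```python
import numpy as np

# Regularised trace over a single bosonic Fock space: with a twist
# phi carrying a small negative imaginary part, q = exp(-1j*phi)
# satisfies |q| < 1 and the trace of q**N over occupation numbers
# becomes a convergent geometric series sum_n q^n = 1/(1 - q).
phi = 0.7 - 0.05j                 # illustrative twist with i*eps shift
q = np.exp(-1j * phi)
assert abs(q) < 1

partial = sum(q**n for n in range(2000))
assert np.isclose(partial, 1 / (1 - q))
```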
The twist parameters $\phi_a$, which can be interpreted as Aharonov-Bohm phases, cf. \cite{Frassek:2010ga}, break the $\mathfrak{gl}(N|M)$ invariance down to its diagonal subalgebra. They are required for the convergence of the supertraces. Note that a regularisation is needed to make some of the traces converge, even in the presence of twists; one can use an $i\varepsilon$ prescription for the twists, such that $\Re(\exp({-i\sum_{a,b}(\phi_a-\phi_b)}))< 1$, see~\cite{Bazhanov:2010jq}. The Q-operators defined in \eqref{eq:qop} commute with each other, $[\mathbf{Q}_I(z),\mathbf{Q}_{I'}(z')]=0$, and, as a consequence of the Yang-Baxter equation \eqref{eq:yberll}, also with the transfer matrix built from the Lax operators $\mathcal{L}$ realised via \eqref{eq:schwinger}. For a discussion on how to obtain the Hamiltonian from the Q-operators we refer the reader to \cite{Frassek:2012mg}. Furthermore, it was argued in \cite{Frassek:2010ga} that, depending on the grading, the Q-operators satisfy either the bosonic QQ-relations \begin{equation} \Delta_{ab} \mathbf{Q}_{I\cup \{a,b\}}(z) \mathbf{Q}_I(z)= \mathbf{Q}_{I\cup \{a\}}(z+\sfrac{1}{2}) \mathbf{Q}_{I\cup \{b\}}(z-\sfrac{1}{2})-\mathbf{Q}_{I\cup\{a\}}(z-\sfrac{1}{2}) \mathbf{Q}_{I\cup \{b\}}(z+\sfrac{1}{2})\,, \label{eq:QQb} \end{equation} where $\gr{a}=\gr{b}$, or the fermionic QQ-relations \begin{equation} \Delta_{ab} \mathbf{Q}_{I\cup \{a\}}(z) \mathbf{Q}_{I\cup \{b\}}(z)= \mathbf{Q}_{I\cup\{a,b\}}(z+\sfrac{1}{2}) \mathbf{Q}_{I}(z-\sfrac{1}{2})-\mathbf{Q}_{I\cup \{a,b\}}(z-\sfrac{1}{2}) \mathbf{Q}_{I}(z+\sfrac{1}{2})\,, \label{eq:QQf} \end{equation} where $\gr{a}\neq\gr{b}$. Here we defined the trigonometric prefactor \begin{equation} \Delta_{ab}= (-1)^{\gr{a}} 2i\sin\left(\frac{\phi_a-\phi_b}{2}\right)\,. \label{eq:delta} \end{equation} The set of all Q-operators can be visualised on a hypercubic Hasse diagram, representing the partial order induced by the inclusion of indices, see for example \cite{Tsuboi:2009ud}.
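For orientation, the sketch below enumerates the vertices and quadrilaterals of this Hasse diagram for $K=N+M=4$ (e.g. $\mathfrak{gl}(2|2)$): the $2^K$ index sets carrying Q-operators, and the quadruples $(I,\,I\cup\{a\},\,I\cup\{b\},\,I\cup\{a,b\})$ with $a,b\notin I$ on which the QQ-relations act.

```python
from itertools import combinations

# Vertices of the hypercube: all index sets I in {1,...,K}.
K = 4
indices = range(1, K + 1)
vertices = [frozenset(c) for r in range(K + 1)
            for c in combinations(indices, r)]

# Quadrilaterals (I, I+{a}, I+{b}, I+{a,b}) with a, b not in I;
# each carries one bosonic or fermionic QQ-relation.
quads = [(I, I | {a}, I | {b}, I | {a, b})
         for I in vertices
         for a, b in combinations(sorted(set(indices) - I), 2)]

assert len(vertices) == 2**K                        # 16 Q-operators
assert len(quads) == 2**(K - 2) * K * (K - 1) // 2  # 24 QQ-relations
```

The count $\binom{K}{2}\,2^{K-2}$ follows since each quadrilateral is fixed by the pair $\{a,b\}$ and an arbitrary subset $I$ of the remaining $K-2$ indices.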
The relations \eqref{eq:QQb} and \eqref{eq:QQf} then constrain the operators on each quadrilateral of this diagram. It is straightforward to compute the Q-operators for the empty set $I={\varnothing}$ and the full set $I=\{1,\ldots,N+M\}={\overline{\varnothing}}$. Using the normalisation in \eqref{eq:norm} one finds \begin{equation} \mathbf{Q}_{\varnothing}(z) = 1\,, \qquad \mathbf{Q}_{\overline{\varnothing}}(z) = \left(\frac{\Gamma(z+1)}{\Gamma(z+1-\mathbf{C})}\right)^L\,, \label{eq:qemptyfull} \end{equation} where we imposed the constraint% \footnote{ For $\mathfrak{gl}(N|N)$ one has the additional constraint $ \sum_{a=1}^{2N}\phi_a=0$. } \begin{equation} \sum_{a=1}^{N+M}(-1)^{\gr{a}}\phi_a=0 \label{eq:twistconstraint} \; . \end{equation} This relation is needed for $\mathbf{Q}_{{\overline{\varnothing}}}$ to be a rational function of the spectral parameter. \subsection{Representation in the quantum space} \label{sec:representation} So far we have not specified a representation in the quantum space. Parts of our derivations will be independent of the concrete representation, but calculations of explicit matrix elements of course require concrete knowledge of the representation space. Here we focus on unitary highest- or lowest-weight representations of $\mathfrak{u}(p,q|r+s)$ of oscillator type which were first investigated in \cite{Bars:1982ep}. To specialise to a real form of the algebra, we have to indicate, in addition to the grading $\gr{a}=0,1$, which directions acquire an opposite sign under conjugation, which can be realised via a particle-hole transformation. We indicate these using the variables \begin{equation} \omega_{a} = \begin{cases} +1 & \text{if oscillator $a$ is not transformed} \\ -1& \text{if oscillator $a$ is transformed} \end{cases} \; .
\label{eq:phindicator} \end{equation} Then the generators $E_{ab}=\dagg{\oscgreek{\chi}}_a\oscgreek{\chi}_b$ can be realised by the oscillators \begin{equation} (\oscgreek{\chi}_a,\dagg{\oscgreek{\chi}}_a)=\begin{rrcases} \;\;\; (\osc{a}_a,\dagg{\osc{a}}_a) &\qquad\text{for }\gr{a}=0\text{ and }\omega_a=+1\\ (\dagg{\osc{b}}_a,-\osc{b}_a) &\qquad\text{for }\gr{a}=0\text{ and }\omega_a=-1\\ (\osc{c}_a,\dagg{\osc{c}}_a) &\qquad\text{for }\gr{a}=1\text{ and }\omega_a=+1\\ (\dagg{\osc{d}}_a,\osc{d}_a) &\qquad\text{for }\gr{a}=1\text{ and }\omega_a=-1 \end{rrcases} \; . \label{eq:realoscs} \end{equation} These oscillators act on a Fock space with a vacuum state satisfying $ \osc{a}_a\ket{0}= \osc{b}_a\ket{0}= \osc{c}_a\ket{0}= \osc{d}_a\ket{0}=0 $, such that an orthonormal basis is given by \begin{equation} \ket{{\mathbf{m}}} = \ket{m_1,\ldots,m_K} = \frac{\left\{\begin{smallmatrix}\dagg{\osc{a}}_1\\\dagg{\osc{b}}_1\\\dagg{\osc{c}}_1\\\dagg{\osc{d}}_1\end{smallmatrix}\right\}^{m_1}}{\sqrt{m_1!}} \cdots \frac{\left\{ \begin{smallmatrix}\dagg{\osc{a}}_{K}\\\dagg{\osc{b}}_{K}\\\dagg{\osc{c}}_{K}\\\dagg{\osc{d}}_{K}\end{smallmatrix}\right\}^{m_{K}}}{\sqrt{m_{K}!}} \ket{0,\ldots,0} \; , \label{eq:defstates} \end{equation} with $K=N+M=p+q+r+s$. Since the oscillators obey the standard conjugation $\osc{a}^\dagger=\dagg{\osc{a}}$, $\osc{b}^\dagger=\dagg{\osc{b}}$, $\osc{c}^\dagger=\dagg{\osc{c}}$ and $\osc{d}^\dagger=\dagg{\osc{d}}$, the generators are those of $\mathfrak{u}(p,q|r+s)$, satisfying \begin{equation} E_{ab}^\dagger = \omega_a^{1+\gr{a}} \omega_b^{1+\gr{b}} E_{ba} \; . \end{equation} Finally, the Fock space contains a series of representations labelled by the central charge $\mathbf{C}=\sum_a\mathbf{N}_a$ where the number operators have to be expressed in terms of oscillators $\osc{a}$, $\osc{b}$, $\osc{c}$ and $\osc{d}$ via \eqref{eq:realoscs}. These representations are of highest or lowest weight type depending on the order of the different types of oscillators. 
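To make the four cases of \eqref{eq:realoscs} concrete, the conjugation property of the generators can be checked with truncated oscillator matrices. The following sketch is ours, not part of the construction above; it treats the purely bosonic case $p=q=1$ (grading $0$, $\omega=(-1,+1)$) with an arbitrary Fock-space cutoff, and uses the fact that matrix adjoints are exact under truncation.

```python
# Sketch (illustrative, not from the text): verify the conjugation property
# E_ab^dagger = omega_a^(1+|a|) omega_b^(1+|b|) E_ba for the purely bosonic
# case p = q = 1, using truncated oscillator matrices.  K is an arbitrary
# Fock-space cutoff; only adjoints are used, so truncation is harmless here.
import math

K = 6

def lower():
    # annihilation operator: <n-1| op |n> = sqrt(n)
    return [[math.sqrt(j) if i == j - 1 else 0.0 for j in range(K)]
            for i in range(K)]

def dagger(m):
    return [[m[j][i] for j in range(len(m))] for i in range(len(m[0]))]

def kron(x, y):
    return [[x[i][j] * y[k][l]
             for j in range(len(x[0])) for l in range(len(y[0]))]
            for i in range(len(x)) for k in range(len(y))]

def scale(c, m):
    return [[c * e for e in row] for row in m]

b = lower()                       # mode 1: transformed, omega_1 = -1
a = lower()                       # mode 2: untransformed, omega_2 = +1
# (chi_1, chi_1^dagger) = (b^dagger, -b), (chi_2, chi_2^dagger) = (a, a^dagger):
E12 = kron(scale(-1.0, b), a)     # E_12 = chi_1^dagger chi_2
E21 = kron(dagger(b), dagger(a))  # E_21 = chi_2^dagger chi_1

# omega_1^(1+0) * omega_2^(1+0) = -1, hence E_12^dagger must equal -E_21:
lhs, rhs = dagger(E12), scale(-1.0, E21)
dev = max(abs(lhs[i][j] - rhs[i][j])
          for i in range(K * K) for j in range(K * K))
assert dev < 1e-12
```

The relative minus sign produced by the particle-hole transformed direction is exactly the factor $\omega_a^{1+\gr{a}}\omega_b^{1+\gr{b}}$ in the conjugation rule.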
\section{Non-compact Heisenberg spin chains} \label{sec:sl2} In this section we provide the formulas necessary to evaluate Q-operators of non-compact Heisenberg spin chains, which include, for example, the spin~$-1$ model of interest for QCD in the Regge limit \cite{Faddeev:1994zg}. These models constitute the simplest non-trivial case that can be treated in the more general framework presented in Sections~\ref{sec:finiterep} and \ref{sec:qsys}. While the formula for the Lax operators \eqref{eq:fullax} is extremely compact, it is rather inconvenient for practical purposes, where the matrix elements of the Lax operators and Q-operators are of interest. In particular, for non-compact representations we encounter infinite sums. To understand this problem we consider the $\mathcal{R}$-operators \eqref{eq:fullax} with $|I|=1$, which for the case of $\mathfrak{gl}(2)$ are given by \begin{equation} \mathcal{R}_{\{a\}}(z) = e^{\dagg{\oscgreek{\xi}}_{\udt a \dt a}E_{\udt a\dt a}} \frac{ \Gamma(z+\frac{1}{2}-E_{\dt a\dt a}) }{ \Gamma(z+\frac{1}{2}-\mathbf{C}) } e^{- \oscgreek{\xi}_{\dt a \udt a}E_{\dt a\udt a}}\,, \label{eq:fullaxsl2} \end{equation} where $a=1,2$ and $\dt a \neq a$. For infinite-dimensional representations the Lax operator contains two infinite sums emerging from the exponential functions. Using the algebraic relations in \eqref{eq:glnm} we note that the Lax operators can be rewritten as \begin{equation} \mathcal{R}_{\{a\}}(z) = \sum_{n=-\infty}^{+\infty} (\dagg{\oscgreek{\xi}}_{\udt a \dt a}E_{\udt a\dt a})^{\theta(+n)|n|} \mathbb{M}_{\{a\}}(z;|n|) (- \oscgreek{\xi}_{\dt a \udt a}E_{\dt a\udt a})^{\theta(-n)|n|}\,, \label{eq:fullaxsl22} \end{equation} with $\theta(-m)=\theta(0)=0$ and $\theta(m)=1$ for $m\in\mathbb{N}_+$. 
The middle part is given by an infinite sum and only depends on Cartan elements \begin{equation} \mathbb{M}_{\{a\}}(z;|n|)=\frac{1}{|n|!}\frac{\Gamma(z+\frac{1}{2}-E_{\dt a\dt a})}{ \Gamma(z+\frac{1}{2}-\mathbf{C})}\,_3F_2(E_{\dt a\dt a}-\lambda_1,E_{\dt a\dt a}-\lambda_2+1,-\mathbf{N}_{a\dt a};1+|n|,E_{\dt a\dt a}+\frac{1}{2}-z;1)\,, \label{eq:laxmid} \end{equation} with the $\mathfrak{gl}(2)$ weights $\lambda_1$ and $\lambda_2$.\footnote{Note that in the rank $1$ case the reformulation \eqref{eq:fullaxsl22} with \eqref{eq:laxmid} is also valid for representations that are not of Jordan-Schwinger type.} We are interested in highest-weight state representations of the type discussed in Section~\ref{sec:representation}. To describe non-compact spin chains with spin~$-s$, where $s$ is a positive half-integer, we take the Jordan-Schwinger realisation \eqref{eq:schwinger} and perform a particle-hole transformation on the oscillators of type $1$: \begin{equation} (\dagg{\oscgreek{\chi}}_1,\oscgreek{\chi}_1)\rightarrow (-\osc{b},\dagg{\osc{b}}) \,,\qquad (\dagg{\oscgreek{\chi}}_2,\oscgreek{\chi}_2)\rightarrow (\dagg{\osc{a}},\osc{a})\,. \end{equation} For convenience we use a notation different from the rest of this article and label the states in the spin $-s$ representation as \begin{equation} |m\rangle_s=|2s-1+m,m\rangle\,, \label{eq:statesrank1} \end{equation} cf.~\eqref{eq:defstates}. The highest-weight state $ |0\rangle_s$ then satisfies \begin{equation} E_{12}|0\rangle_s=0\,,\qquad E_{11}|0\rangle_s=\lambda_1|0\rangle_s=-2s|0\rangle_s\,,\qquad E_{22}|0\rangle_s=\lambda_2|0\rangle_s = 0 \; , \end{equation} and the other states of the representation can be generated from $|0\rangle_s$ by acting with the operator $E_{21}$. The central charge takes the value $\mathbf{C}|m\rangle_s=-2s|m\rangle_s$. Our goal is to obtain the matrix elements of the $\mathcal{R}$-operators in \eqref{eq:fullaxsl2}. 
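Before turning to the matrix elements, the representation data just listed can be checked directly by acting on the occupation numbers of $|m\rangle_s$. The following sketch is illustrative only (the value $s=1$ and all helper names are our choices); in this realisation $E_{11}=-(\mathbf{N}_b+1)$, $E_{22}=\mathbf{N}_a$, $E_{12}=-\osc{b}\,\osc{a}$ and $E_{21}=\dagg{\osc{b}}\,\dagg{\osc{a}}$.

```python
# Sketch (assumption: s = 1 chosen for illustration): check the
# highest-weight conditions of the spin -s realisation by acting on
# occupation numbers, with states |m>_s = |2s-1+m, m>.
import math

s = 1

def state(m):                      # |m>_s as {(n_b, n_a): coeff}
    return {(2 * s - 1 + m, m): 1.0}

def apply_E12(st):                 # E_12 = -b a lowers both occupations
    out = {}
    for (nb, na), c in st.items():
        if nb > 0 and na > 0:
            out[(nb - 1, na - 1)] = -c * math.sqrt(nb * na)
    return out

def apply_E21(st):                 # E_21 = bdag adag raises both
    return {(nb + 1, na + 1): c * math.sqrt((nb + 1) * (na + 1))
            for (nb, na), c in st.items()}

hw = state(0)
(nb, na), = hw
assert apply_E12(hw) == {}         # E_12 |0>_s = 0
assert -(nb + 1) == -2 * s         # E_11 eigenvalue lambda_1 = -2s
assert na == 0                     # E_22 eigenvalue lambda_2 = 0
assert -(nb + 1) + na == -2 * s    # central charge C = -2s
assert set(apply_E21(state(3))) == set(state(4))  # E_21 |m>_s ~ |m+1>_s
```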
From \eqref{eq:qemptyfull} we find that $\mathcal{R}_{\varnothing}(z)=1$ and $\mathcal{R}_{\overline{\varnothing}}(z)=\frac{1}{(z+1)_{2s}}$, where the Pochhammer symbol is defined by $(a)_n\coloneqq \Gamma(a+n)/\Gamma(a)$.% \footnote{In the following we will sometimes consider Pochhammer symbols where $a$ can be a negative integer. In this case we define the symbol using the identity \( \frac{\Gamma(a+n)}{\Gamma(a)} =(-1)^n \frac{\Gamma(1-a)}{\Gamma(1-a-n)} \), which follows from Euler's reflection formula.} It is rather straightforward to obtain the matrix elements of $\mathcal{R}_{\{2\}}$. They are polynomials in the spectral parameter $z$ and can be obtained noting that the series representation of the hypergeometric function in \eqref{eq:laxmid} truncates. One finds \begin{equation} \begin{split} _s\langle \tilde m|\mathcal{R}_{ \{2\} }(z)| m\rangle_s&=\sqrt{\frac{\max(m,\tilde m)!}{\min(m,\tilde m)!}\frac{\max(2s-1+m,2s-1+\tilde m)!}{\min(2s-1+m,2s-1+\tilde m)!}}\\[5pt] &\;\times\dagg{\oscgreek{\xi}}_{21}^{\,\theta(\tilde m- m)|\tilde m- m|}\; \mathbb{M}_{\{2\}}(z,\mathbf{N}_{21},|m-\tilde m|,\min(m,\tilde m))\;\oscgreek{\xi}_{12}^{\,\theta( m-\tilde m)|m-\tilde m|}\,, \end{split} \label{eq:formR2} \end{equation} with the middle part which is diagonal in the auxiliary space \begin{equation} \mathbb{M}_{ \{2\}}(z,\mathbf{N}_{21},k,l)=(2s-1+l)!\sum_{p=0}^{l}\binom{l}{p}\frac{(\mathbf{N}_{21}+1+p-l)_{l-p}(z+\frac{1}{2}+2s)_{p}}{(2s-1+p)!(k+l-p)!} \; . \label{eq:middleR2} \end{equation} Here $\theta(-m)=\theta(0)=0$ and $\theta(m)=1$ for $m\in\mathbb{N}_+$. However, as already noted in \cite{Frassek:2011aa} the operator $\mathcal{R}_{\{1\}}$ yields infinite sums when evaluated naively, since there are only raising operators acting on the states, cf.~\eqref{eq:fullaxsl2}. This makes it difficult to evaluate its matrix elements concretely in terms of rational functions. 
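As an aside, the Pochhammer convention adopted in the footnote above is easy to check numerically. The following sketch (sample arguments are our own choices) compares the defining product with the reflection-formula expression at points where both are finite.

```python
# Check of the Pochhammer convention for a negative integer first argument:
# (a)_n = a(a+1)...(a+n-1) agrees with (-1)^n Gamma(1-a)/Gamma(1-a-n)
# whenever the right-hand side is finite.  Sample values are illustrative.
import math

def poch_product(a, n):
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def poch_reflection(a, n):
    return (-1)**n * math.gamma(1 - a) / math.gamma(1 - a - n)

for n in (1, 2, 3):                 # 1 - a - n stays positive for a = -3
    assert poch_product(-3, n) == poch_reflection(-3, n)

# ordinary definition Gamma(a+n)/Gamma(a) for generic (non-integer) a:
assert abs(poch_product(2.5, 4) - math.gamma(6.5) / math.gamma(2.5)) < 1e-9
```

For $a=-3$ and $n\geq 4$ the product vanishes while the reflection formula hits a pole of $\Gamma(1-a-n)$, consistent with the limiting value $0$.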
Nevertheless, as we will show in Section~\ref{sec:derivation}, using the integral representation of the hypergeometric function and the Euler transformation the matrix elements can be obtained from \eqref{eq:fullaxsl22} and written as \begin{equation} \begin{aligned} &_s\langle \tilde m|\mathcal{R}_{\{1\}}(z)| m\rangle_s=\sqrt{\frac{\max(m,\tilde m)!}{\min(m,\tilde m)!}\frac{\max(2s-1+m,2s-1+\tilde m)!}{\min(2s-1+m,2s-1+\tilde m)!}} \\[6pt] &\qquad\times (-\dagg{\oscgreek{\xi}}_{12})^{\,\theta(m-\tilde m)|m-\tilde m|} \mathbb{M}_{ \{1\}}(z,\mathbf{N}_{12},|m-\tilde m|,\max(m,\tilde m))(-\oscgreek{\xi}_{21})^{\,\theta(\tilde m - m)|\tilde m -m|} \; , \label{eq:formR1} \end{aligned} \end{equation} with the middle part taking the simple form \begin{equation} \mathbb{M}_{ \{1\}}(z,\mathbf{N}_{12},k,l)=\frac{ \mathbb{M}_{\{2\}}(z,\mathbf{N}_{12},k,l-k)}{(z-\mathbf{N}_{12}-l+\frac{1}{2})_{2l-k+2s}}\,. \label{eq:middleR1} \end{equation} We see that this non-polynomial $\mathcal{R}$-operator can also be written in a very compact form, and observe that the resulting expression is very similar to that of the polynomial $\mathcal{R}$-operator. In particular, both are simple rational functions of the spectral parameter and the auxiliary oscillators. The only difference is the dependence on the auxiliary space number operators in the denominator. This has important consequences for the analytic structure of the resulting Q-operators. The matrix elements of the corresponding Q-operators \eqref{eq:qop} can now be derived as \begin{equation} _s\langle\tilde{\mathbf{m}}|\mathbf{Q}_{\{a\}}(z)|{\mathbf{m}}\rangle_s=e^{iz\phi_{a}}\,\widehat \tr\,_s \langle \tilde m_1|\mathcal{R}_{\{a\}}(z)| m_1\rangle_s\cdots\,_s \langle \tilde m_L|\mathcal{R}_{\{a\}}(z)| m_L\rangle_s\,. 
\end{equation} The big advantage of first evaluating the $\mathcal{R}$-operators in the quantum space and subsequently taking the trace in the auxiliary space is that we can restrict to individual magnon sectors with $M=\sum_{i=1}^L m_i=\sum_{i=1}^L \tilde m_i$. For each such sector, the Q-operators can then be realised as matrices of finite size. As we will show in Section~\ref{sec:qsys}, the matrix elements of the Q-operators corresponding to the $\mathcal{R}$-operators with non-truncating sums are non-rational functions and can be written in terms of the Lerch transcendent (Lerch zeta-function) defined as \begin{equation} \label{eq:def_hl} \Phi^\tau_\ell(z)=\sum_{k=0}^\infty \frac{\tau^{ k}}{(k+z)^\ell} \; . \end{equation} To give the reader an impression of the resulting Q-functions we consider the concrete case of a spin chain with spin $-\frac{1}{2}$. For small length $L$ and magnon number $M$, the Q-operators resulting from the monodromy construction can easily be diagonalised; for the case $L=2$ and $M=0,1,2$ one obtains explicit, though rather lengthy, expressions for the eigenvalues and eigenvectors containing the twist parameters. Due to the constraint \eqref{eq:twistconstraint}, there is only one independent twist parameter with $\phi_1=-\phi_2$. 
For small values of $\phi_1$ the eigenvalues corresponding to the highest-weight states of the untwisted spin chain are given by: \begin{center} \begin{tabular}{l|l|l} $M$&$Q_{\{1\}}(z)$&$Q_{\{2\}}(z)$\\ \hline 0 & $2i\phi_1\big[\psi'(-z-\frac{1}{2})\big]+\mathcal{O}(\phi_1^2)$ & $1$ \\ 1 & $2i\phi_1\times(-4)\times\big[1 + (z+1) \psi'(-z-\frac{1}{2})\big]+\mathcal{O}(\phi_1^2)$ & $(z+1)+\mathcal{O}(\phi_1)$ \\ 2 & $2i\phi_1\times 9 \times\big[(z+1) +(z^2+2z+\sfrac{13}{12}) \psi'(-z-\frac{1}{2})\big]+\mathcal{O}(\phi_1^2)$ & $(z^2+2z+\frac{13}{12})+\mathcal{O}(\phi_1)$ \\ \end{tabular} \end{center} Here the non-polynomial Q-functions are expressed in terms of the Polygamma function $\psi'(z)=\Phi^1_2(z)$. We observe that for fixed $M$, the prefactors of these functions in $Q_{\{1\}}$ are given by the functions $Q_{\{2\}}$, which are known in closed form and given by Hahn polynomials \cite{Korchemsky:1995be,Eden:2006rx}. Expanding the factor $\Delta_{12}=2i\phi_1+\mathcal{O}(\phi_1^3)$ in the functional relation \eqref{eq:QQb}, we see that the functions $\frac{1}{2i\phi_1}Q_{\{1\}}(z)$ and $Q_{\{2\}}(z)$ satisfy the functional relations of the untwisted spin chain, where the factor $\Delta_{12}$ is not present. \section{Non-compact $\mathcal{R}$-operators and infinite sums} \label{sec:finiterep} In Section~\ref{sec:laxe} we introduced the $\mathcal{R}$-operators with Jordan-Schwinger oscillator realisation \eqref{eq:schwinger} in the quantum space. As for more general representations in the quantum space the $\mathcal{R}$-operators naturally decompose into three factors, cf.~\eqref{eq:fullax}. However, as discussed for the case of non-compact spin chains with $\mathfrak{u}(1,1)$ symmetry in Section~\ref{sec:sl2}, this undoubtedly elegant expression has a drawback when considering non-compact representations. 
The exponentials appearing on the right (left) in the $\mathcal{R}$-operators \eqref{eq:fullax} do not truncate in the case with only creation (annihilation) operators in the quantum space. Thus one has to sum over an infinite tower of states.% \footnote{Note that at this stage we do not consider the auxiliary space. Later on when evaluating Q-operators we will take the trace over the infinite-dimensional Fock space, see \eqref{eq:qop}.} This issue likewise appears for higher-rank algebras $\mathfrak{u}(p,q|r+s)$ as introduced in Section~\ref{sec:representation}. Furthermore, in this case even the $\mathcal{R}$-operators with truncating sums are complicated, and naively expanding the exponentials leads to an exponentially growing number of cross terms, most of which do not contribute to a given matrix element. We start with a survey of the Q-system of $\mathfrak{u}(p,q|r+s)$ and discuss its analytic structure. Afterwards we study a different representation of \eqref{eq:fullax} that is simpler to evaluate in the non-compact, but also in the compact, case. We obtain a convenient formula \eqref{eq:middlepart_lowest} to compute the lowest-order Lax operators which, as we will see in Section~\ref{sec:qsys}, is sufficient to determine the whole Q-system. Finally we speculate about generalisations beyond the first level. 
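One structural statement of the following subsection, namely that exactly $2^{r+s}(2^p+2^q-1)$ of the $2^{p+q+r+s}$ Q-operators are built from truncating $\mathcal{R}$-operators, can be confirmed by brute-force enumeration of index sets. The sketch below applies the two truncation criteria given there; the test triples $(p,q,r+s)$ are arbitrary choices of ours.

```python
# Brute-force check: an index set I truncates iff it contains no
# particle-hole-transformed bosonic index, or all untransformed bosonic
# indices.  The claimed count is 2^(r+s) * (2^p + 2^q - 1).
from itertools import chain, combinations

def count_truncating(p, q, rs):
    idx = ([('u', i) for i in range(p)]       # untransformed bosonic
           + [('t', i) for i in range(q)]     # transformed bosonic
           + [('f', i) for i in range(rs)])   # fermionic
    subsets = chain.from_iterable(
        combinations(idx, n) for n in range(len(idx) + 1))
    trunc = 0
    for I in subsets:
        labels = [t for (t, _) in I]
        if labels.count('t') == 0 or labels.count('u') == p:
            trunc += 1
    return trunc

for (p, q, rs) in [(1, 1, 0), (2, 1, 1), (2, 2, 2)]:
    assert count_truncating(p, q, rs) == 2**rs * (2**p + 2**q - 1)
```

The count follows by inclusion-exclusion: $2^{r+s}\big(2^p\cdot 1 + 1\cdot 2^q - 1\big)$, the last term removing the doubly counted sets.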
\subsection{Comments on the Q-system} \label{sec:com} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw[] (0,0) -- (5,0); \draw[] (0,-1) -- (5,-1); \draw[] (0,-3) -- (5,-3); \draw[] (0,-4) -- (5,-4); \draw[] (0,-5) -- (5,-5); % \draw[] (0,0) -- (0,-5); \draw[] (1,0) -- (1,-5); \draw[] (2,0) -- (2,-5); \draw[] (4,0) -- (4,-5); \draw[] (5,0) -- (5,-5); % \draw[color=gray] (0,0) -- (2,2); \draw[color=gray] (1,0) -- (3,2); \draw[color=gray] (2,0) -- (4,2); \draw[color=gray] (4,0) -- (6,2); \draw[color=gray] (5,0) -- (7,2); \draw[color=gray] (2,2) -- (7,2); \draw[color=gray] (0+0.5,0+0.5) -- (5+0.5,0+0.5); \draw[color=gray] (0+1.5,0+1.5) -- (5+1.5,0+1.5); \draw[color=gray] (5,-5) -- (7,-3); \draw[color=gray] (5,-4) -- (7,-2); \draw[color=gray] (5,-3) -- (7,-1); \draw[color=gray] (5,-1) -- (7,1); \draw[color=gray] (7,-3) -- (7,2); \draw[color=gray] (5+0.5,-5+0.5) -- (5+0.5,0+0.5); \draw[color=gray] (5+1.5,-5+1.5) -- (5+1.5,0+1.5); % \dcirc{0}{0} \dcirc{0}{-1} \dcirc{0}{-3} \dcirc{0}{-4} \dcirc{0}{-5} \dcirc{1}{0} \dcirc{2}{0} \dcirc{4}{0} \dcirc{5}{0} \foreach \i in {-1,-3,-4,-5} { \foreach \j in {1,2,4,5} { \dcross{\j}{\i} } } % \foreach \i in {0,1,2,4,5} { \foreach \j in {0.5,1.5,2} { \dcircg{\i+\j}{0+\j} } } \foreach \i in {0,1,2,4} { \foreach \j in {0.5,1.5,2} { \dcrossg{5+\j}{-5+\i+\j} } } \node[anchor=north east] at (0,-5) {\footnotesize$(0,0,0)$}; \node[anchor=north west] at (5,-5) {\footnotesize$(0,q,0)$}; \node[anchor=south east] at (0,0) {\footnotesize$(p,0,0)$}; \node[anchor=south west] at (5+2,0+2) {\footnotesize$(p,q,r+s)$}; \draw[thick,-{Latex[width=3pt,length=4pt]}] (-0.7,-4.3) -- (-0.7,-4.3+1.6); \draw[thick,-{Latex[width=3pt,length=4pt]}] (0.7,-5.7) -- (0.7+1.6,-5.7); \draw[gray,thick,-{Latex[width=3pt,length=4pt]}] (5+1.1,-5+0.1+0.2) -- (5+1.1+1.0,-5+0.1+1.0+0.2); \node[anchor=east] at (-0.7,-4.3+0.8) {\footnotesize$\osc{a}$}; \node[anchor=north] at (0.7+0.8,-5.7) {\footnotesize$\osc{b}$}; \node[anchor=north west] at 
(5+1.1+0.1,-5+0.1+0.9+0.2) {\rotatebox{45}{\color{gray}\footnotesize$\osc{c},\osc{d}$}}; \end{tikzpicture} \end{center} \caption[]{ Projection of the Hasse diagram with Q-operators on the lattice points $(i,j,k)$. Rational Q-operators are marked with $\begin{tikzpicture}[scale=0.9]\dcirc{0}{0}\end{tikzpicture}$ and non-rational ones with $\begin{tikzpicture}[scale=0.9]\dcross{0}{0}\end{tikzpicture}$ corresponding to truncating and non-truncating $\mathcal{R}$-operators respectively. } \label{fig:distribution} \end{figure} The Q-system of $\mathfrak{u}(p,q|r+s)$ contains a total number of $2^{p+q+r+s}$ Q-operators $\mathbf{Q}_I$ built from the operators $\mathcal{R}_I$ with $I\subseteq\{1,\ldots,p+q+r+s\}$. Their functional relations can conveniently be depicted in a Hasse diagram spanned by a hypercube. For our purposes it is convenient to consider a projection of the Hasse diagram onto an ordinary three-dimensional cube. This projection is visualised in Figure~\ref{fig:distribution}. In this diagram, each lattice point $(i,j,k)$ is occupied by $\binom{p}{i}\times \binom{q}{j}\times \binom{r+s}{k}$ Q-operators, namely those Q-operators whose index set $I$ includes $i$ bosonic indices which are not particle-hole transformed according to \eqref{eq:realoscs}, $j$ bosonic indices which are transformed, and $k$ fermionic indices. Here $i=0,\ldots,p$, $j=0,\ldots,q$ and $k=0,\ldots,r+s$. The total number of indices in the set $I$ is referred to as the level $\ell=|I|$, which can be thought of as a diagonal slice of the cube. Each level contains $\binom{p+q+r+s}{\ell}$ Q-operators. Interestingly, only for $2^{r+s}(2^p+2^q-1)$ of them do all exponentials in the $\mathcal{R}$-operators \eqref{eq:fullax} truncate. This can be seen from the action of the exponentials on the states defined in \eqref{eq:defstates}. 
An $\mathcal{R}$-operator $\mathcal{R}_I$ has matrix elements with truncating sums if one or both of the following conditions hold: \begin{itemize} \item $I$ does not contain any indices corresponding to bosonic oscillators that are particle-hole transformed. \item $I$ contains all indices corresponding to bosonic oscillators that are not particle-hole transformed. \end{itemize} Otherwise, the matrix elements of the $\mathcal{R}$-operator involve infinite sums. Due to the nilpotency of the fermionic oscillators, the fermionic degrees of freedom do not change the truncating or non-truncating nature of the $\mathcal{R}$-operators. It follows that the truncating $\mathcal{R}$-operators are located at the lattice sites $(i,0,k)$ and $(p,j,k)$ where $i=0,\ldots,p$, $j=0,\ldots,q$ and $k=0,\ldots,r+s$. We denote the vertices with truncating $\mathcal{R}$-operators by $\begin{tikzpicture}[scale=0.9]\dcirc{0}{0}\end{tikzpicture}$ and the non-truncating ones by $\begin{tikzpicture}[scale=0.9]\dcross{0}{0}\end{tikzpicture}$ in Figure~\ref{fig:distribution}. At a given lattice point, the latter contain $j(p-i)$ pairs of exponentials which do not truncate. Truncating $\mathcal{R}$-operators yield Q-operators whose matrix elements are rational functions of the spectral parameter multiplied by an exponential function including the twist phases. In the case of non-truncating $\mathcal{R}$-operators the resulting matrix elements of the Q-operators are written in terms of rational functions and the generalised Lerch transcendent, see Section~\ref{sec:qsys} and Appendix~\ref{sec:formulas}. \subsection{Ladder decomposition of $\mathcal{R}$-operators} In this section we introduce the decomposition of the $\mathcal{R}$-operators \eqref{eq:fullax} on which our approach to evaluate Q-operators for non-compact spin chains is based. Using only algebraic relations we reduce the number of infinite sums and make the actual challenge of evaluating $\mathcal{R}$-operators manifest. 
Since the factors in the exponentials of the $\mathcal{R}$-operators \eqref{eq:fullax} appear often in the following derivations, we define abbreviations for them: \begin{equation} Y_{\udt a \dt a} = (-1)^{\gr{\udt a}+\gr{\dt a}\gr{\udt a}} \dagg{\oscgreek{\xi}}_{\udt a \dt a}\dagg{\oscgreek{\chi}}_{\udt a}\oscgreek{\chi}_{\dt a}\,, \qquad X_{\udt a \dt a} = (-1)^{\gr{\dt a}+\gr{\udt a}+\gr{\dt a}\gr{\udt a}} \oscgreek{\xi}_{\dt a \udt a}\dagg{\oscgreek{\chi}}_{\dt a}\oscgreek{\chi}_{\udt a}\,. \label{eq:defxy} \end{equation} The main idea is to expand the exponentials and to combine terms with the same difference in the powers of the matching factors $X_{\udt a \dt a}$ and $Y_{\udt a \dt a}$ in the exponents. This can be done using the formula \begin{equation} e^{Y_{\udt a \dt a}} f(\mathbf{N}_{\udt a}, \mathbf{N}_{\dt a}) e^{-X_{\udt a \dt a}} = \sum_{n=-\infty}^{+\infty} (Y_{\udt a \dt a})^{\theta(+n)\abs{n}} \sum_{k=0}^{\infty} \frac{(-1)^k Y_{\udt a \dt a}^k X_{\udt a \dt a}^k}{k! (\abs{n}+k)!} f(\mathbf{N}_{\udt a} - k, \mathbf{N}_{\dt a} + k) (-X_{\udt a \dt a})^{\theta(-n)\abs{n}} \label{eq:reordering} \end{equation} that can be derived using the oscillator algebra. We can furthermore express the factor $Y_{\udt a \dt a}^k X_{\udt a \dt a}^k$ as \begin{equation} Y_{\udt a \dt a}^k X_{\udt a \dt a}^k =(-1)^{\gr{\udt a}+\gr{\dt a}} \frac{\Gamma(1+\mathbf{N}_{\udt a \dt a})}{\Gamma(1+\mathbf{N}_{\udt a \dt a}-k)} \frac{\Gamma(1+\mathbf{N}_{\udt a})}{\Gamma(1+\mathbf{N}_{\udt a}-k)} \frac{\Gamma(\mathbf{N}_{\dt a}+k+(-1)^{\gr{\dt a}})} {\Gamma(\mathbf{N}_{\dt a}+(-1)^{\gr{\dt a}})} \; . 
\label{eq:xypowers} \end{equation} Applying \eqref{eq:reordering} to the $\mathcal{R}$-operators \eqref{eq:fullax}, we find that they can be written in a form which features a minimal number of creation and annihilation operators, \begin{equation} \mathcal{R}_I(z) = \sum_{\{n_{\udt a \dt a}\}=-\infty}^\infty \Bigg[ \prod_{\udt a,\,\dt a} (Y_{\udt a\dt a})^{\theta(n_{\udt a \dt a})\abs{n_{\udt a \dt a}}} \Bigg] \mathbb{M}_I(z , \{\mathbf{N}\}, \{ n\} ) \Bigg[ \prod_{\udt a,\,\dt a} (-X_{\udt a\dt a})^{\theta(-n_{\udt a \dt a})\abs{n_{\udt a \dt a}}} \Bigg] \; . \label{eq:multireorder} \end{equation} The purely diagonal part $\mathbb{M}_I$ is then given by \begin{equation} \begin{aligned} \mathbb{M}_I(z, \{\mathbf{N}\}, \{ n\} ) = \sum_{\{k_{\udt a \dt a}\}=0}^\infty &\left[ \prod_{\udt a, \dt a} \frac{ (-1)^{(\gr{\udt a}+\gr{\dt a}+1)k_{\udt a \dt a}} }{ \Gamma(k_{\udt a \dt a}+1) \Gamma(\abs{n_{\udt a \dt a}}+k_{\udt a \dt a}+1) }\frac{\Gamma(1+\mathbf{N}_{\udt a \dt a})}{ \Gamma(1+\mathbf{N}_{\udt a \dt a}-k_{\udt a \dt a})} \right] \\ &\left[ \prod_{\udt a} \frac{ \Gamma(1+\mathbf{N}_{\udt a}) }{ \Gamma(1+\mathbf{N}_{\udt a}-\sum_{\dt a}k_{\udt a \dt a}) } \right] \left[ \prod_{\dt a} \frac{ \Gamma( \mathbf{N}_{\dt a}+(-1)^{\gr{\dt a}}+\sum_{\udt a}k_{\udt a \dt a} ) }{ \Gamma(\mathbf{N}_{\dt a}+(-1)^{\gr{\dt a}}) } \right] \\ & \;\frac{\Gamma(z+1-s_I-\sum_{\dt a}\mathbf{N}_{\dt a}-\sum_{\udt a, \dt a}k_{\udt a \dt a})}{\Gamma(z+1-s_I-\mathbf{C})}\,. \end{aligned} \label{eq:multireordermiddle} \end{equation} While $\mathbb{M}_I$ looks rather complicated, this representation is in fact quite convenient. First note that for any matrix element of \eqref{eq:multireorder}, the outer sums over the variables $n_{\udt a \dt a}$ are always finite, and only serve to introduce enough creation and annihilation operators to produce overlapping states. 
For the lowest and the highest level of the Q-system, only a single term contributes for any matrix element, which is then effectively given by the diagonal part. Compared to naively expressing each term in the exponentials by their power series, this representation already removes half of the infinite sums. So far, our discussion has been purely algebraic and we have not specified the spectrum of the number operators. Assuming that these operators act on a Hilbert space as given in \eqref{eq:defstates}, the spectrum of the $\mathbf{N}_{\udt a}$ consists of negative or non-negative integers, depending on whether or not the corresponding oscillators are particle-hole transformed, cf.~\eqref{eq:realoscs}. For compact representations, all $\mathbf{N}$ are positive or zero and the sums over the variables $k_{\udt a \dt a}$ in \eqref{eq:multireordermiddle} are finite; they are effectively truncated by the Gamma functions in the denominator. However, for non-compact representations some $\mathbf{N}$ take negative integer values such that some of the sums may not truncate. The evaluation of those sums is, however, simplified by the fact that they only involve diagonal operators. \subsection{Integral representation for lowest level $\mathcal{R}$-operators} \label{sec:lowest} In this subsection we focus on the $\mathcal{R}$-operators of the lowest level $\mathcal{R}_{\{\udt a\}}$. For these $\mathcal{R}$-operators only one term in the sum \eqref{eq:multireorder} contributes to any matrix element, which is then directly given by the diagonal part $\mathbb{M}_{\{\udt a\}}$. Furthermore, as will be discussed in Section~\ref{sec:qsys}, the corresponding Q-operators determine the full Q-system. Here we derive an integral representation of the diagonal part \eqref{eq:multireordermiddle}, which can easily be evaluated and from which rational finite-sum expressions can readily be obtained, cf.~Sections~\ref{sec:derivation} and \ref{sec:schml}. 
We first specialise the expression given in \eqref{eq:multireorder} and \eqref{eq:multireordermiddle} to the lowest level, and write the Lax operators as \begin{equation} \mathcal{R}_{\{\udt a\}}(z) = \sum_{\{n_{\dt a}\}=-\infty}^\infty \left[ \prod_{\dt a} (Y_{\udt a \dt a})^{\theta(+n_{\dt a})\abs{n_{\dt a}}} \right] \mathbb{M}_{\{\udt a\}}(z,\{\mathbf{N}\}, \{ n\}) \left[ \prod_{\dt a} (-X_{\udt a \dt a})^{\theta(-n_{\dt a})\abs{n_{\dt a}}} \right] \; , \label{eq:form_lax_lowest} \end{equation} with $X$ and $Y$ given in \eqref{eq:defxy}. Here the diagonal part reads \begin{equation} \begin{aligned} \mathbb{M}_{\{\udt a\}}(z,\{\mathbf{N}\}, \{ n\}) = \sum_{\{k_{\dt a}\}=0}^{\infty} & \prod_{\dt a} \Bigg[ \frac{ (-1)^{(\gr{\udt a}+ \gr{\dt a}+1)k_{\dt a}} \Gamma(1+\mathbf{N}_{\udt a \dt a}) }{ k_{\dt a}! \, (\abs{n_{\dt a}}+k_{\dt a})! \; \Gamma(1+\mathbf{N}_{\udt a \dt a}-k_{\dt a}) } \frac{ \Gamma(\mathbf{N}_{\dt a}+(-1)^{\gr{\dt a}}+k_{\dt a}) }{ \Gamma(\mathbf{N}_{\dt a}+(-1)^{\gr{\dt a}}) } \Bigg] \\ &\times \frac{ \Gamma(\mathbf{N}_{\udt a}+1) }{ \Gamma(\mathbf{N}_{\udt a}+1-\textstyle\sum\nolimits_{\dt a}k_{\dt a}) } \frac{ \Gamma(z+1-\textstyle\sum\nolimits_{\dt a}(\mathbf{N}_{\dt a}+k_{\dt a}-\sfrac{1}{2}(-1)^{\gr{\dt a}})) }{ \Gamma(z+1-\mathbf{C}-\textstyle\frac{1}{2}\sum_{\dt a}(-1)^{\gr{\dt a}}) } \; . \end{aligned} \label{eq:middlepartlowest} \end{equation} To obtain the aforementioned integral representation, we evaluate all sums over the variables $k_{\dt a}$ in the diagonal part \eqref{eq:middlepartlowest}. Since the intermediate formulas are quite lengthy, we only sketch this derivation. Consider the first sum, which we take to be over some index $\dt b$. 
It is straightforward to see that this sum can be written as a product involving Gamma functions and the following hypergeometric function: \begin{equation} \pFq[36]{3}{2}{% \sum\nolimits_{\dt a \neq \dt b}k_{\dt a}-\mathbf{N}_{\udt a},% -\mathbf{N}_{\udt a \dt b},% \mathbf{N}_{\dt b}+(-1)^{\gr{\dt b}}% }{% -z +{\textstyle\frac{1}{2}\sum_{\dt a}(-1)^{\gr{\dt a}}} + \sum\nolimits_{\dt a \neq \dt b}(\mathbf{N}_{\dt a}+k_{\dt a})+\mathbf{N}_{\dt b},% 1+\abs{n_{\dt b}}% }{(-1)^{\gr{\udt a}+\gr{\dt b}}} \; . \label{eq:hyper} \end{equation} Since the other summation variables appear in the arguments of this hypergeometric function, the remaining sums cannot be performed easily. To remedy this, and to disentangle the sums, one can use an integral representation of the hypergeometric function. The type of integral, however, depends on the spectrum of the operator $\mathbf{N}_{\udt a}$. If the oscillator with index $\udt a$ is bosonic and particle-hole transformed, $\omega_{\udt a}=-1$, the first argument $\sum_{\dt a \neq \dt b}k_{\dt a}-\mathbf{N}_{\udt a}$ of the hypergeometric function \eqref{eq:hyper} takes positive integer values, and we can use the standard Euler-type integral, expressing the function ${}_3F_2$ as an integral over the interval $(0,1)$ on the real line involving the function ${}_2F_1$. For all other cases, the Gamma function $\Gamma(\mathbf{N}_{\udt a}+1-\textstyle\sum\nolimits_{\dt a}k_{\dt a})$ in the denominator of \eqref{eq:middlepartlowest} truncates the range of the summation variables such that the argument $\sum_{\dt a \neq \dt b}k_{\dt a}-\mathbf{N}_{\udt a}$ of the hypergeometric function \eqref{eq:hyper} takes non-positive integer values. For non-positive integer arguments, one can use an analytic continuation of the Euler integral employing the Pochhammer contour; in this case, the contour collapses into a contour integral around the origin. 
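The Euler-type integral strategy can be illustrated one rank down with the classical integral representation of ${}_2F_1$. The sketch below is an illustration at arbitrary sample parameters of our choosing (assuming $c>b>0$ and $|z|<1$), comparing the hypergeometric series with the integral.

```python
# Illustration of an Euler-type integral at rank one:
#   2F1(a,b;c;z) = Gamma(c)/(Gamma(b)Gamma(c-b))
#                  * Int_0^1 t^(b-1) (1-t)^(c-b-1) (1-z t)^(-a) dt.
# Parameters below are arbitrary samples with c > b > 0 and |z| < 1.
import math

def hyp2f1_series(a, b, c, z, terms=200):
    s = term = 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        s += term
    return s

def simpson(f, lo, hi, n=2000):          # n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

a, b, c, z = 0.7, 2.0, 4.0, 0.3
integral = simpson(lambda t: t**(b - 1) * (1 - t)**(c - b - 1)
                   * (1 - z * t)**(-a), 0.0, 1.0)
euler = math.gamma(c) / (math.gamma(b) * math.gamma(c - b)) * integral
assert abs(hyp2f1_series(a, b, c, z) - euler) < 1e-8
```

The same mechanism, applied to ${}_3F_2$ with unit argument, trades one summation for a line integral and thereby decouples the remaining sums.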
Using the appropriate integral formulas to rewrite the hypergeometric function \eqref{eq:hyper}, one finds that all subsequent summations decouple and can be performed easily using the series representation of the hypergeometric function ${}_2F_1$. We then arrive at the result \begin{equation} \begin{aligned} & \mathbb{M}_{\{\udt a\}}(z,\{\mathbf{N}\}, \{ n\}) \\&= \int\mathrm{d} t \; t^{-\mathbf{N}_{\udt a }-1} (1-t)^{-z-1+\mathbf{C} +\frac{1}{2}\sum_{\dt a}(-1)^{\gr{\dt a}}} \prod_{\dt a} \frac{1}{\abs{n_{\dt a}}!}\; \pFq{2}{1}{\mathbf{N}_{\dt a} + (-1)^{\gr{\dt a}}, -\mathbf{N}_{\udt a \dt a}}{1+\abs{n_{\dt a}}}{(-1)^{\gr{\dt a}+\gr{\udt a}}t} \; , \end{aligned} \label{eq:middlepart_lowest} \end{equation} where $\mathbf{C}$ is the central charge defined in \eqref{eq:def_central_charge} and the integration is \begin{equation} \int \mathrm{d} t = \begin{cases}\displaystyle \frac{(-1)^{\mathbf{N}_{\udt a}} }{\Gamma(-\mathbf{N}_{\udt a})} \int_0^1 \mathrm{d} t & \quad \text{if } \gr{\udt a}=0 \text{ and } \omega_{\udt a} = -1 \\[20pt] \displaystyle \frac{\Gamma(1+\mathbf{N}_{\udt a})}{2\pi i} \oint_{t=0} \mathrm{d} t & \quad \text{if } \omega_{\udt a}=+1 \text{ or } \gr{\udt a}=1 \end{cases} \; . \label{eq:def_int_lowest_new} \end{equation} This means that for truncating $\mathcal{R}$-operators, the integral just computes a residue, while for the non-truncating ones, it is an integral over the interval $(0,1)$. Note that, strictly speaking, the integral only converges for appropriately chosen values of the spectral parameter $z$; this, however, poses no problem, since the result for any matrix element is a rational function which can be analytically continued to any value of the spectral parameter. 
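In the truncating case the contour integral in \eqref{eq:def_int_lowest_new} simply extracts a Taylor coefficient of the integrand. The following minimal numeric illustration of this residue extraction is ours (the radius, the number of sample points and the test function are arbitrary choices).

```python
# Residue extraction as coefficient extraction:
#   (1/2 pi i) Oint f(t) t^(-m-1) dt = [t^m] f(t),
# evaluated with the trapezoid rule on a circle |t| = r inside the
# domain of analyticity.  All parameters below are illustrative.
import cmath, math

def taylor_coeff(f, m, r=0.5, N=256):
    acc = 0j
    for k in range(N):
        t = r * cmath.exp(2j * math.pi * k / N)
        acc += f(t) / t**m
    return (acc / N).real

alpha = 1.5
# [t^3] (1-t)^(-alpha) = (alpha)_3 / 3! = 1.5 * 2.5 * 3.5 / 6 = 2.1875
c3 = taylor_coeff(lambda t: (1 - t)**(-alpha), 3)
assert abs(c3 - 2.1875) < 1e-10
```

The trapezoid rule on a circle is exact up to aliasing terms suppressed by $r^N$, which is why a modest number of sample points already reproduces the residue to machine precision.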
Furthermore, we note that while it might seem that we did not gain much by writing the potentially infinite sums of the $\mathcal{R}$-operator in terms of an integral, this integral is in fact trivial to evaluate, by expanding the integrand and, depending on the case, either taking a simple residue or evaluating the line integral in terms of a Beta function. It provides a convenient way of treating the truncating and non-truncating $\mathcal{R}$-operators in a uniform way, the only essential difference being the contour of integration. In the next section we show how the integral formula \eqref{eq:middlepart_lowest} is used to recover the matrix elements in the case of the spin~$-s$ chains discussed in Section~\ref{sec:sl2}. Subsequently, we demonstrate how the integral formula can be rewritten in terms of finite sums in Section~\ref{sec:schml}. \subsection{Derivation of the $\mathcal{R}$-operators for the spin $-s$ models} \label{sec:derivation} The integral representation for $\mathcal{R}$-operators of the lowest level, given in \eqref{eq:form_lax_lowest} together with \eqref{eq:middlepart_lowest}, can easily be evaluated in practice. To show that it also serves as a good starting point for obtaining finite-sum representations, we now derive the formulas \eqref{eq:formR2}, \eqref{eq:middleR2}, \eqref{eq:formR1} and \eqref{eq:middleR1} for the $\mathcal{R}$-operators of the spin~$-s$ chains considered in Section~\ref{sec:sl2}. For these models, both oscillators are bosonic, $\gr{\cdot}=(0,0)$, and the first oscillator is particle-hole transformed, $\omega=(-1,+1)$. The central charge is constrained to $\mathbf{C}=-2s$, such that the states are given by $\ket{m}_s=\ket{2s-1+m,m}$, cf. \eqref{eq:statesrank1}. We begin with the truncating $\mathcal{R}$-operator $\mathcal{R}_{\{2\}}$. 
The matrix elements ${}_s\bra{\tilde m}\mathcal{R}_{\{2\}}(z)\ket{m}_s$ can be determined from \eqref{eq:form_lax_lowest} by noting that the summation variable $n_1$ is fixed to be $n_1=\tilde m -m$, and that the diagonal part then acts on a state $\ket{\hat{m}}_s$ with $\hat{m}=\min(m,{\tilde{m}})$, cf.~\eqref{eq:fixns_one} and \eqref{eq:m0s_one}. Using this it is straightforward to show that the general structure of the Lax operator exactly matches \eqref{eq:formR2}. The diagonal part $\mathbb{M}_{\{2\}}$ could in principle be derived directly from expression \eqref{eq:middlepartlowest}; we nevertheless start from the generally applicable formula \eqref{eq:middlepart_lowest} expressing it as a contour integral. This integral can be evaluated by plugging in the series representations of the hypergeometric function and of the power of $(1-t)$, \begin{align} \allowdisplaybreaks[1] &{}_s\bra{\hat{m}}\mathbb{M}_{\{2\}}(z)\ket{\hat{m}}_s \nonumber\\[6pt] &= \frac{\hat{m}!}{\abs{m-{\tilde{m}}}!} \frac{1}{2\pi i} \oint_{t=0}\mathrm{d} t \; t^{-\hat{m}-1}(1-t)^{-z-\frac{1}{2}-2s} \pFq{2}{1}{1-2s-\hat{m}, -\mathbf{N}_{21}}{1+\abs{{\tilde{m}}-m}}{t} \nonumber\\[6pt] &= \hat{m}! \frac{1}{2\pi i} \oint_{t=0}\mathrm{d} t \; t^{-\hat{m}-1} \left[ \sum_{\ell=0}^\infty \frac{(z+\frac{1}{2}+2s)_\ell}{\ell!}t^\ell \right] \left[ \sum_{k=0}^{\hat{m}+2s-1} \frac{ (2s+\hat{m}-k)_k(1+\mathbf{N}_{21}-k)_k }{ (\abs{{\tilde{m}}-m}+k)! k! } t^k \right] \nonumber\\[6pt] &= \sum_{k=0}^{\hat{m}} \binom{\hat{m}}{k} \frac{ (z+\frac{1}{2}+2s)_{\hat{m}-k} (1+\mathbf{N}_{21}-k)_k (2s+\hat{m}-k)_k }{ (\abs{{\tilde{m}}-m}+k)! } \; , \end{align} which is the same as \eqref{eq:middleR2}. Next we turn to the non-truncating $\mathcal{R}$-operator $\mathcal{R}_{\{1\}}$. For each matrix element, the summation variable is fixed to $n_2=m-{\tilde{m}}$, and the diagonal part acts on $\ket{\hat{m}}_s$, where now $\hat{m}=\max(m,{\tilde{m}})$. 
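As a quick consistency check of the final expression (in the conventions above), for $\hat{m}=0$ only the $k=0$ term contributes and all Pochhammer symbols equal one,
\begin{equation}
{}_s\bra{0}\mathbb{M}_{\{2\}}(z)\ket{0}_s = \frac{1}{\abs{{\tilde{m}}-m}!} \; ,
\end{equation}
which also follows directly from the contour integral, since for $\hat{m}=0$ the residue at $t=0$ just evaluates the regular part of the integrand at the origin.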
One finds that the form of the matrix elements in \eqref{eq:formR1} is reproduced by \eqref{eq:form_lax_lowest}. The diagonal part \eqref{eq:middlepart_lowest} is then given by \begin{equation} \begin{aligned} {}_s\bra{\hat{m}}\mathbb{M}_{\{1\}}(z)\ket{\hat{m}}_s&= \frac{(-1)^{2s+\hat{m}}}{(2s-1+\hat{m})!\abs{{\tilde{m}}-m}!}\int_0^1 \mathrm{d} t \; t^{2s-1+\hat{m}}(1-t)^{-z-\frac{1}{2}-2s} \pFq{2}{1}{\hat{m}+1, -\mathbf{N}_{12}}{1+\abs{{\tilde{m}}-m}}{t}\,. \end{aligned} \end{equation} To write the matrix elements as finite sums, we have to apply the Euler transformation $ \,_2F_1(n,b;m;z)=(1-z)^{m-n-b} {}_2F_1(m-n,m-b;m;z) $ to the hypergeometric function. Then this function can be written as a finite sum and the integral can be evaluated using the integral representation of the Beta function: \begin{align} \allowdisplaybreaks[1] {}_s\bra{\hat{m}}\mathbb{M}_{\{1\}}(z)\ket{\hat{m}}_s &= \frac{(-1)^{2s+\hat{m}}}{(2s-1+\hat{m})!\abs{{\tilde{m}}-m}!} \int_0^1 \mathrm{d} t \; t^{2s-1+\hat{m}}(1-t)^{-z-\frac{1}{2}-2s-\min(m,{\tilde{m}})+\mathbf{N}_{12}} \nonumber\\&\qquad\qquad\qquad\qquad\times \pFq{2}{1}{-\min(m\text{,}\,{\tilde{m}}), 1+\abs{{\tilde{m}}-m}+\mathbf{N}_{12}}{1+\abs{{\tilde{m}}-m}}{t} \nonumber\\[6pt] &= \sum_{k=0}^{\min(m,{\tilde{m}})} \frac{(-1)^{2s+\hat{m}}}{(2s-1+\hat{m})!} \frac{ (1+\min(m,{\tilde{m}})-k)_k (1+\abs{{\tilde{m}}-m}+\mathbf{N}_{12})_k }{ k! (\abs{{\tilde{m}}-m}+k)! } \nonumber\\&\qquad\qquad\qquad\qquad\times \int_0^1 \mathrm{d} t \; t^{2s-1+\hat{m}+k}(1-t)^{-z-\frac{1}{2}-2s-\min(m,{\tilde{m}})+\mathbf{N}_{12}} \nonumber\\[6pt] &= \sum_{k=0}^{\min(m,{\tilde{m}})} \frac{(-1)^{2s+\hat{m}}}{(2s-1+\hat{m})!} \frac{ (1+\min(m,{\tilde{m}})-k)_k (1+\abs{{\tilde{m}}-m}+\mathbf{N}_{12})_k }{ k! (\abs{{\tilde{m}}-m}+k)! } \nonumber\\&\qquad\qquad\qquad\qquad\times B(2s+\hat{m}+k,-z+\sfrac{1}{2}-2s-\min(m,{\tilde{m}})+\mathbf{N}_{12}) \; . 
\end{align} This expression is identical to \eqref{eq:middleR1}, upon using $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$. In the next section we present the generalisation of this finite sum formula for the non-truncating $\mathcal{R}$-operators of arbitrary non-compact spin chains of Jordan-Schwinger type. \subsection{Finite sum representation for lowest level $\mathcal{R}$-operators}\label{sec:schml} Evaluating the integral formula in \eqref{eq:middlepart_lowest} is a very efficient way to determine matrix elements of truncating as well as non-truncating $\mathcal{R}$-operators. It is however also possible to directly derive finite sum expressions from the integral representation using the same ideas as in the previous section. Here one has to treat the truncating and non-truncating $\mathcal{R}$-operators separately, corresponding to the two integration contours in \eqref{eq:def_int_lowest_new}. In the truncating case, evaluating the residue returns the expression given in \eqref{eq:middlepartlowest}, which can be expressed in terms of number operators for the particle-hole transformed oscillators \eqref{eq:realoscs}; then all sums are manifestly finite. For the non-truncating $\mathcal{R}$-operators $\mathcal{R}_{\{a\}}$ with $\gr{\udt a}=0$ and $\omega_{\udt a}=-1$, one can evaluate the integral as follows: First, we decompose the index set into subsets corresponding to the different types of oscillators, $\bar I = \bar I_\osc{a} \cup \bar I_\osc{b} \cup \bar I_\osc{c} \cup \bar I_\osc{d} $, cf.~\eqref{eq:realoscs}. Subsequently, we apply the Euler transformation to the hypergeometric functions corresponding to the set $\bar I_{\osc{a}}$ and use the series expansion of all such functions to perform the Beta integral. 
We find \begin{equation} \begin{aligned} &\mathbb{M}_{\{\udt a\}}(z, \{\mathbf{N}\}, \{ n\} ) \\[5pt] &= \sum_{\{k_{\dt a}\}=0}^\infty \frac{(-1)^{1+\mathbf{N}_{\osc{b}_{\udt a}}}}{\mathbf{N}_{\osc{b}_{\udt a}}!} \frac{1}{ \prod_{\dt a\in\bar I } k_{\dt a}! (\abs{n_{\dt a}}+k_{\dt a})! } \\ &\qquad\qquad \prod_{\dt a \in \bar I_{\osc{a}}} (\abs{n_{\dt a}}-\mathbf{N}_{\osc{a}_{\dt a}})_{k_{\dt a}} (\abs{n_{\dt a}}+\mathbf{N}_{\udt a \dt a}+1)_{k_{\dt a}} \qquad \prod_{\dt a \in \bar I_{\osc{b}}} (-\mathbf{N}_{\osc{b}_{\dt a}})_{k_{\dt a}} (-\mathbf{N}_{\udt a \dt a})_{k_{\dt a}} \\&\qquad\qquad \prod_{\dt a \in \bar I_{\osc{c}}} (-1)^{k_{\dt a}} (\mathbf{N}_{\osc{c}_{\dt a}}-1)_{k_{\dt a}} (-\mathbf{N}_{\udt a \dt a})_{k_{\dt a}} \qquad \prod_{\dt a \in \bar I_{\osc{d}}} (-1)^{k_{\dt a}} (-\mathbf{N}_{\osc{d}_{\dt a}})_{k_{\dt a}} (-\mathbf{N}_{\udt a \dt a})_{k_{\dt a}} \\[3pt] & \qquad\qquad B\Big( -z+\mathbf{C}+\sfrac{1}{2}\sum_{\dt a \in \bar I}(-1)^{\gr{\dt a}}+\sum_{\dt a \in \bar I_{\osc{a}}}(\mathbf{N}_{\udt a\dt a}+\abs{n_{\dt a}}-\mathbf{N}_{\osc{a}_{\dt a}}) , \mathbf{N}_{\osc{b}_{\udt a}}+1+\sum_{\dt a \in \bar I} k_{\dt a} \Big) \; , \end{aligned} \label{eq:finite_sum_nonpolynomial} \end{equation} where we denoted the number operators of the particle-hole transformed oscillators as $\mathbf{N}_{\osc{a}_{a}}=\dagg{\osc{a}}_a\osc{a}_a$ et cetera. All the Pochhammer symbols involving these operators are of the form $(-m)_{k}$ with $m\geq 0$, which gives $(-m)_k=(-1)^k\frac{\Gamma(m+1)}{\Gamma(m+1-k)}$, such that all sums truncate. The fact that $\abs{n_{\dt a}}-\mathbf{N}_{\osc{a}_{\dt a}} \leq 0$ can be seen from the structure of the outer sums in \eqref{eq:form_lax_lowest}, see also Appendix~\ref{sec:matrixelements}. 
\subsection{Towards higher level finite sum representations} \label{sec:higher} We have seen that the $\mathcal{R}$-operators of the lowest level can conveniently be written either using the integral representation \eqref{eq:middlepart_lowest} or as finite sums as in \eqref{eq:middlepartlowest} and \eqref{eq:finite_sum_nonpolynomial}. Here we want to discuss the generalisation of such representations to the remaining levels of the Q-system. First note that the $\mathcal{R}$-operators are almost symmetric under the exchange $I\leftrightarrow\bar I$; it is therefore possible to proceed with the highest level $\mathcal{R}$-operators similarly to the lowest level ones. These results are summarised in Appendix~\ref{sec:highest}. The intermediate levels can be much more involved. However, we note that the difficulty of deriving representations without infinite sums does not necessarily grow with the level $|I|$ of the operators, but rather with the number of infinite sums, or more precisely with the number of indices $a\in I$ which correspond to bosonic and particle-hole transformed oscillators, $\gr{a}=0$ and $\omega_a=-1$. If no such indices appear in the index set of $\mathcal{R}_I$, the formula \eqref{eq:multireordermiddle} contains no infinite sum to begin with, cf.~Section~\ref{sec:com}. Furthermore, for the case that there is exactly one such index, one can apply the same strategy as was used for the lowest level. Let this index be $\udt b$; then one can perform all sums over the variables $k_{\udt b \dt a}$ in \eqref{eq:multireordermiddle}, and obtain a formula with finite sums and an integral as in Section~\ref{sec:lowest}. Writing the integral in terms of finite sums as in Section~\ref{sec:schml}, one obtains a formula in terms of finite sums only. The first case where more severe difficulties arise can best be discussed using a concrete example. 
Consider a $\mathfrak{u}(2,2)$ invariant model, with oscillators $\osc{a}_1$, $\osc{a}_2$, $\osc{b}_3$ and $\osc{b}_4$. Then the operator $\mathcal{R}_{\{3,4\}}$ contains two particle-hole transformed indices. After performing calculations similar to those for the lowest level, one finds the following representation: \begin{equation} \begin{split} \mathcal{R}_{\{3,4\}}(z)&=e^{-\sum_{a,\bar a}\dagg{\oscgreek{\xi}}_{a\bar a} \osc{a}_a \osc{b}_{\bar a}} \; \frac{ \Gamma(z-\mathbf{N}_{\osc{a}_1}-\mathbf{N}_{\osc{a}_2}) }{ \Gamma(z-\mathbf{C}) }\; e^{-\sum_{a, \bar a}\oscgreek{\xi}_{\bar a a} \dagg{\osc{a}}_a\dagg{\osc{b}}_{\bar a}}\\ &=\sum_{\{n_{a\bar a }\}=-\infty}^{+\infty}\left[\prod_{a,\bar a} \left(-\dagg{\oscgreek{\xi}}_{a\bar a}\osc{a}_a\osc{b}_{\bar a}\right)^{\theta(+n_{a\bar a})|n_{a\bar a}|}\right] \mathbb{M}_{\{3,4\}}(z , \{\mathbf{N}\}, \{ n\}) \left[\prod_{a,\bar a} \left(-\oscgreek{\xi}_{\bar a a}\dagg{\osc{a}}_a\dagg{\osc{b}}_{\bar a}\right)^{\theta(-n_{a\bar a})|n_{a\bar a}|}\right]\,, \end{split} \end{equation} where the indices run over $\udt a\in I=\{1,2\}$ and $\dt a\in \bar I=\{3,4\}$. Here the diagonal part $\mathbb{M}_{\{3,4\}}(z)$ can be written in terms of finite sums and an integral, \begin{equation} \small \begin{split} &\mathbb{M}_{\{3,4\}}(z, \{\mathbf{N}\}, \{ n\}) = \sum_{k_{13},k_{23}=0}^\infty \; \frac{ (-1)^{\mathbf{N}_{\osc{b}_3}+\mathbf{N}_{\osc{b}_4}} }{ n_{14}!n_{24}! \mathbf{N}_{\osc{b}_3}!\mathbf{N}_{\osc{b}_4}! } \; \frac{ (k_{13}+k_{23}+\mathbf{N}_{\osc{b}_3})! }{ k_{13}!k_{23}!(k_{13}+n_{13})!(k_{23}+n_{23})! 
} \\[8pt] & \times \frac{ (n_{13}-\mathbf{N}_{\osc{a}_1})_{k_{13}} (n_{23}-\mathbf{N}_{\osc{a}_2})_{k_{23}} (1+n_{13}+\mathbf{N}_{13})_{k_{13}} (1+n_{23}+\mathbf{N}_{23})_{k_{23}} }{ (n_{13}+n_{23}+\mathbf{N}_{13}+\mathbf{N}_{23}-\mathbf{N}_{\osc{b}_3}+1-z)_{k_{13}+k_{23}+\mathbf{N}_{\osc{b}_3}} } \\[3pt] &\times \int_0^1 \mathrm{d} t \; t^{\mathbf{N}_{\osc{b}_4}}(1-t)^{\mathbf{C} -z} \;\; \,_3F_2(\mathbf{N}_{\osc{a}_1}+1,1-n_{13}+\mathbf{N}_{\osc{a}_1},-\mathbf{N}_{14};1-k_{13}-n_{13}+\mathbf{N}_{\osc{a}_1},1+n_{14};t) \\ &\;\;\,\qquad\qquad\qquad\qquad\times\,_3F_2(\mathbf{N}_{\osc{a}_2}+1,1-n_{23}+\mathbf{N}_{\osc{a}_2},-\mathbf{N}_{24};1-k_{23}-n_{23}+\mathbf{N}_{\osc{a}_2},1+n_{24};t) \end{split} \end{equation} where the central charge is $\mathbf{C}=\mathbf{N}_{\osc{a}_1}+\mathbf{N}_{\osc{a}_2}-\mathbf{N}_{\osc{b}_3}-\mathbf{N}_{\osc{b}_4}-2$. It resembles the integral formula \eqref{eq:middlepart_lowest}, but this time involving generalised hypergeometric functions. Indeed it is even possible to write the integral in terms of finite sums, using an analogue of the Euler transformation which can be found in \cite{miller2013}. This identity is however rather involved and not very explicit, and requires finding the zeros of an auxiliary polynomial. Note that the formula for $\mathbb{M}_{\{3,4\}}(z)$ follows from first using the result \eqref{eq:finite_sum_nonpolynomial} for finite sum representations of the lowest levels to make half of the sums finite. Then one applies the same strategy to the remaining sums. The fact that the next step requires implicit identities of the type just discussed for the hypergeometric functions renders it difficult to treat cases with more infinite sums in this recursive fashion. Nevertheless, as discussed in the next section, for the purpose of calculating Q-operators explicitly there is no need to evaluate higher level $\mathcal{R}$-operators, since the whole Q-system can be obtained from the lowest level. 
\section{Generating the operatorial Q-system} \label{sec:qsys} Above we focused on the calculation of matrix elements for lowest level $\mathcal{R}$-operators. In fact, as we will discuss now, the lowest level $\mathcal{R}$-operators are sufficient to generate the entire operatorial Q-system. Our strategy is to first combine the $\mathcal{R}$-operator's matrix elements into matrix elements of the respective Q-operator by taking products and tracing out the auxiliary Fock space. For each magnon block, the Q-operators can be represented explicitly as matrices of finite size. Systematically solving the functional relations \eqref{eq:QQb} and \eqref{eq:QQf}, we determine all other Q-operators in the corresponding magnon block. To facilitate concrete calculations, we collect all formulas necessary in this process in Appendix~\ref{sec:formulas}. This allows all calculations to be performed in computer algebra systems such as Mathematica. \subsection{Lowest level Q-operators} Using the matrix elements of the $\mathcal{R}$-operators of the lowest level one can construct matrix elements of the corresponding Q-operators via \eqref{eq:qop}. Due to the remaining $\mathfrak{u}(1)^{N+M}$ invariance which persists in the presence of diagonal twists, the Q-operators are block diagonal. These blocks correspond to sectors with a fixed number of magnons; they are labelled by the total excitation numbers $\sum_{i=1}^L m_a^{(i)}$, where $a=1,\ldots,p+q+r+s$, of the oscillators of the representation of $\mathfrak{u}(p,q|r+s)$ given in \eqref{eq:realoscs}, see also \eqref{eq:defstates}, and the number of sites $L$. For each such magnon block, the matrix elements can therefore be combined into a matrix of finite size. This gives the operatorial form of Q-operators in a subspace of the infinite-dimensional Hilbert space of non-compact models. 
For spin chains of length $L$ the matrix elements of the lowest level Q-operators can be expressed as \begin{multline} \Big( \bra{{\tilde{\mathbf{m}}}^{(L)}} \cdots \bra{{\tilde{\mathbf{m}}}^{(1)}} \Big) \; \mathbf{Q}_{\{a\}}(z) \; \Big( \ket{{\mathbf{m}}^{(1)}} \cdots \ket{{\mathbf{m}}^{(L)}} \Big) \\[8pt] =(-1)^{\sum_{i<j}\gr{{\tilde{\mathbf{m}}}^{(j)}}(\gr{{\mathbf{m}}^{(i)}}+\gr{{\tilde{\mathbf{m}}}^{(i)}})}\; e^{iz\phi_{a}}\; \widehat \str\, \bra{{\tilde{\mathbf{m}}}^{(1)}}\mathcal{R}_{\{a\}}(z)\ket{{\mathbf{m}}^{(1)}} \cdots \bra{{\tilde{\mathbf{m}}}^{(L)}}\mathcal{R}_{\{a\}}(z)\ket{{\mathbf{m}}^{(L)}}\,. \label{eq:meQ} \end{multline} Here we denote the Graßmann degree of the state $\ket{{\mathbf{m}}^{(i)}}$ defined in \eqref{eq:defstates} by $\gr{{\mathbf{m}}^{(i)}}$. The matrix elements of the $\mathcal{R}$-operators $\bra{{\tilde{\mathbf{m}}}^{(i)}}\mathcal{R}_{\{a\}}(z)\ket{{\mathbf{m}}^{(i)}}$ follow immediately from the integral representation given in \eqref{eq:form_lax_lowest} and \eqref{eq:middlepart_lowest}, or the finite sum for the non-truncating $\mathcal{R}$-operators given in \eqref{eq:finite_sum_nonpolynomial}. They can be found in full detail in Appendix~\ref{sec:matrixelements}. Of course, these matrix elements still depend on the auxiliary space operators $\dagg{\oscgreek{\xi}}_{\udt a \dt a}$ and $\oscgreek{\xi}_{\dt a \udt a}$. To evaluate \eqref{eq:meQ}, one first commutes all the auxiliary space operators either to the left or to the right, and combines them into number operators $\mathbf{N}_{\udt a \dt a}$. All terms containing any off-diagonal terms, i.e. raising or lowering operators, can then be dropped since they do not contribute to the supertrace. The normalised supertrace is then given in terms of ordinary sums over these remaining diagonal terms, which however need to be regularised, by giving the twist angles small imaginary parts, as discussed in Section~\ref{sec:qs}. 
Note that the definition of the trace \eqref{eq:str} factors into traces over the individual Fock spaces of the different auxiliary space oscillators, $\widehat\str=\prod_{a,b}\widehat\str_{ab}$, where $\widehat\str_{ab}$ traces out the oscillator $(\dagg{\oscgreek{\xi}}_{ab},\oscgreek{\xi}_{ba})$. One finds that only a closed set of a few different types of sums can occur when calculating the traces $\widehat\str_{ab}$, including sums over rational functions and the Lerch transcendent \eqref{eq:def_hl}. Formulas for all these sums are collected in Appendix~\ref{sec:str}. \subsection{Operatorial Q-system from functional relations} \label{sec:abcde} \begin{figure}[t] \centering \begin{picture}(120,130) \put(-12,-12){\itshape $\mathbf{Q}_{\varnothing}$} \put(15,-20){\footnotesize\itshape number of bosonic} \put(15,-30){\footnotesize\itshape indices in set $I$ \normalfont($\dagg{\osc{a}}$, $\dagg{\osc{b}}$)} \put(-33,15){\rotatebox{90}{\footnotesize\itshape number of fermionic}} \put(-23,15){\rotatebox{90}{\footnotesize\itshape indices in set $I$ \normalfont($\dagg{\osc{c}}$, $\dagg{\osc{d}}$)}} \linethickness{0.4mm} \put(0,0){\vector(1,0){110}} \put(0,30){\line(1,0){100}} \put(0,60){\line(1,0){100}} \put(0,90){\line(1,0){100}} \put(0,0){\vector(0,1){110}} \put(30,0){\line(0,1){100}} \put(60,0){\line(0,1){100}} \put(90,0){\line(0,1){100}} \put(0,0){\circle*{5}} \put(30,0){\circle*{5}} \put(0,30){\circle*{5}} \put(60,0){\color{white}\circle*{5}}\put(60,0){\circle{5}} \put(90,0){\color{white}\circle*{5}}\put(90,0){\circle{5}} \put(30,30){\color{white}\circle*{5}}\put(30,30){\color{lightgray}\circle*{5}}\put(30,30){\circle{5}} \put(60,30){\color{white}\circle*{5}}\put(60,30){\circle{5}} \put(90,30){\color{white}\circle*{5}}\put(90,30){\circle{5}} \put(0,60){\color{white}\circle*{5}}\put(0,60){\circle{5}} \put(30,60){\color{white}\circle*{5}}\put(30,60){\circle{5}} \put(60,60){\color{white}\circle*{5}}\put(60,60){\circle{5}} 
\put(90,60){\color{white}\circle*{5}}\put(90,60){\circle{5}} \put(0,90){\color{white}\circle*{5}}\put(0,90){\circle{5}} \put(30,90){\color{white}\circle*{5}}\put(30,90){\circle{5}} \put(60,90){\color{white}\circle*{5}}\put(60,90){\circle{5}} \put(90,90){\color{white}\circle*{5}}\put(90,90){\circle{5}} \linethickness{1mm} \put(5,10){\color{black}\huge$\nearrow$} \end{picture} \vspace{2.5\baselineskip} \caption{Generation of the full Q-system from $\mathbf{Q}_{{\varnothing}}$ and the set of $\mathbf{Q}_{\{a\}}$ at black nodes. The arrow signals the need to solve the difference equation \eqref{eq:guemmel} to obtain the Q-operators on the grey node. All Q-operators on the white nodes can then be obtained from the determinant formulas \eqref{eq:detformulas}. The lattice shown here is a projection of the one used in Figure~\ref{fig:distribution}.} \label{fig:solveQQ} \end{figure} Knowing the Q-operators with a single index as explicit matrices for a given magnon block, one can produce explicit matrices for all operators of higher level by imposing the QQ-relations \eqref{eq:QQb} and \eqref{eq:QQf}. \footnote{ Using the approach we present here, recovering the known expression for $\mathbf{Q}_{{\overline{\varnothing}}}$ given in \eqref{eq:qemptyfull} constitutes a non-trivial check of these relations, which we performed for specific examples. } A naive way of solving the bosonic relation \eqref{eq:QQb} however involves a matrix inversion, which is problematic given that the Q-operators are expressed in terms of special functions. A more efficient strategy is to first calculate the Q-operators with one bosonic and one fermionic index, $\mathbf{Q}_{\{a,b\}}$ with $|a|\neq |b|$. To obtain these, we need to solve the first order difference equation given by \eqref{eq:QQf}: \begin{eqnarray} \mathbf{Q}_{\{a,b\}}(z)-\mathbf{Q}_{\{a,b\}}(z+1)=-\Delta_{ab}\mathbf{Q}_{\{a\}}(z+\sfrac{1}{2})\mathbf{Q}_{\{b\}}(z+\sfrac{1}{2})\quad\quad |a|\neq |b|\,. 
\label{eq:guemmel} \end{eqnarray} The formal solution to this equation can be written in terms of the discrete analogue of integration, which we denote by $\Sigma$ and define through $\Sigma\left[ f(z)-f(z+1) \right]=f(z)+\mathcal{P}$. Here $\mathcal{P}$ is periodic, $\mathcal{P}(z)=\mathcal{P}(z+1)$. The discrete integral can be written as a sum, $\Sigma[f(z)]=\sum_{n=0}^\infty f(z+n)$, whenever this sum converges. For the Q-operators with one bosonic and one fermionic index we can thus write \begin{eqnarray} \mathbf{Q}_{\{a,b\}}(z) = -\Delta_{ab}\;\Sigma\left[ \mathbf{Q}_{\{a\}}(z+\sfrac{1}{2})\mathbf{Q}_{\{b\}}(z+\sfrac{1}{2}) \right]\quad\quad |a|\neq |b|\,. \label{eq:finite_diff} \end{eqnarray} We describe the explicit realisation of this operation on the encountered basis of functions in Appendix~\ref{sec:Psi}, where we also make it clear that all Q-operators are given in terms of linear combinations of rational functions and generalised Lerch transcendents which are likewise defined there. It is important to note that, in contrast to the untwisted case, the arbitrary periodic function $\mathcal{P}$ is fixed to be zero if we require the Q-operators obtained from \eqref{eq:finite_diff} to be identical to those obtained from the monodromy construction, since $\mathcal{P}$ is incompatible with the exponential scaling in terms of the twist phases. 
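To illustrate how the Lerch transcendents enter, consider the action of $\Sigma$ on the basis functions that actually occur, namely exponentials and exponentials times inverse powers (a standard computation, written here with the conventional Lerch transcendent $\Phi(x,a,w)=\sum_{k=0}^{\infty} x^{k}/(w+k)^{a}$):
\begin{equation}
\Sigma\big[x^{z}\big]=\sum_{n=0}^{\infty}x^{z+n}=\frac{x^{z}}{1-x}\,, \qquad
\Sigma\Big[\frac{x^{z}}{z^{a}}\Big]=\sum_{n=0}^{\infty}\frac{x^{z+n}}{(z+n)^{a}}=x^{z}\,\Phi(x,a,z)\,, \qquad \abs{x}<1\,.
\end{equation}
This is why, in the twisted case, the Q-operators with one bosonic and one fermionic index are built from rational functions and (generalised) Lerch transcendents.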
Via the QQ-relations, it is possible to write all other Q-operators as determinants of $\mathbf{Q}_{\{a\}}$ and $\mathbf{Q}_{\{a,b\}}$ with $|a|\neq|b|$, \begin{eqnarray} &&\mathbf{Q}_{\{a_1,\ldots,a_m,b_1,\ldots,b_n\}} = \frac{\prod_{i=1}^m\prod_{j=1}^n\Delta_{a_i b_j}}{\prod_{1\le i<j\le m} \Delta_{a_i a_j} \prod_{1\le i<j\le n} \Delta_{b_i b_j} } \\[10pt] &&\quad\quad \times \left\{ \begin{matrix} (-1)^{(n-m)m} \epsilon^{k_1,...,k_n} \prod_{r=1}^{m} \frac{1}{ \Delta_{a_r b_{k_r} } } \mathbf{Q}_{\{a_r, b_{k_r}\}}^{[\star]} \, \prod_{s=1}^{n-m} \mathbf{Q}_{\{b_{k_{m+s}}\}}^{[n-m+1-2s]} & m<n \\[10pt] \epsilon^{k_1,...,k_m} \prod_{r=1}^m \mathbf{Q}_{\{a_{k_r}, b_r\}} & m=n \\[10pt] (-1)^{(n-m)n} \epsilon^{k_1,...,k_m} \prod_{r=1}^{n} \frac{1}{\Delta_{a_{k_r} b_r} } \mathbf{Q}_{\{a_{k_r}, b_r\}}^{[\star]} \, \prod_{s=1}^{m-n} \mathbf{Q}_{\{a_{k_{n+s}}\}}^{[m-n+1-2s]} & m>n \end{matrix} \right. \,,\nonumber \label{eq:detformulas} \end{eqnarray} see e.g. \cite{Tsuboi:2009ud,Tsuboi:2011iz,Gromov:2014caa}. Here $|a_j|=0$ and $|b_j|=1$, $\mathbf{Q}^{[n]}=\mathbf{Q}(z+\sfrac{n}{2})$, and $\star$ can take any value in $-|m-n|,-|m-n|+2,...,|m-n|-2,|m-n|$. The prefactor is a consequence of the normalisation we use for the Q-operators \eqref{eq:qop}, cf. Appendix~\ref{sec:norm}. The procedure to construct all Q-operators in this way is shown in Figure~\ref{fig:solveQQ}. As a consequence of this construction, one finds that the Q-operators only develop poles at $z\in\mathbb{N}$ or $z\in\mathbb{N}-\frac{1}{2}$, depending on the number of indices. \section{The BMN vacuum of fully twisted $\mathcal{N}\! = \! 4$ SYM\xspace at leading order} \label{sec:vacuum} In this section we want to show how the Q-operator construction and the methods for their evaluation can be applied to the $\mathcal{N}\! = \! 4$ SYM\xspace spin chain at the one-loop level. 
To make comparisons to other approaches easier, we also show how to convert our expressions to the conventions commonly used in the literature on the Quantum Spectral Curve, see in particular \cite{Kazakov:2015efa}, where the twisted case is discussed. From our construction we obtain the Q-operators for the theory with a full diagonal twist. This generalises the well-known $\gamma_i$ and $\beta$ deformations \cite{Leigh:1995ep,Lunin:2005jy,Frolov:2005dj} and includes twists of the space-time part of the symmetries, such that the field theory is non-commutative.% \footnote{ See \cite{Beisert:2005if} for a discussion of the subtleties which arise when trying to deduce the precise non-commutative field theory from the integrable spin chain description. } The results can be specialised to the $\gamma_i$ and $\beta$ deformed cases, or to the untwisted theory by choosing the twist angles appropriately. While this leads to divergent matrix elements in the Q-operators, their eigenstates and the conserved charges, such as the Hamiltonian, which can be obtained from the Q-operators as described in \cite{Frassek:2012mg}, remain finite. To specialise our construction to $\mathcal{N}\! = \! 4$ SYM\xspace at one-loop, we first restrict to the singleton representation of $\mathfrak{u}(2,2|4)$ by choosing a grading and applying a particle-hole transformation as \begin{equation} (\gr{a})_{a=1}^8=(0,0,1,1,1,1,0,0) \; , \qquad (\omega_a)_{a=1}^8 = (+1,+1,-1,-1,-1,-1,-1,-1) \; , \label{eq:qsc_grading_ph} \end{equation} and requiring that the central charge vanishes, i.e. $\mathbf{C}=0$. Comparing with \eqref{eq:realoscs}, this gives the representation of the fields of $\mathcal{N}\! = \! 4$ SYM\xspace in terms of the oscillators $(\dagg{\osc{a}}_1,\dagg{\osc{a}}_2,\dagg{\osc{d}}_1,\dagg{\osc{d}}_2,\dagg{\osc{d}}_3,\dagg{\osc{d}}_4,\dagg{\osc{b}}_1,\dagg{\osc{b}}_2)$ typically used in the spin chain description of $\mathcal{N}\! = \! 
4$ SYM\xspace at weak coupling and first investigated in \cite{Gunaydin:1984fk}. With this choice, the representation has the scalar field $\mathcal{Z}$ as the lowest-weight state: \begin{equation} \ket{\mathcal{Z}}=\dagg{\osc{d}}_1\dagg{\osc{d}}_2\ket{0}=\ket{0,0,1,1,0,0,0,0} \; . \end{equation} To facilitate the application of our results to $\mathcal{N}\! = \! 4$ SYM\xspace and to make comparisons with the literature easier, we note that our conventions can easily be transformed into those typically employed in the literature on the Quantum Spectral Curve of $\mathcal{N}\! = \! 4$ SYM\xspace. There, bosonic and fermionic indices are treated separately. To obtain Q-functions with the expected asymptotics, we call the Q-operators of the lowest level \begin{equation} (\mathbf{Q}_a)_{a=1}^8 = ( \mathbf{Q}_{{\varnothing}|1}, \mathbf{Q}_{{\varnothing}|2}, \mathbf{Q}_{1|{\varnothing}}, \mathbf{Q}_{2|{\varnothing}}, \mathbf{Q}_{3|{\varnothing}}, \mathbf{Q}_{4|{\varnothing}}, \mathbf{Q}_{{\varnothing}|3}, \mathbf{Q}_{{\varnothing}|4} )\,. \label{eq:qsc_qs} \end{equation} We note that the eigenvalues of these operators correspond to the leading perturbative contribution to the functions that appear in the $\mathbf{P}\mu$ and $\mathbf{Q}\omega$ systems of the Quantum Spectral Curve, which govern the monodromy properties of the Q-system of $\mathcal{N}=4$ SYM at any coupling \cite{Gromov:2013pga,Gromov:2014caa}. To obtain the twist variables which were used in \cite{Kazakov:2015efa}, we set \begin{equation} (e^{-i\phi_a})_{a=1}^8 =(\tau_a)_{a=1}^8 = (y_1,y_2,x_1,x_2,x_3,x_4,y_3,y_4)\,. \label{eq:qsc_twists} \end{equation} Finally, the spectral parameter used in the QSC is related to the one used here by $z+\frac{1}{2} = i u$, and the Lerch transcendents are given in terms of so-called $\eta$ functions, which in the twisted case are defined by \(\eta_a^x (u) \coloneqq \sum_{k=0}^\infty \frac{x^k}{(u+ik)^a}\). For the generalised Lerch transcendents see Appendix~\ref{sec:lerch}. 
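As an illustration of this dictionary, take the generalised Lerch transcendent in the form $\Phi^{x}_{a}(w)=\sum_{k=0}^{\infty}x^{k}/(w+k)^{a}$ (which we assume matches the definition of Appendix~\ref{sec:lerch}). The substitution $z+\frac{1}{2}=iu$ gives $u+ik=i(k-z-\frac{1}{2})$, and hence
\begin{equation}
\eta^{x}_{a}(u)=\sum_{k=0}^{\infty}\frac{x^{k}}{(u+ik)^{a}} = i^{-a}\sum_{k=0}^{\infty}\frac{x^{k}}{(k-z-\frac{1}{2})^{a}} = i^{-a}\,\Phi^{x}_{a}(-z-\sfrac{1}{2})\,,
\end{equation}
so the $\eta$ functions of the twisted Quantum Spectral Curve and the Lerch transcendents appearing here agree up to a simple phase.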
These conventions ensure that the Q-operators have poles at positions in the spectral parameter plane which are expected from the Quantum Spectral Curve. As a simple application of the formulas derived in this paper, and in order to give some further examples of how they can be used in practice, we calculate the matrix elements of the single-index Q-operators with the BMN vacuum $\tr\mathcal{Z}^L$ of arbitrary length $L$. Since these states constitute their own ``magnon blocks'', we directly obtain the corresponding Q-functions in this case. We consider the matrix elements of the single-index $\mathcal{R}$-operators of the form $\bra\mathcal{Z} \mathcal{R} \ket\mathcal{Z}$. These can be determined from the integral representation of the diagonal part of the $\mathcal{R}$-operators given in \eqref{eq:middlepart_lowest}; equivalently one can use the finite sum representation in \eqref{eq:finite_sum_nonpolynomial} or \eqref{eq:middlepartlowest}. Further relevant formulas are given in Appendix~\ref{sec:matrixelements} where we describe the combinatorial structure arising from the oscillator algebra, see in particular~\eqref{eq:melowest}. For the matrix elements under consideration, there are in fact no combinatorial factors and no signs. Since we look at matrix elements on the diagonal, there are no auxiliary space operators, which means that $m_A={\tilde{m}}_A=\hat{m}_A$ in equation \eqref{eq:melowest}. Thus we only have to evaluate the diagonal part given in \eqref{eq:middlepart_lowest}, where all $n_{\dt a}$ are zero. We can now evaluate the integrals appearing in the diagonal part \eqref{eq:middlepart_lowest}. The matrix elements of the operators $\mathcal{R}_{\{1\}},\ldots,\mathcal{R}_{\{6\}}$ are polynomials in the spectral parameter and in the number operators in the auxiliary space, since all sums truncate. 
In this case the integral in \eqref{eq:middlepart_lowest} is a contour integral which computes a residue, and can be evaluated by using the series representations of hypergeometric functions ${}_2F_1$. The operators $\mathcal{R}_{\{7\}}$ and $\mathcal{R}_{\{8\}}$ have non-truncating sums and their matrix elements are rational functions of both the spectral parameter as well as the auxiliary space operators. For them, the integral in \eqref{eq:middlepart_lowest} has to be taken along the interval $(0,1)$; using $\pFq{2}{1}{a,-b}{a}{x}=(1-x)^b$ and $\pFq{2}{1}{0,b}{c}{x}=1$, one directly finds Beta integrals, which give these rational functions. Performing these calculations one finds the following matrix elements: \begin{equation} \begin{aligned} \bra\mathcal{Z}\mathcal{R}_{\{1\}}\ket\mathcal{Z} &= \bra\mathcal{Z}\mathcal{R}_{\{2\}}\ket\mathcal{Z} = \bra\mathcal{Z}\mathcal{R}_{\{3\}}\ket\mathcal{Z} = \bra\mathcal{Z}\mathcal{R}_{\{4\}}\ket\mathcal{Z} =1\,, \\[3pt] \bra\mathcal{Z}\mathcal{R}_{\{5\}}\ket\mathcal{Z} &= z+\frac{1}{2}+\mathbf{N}_{51}+\mathbf{N}_{52}+\mathbf{N}_{53}+\mathbf{N}_{54}\,, \\[3pt] \bra\mathcal{Z}\mathcal{R}_{\{6\}}\ket\mathcal{Z} &= z+\frac{1}{2}+\mathbf{N}_{61}+\mathbf{N}_{62}+\mathbf{N}_{63}+\mathbf{N}_{64}\,, \\[3pt] \bra\mathcal{Z}\mathcal{R}_{\{7\}}\ket\mathcal{Z} &= \frac{1}{z+\frac{1}{2}-\mathbf{N}_{71}-\mathbf{N}_{72}-\mathbf{N}_{73}-\mathbf{N}_{74}}\,, \\[3pt] \bra\mathcal{Z}\mathcal{R}_{\{8\}}\ket\mathcal{Z} &= \frac{1}{z+\frac{1}{2}-\mathbf{N}_{81}-\mathbf{N}_{82}-\mathbf{N}_{83}-\mathbf{N}_{84}}\,. \end{aligned} \label{eq:rvacuum} \end{equation} We now calculate the actual Q-functions as \begin{equation} \bra{\mathcal{Z}^L} \mathbf{Q}_{\{a\}}(z) \ket{\mathcal{Z}^L} = \tau_a^{-(-1)^{\gr{a}}z} \widehat\str \left( \bra\mathcal{Z} \mathcal{R}_{\{a\}}(z) \ket\mathcal{Z}^L\right) \; , \end{equation} where the BMN vacuum state of length $L$ is $\ket{\mathcal{Z}^L}=\ket{\mathcal{Z}}^{\otimes L}$. 
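The rational structure of $\mathcal{R}_{\{7\}}$ and $\mathcal{R}_{\{8\}}$ in \eqref{eq:rvacuum} can be traced back to an elementary Beta integral. Schematically (with $m\in\mathbb{N}_0$ and $c$ standing for a linear, operator-valued function of $z$ whose precise form follows from the exponents in \eqref{eq:middlepart_lowest}),
\begin{equation}
\int_0^1 \mathrm{d} t \; t^{m} (1-t)^{c-1} = B(m+1,c) = \frac{m!}{c(c+1)\cdots(c+m)} \; ,
\end{equation}
which explains why these matrix elements are rational in both the spectral parameter and the auxiliary space number operators.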
All formulas that are needed to evaluate the supertraces over the auxiliary Fock space are collected in Appendix~\ref{sec:str}, and can directly be applied to the matrix elements under consideration. From \eqref{eq:rvacuum} we immediately see that \begin{equation} \bra{\mathcal{Z}^L} \mathbf{Q}_{\{a\}}(z) \ket{\mathcal{Z}^L} = \tau_a^{-(-1)^{\gr{a}}z} \; , \qquad a=1,2,3,4 \; . \end{equation} Using the multinomial theorem and the formula for the supertrace of polynomials in the number operators given in \eqref{eq:str_polynomial} we find \begin{equation} \begin{aligned} &\bra{\mathcal{Z}^L} \mathbf{Q}_{\{5\}}(z) \ket{\mathcal{Z}^L} = \\& \tau_5^{z} \sum_{k=0}^L z^k \Bigg[ \sum_{\substack{k_0+k_1+k_2\\+k_3+k_4=L-k}} \binom{L}{k, k_0, k_1, k_2, k_3, k_4} \frac{ \sum_{\ell_3=0}^{k_3} \eulerian{k_3}{\ell_3}\big(\frac{\tau_5}{\tau_3}\big)^{\ell_3+1-\delta_{k_3,0}} \sum_{\ell_4=0}^{k_4} \eulerian{k_4}{\ell_4}\big(\frac{\tau_5}{\tau_4}\big)^{\ell_4+1-\delta_{k_4,0}} }{ 2^{k_0} \big(\frac{\tau_5}{\tau_5-\tau_1}\big)^{\delta_{k_1,0}-1} \big(\frac{\tau_2}{\tau_5-\tau_2}\big)^{\delta_{k_2,0}-1} \big(1-\frac{\tau_5}{\tau_3}\big)^{k_3} \big(1-\frac{\tau_5}{\tau_4}\big)^{k_4} } \Bigg] \; , \end{aligned} \end{equation} where we abbreviate the twist angles as $\tau_a=e^{-i\phi_a}$. The Q-function $\bra{\mathcal{Z}^L} \mathbf{Q}_{\{6\}}(z) \ket{\mathcal{Z}^L} = \bra{\mathcal{Z}^L} \mathbf{Q}_{\{5\}}(z) \ket{\mathcal{Z}^L} \vert_{\tau_5\to \tau_6}$ is obtained by a simple relabelling of the result for $\mathbf{Q}_{\{5\}}$. For the non-rational Q-functions we can use \eqref{eq:str_rational} to first evaluate the fermionic traces; the first bosonic trace generates Lerch transcendents according to \eqref{eq:str_rational}. The last trace can then be evaluated using \eqref{eq:str_hl}, and the resulting expressions simplified via the identity \eqref{eq:hl_shift}. 
For $\mathbf{Q}_{\{7\}}$ we find \begin{equation} \begin{aligned} \bra{\mathcal{Z}^L} \mathbf{Q}_{\{7\}}(z) \ket{\mathcal{Z}^L} = \tau_7^{-z} \frac{(\tau_2-\tau_7)(\tau_1-\tau_7)}{(\tau_4-\tau_7)(\tau_3-\tau_7)} \Bigg[ \frac{1}{(z+\frac{1}{2})^L} & +(-1)^L \frac{(\tau_2-\tau_3)(\tau_2-\tau_4)}{(\tau_1-\tau_2)\tau_2} {\textstyle \Phi^{\tau_7/\tau_2}_L(-z-\frac{1}{2})} \\& +(-1)^L \frac{(\tau_1-\tau_3)(\tau_1-\tau_4)}{(\tau_2-\tau_1)\tau_1} {\textstyle \Phi^{\tau_7/\tau_1}_L(-z-\frac{1}{2})} \Bigg]\,. \end{aligned} \end{equation} The calculation proceeds similarly for $\mathbf{Q}_{\{8\}}$, and gives $\bra{\mathcal{Z}^L} \mathbf{Q}_{\{8\}}(z) \ket{\mathcal{Z}^L} = \bra{\mathcal{Z}^L} \mathbf{Q}_{\{7\}}(z) \ket{\mathcal{Z}^L} \vert_{\tau_7\to \tau_8}$. Quite remarkably, the Q-functions for these most trivial states of the theory are already rather complicated, due to the presence of the full twist. The higher level Q-functions for the BMN vacuum can be generated from the ones given above as described in Section~\ref{sec:abcde}, using \eqref{eq:finite_diff} and \eqref{eq:detformulas}. We note that the calculations for excited states are not much more difficult; the corresponding Q-operators for each magnon block can likewise be evaluated using the formulas in Appendix~\ref{sec:formulas}. \section{Conclusions and Outlook} \label{sec:conclusion} In this article, we discussed the oscillator construction of the Baxter Q-operators of integrable models for the case of non-compact super spin chains with representations of Jordan-Schwinger form, focusing on the concrete evaluation of these operators. We outlined the derivation of the Lax operators on which this construction is based, and defined the Q-operators with their functional relations. For non-compact spin chains with infinite-dimensional state spaces, these Lax operators are given in terms of infinite sums which hide the analytic properties of the resulting Q-system and complicate their evaluation. 
We proposed a strategy to overcome these difficulties. For the Lax operators of the lowest level, we derived a representation without infinite sums, which allows us to compute explicit matrix elements. Employing a small set of formulas for the normalised supertrace, it is then possible to obtain the matrix elements of the corresponding Q-operators. Due to the remaining symmetry, the Q-operators can be realised as finite matrices for each magnon block, and the functional relations then allow us to uniquely recover the entire Q-system starting from the lowest level. For all the steps in this procedure we provided explicit formulas which can directly be implemented in computer algebra systems for practical calculations. Although our approach only relies on the Q-operators of the lowest level to determine the whole Q-system, it would be desirable to find analogues of our integral formula \eqref{eq:middlepart_lowest} also for the Lax operators of higher levels. Our initial studies in Section~\ref{sec:higher} indicate that this rather difficult task might require novel ideas. A promising route might be to derive these formulas directly from the Yang-Baxter equation \eqref{eq:yberll}. Our approach naturally incorporates compact spin chains with symmetric representations at each site of the quantum space. It would furthermore be interesting to study whether it can be generalised to other classes of representations, and in particular to principal series representations. Furthermore, it should be straightforward to apply our method to open spin chains, for which the study of Baxter Q-operators was initiated only recently in \cite{Frassek:2015mra}. The main motivation of our work is to allow the application of Q-operators to concrete physical problems. 
Apart from applications in high energy physics, we hope that our method can be applied in the context of the ODE/IM correspondence \cite{Dorey:2007zx} and the computation of correlation functions~\cite{Boos:2006mq,Boos:2008rh,Jimbo:2008kn}. Currently our main focus lies on $\mathcal{N}\! = \! 4$ SYM\xspace where a similar Q-system arises in the form of the Quantum Spectral Curve. So far, the QSC of $\mathcal{N}\! = \! 4$ SYM\xspace has only been investigated on the eigenvalue level, and it is tempting to ask how it lifts to the operatorial level, see also the discussion in \cite{Staudacher:2010jz}. The individual Q-functions are multivalued functions of the spectral parameter with particular monodromies and asymptotic behaviour. These Q-functions are believed to be eigenvalues of Q-operators, but the nature of the operatorial Q-system remains a mystery. Our approach should be equivalent to the construction of the leading perturbative contribution to this system. It is well-understood how to iteratively construct perturbative corrections to the Q-functions \cite{Marboe:2014gma,Gromov:2015vua}. There is no immediate reason why these methods should not lift to the operatorial level, even though the perturbative solution of the QSC with general twists, where eigenstates possibly correspond to spin chain states with non-zero momentum, has not yet been examined in detail in the literature. Thus a systematic way of performing the untwisting would be of great practical value. On the eigenvalue level, a rather general method was proposed in \cite{Kazakov:2015efa}, but it has to be applied to each state individually; a discussion on the operator level can be found in \cite{Korff2006,Bazhanov:2010ts} for the case of the Heisenberg spin chain, see also \cite{Pronko:1998xa}. Such computations would yield access to perturbative information about the operatorial Q-system, which might give hints about its deeper nature. 
Furthermore, diagonalisation of the Q-operators would immediately yield higher loop corrections to the eigenstates of the dilatation operator. This information might shed light on higher-point correlation functions, about which many aspects are still not fully understood, and also on the emergence of the integrable system that underlies $\mathcal{N}\! = \! 4$ SYM\xspace from the field theory. We plan to address these questions in future work. It would furthermore be interesting to apply our construction to $\mathcal{N}\! = \! 4$ SYM\xspace in the presence of defects \cite{deLeeuw:2015hxa,Buhl-Mortensen:2015gfd}, where Q-operators have already been used to calculate one-point functions, and to the integrable chiral field theories discovered recently \cite{Gurdogan:2015csr}. We finally note that there are other approaches to the construction of Q-operators, which superficially are quite distinct from the oscillator construction pursued in this work. It would be interesting to see how results analogous to those presented here can be obtained from the approach developed in \cite{Derkachov:2006fw,Derkachov:2010qe}, and if the interesting Q-operator construction in \cite{Kazakov:2010iu} can be generalised to non-compact spin chains. \section*{Acknowledgements} We would like to thank Matthias Staudacher, Dmytro Volin, Zengo Tsuboi, Gregory Korchemsky, Gregor Richter, Leonard Zippelius, Ivan Kostov, Didina Serban, and Stijn van Tongeren for interesting discussions. RF thanks Vasily Pestun for related discussions. We thank the referees for useful remarks. We further thank the IPhT, Saclay and \emph{``Mathematische Physik von Raum, Zeit und Materie''}, Humboldt University Berlin for hospitality. DM received support from GK 1504 \emph{``Masse, Spektrum, Symmetrie''}. RF is supported by the IH\'{E}S visitor program. 
The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA Grant Agreement No 317089 (GATIS).
\section{Introduction} This paper addresses the problem of classifying orientably regular maps, those maps $\cal N$ on surfaces (always assumed to be compact and orientable) for which the orientation-preserving automorphism group ${\rm Aut}^+{\cal N}$ acts transitively on directed edges. Such maps are often formed as regular coverings of a simpler orientably regular map $\cal M$, in which case they correspond to normal subgroups of the fundamental group $\pi_1\,{\cal S}$ of the underlying surface $\cal S$ of $\cal M$, punctured at any branch-points, which are invariant under the induced action of the orientation-preserving automorphism group $G:={\rm Aut}^+{\cal M}$ on $\pi_1\,{\cal S}$. In the case of abelian coverings, one can abelianise $\pi_1{\cal S}$, and instead look for $G$-invariant submodules of the first homology group $H_1({\cal S};{\mathbb Z})$: for instance, Surowski and the author~\cite{JSu} have used this method to classify the cyclic regular coverings of the Platonic maps, branched over the vertices, edges or faces. When considering coverings by elementary abelian $p$-groups, one can go further and reduce the homology module mod~$(p)$, so that $G$ acts on the vector space $H_1({\cal S};{\mathbb F}_p)$: this idea has been used by Kazaz~\cite{K, K02, K09} to classify the elementary abelian unbranched regular coverings of the orientably regular hypermaps of genus $2$. Here we combine these two approaches to study regular branched coverings of the Platonic maps $\cal M$, the orientably regular maps of genus $0$, by elementary abelian $p$-groups, as a first step towards a more general classification of the abelian coverings of these maps. The representation theory of the corresponding Platonic groups $G$ plays an important role, and this theory is rather easier to apply in the case where $p$ is coprime to $|G|$, as we shall generally assume here. 
The remaining cases, where $p$ divides $|G|$ and modular rather than ordinary representation theory is required, will be considered in a separate paper~\cite{J}. The study of the action of automorphisms on homology is a classical technique in the context of Riemann surfaces~\cite[\S V.3]{FK}, and in recent years it has also been applied to coverings of graphs: see~\cite{KO} and~\cite{MMP}, for instance. In fact, the present work could be restated in terms of coverings of the Platonic graphs, restricted to those coverings which respect the surface-embeddings. The main result is the following theorem, where we use the notation $\{n,m\}$ of Coxeter and Moser~\cite{CM} for the $m$-valent Platonic map with $n$-gonal faces: \begin{thm} The orientably regular coverings of the Platonic maps $\cal M$, branched over the faces, with elementary abelian $p$-groups of rank $c\ge 1$ as covering groups, where the prime $p$ does not divide the order of the group $G={\rm Aut}^+{\cal M}$, are as follows: \begin{itemize} \item the tetrahedron $\{3,3\}$ has one regular covering for each $p$, with $c=3$; \item the cube $\{4,3\}$ has three regular coverings for each $p$, with $c=2,3,5$; \item the octahedron $\{3,4\}$ has seven regular coverings for each $p$, with $c=1,3,3,4,4,6,7$; \item the dodecahedron $\{5,3\}$ has seven regular coverings for each $p\equiv\pm 1$ {\rm mod}~$(5)$, with $c=3,3,5,6,8,8,11$, and has three regular coverings for each $p\equiv\pm 2$ {\rm mod}~$(5)$, with $c=5,6,11$; \item the icosahedron $\{3,5\}$ has $8p+23$ coverings, $31$ regular and $8(p-1)$ chiral, for each $p\equiv\pm 1$ {\rm mod}~$(5)$, with $c\in\{3,4, \ldots, 16, 19\}$, and has $4p+11$ coverings, $15$ regular and $4(p-1)$ chiral, with $c\in\{4,5,6,8,9,10,11,13,14,15,19\}$, for each $p\equiv\pm 2$ {\rm mod}~$(5)$; \item the dihedron $\{n,2\}$ has one regular covering for each $p$, with $c=1$; \item the hosohedron $\{2,n\}$ has $2^{\nu}-1$ regular coverings for each $p$, where $\nu$ is the number of 
orbits $\Delta\ne\{1\}$ of the group generated by the Frobenius automorphism and inversion on the set of $n$th roots of $1$ in the algebraic closure $\overline{\mathbb F}_p$ of the field ${\mathbb F}_p$. \end{itemize} For each ${\cal M}=\{n,m\}$ and prime $p$ these covering maps have type $\{np,m\}$ and genus \[\left(\frac{f}{2}-1\right)p^c-\frac{f}{2}p^{c-1}+1,\] where $f$ is the number of faces of $\cal M$; they are all quotients of a single covering with $c=f-1$. In all cases except when $\cal M$ is the icosahedron, the $2^{\nu}-1$ coverings are the joins formed from a set of $\nu$ irreducible coverings of $\cal M$. \end{thm} \noindent{\bf Comments. 1.} The Schur-Zassenhaus Theorem implies that the orientation-preserving automorphism group of each of these coverings is a semidirect product of the covering group, an elementary abelian normal subgroup of order $p^c$, by $G$. \smallskip \noindent{\bf 2.} In the case of the hosohedron, the method for finding the ranks $c$ of the covering groups, too complicated to state here, is described in \S 9.2. \smallskip \noindent{\bf 3.} A map is called regular or chiral as it is or is not isomorphic to its mirror image. When $\cal M$ is the icosahedron, the chiral coverings arise from the fact that its full automorphism group ${\rm Aut}\,{\cal M}$ has fewer orbits than $G$ on pairs of faces (see \S 2.6 and \S 7). The fact that the number of coverings is unbounded as a function of $p$ is related to the fact that the permutation character of $G$ on faces is not multiplicity-free (see \S 2.5 and \S 7). \smallskip \noindent{\bf 4.} The conclusions of Theorem~1.1 also apply in a few cases where $p$ divides $|G|$, such as the tetrahedron and octahedron with $p=3$, and the hosohedron with $n$ odd and $p=2$. \medskip The paper is organised as follows. In \S 2 some useful techniques are outlined. These are applied in \S\S3--9 to enumerate and describe the coverings in Theorem~1.1. 
The individual cases are fairly straightforward, except for that involving the hosohedron ${\cal M}=\{n,2\}$, which depends on the reduction mod~$(p)$ of cyclotomic polynomials. Branching over vertices is easily dealt with by duality. In \S 10 we consider simultaneous branching over vertices and faces, concentrating on the tetrahedron as a typical example. In \S\S11--12 we briefly consider the hypermaps arising from branching over edges (and also vertices and faces). The author is grateful to Young Soo Kwon for raising this problem, and to Rub\'en Hidalgo and Roman Nedela for helpful comments on an earlier draft of this work. \section{Techniques} \subsection{Maps} First we briefly sketch the connections between maps (always assumed to be orientable) and triangle groups; for background, see~\cite{JSi}. We define a triangle group to be \[\Delta(p, q, r)=\langle x, y, z \mid x^p=y^q=z^r=xyz=1 \rangle\] where $p,q,r\in{\mathbb{N}}\cup\{\infty\}$ and we ignore any relation $g^{\infty}=1$. Any $m$-valent map $\cal N$ corresponds to a subgroup $N$ of the triangle group \[\Delta:=\Delta(m, 2, \infty)=\langle x, y, z \mid x^m=y^2=xyz=1\rangle,\] with vertices, edges and faces corresponding to the cycles of $x, y$ and $z$ on the cosets of $N$. The map $\cal N$ is orientably regular if and only if $N$ is normal in $\Delta$, in which case the orientation-preserving automorphism group ${\rm Aut}^+{\cal N}$ is isomorphic to $\Delta/N$; we will assume this throughout. In particular, the Platonic map ${\cal M}=\{n, m\}$ corresponds to the normal closure $M$ of $z^n$ in $\Delta$, and ${\rm Aut}^+{\cal M}\cong\Delta/M\cong\Delta(m,2,n)$. A map $\cal N$ is a $d$-sheeted covering of $\cal M$ if and only if $N$ is a subgroup of index $d$ in $M$, in which case ${\cal N}\to{\cal M}$ is a regular covering and the group of covering transformations is $M/N$. 
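As a concrete illustration of such a quotient: for the tetrahedron $\{3,3\}$ we have ${\rm Aut}^+{\cal M}\cong\Delta(3,2,3)$, the rotation group $A_4$ of order $12$. This is easily checked by a direct permutation computation; the following sketch (helper functions are our own) represents $x$ and $y$ as permutations of the four vertices:

```python
def compose(p, q):
    """Product of permutations given as tuples: apply p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def order(p):
    e, r, n = tuple(range(len(p))), p, 1
    while r != e:
        r, n = compose(r, p), n + 1
    return n

def generated_group(gens):
    """Closure of gens under composition (naive search; fine for small groups)."""
    e = tuple(range(len(gens[0])))
    seen, frontier = {e}, [e]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(g, h)
            if gh not in seen:
                seen.add(gh)
                frontier.append(gh)
    return seen

x = (1, 2, 0, 3)            # the 3-cycle (0 1 2): rotation about a vertex
y = (1, 0, 3, 2)            # the involution (0 1)(2 3): rotation about an edge
z = inverse(compose(x, y))  # chosen so that xyz = 1

assert order(x) == 3 and order(y) == 2 and order(z) == 3
assert compose(compose(x, y), z) == (0, 1, 2, 3)   # xyz = 1
assert len(generated_group([x, y])) == 12          # |A_4| = 12
```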
The orientably regular $m$-valent maps which are $d$-sheeted abelian coverings of $\cal M$ are therefore in bijective correspondence with the subgroups $N$ of $M$ which are normal in $\Delta$, with $M/N$ abelian of order $d$. If $M/N$ has exponent $l$, such subgroups $N$ contain the commutator subgroup $M'$ and the subgroup $M^l$ generated by the $l$-th powers in $M$, so they correspond to subgroups $\overline N=N/M'M^l$ of $\overline M=M/M'M^l$, with $M/N\cong \overline M/\overline N$. The action of $\Delta$ by conjugation on the normal subgroup $M$ preserves its characteristic subgroups $M'$ and $M^l$, so there is an induced action of $\Delta$ on $\overline M$; since $M$ is in the kernel of this action, we therefore have an action of the group $G:={\rm Aut}^+{\cal M}\cong\Delta/M$ on $\overline M$, which is a module for $G$ over the ring ${\mathbb{Z}}_l$; it follows that a subgroup $N$ of $M$, containing $M'M^l$, is normal in $\Delta$ if and only if $\overline N$ is a $G$-invariant submodule of $\overline M$. The orientation-preserving automorphism group $\tilde G:={\rm Aut}^+{\cal N}\cong\Delta/N$ of $\cal N$ has an abelian normal subgroup $K$, corresponding to $M/N$, with $\tilde G/K\cong G$; under the induced action of $G$ on $K$ by conjugation, $K$ is isomorphic as a $G$-module to $\overline M/\overline N$. If $l$ is coprime to $|G|$ then the Schur-Zassenhaus Theorem implies that $\tilde G$ is a semidirect product of $K$ by $G$. \subsection{Homology modules} Let $S=S^2\setminus\{c_1,\ldots, c_f\}$, where $c_1,\ldots, c_f$ are the centres of the $f$ faces of $\cal M$. Then $M$ can be identified with the fundamental group $\pi_1(S)$ of $S$, a free group of rank $f-1$ generated by the homotopy classes $g_i$ of loops around the punctures $c_i$, with a single defining relation $g_1\ldots g_f=1$. 
It follows that the group $M^{\rm ab}=M/M'$ can be identified with the first integer homology group $H_1(S;{\mathbb{Z}})=\pi_1(S)^{\rm ab}$ of $S$, a free abelian group of rank $f-1$ generated by the homology classes $[g_i]$, with defining relation $[g_1]+\cdots+[g_f]=0$. By the Universal Coefficient Theorem, $\overline M$ is identified with the mod~$(l)$ homology group $H_1(S;{\mathbb{Z}}_l)=H_1(S;{\mathbb{Z}})\otimes_{\mathbb Z}{\mathbb{Z}}_l\cong{\mathbb{Z}}_l^{f-1}$. Under these identifications, the actions of $G$ induced by conjugation in $\Delta$ and by homeomorphisms of $S$ are the same, so our problem is to find the $G$-submodules of $H_1(S;{\mathbb{Z}}_l)$. The prime power factorisation $\prod_pp^{e_p}$ of $l$ induces a $G$-invariant decomposition of $H_1(S;{\mathbb{Z}}_l)$ as a direct sum of its Sylow $p$-subgroups $H_1(S;{\mathbb{Z}}_{p^{e_p}})$, so it is sufficient to restrict attention to the case where $l$ is a power of a prime $p$. For each prime $p$ we have an infinite descending series \[M^{\rm ab} = M/M' > M'M^p/M' > M'M^{p^2}/M' > \ldots > M'M^{p^i}/M' > \ldots,\] of characteristic subgroups of $M^{\rm ab}$, corresponding to a descending series \[H_1(S;{\mathbb{Z}}) = H^0 > H^1>\ldots > H^i > \ldots,\] of $G$-submodules of $H_1(S;{\mathbb{Z}})$, where each $H^i$ is the kernel of the reduction mod~$(p^i): H_1(S;{\mathbb{Z}})\to H_1(S;{\mathbb{Z}}_{p^i})$. This induces a finite descending series \[H_1(S;{\mathbb{Z}}_{p^i}) = H^0/H^i > H^1/H^i>\ldots > H^i/H^i = 0\] of $G$-submodules of $H_1(S;{\mathbb{Z}}_{p^i})$. Successive quotients \[(H^j/H^i)/(H^{j+1}/H^i) \cong H^j/H^{j+1} \cong M'M^{p^j}/M'M^{p^{j+1}}\] in this series are $G$-isomorphic to the module \[H_1(S;{\mathbb{F}}_p) = H_1(S;{\mathbb{Z}}_p) = H^0/H^1 \cong M_p:= M/M'M^p,\] where $\mathbb{F}_p$ is the field of $p$ elements. 
This $G$-module $H_1(S;{\mathbb{F}}_p)$, along with all the other quotients in the series, is a vector space over $\mathbb{F}_p$ of dimension $f-1$ affording a representation $\rho_p:G\to {\rm Aut}(H_1(S;{\mathbb{F}}_p)) \cong GL_{f-1}(\mathbb{F}_p)$ of $G$. Our main problem therefore is to determine the structure of this $G$-module $H_1(S;{\mathbb{F}}_p)$ for each prime $p$. In particular, this will immediately determine the elementary abelian regular coverings of $\mathcal{M}$. A $G$-submodule of $L$ of $H_1(S;{\mathbb{F}}_p)$ of codimension $c$ corresponds to a normal subgroup $N$ of $\Delta$ contained in $M$, with $M/N$ elementary abelian of order $p^c$, and hence to a regular $p^c$-sheeted covering $\cal N$ of $\cal M$, branched over its face-centres. This is an orientably regular map of type $\{np, m\}$. There are $p^{c-1}$ points above each of the $f$ face-centres of $\cal M$, so the total order of branching of the covering ${\cal N}\to{\cal M}$ is $(p^c-p^{c-1})f$. By the Riemann-Hurwitz formula, $\cal N$ therefore has genus \[g=1-p^c+\frac{1}{2}p^{c-1}(p-1)f= \left(\frac{f}{2}-1\right)p^c-\frac{f}{2}p^{c-1}+1.\] The orientation-preserving automorphism group $\tilde G={\rm Aut}^+{\cal N}$ of $\cal N$ has an elementary abelian normal subgroup $K$ of order $p^c$, isomorphic to $H_1(S;{\mathbb{F}}_p)/L$ as a $G$-module over $\mathbb{F}_p$. For notational convenience, we let $E_p(\mathcal{M})$ denote the set of such regular coverings of $\mathcal{M}$ with elementary abelian $p$-groups as covering groups, and for each ${\cal N}\in E_p(\mathcal{M})$ we let $\dim({\cal N})=c$ where the covering ${\cal N}\to\mathcal{M}$ has $p^c$ sheets. Thus $\dim({\cal N})$ is simply the dimension of the corresponding $G$-module $H_1(S;\mathbb{F}_p)/L$, or equivalently the codimension of $L$. 
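The Riemann-Hurwitz computation above is easily mechanised; the following sketch (an illustrative helper of our own) confirms that the two expressions for the genus agree for the face numbers $f=4,6,8,12,20$ of the Platonic solids:

```python
def genus(f, p, c):
    """Genus of a p^c-sheeted covering of a genus-0 map with f faces,
    branched over all f face-centres, via Riemann-Hurwitz."""
    return 1 - p**c + (p**(c - 1) * (p - 1) * f) // 2

# The closed form (f/2 - 1) p^c - (f/2) p^(c-1) + 1 agrees; f is even
# for every Platonic solid, so the integer divisions below are exact.
for f in (4, 6, 8, 12, 20):
    for p in (5, 7, 11, 13):
        for c in range(1, 5):
            closed = (f // 2 - 1) * p**c - (f // 2) * p**(c - 1) + 1
            assert genus(f, p, c) == closed

# e.g. the seven-sheeted cyclic coverings of the cube (f = 6, c = 1):
assert genus(6, 7, 1) == 2 * 7 - 3 + 1   # genus 12
```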
We will denote the representations of $G$ on $L$ ($\,\cong_GN/M'M^p$) and on $H_1(S;{\mathbb{F}}_p)/L$ ($\,\cong_G M/N \cong_G K$) respectively by $\rho_L$ and $\rho^L$, and the corresponding characters by $\chi_L$ and $\chi^L$. In particular, we will say that the covering $\cal N$ {\sl affords\/} the representation $\rho^L$ and the character $\chi^L$, since these correspond to the action of $G$ by conjugation on the covering group $K$. We will call a covering $\cal N$ {\sl irreducible\/} or {\sl indecomposable\/} if $\rho^L$ has this property. If $p$ is coprime to $|G|$ then the representation theory of $G$ over fields of characteristic $p$ is essentially the same as that over fields of characteristic $0$ (ordinary representation theory): for instance Maschke's Theorem applies in both cases. In this situation, the decomposition of $H_1(S;\mathbb{F}_p)$ as a $G$-module can be obtained from that of the corresponding homology module over $\mathbb C$ (or indeed any algebraically closed field of characteristic $0$, such as the field $\overline{\mathbb Q}$ of algebraic numbers). Moreover, if $p$ is coprime to $|G|$ then $\tilde G$ is a semidirect product of $K$ by $G$, by the Schur-Zassenhaus Theorem. If, on the other hand, $p$ divides $|G|$ then this extension splits if and only if its Sylow $p$-subgroups split over $K$. Moreover, in this situation, Maschke's Theorem does not apply, and the submodule structure of $H_1(S;\mathbb{F}_p)$ is less transparent, requiring modular rather than ordinary representation theory. In this paper we therefore concentrate on the ordinary case, avoiding the finitely many primes $p$ dividing $|G|$; these will be dealt with in a later paper~\cite{J}. \subsection{Permutation modules} Let $\Phi$ be the set of faces of $\cal M$, and let $\mathbb{C}\Phi$ be the corresponding permutation module for $G$; this is an $f$-dimensional complex vector space with basis $\Phi$ permuted naturally by $G$. 
The homology module $H_1(S;{\mathbb C})$ is isomorphic to the quotient of $\mathbb{C}\Phi$ by the $1$-dimensional $G$-submodule spanned by the element $\sum_{\phi\in\Phi}\phi$. The $G$-module decomposition of $\mathbb{C}\Phi$ corresponds to that of the corresponding permutation character $\pi$, where $\pi(g)$ is the number of faces $\phi\in\Phi$ invariant under each $g\in G$. If $H$ denotes the subgroup $\langle z\rangle\cong C_n$ of $G$ leaving invariant a face, then since $G$ acts transitively on $\Phi$ we have $\pi=1_H^G$, the character obtained by inducing the principal character $1_H$ of $H$ up to $G$. By Frobenius reciprocity, the multiplicity $(\chi_i, \pi)$ of any irreducible complex character $\chi_i$ of $G$ in $\pi$ is equal to $(\chi_i\mid_H, 1_H)$, the multiplicity of $1_H$ in the restriction of $\chi_i$ to $H$. This is equal to the average value $|H|^{-1}\sum_{g\in H}\chi_i(g)$ of $\chi_i$ on $H$, or equivalently the multiplicity of $1$ as an eigenvalue for $z$ in the irreducible representation $\rho_i$ of $G$ corresponding to $\chi_i$. Once these multiplicities are calculated, the direct sum decomposition of $\mathbb{C}\Phi$ is known, and hence so is that of $H_1(S;{\mathbb C})$. In particular, this module affords the character \[\chi=\pi-\chi_1\] of $G$, where $\chi_1$ is the principal character $1_G$ of $G$, given by $\chi_1(g)=1$ for all $g\in G$. The same decompositions apply if $\mathbb{C}$ is replaced with the algebraic closure $\overline{\mathbb{F}}_p$ of $\mathbb{F}_p$, where $p$ is any prime not dividing $|G|$. In order to find the corresponding decompositions over $\mathbb{F}_p$, various algebraically conjugate summands must be merged to give summands which are realised over $\mathbb{F}_p$. For any given Platonic map $\mathcal{M}$, how this happens depends on certain congruences satisfied by $p$. 
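For example, for the octahedron $\{3,4\}$ we have $G\cong S_4$ and $H=\langle z\rangle\cong C_3$, consisting of the identity and two $3$-cycles, so the multiplicity of each $\chi_i$ in $\pi$ is $\frac{1}{3}(\chi_i(1)+2\chi_i((123)))$. A sketch of this computation (the character table of $S_4$ entered by hand; this is an illustration, not part of the proofs):

```python
from fractions import Fraction

# Character table of S_4; columns are the classes of the identity,
# a transposition, a double transposition, a 3-cycle, a 4-cycle.
class_sizes = [1, 6, 3, 8, 6]
chars = {
    "trivial":  [1,  1,  1,  1,  1],
    "sign":     [1, -1,  1,  1, -1],
    "2-dim":    [2,  0,  2, -1,  0],
    "standard": [3,  1, -1,  0, -1],
    "3-dim'":   [3, -1, -1,  0,  1],
}

# Sanity checks: |S_4| = 24 and each character is irreducible.
assert sum(class_sizes) == 24
for chi in chars.values():
    assert sum(s * c * c for s, c in zip(class_sizes, chi)) == 24

# Multiplicity of chi in pi = 1_H^G: the average of chi over H = C_3.
def multiplicity(chi):
    return Fraction(chi[0] + 2 * chi[3], 3)

mults = {name: multiplicity(chi) for name, chi in chars.items()}
assert mults == {"trivial": 1, "sign": 1, "2-dim": 0,
                 "standard": 1, "3-dim'": 1}

# Dimension check: the multiplicities account for all f = 8 faces.
assert sum(int(m) * chi[0] for m, chi in
           zip(mults.values(), chars.values())) == 8
```

Thus $\chi=\pi-\chi_1$ decomposes into non-principal summands of dimensions $1,3,3$, and the seven nonzero quotient modules have dimensions $1,3,3,4,4,6,7$, matching the octahedron entry of Theorem~1.1.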
\subsection{Finding submodules} Some submodules $L$ of $H_1(S;\mathbb{F}_p)$ arise naturally from the action of $G$ on $\Phi$, whether or not $p$ divides $|G|$. Let $P$ denote the permutation module $\mathbb{F}_p\Phi$, a $G$-module over $\mathbb{F}_p$. Given any subset $\Psi\subseteq\Phi$, let $\underline\Psi$ denote the element $\sum_{\phi\in\Psi}\phi$ of $P$. The $G$-module $H_1(S;\mathbb{F}_p)$ can then be identified with the quotient $Q:=P/P_1$ of $P$ by the $1$-dimensional $G$-submodule $P_1$ spanned by $\underline\Phi$. There is a regular covering $\mathcal{M}_0\in E_p(\mathcal{M})$, with $\dim({\mathcal{M}_0})=f-1$, corresponding to the submodule $L=0$ of $Q=H_1(S;\mathbb{F}_p)$, i.e.~to the submodule $P_1$ of $P$; all other regular coverings $\cal N$ of $\cal M$ in $E_p(\mathcal{M})$ are proper quotients of $\mathcal{M}_0$, so they satisfy $\dim({\cal N})\le f-2$. There is a $G$-submodule $P^1$ of codimension $1$ in $P$, consisting of the elements $\sum_{\phi\in\Phi}a_{\phi}\phi$ with coordinate-sum $\sum_{\phi\in\Phi}a_{\phi} =0$. If $p$ divides $f$ then $P_1\le P^1$, giving a $G$-submodule $L=Q^1=P^1/P_1$ of codimension $1$ in $Q$, and hence a regular covering $\mathcal{M}^1\in E_p(\mathcal{M})$ with $\dim(\mathcal{M}^1)=1$. This is therefore a cyclic covering of $\mathcal{M}$, and since $L/Q^1\cong P/P^1$ affords the principal representation of $G$ it is also a central covering, in the sense that $K$ is in the centre of $\tilde G$. The cyclic coverings of the Platonic hypermaps were classified by the author and Surowski in~\cite{JSu} (see also~\cite{SJ}), where Proposition~3 implies that $\tilde G$ is a split extension of $G$ by $K$, giving $\tilde G=G\times K\cong G\times C_p$, if and only if $p$ divides $2m$ but not $n$. On the other hand, if $p$ does not divide $f$ then $P=P_1\oplus P^1$ so that $Q\cong P^1$ and we obtain the identity covering $\mathcal{M}\to\mathcal{M}$. 
If $G$ acts imprimitively on $\Phi$, preserving a non-trivial equivalence relation $\sim$ (equivalently, if there is a subgroup $H_{\sim}$ of $G$ such that $H<H_{\sim}<G$), then as $\Psi$ ranges over the $k=|G:H_{\sim}|$ equivalence classes, the elements $\underline{\Psi}$ form a basis for a $k$-dimensional $G$-submodule $P_{\sim}$ of $P$, containing $P_1$ (and contained in $P^1$ if and only if $p$ divides the size $|H_{\sim}:H|=f/k$ of each class). The $G$-submodule $L=Q_{\sim}=P_{\sim}/P_1$ of $Q$ therefore corresponds to a regular covering $\mathcal{M}_{\sim}\in E_p(\mathcal{M})$ with $\dim(\mathcal{M}_{\sim})=f-k$. The normal subgroup $K$ of $\tilde G$ affords a representation of $G$ with character $\pi-\pi_{\sim}$, where $\pi_{\sim}$ is the permutation character of $G$ on the equivalence classes of $\sim$, i.e.~on the cosets of $H_{\sim}$ in $G$. If $\approx$ is a $G$-invariant equivalence relation which refines $\sim$ (i.e.~$\phi_1\approx\phi_2$ implies $\phi_1\sim\phi_2$), then the inclusion $Q_{\sim}\le Q_{\approx}$ induces a covering $\mathcal{M}_{\sim}\to\mathcal{M}_{\approx}$. For instance, suppose that $\cal M$ has an antipodal symmetry, so that $f$ is even. Then $P$ has a $G$-submodule $P_a\ge P_1$ of dimension $k=f/2$ with basis elements $\underline\Psi$ corresponding to the antipodal pairs $\Psi\subseteq\Phi$. This gives a $G$-submodule $L=Q_a=P_a/P_1$ of codimension $f-k=f/2$ in $Q$, so we obtain a regular covering $\mathcal{M}_a\in E_p(\mathcal{M})$ with $\dim(\mathcal{M}_a)=f/2$. If $p>2$ there is a second $G$-submodule $P_{a'}$ of dimension $f/2$ in $P$, with a basis element $\phi-\phi'$ for each antipodal pair $\{\phi, \phi'\}\subseteq\Phi$ (so $P_{a'}\le P^1$). The $G$-submodule $L=Q_{a'}=(P_{a'}\oplus P_1)/P_1$ of codimension $(f-2)/2$ in $Q$ corresponds to a regular covering $\mathcal{M}_{a'}\in E_p(\mathcal{M})$ with $\dim(\mathcal{M}_{a'})=(f-2)/2$. A simple calculation shows that $P=P_a\oplus P_{a'}$, so $Q=Q_a\oplus Q_{a'}$. 
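For $p>2$, the splitting $P=P_a\oplus P_{a'}$ is simply the decomposition of a vector into its symmetric and antisymmetric parts under the antipodal involution, which requires inverting $2$ in $\mathbb{F}_p$. A quick check (a sketch; we label the cube's six faces $0,\dots,5$ with antipodal pairs $\{i,i+3\}$ purely for illustration):

```python
import random

p = 7   # any prime p > 2 (and, for our purposes, coprime to |G|)
f = 6   # faces of the cube, paired antipodally as {i, i + 3}

def antipode(i):
    return (i + 3) % f

inv2 = pow(2, -1, p)   # 1/2 in F_p; exists precisely because p is odd

def split(v):
    """Write v in F_p^f as v_a + v_b with v_a in P_a and v_b in P_a'."""
    va = tuple(inv2 * (v[i] + v[antipode(i)]) % p for i in range(f))
    vb = tuple(inv2 * (v[i] - v[antipode(i)]) % p for i in range(f))
    return va, vb

random.seed(0)
for _ in range(100):
    v = tuple(random.randrange(p) for _ in range(f))
    va, vb = split(v)
    # va is constant on antipodal pairs, vb changes sign across them:
    assert all(va[i] == va[antipode(i)] for i in range(f))
    assert all((vb[i] + vb[antipode(i)]) % p == 0 for i in range(f))
    # and v = va + vb, so P_a + P_a' = P, each summand of dimension f/2;
    # a vector in both satisfies 2w = 0, hence w = 0, so the sum is direct.
    assert all((va[i] + vb[i]) % p == v[i] for i in range(f))
```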
Since $P_a$ affords the permutation character $\pi_a$ of $G$ on the set of antipodal pairs of faces, it follows that $Q_a$ and hence $Q/Q_{a'}$ afford $\pi_a-\chi_1$, while $Q_{a'}$ and $Q/Q_a$ afford $\chi-\pi_a$. If $p=2$ one can identify $P$ with the power set ${\cal P}(\Phi)$ of $\Phi$, a group under symmetric difference, with the natural induced action of $G$, so that $P^1=\{\Psi\subseteq\Phi\mid|\Psi|\;\hbox{is even}\}$ and $P_1=\{\emptyset, \Phi\}$. In this case, $Q$ can be identified with the set of complementary pairs $\{\Psi, \Phi\setminus\Psi\}$ of subsets of $\Phi$. For each $\mathcal{M}$, the group $G$ has a natural $3$-dimensional complex representation $\rho_n$, obtained by extending its natural real representation, as the rotation group of $\mathcal{M}$, by linearity to $\mathbb{C}$. This representation is irreducible, except when $\mathcal{M}$ is the dihedron or hosohedron, in which case it splits as a direct sum of $1$- and $2$-dimensional representations. When $\mathcal{M}$ is one of the five Platonic solids, by sending each basis element $\underline\phi\;(\phi\in\Phi)$ of $\mathbb{C}\Phi$ to the position-vector in $\mathbb{R}^3\subset\mathbb{C}^3$ of the face-centre of $\phi$, we see that $\rho_n$ is a quotient, and hence a direct summand, of the representation of $G$ on $\mathbb{C}\Phi$. Frobenius reciprocity implies that $\rho_n$ has multiplicity $1$ in this representation, and in the cases of the dodecahedron and icosahedron, the same applies to the Galois conjugate representation $\rho_{n*}$. 
By realising $\rho_n$ over the ring of integers of an appropriate algebraic number field (namely $\mathbb{Q}$ for the tetrahedron, cube or octahedron, $\mathbb{Q}(\sqrt 5)$ for the dodecahedron or icosahedron, and a cyclotomic field $\mathbb{Q}(\zeta_n)$ for the dihedron or hosohedron, where $\zeta_n=e^{2\pi i/n}$), and then taking quotients modulo some prime ideal containing $p$, we find that the mod~$(p)$ reduction of $\rho_n$ or of $\rho_n\oplus\rho_{n*}$ is a summand of the representation of $G$ on $P$, and hence on $Q$. \subsection{Diagonal submodules} We need to deal with those cases when $Q$ has summands of multiplicity greater than one. If $A_1$ and $A_2$ are isomorphic $FG$-modules for some field $F$ and group $G$, then a {\sl diagonal submodule\/} of $A_1\oplus A_2$ is a submodule of the form $\{(a,a')\mid a\in A_1\}$ for some isomorphism $A_1\to A_2, a\mapsto a'$. \begin{lemma} Let $U$ be a submodule of a module $V=V_1\oplus V_2$, let $U_i=U\cap V_i$, and let $\overline V_i=V_i/U_i$ for $i=1, 2$. Then $U$ is the inverse image in $V$ of a diagonal submodule of $\overline W_1\oplus\overline W_2$, where $\overline W_i=W_i/U_i$, for some submodules $W_i$ such that $U_i\le W_i\le V_i$ for $i=1, 2$. \end{lemma} \noindent{\sl Proof.} The image of $U$ in $\overline V_1\oplus\overline V_2$ intersects each summand $\overline V_i$ trivially, so it is isomorphic to its projection $\overline W_i$ in each $\overline V_i$, and is therefore a diagonal submodule of $\overline W_1\oplus\overline W_2$. \hfill$\square$ \medskip Thus, provided the submodule structure of each $V_i$ is known, so is that of $V$. \begin{cor} If $V_1$ and $V_2$ are isomorphic irreducible $FG$-modules, then the only proper submodules of $V_1\oplus V_2$ are $0$, $V_1$, $V_2$ and the diagonal submodules; given any isomorphism $V_1\to V_2, v\mapsto v'$, each of the latter has the form $\{(v,\lambda v')\mid v\in V_1\}$ for some $\lambda\in F^*$. \end{cor} \noindent{\sl Proof.} Let $U$ be a submodule of $V=V_1\oplus V_2$. 
In the notation of Lemma~1.1, for each $i=1, 2$ the irreducibility of $V_i$ implies that $U_i=0$ or $V_i$. If $U_i=V_i$ for some $i$ then $U\ge V_i$, so the irreducibility of $V/V_i$ implies that $U=V_i$ or $V$. Otherwise, $U_1=U_2=0$, so in Lemma~1.1 we have either $W_i=0$ for some $i$, giving $U=0$, or $W_i=V_i$ for each $i$, in which case $U$ is a diagonal submodule of $V$. In the latter case, Schur's Lemma implies that the isomorphisms $V_1\to V_2$ are the functions $v\mapsto\lambda v'$ for $\lambda\in F^*$, so $U$ is as claimed.\hfill$\square$ \medskip This shows that if $V_1=V_2$ is irreducible and $F=\mathbb{F}_q$, then $V=V_1\oplus V_2=V_1\oplus V_1$ has $q+1$ non-trivial proper submodules, all isomorphic to $V_1$. We can write them as \[V(\lambda)=\{(\alpha v, \beta v)\mid v\in V_1,\, \alpha/\beta=\lambda\}\] for $\lambda\in{\mathbb P}^1(\mathbb{F}_q)=\mathbb{F}_q\cup\{\infty\}$; here $V(\infty)$ and $V(0)$ are the first and second direct summands, with $\beta=0$ and $\alpha=0$ respectively. \subsection{Regularity and chirality} Each of the coverings $\cal N$ constructed here is orientably regular, since it corresponds to a normal subgroup $N$ of $\Delta$. It is regular (i.e.~it also admits an orientation-reversing automorphism) if and only if $N$ is normal in the extended triangle group $\Gamma=\Delta[m, 2, \infty]$, which contains $\Delta$ as a subgroup of index $2$. This is equivalent to the corresponding submodule $L$ of $H_1(S;\mathbb{Z}_l)$ being invariant, not just under the orientation-preserving automorphism group $G={\rm Aut}^+\mathcal{M}\cong \Delta/N$ of $\mathcal{M}$, but under its full automorphism group $A={\rm Aut}\,\mathcal{M}\cong\Gamma/N$. In the case $l=p$, it is clear from their construction that the coverings $\mathcal{M}_0$, $\mathcal{M}^1$, $\mathcal{M}_a$ and $\mathcal{M}_{a'}$ (when they exist) are all regular. 
One can find the structure of $H_1(S;\mathbb{F}_p)$ as an $A$-module by treating it as the quotient $Q=P/P_1$ of $P$, where $P$ is now regarded as the permutation module for $A$ on $\Phi$. In particular, the permutation character $\pi|_A$ of $A$ on $P$ can be found as for $\pi|_G$, taking $H$ to be the stabiliser $A_{\phi}\cong D_n$ of a face $\phi\in\Phi$ in $A$ instead of its stabiliser $G_{\phi}\cong C_n$ in $G$. If $\mathcal{M}$ is not the icosahedron, and if $p$ is a prime not dividing $|G|$, then all the coverings ${\cal N}\in E_p(\mathcal{M})$ are regular. To prove this, it is sufficient to show that the $G$-submodules of $H_1(S;\mathbb{C})$ are all $A$-invariant, i.e.~that the $A$- and $G$-module direct sum decompositions of $H_1(S;\mathbb{C})$ are identical. If this is not the case, then at least one irreducible summand of this $A$-module must split into a direct sum of two irreducible summands of the $G$-module, so that the permutation character $\pi$ on $\Phi$ satisfies $(\pi|_A,\pi|_A)_A<(\pi|_G,\pi|_G)_G$. But the two sides of this inequality are equal to the ranks of $A$ and $G$ as permutation groups on $\Phi$, i.e.~the number of orbits of $A$ and $G$ on $\Phi^2$, or equivalently the number of orbits of their face-stabilisers $A_{\phi}$ and $G_{\phi}$ on $\Phi$. Now $A_{\phi}\cong D_n$ and $G_{\phi}\cong C_n$, and for all $\mathcal{M}$ except the icosahedron their orbits on $\Phi$ are either trivial (fixing $\phi$ and its antipodal face, if it exists), or natural orbits of length $n$. Thus $A_{\phi}$ and $G_{\phi}$ have the same number of orbits on $\Phi$, so $A$ and $G$ have the same rank, and the assertion is proved. This argument fails for the icosahedron, with $A$ and $G$ having ranks $6$ and $8$ on $\Phi$, and $A_{\phi}$ having two regular orbits, of length $2n=6$; later we will give explicit examples of chiral maps in $E_p(\mathcal{M})$ in this case. 
\section{The tetrahedron}\label{Tet} Let $\mathcal{M}$ be the tetrahedral map $\{3, 3\}$, with $f=4$ faces and $G={\rm Aut}^+\mathcal{M}\cong A_4$. The group $A_4$ has four conjugacy classes, consisting of the identity, the three double transpositions, and two mutually inverse classes of four $3$-cycles. Its character table is as follows; here $\chi_1$ is the principal character, $\chi_2$ and $\chi_3$ are the faithful characters of $A_4/V_4\cong C_3$ where $V_4$ is the normal Klein four-group, and $\chi_4$ is the character of the natural representation $\rho_n$ as the rotation group of the tetrahedron, extended from $\mathbb{R}^3$ to $\mathbb{C}^3$. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|} \hline &$1$&$(..)(..)$&$(...)^+$&$(...)^-$\\ \hline $\chi_1$&1&1&1&1\\ $\chi_2$&1&1&$\omega$&$\overline\omega$\\ $\chi_3$&1&1&$\overline\omega$&$\omega$\\ $\chi_4$&3&-1&0&0\\ \hline \end{tabular} \caption{The character table of $A_4$, with $\omega=\exp(2\pi i/3)$.}\label{chartA4} \end{table} Since $G$ acts doubly transitively on $\Phi$, in its natural permutation representation as $A_4$, it follows that $\pi=\chi_1+\chi_4$, so $H_1(S;\mathbb{C})$ is an irreducible $G$-module affording the character $\chi_4$. The primes dividing $|G|$ are $2$ and $3$, so for primes $p>3$ we can regard Table~\ref{chartA4} as the character table of $A_4$ over $\overline\mathbb{F}_p$ by suitable reduction mod~$(p)$, with $\omega^2+\omega+1=0$. In particular, the $G$-module $Q=H_1(S;\mathbb{F}_p)$ is irreducible, affording the character $\chi_4$ of $G$. It follows that in this case $E_p(\mathcal{M})$ consists of a single regular map $\mathcal{M}_0$, with $\dim(\mathcal{M}_0)=3$. (If $p=3$ then $Q$ is again irreducible, and the result is the same.) The map $\mathcal{M}_0$ has type $\{3p, 3\}$ and genus $p^3-2p^2+1$. 
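These genus values can be checked with the Riemann--Hurwitz formula. The underlying surface of a covering ${\cal N}\in E_p(\mathcal{M})$ with $c=\dim({\cal N})$ covers the sphere with degree $p^c$, branched over the $f$ face-centres; since the covering group is elementary abelian of exponent $p$, each branch point has $p^{c-1}$ preimages, each of ramification index $p$. Hence
\[2g-2=-2p^c+f\,p^{c-1}(p-1), \qquad\hbox{that is,}\qquad g=1-p^c+\frac{f}{2}\,p^{c-1}(p-1).\]
For the tetrahedron, taking $f=4$ and $c=3$ gives $g=1-p^3+2p^2(p-1)=p^3-2p^2+1$, as stated.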
Its orientation-preserving automorphism group ${\rm Aut}^+\mathcal{M}_0$ has an elementary abelian normal subgroup $K\cong (C_p)^3$; the quotient group, isomorphic to $G$, has an induced action by conjugation on $K$, so that $K$ is a $G$-module over $\mathbb{F}_p$, isomorphic to $Q$. The full automorphism group ${\rm Aut}\,\mathcal{M}_0$ is an extension of $K$ by $A={\rm Aut}\,\mathcal{M}\cong S_4$, with $K\cong Q=P/P_1$ where we now regard $P$ as the natural permutation module for $S_4$; the character of $S_4$ on $K$ is that denoted by $\chi_4$ in Table~2 (see \S 4). When $p=3$ or $5$ the map $\mathcal{M}_0$ has genus $10$ or $76$, and appears as the dual of the map R10.1 or R76.2 in Conder's computer-generated list of regular orientable maps~\cite{C}. \section{The cube}\label{Cub} Next we consider coverings of the cube ${\cal M}=\{4, 3\}$ branched over its face-centres (or, dually, of the octahedron $\{3, 4\}$, branched over its vertices). In this case $G=\Delta(3, 2, 4)\cong S_4$, acting on the set $\Phi$ of six faces of $\cal M$, so the stabilisers of faces are the three subgroups $H\cong C_4$ of $G$. In $S_4$ there are five conjugacy classes: the identity, six transpositions, three double transpositions, eight $3$-cycles, and six $4$-cycles. Hence there are five irreducible characters. In addition to the principal character $\chi_1$ and the alternating character $\chi_2(g)={\rm sgn}(g)$, $\chi_3$ and $\chi_4$ are the non-principal irreducible characters obtained from the doubly transitive representations of $S_4$ of degrees $3$ (via the epimorphism $S_4\to S_3$) and $4$ (the natural action), while $\chi_5$ is the character of the natural representation $\rho_n$ of $G$ as the rotation group of the cube or octahedron. 
\begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline &$1$&$(..)$&$(..)(..)$&$(...)$&$(....)$\\ \hline $\chi_1$&1&1&1&1&1\\ $\chi_2$&1&-1&1&1&-1\\ $\chi_3$&2&0&2&-1&0\\ $\chi_4$&3&1&-1&0&-1\\ $\chi_5$&3&-1&-1&0&1\\ \hline \end{tabular} \caption{The character table of $S_4$.}\label{chartS4} \end{table} The subgroup $H$ consists of the identity element, one double transposition and two $4$-cycles. By averaging their values over $H$, one therefore sees that the characters $\chi_i$ have multiplicities $1, 0, 1, 0$ and $1$ in the permutation character $\pi$ on $P$, that is, \[\pi=\chi_1+\chi_3+\chi_5.\] It follows that the representation of $G$ on $H_1(S;{\mathbb C})$ has character \[\chi=\chi_3+\chi_5,\] so $H_1(S;{\mathbb C})$ is a direct sum of irreducible submodules of dimensions $2$ and $3$. The primes dividing $|G|$ are $2$ and $3$, so we can use this decomposition to find the decomposition of $H_1(S;{\mathbb F}_p)$ for any prime $p>3$. In this case $H_1(S;{\mathbb F}_p)$ is also a direct sum of irreducible submodules of dimensions $2$ and $3$, so $E_p(\mathcal{M})$ consists of three covering maps $\cal N$, with $\dim({\cal N})=2$, $3$ and $5$. Since the cube has an antipodal symmetry, with the $f=6$ faces permuted in three antipodal pairs, the first two coverings $\cal N$ are the maps $\mathcal{M}_{a'}$ and $\mathcal{M}_a$ corresponding to the summands $L=Q_{a'}$ and $Q_a$ in the $G$-module decomposition $Q=Q_a\oplus Q_{a'}$, while the third is $\mathcal{M}_0$, corresponding to $L=0$. These are regular orientable maps of type $\{4p, 3\}$ and genus $2p^c-3p^{c-1}+1$ where $c=\dim({\cal N})=2, 3$ or $5$, affording the character $\chi_3$, $\chi_5$ or $\chi_3+\chi_5$ respectively; for instance, if $p=5$ or $7$ then the maps $\mathcal{M}_{a'}$ are the duals of the regular maps R36.3 and R78.1 in~\cite{C}. 
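The averaging used here is an instance of Frobenius reciprocity: since $\pi$ is induced from the principal character of $H$, the multiplicity of $\chi_i$ in $\pi$ is
\[(\pi,\chi_i)_G=(\chi_i|_H,1)_H=\frac{1}{|H|}\sum_{h\in H}\chi_i(h).\]
For the cube, with $H\cong C_4$ consisting of the identity, one double transposition and two $4$-cycles, Table~\ref{chartS4} gives
\[(\pi,\chi_3)_G=\tfrac{1}{4}(2+2+0+0)=1,\quad (\pi,\chi_4)_G=\tfrac{1}{4}(3-1-1-1)=0,\quad (\pi,\chi_5)_G=\tfrac{1}{4}(3-1+1+1)=1,\]
confirming the multiplicities above.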
The inclusions $0<Q_{a'}$ and $0<Q_a$ of $G$-submodules induce regular coverings $\mathcal{M}_0\to\mathcal{M}_{a'}$ and $\mathcal{M}_0\to\mathcal{M}_a$, with covering groups $Q_{a'}$ and $Q_a$. \section{The octahedron}\label{Oct} We now consider coverings of the octahedron ${\cal M}=\{3, 4\}$ branched over its face-centres (or, dually, of the cube $\{4, 3\}$, branched over its vertices). As in the case of the cube we have $G\cong S_4$, but now acting on the set $\Phi$ of eight faces of $\cal M$, so the stabilisers of faces are the four Sylow $3$-subgroups $H\cong C_3$ of $G$. The conjugacy classes and character table of $G$ are as before, but now $H$ consists of the identity and two $3$-cycles. We find that \[\pi=\chi_1+\chi_2+\chi_4+\chi_5,\] so \[\chi=\chi_2+\chi_4+\chi_5.\] Thus $H_1(S;{\mathbb C})$ is a direct sum of irreducible $G$-submodules of dimensions $1, 3$ and $3$. For primes $p>3$ we therefore obtain seven coverings ${\cal N}\in E_p(\mathcal{M})$ with $c=\dim({\cal N})=1, 3, 3, 4, 4, 6$ and $7$, corresponding to the seven proper $G$-submodules $L$ of $Q$. These are regular maps of type $\{3p, 4\}$ and genus $g=3p^c-4p^{c-1}+1$. When $c=1$ we have a regular map $\cal N$ of genus $3(p-1)$, affording the character $\chi_2$ of $G$. The underlying Riemann surfaces of these maps are examples of the well-known Accola-Maclachlan surfaces~\cite{Acc, Macl} with $8(g+3)$ automorphisms; for instance, taking $p=5, 7, 11, 13, 17, 19, 23, 29, 31$ we obtain the duals of the regular maps R12.1, R18.1, R30.1, R36.5, R48.1, R54.1, R66.3, R84.1, R90.1 in~\cite{C}. In each of the cases $c=3$ or $c=4$ we obtain a pair of $p^c$-sheeted coverings $\cal N$ of $\cal M$; not only are they non-isomorphic, but their automorphism groups are non-isomorphic since the corresponding pairs of normal subgroups $K$ of $\tilde G$ are non-isomorphic as $G$-modules, affording characters $\chi_4$ and $\chi_5$ when $c=3$, or $\chi_2+\chi_4$ and $\chi_2+\chi_5$ when $c=4$. 
The remaining coverings, with $c=6$ and $7$, afford the characters $\chi_4+\chi_5$ and $\chi_2+\chi_4+\chi_5$. These $G$-submodules $L\le Q$, and the corresponding maps ${\cal N}\in E_p({\cal M})$, all have combinatorial interpretations. The octahedron has an antipodal symmetry, so for each prime $p$ there is a $G$-submodule $Q_a$ affording the character $\pi_a-\chi_1$ ($=\chi_4$ if $p>3$), giving a regular map ${\cal N}=\mathcal{M}_a\in E_p(\mathcal{M})$ with $\dim({\cal N})=4$ and $K\cong Q/Q_a$ affording $\chi-\pi_a+\chi_1$ ($=\chi_2+\chi_5$ if $p>3$). If $p>2$ there is a $G$-invariant complement $Q_{a'}$ for $Q_a$, giving a regular map ${\cal N}=\mathcal{M}_{a'}\in E_p(\mathcal{M})$ with $\dim({\cal N})=3$ and $K\cong Q/Q_{a'}$ affording $\pi_a-\chi_1$ ($=\chi_4$ if $p>3$). Since the octahedron is $2$-face-colourable (i.e.~its dual, the cube, embeds a bipartite graph), there is a $2$-dimensional $G$-submodule $P_b>P_1$ spanned by the sums of the two monochrome subsets $\Psi\subset\Phi$, and a $5$-dimensional $G$-submodule $P_a+P_b<P^1$; these lead, via $G$-submodules $Q_b=P_b/P_1$ and $Q_a\oplus Q_b$ of $Q$, to coverings $\mathcal{M}_b$ and $\mathcal{M}_{a,b}$ in $E_p(\mathcal{M})$ of dimensions $c=6$ and $3$ respectively. If $p>3$ then since $P_b$ affords the character $\chi_1+\chi_2$ of $G$, these two coverings afford $\chi_4+\chi_5$ and $\chi_5$ respectively. There is also a $7$-dimensional $G$-submodule $P_{b'}<P$ consisting of the elements $\sum_{\phi\in\Phi}a_{\phi}\phi$ such that $\sum_{\phi\in\Psi}a_{\phi}=\sum_{\phi\in\Psi'}a_{\phi}$ where $\Psi$ and $\Psi'$ are the monochrome subsets; it contains $P_1$, so its image $Q_{b'}$ in $Q$ is a $6$-dimensional $G$-submodule leading to a cyclic covering $\mathcal{M}_{b'}$ affording $\chi_2$. Finally, $Q_{a'}\cap Q_{b'}$ is a $3$-dimensional $G$-submodule of $Q$ (the reduction mod~$(p)$ of the natural representation $\rho_n$ of $G$) giving a $4$-dimensional covering $\mathcal{M}_{a',b'}$ affording $\chi_2+\chi_4$. 
Together with $\mathcal{M}_0$ this accounts for all seven maps in $E_p(\mathcal{M})$ when $p>3$. The direct sum decomposition $Q=Q_b\oplus Q_a\oplus (Q_{a'}\cap Q_{b'})$, with direct factors affording the irreducible representations $\rho_2$, $\rho_4$ and $\rho_5$ corresponding to the characters $\chi_2$, $\chi_4$ and $\chi_5$, is also valid when $p=3$. (The character $\chi_3$, which reduces mod~$(3)$ to $\chi_1+\chi_2$, is not a summand of $\chi$; the remaining four characters in Table~\ref{chartS4} correspond to irreducible characters over $\mathbb{F}_3$.) Thus $E_3(\mathcal{M})$ consists of the seven regular maps described above. \section{The dodecahedron}\label{Dod} We now consider coverings of the dodecahedron ${\cal M}=\{5, 3\}$ branched over its face-centres (or, dually, of the icosahedron $\{3, 5\}$, branched over its vertices). In this case $G=\Delta(3, 2, 5)\cong A_5$, acting on the set $\Phi$ of twelve faces of $\cal M$, so the stabilisers of faces are the Sylow $5$-subgroups $H\cong C_5$ of $G$. The maps ${\cal N}\in E_p(\mathcal{M})$ have type $\{5p, 3\}$ and genus $5p^c-6p^{c-1}+1$ where $c=\dim({\cal N})$. In $A_5$ there are five conjugacy classes: the identity, fifteen double transpositions, twenty $3$-cycles, and two classes of twelve $5$-cycles. Hence there are five irreducible characters. The character table is as shown in Table~\ref{chartA5}, with $\lambda,\mu=(1\pm\sqrt{5})/2$. 
\begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline &$1$&$(..)(..)$&$(...)$&$(.....)^+$&$(.....)^-$\\ \hline $\chi_1$&1&1&1&1&1\\ $\chi_2$&3&-1&0&$\lambda$&$\mu$\\ $\chi_3$&3&-1&0&$\mu$&$\lambda$\\ $\chi_4$&4&0&1&-1&-1\\ $\chi_5$&5&1&-1&0&0\\ \hline \end{tabular} \caption{The character table of $A_5$.}\label{chartA5} \end{table} In addition to the principal character $\chi_1$, there are algebraically conjugate characters $\chi_2$ and $\chi_3$ obtained from the natural representation $\rho_n$ of $G$ as a rotation group, while the irreducible characters $\chi_4$ and $\chi_5$ are the non-principal summands of the permutation characters corresponding to the doubly transitive natural permutation representations of $G$ as $A_5$ and as $PSL_2(5)$. The subgroup $H$ consists of the identity element, and two elements each from the two conjugacy classes of $5$-cycles. By averaging their values over $H$, one sees that the characters $\chi_i$ have multiplicities $1, 1, 1, 0$ and $1$ in the permutation character $\pi$ on $P$, that is, \[\pi=\chi_1+\chi_2+\chi_3+\chi_5,\] so \[\chi=\chi_2+\chi_3+\chi_5,\] and hence $H_1(S;{\mathbb C})$ is a direct sum of irreducible submodules of dimensions $3, 3$ and $5$. The primes dividing $|G|$ are $2, 3$ and $5$, so we can use this decomposition to find the decomposition of $H_1(S;{\mathbb F}_p)$ for any prime $p>5$. If $p\equiv\pm 1$ mod~$(5)$, so that $5$ has a square root in ${\mathbb F}_p$, then $H_1(S;{\mathbb F}_p)$ is also a direct sum of irreducible submodules of dimensions $3, 3$ and $5$; in this case we obtain seven coverings ${\cal N}\in E_p(\mathcal{M})$, of dimensions $c=3, 3, 5, 6, 8, 8$ and $11$. If $p\equiv\pm 2$ mod~$(5)$, on the other hand, $5$ has no square root in ${\mathbb F}_p$; in this case $H_1(S;{\mathbb F}_p)$ is a direct sum of irreducible submodules of dimensions $5$ and $6$, and there are three coverings ${\cal N}\in E_p(\mathcal{M})$, with $c=5, 6$ and $11$. 
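As a check on these multiplicities, averaging over $H\cong C_5$ (the identity, together with two $5$-cycles from each of the two classes) and using $\lambda+\mu=1$ gives
\[(\pi,\chi_2)_G=\tfrac{1}{5}(3+2\lambda+2\mu)=1,\qquad (\pi,\chi_4)_G=\tfrac{1}{5}(4-2-2)=0,\qquad (\pi,\chi_5)_G=\tfrac{1}{5}(5+0+0)=1,\]
in agreement with the multiplicities $1, 1, 1, 0$ and $1$ found above.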
In either case, the coverings with $c=5$ and $c=6$ are $\mathcal{M}_{a'}$ and $\mathcal{M}_a$, arising from the antipodal symmetry of the dodecahedron, and that with $c=11$ is $\mathcal{M}_0$; these coverings afford the characters $\chi_5$, $\chi_2+\chi_3$ and $\chi$ respectively. When $p\equiv\pm 1$ mod~$(5)$ there are two $3$-dimensional submodules $Q_n$ and $Q_{n*}$, obtained from the natural representation $\rho_n$ and its algebraic conjugate $\rho_{n*}$ of $G$; these lead to maps $\mathcal{M}_n$ and $\mathcal{M}_{n*}$ in $E_p(\mathcal{M})$ with $c=8$ and $K$ affording $\chi_3+\chi_5$ and $\chi_2+\chi_5$ respectively. Finally, in this case there are also two $8$-dimensional submodules $Q_a\oplus Q_n$ and $Q_a\oplus Q_{n*}$, leading to $3$-dimensional coverings $\mathcal{M}_{a,n}$ and $\mathcal{M}_{a,n*}$ affording $\chi_3$ and $\chi_2$. The direct sum decomposition $Q=Q_a\oplus Q_{a'}$, with direct factors affording the characters $\chi_5$ and $\chi_2+\chi_3$, is also valid when $p=3$ or $5$. Over $\mathbb{F}_3$, $\chi_5$ splits as $\chi_1+\chi_4$, while $\chi_2+\chi_3$ is irreducible (but splits into two algebraically conjugate characters over $\mathbb{F}_9$). \section{The icosahedron}\label{Ico} We now consider coverings of the icosahedron ${\cal M}=\{3, 5\}$ branched over its face-centres (or, dually, of the dodecahedron $\{5, 3\}$, branched over its vertices). The maps ${\cal N}\in E_p(\mathcal{M})$ have type $\{3p, 5\}$ and genus $9p^c-10p^{c-1}+1$, where $c=\dim({\cal N})$. As in the case of the dodecahedron we have $G\cong A_5$, but now acting on the set $\Phi$ of twenty faces of $\cal M$, so the stabilisers of faces are the ten Sylow $3$-subgroups $H\cong C_3$ of $G$. The conjugacy classes and character table of $G$ are as before, but now $H$ consists of the identity and two $3$-cycles. 
In this case we find that \[\pi=\chi_1+\chi_2+\chi_3+2\chi_4+\chi_5,\] so \[\chi=\chi_2+\chi_3+2\chi_4+\chi_5.\] Thus $H_1(S;{\mathbb C})$ is a direct sum of irreducible submodules of dimensions $3, 3, 4$ (with multiplicity $2$) and $5$. By contrast with the previous examples, the existence of a summand with multiplicity greater than $1$ implies that the direct sum decompositions of $\mathbb{C}\Phi$ and of $H_1(S;\mathbb{C})$ are not unique: in each case the $8$-dimensional submodule affording the character $2\chi_4$ contains infinitely many mutually isomorphic $4$-dimensional irreducible submodules affording $\chi_4$ (see \S 2.5). As in the case of the dodecahedron, we can use this to obtain the decomposition of $H_1(S;{\mathbb F}_p)$ for primes $p>5$, but first we need to consider this lack of uniqueness of $4$-dimensional submodules. There is a chiral pair of $G$-invariant equivalence relations $\sim_b$ and $\sim_c$ on $\Phi$, each with five classes of four faces, corresponding to the inclusion of $H$ in two subgroups of $G$ isomorphic to $A_4$. Their equivalence classes $\Psi$ provide basis elements $\underline\Psi$ for a pair of $5$-dimensional submodules $P_b, P_c>P_1$ of $P$, and hence a pair of $4$-dimensional submodules $Q_b$ and $Q_c$ of $Q$. There is an isomorphism $Q_b\to Q_c,\, q\mapsto q'$, induced by the antipodal automorphism $i$ of $\mathcal{M}$, and these submodules each afford the character $\chi_4$ which has multiplicity $2$ in the decomposition of $\chi$. They generate an $8$-dimensional submodule $Q_{b,c}=Q_b\oplus Q_c$ which contains $p-1$ other submodules isomorphic to them, each of the form $Q(\lambda)=\{q+\lambda q'\mid q\in Q_b\}$ for some $\lambda\in\mathbb{F}_p^*$. 
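The multiplicity $2$ of $\chi_4$ in $\pi$, which is responsible for this lack of uniqueness, can be seen directly by averaging over $H\cong C_3$:
\[(\pi,\chi_4)_G=\tfrac{1}{3}(4+1+1)=2,\]
while each of $\chi_1$, $\chi_2$, $\chi_3$ and $\chi_5$ has average $1$ (for instance $(\pi,\chi_5)_G=\tfrac{1}{3}(5-1-1)=1$).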
Now $i$ transposes $\sim_b$ and $\sim_c$, and hence also transposes $Q(0):=Q_b$ and $Q(\infty):=Q_c$, so it permutes these $p+1$ submodules $Q(\lambda)$ ($\lambda\in{\mathbb P}^1(\mathbb{F}_p)=\mathbb{F}_p\cup\{\infty\}$), leaving $Q(1)$ and $Q(-1)$ (if $p>2$) invariant and otherwise transposing pairs $Q(\lambda^{\pm 1})$. Since $A=G\times\langle i\rangle$, the $G$-submodules $Q(\pm 1)$ are actually $A$-submodules, affording the irreducible characters $\chi_4\otimes\varphi_{\pm 1}$ of $A$, where $\varphi_1$ and $\varphi_{-1}$ are the principal and non-principal characters of $\langle i\rangle\cong C_2$. If $p\equiv\pm 1$ mod~$(5)$ then $H_1(S;{\mathbb F}_p)$ is a direct sum of irreducible submodules of the same dimensions and multiplicities as those for $H_1(S;\mathbb{C})$. The two irreducible submodules $Q_n$ and $Q_{n*}$ of dimension $3$ are obtained from the natural representation of $G$, those of dimension $4$ are the submodules $Q(\lambda)<Q_{b,c}$ defined above, and there is also one irreducible submodule $Q_5$ of dimension $5$. There are eight submodules intersecting $Q_{b,c}$ trivially, namely the various sums of $Q_n$, $Q_{n*}$ and $Q_5$, with codimensions $8, 11, 11, 13, 14, 16, 16$ and $19$; by taking the direct sum of one of these with a submodule $Q(\lambda)$ or $Q_{b,c}$ we obtain $8(p+1)$ submodules with codimensions $4, 7, 7, 9, 10, 12, 12$ and $15$, or eight with codimensions $0, 3, 3, 5, 6, 8, 8$ and $11$. Excluding the submodule $Q$ from the last eight, this gives a total of $8p+23$ proper submodules, and hence the same number of coverings ${\cal N}\in E_p(\mathcal{M})$, with $\dim({\cal N})=3, 4, \ldots, 16$ and $19$. Of these maps, $8(p-1)$ are chiral, namely those corresponding to submodules containing a single submodule $Q(\lambda)$, with $\lambda\ne\pm 1$, while the remaining $8+16+7=31$ are regular. 
If $3<p\equiv\pm 2$ mod~$(5)$ then $H_1(S;{\mathbb F}_p)$ is the direct sum of four irreducible submodules: two are isomorphic $4$-dimensional submodules of the form $Q(\lambda)$, and the other two are $Q_5$, of dimension $5$, and a $6$-dimensional submodule $Q_{n,n*}$, the reduction mod~$(p)$ of $\rho_n\oplus\rho_{n*}$, which is irreducible since $5$ has no square root in $\mathbb{F}_p$. There are four submodules intersecting $Q_{b,c}$ trivially, namely $Q_{n,n*}\oplus Q_5$, $Q_{n,n*}$, $Q_5$ and $0$ with codimensions $8, 13, 14$ and $19$; by taking the direct sums of these with a submodule $Q(\lambda)$ or $Q_{b,c}$ we obtain $4(p+1)$ submodules with codimensions $4, 9, 10$ and $15$, or four with codimensions $0, 5, 6$ and $11$. Excluding $Q$ as before, we obtain $4p+11$ proper submodules, and the same number of coverings ${\cal N}\in E_p(\mathcal{M})$, with $\dim({\cal N})=4, 5, 6, 8, 9, 10, 11, 13, 14, 15$ and $19$. These include $4(p-1)$ chiral maps, corresponding to submodules containing a single $Q(\lambda)$, with $\lambda\ne\pm 1$, while the other $15$ are regular. For any $p$ the antipodal symmetry of the icosahedron gives a $9$-dimensional submodule $Q_a=Q(1)\oplus Q_5$ affording $\chi_4+\chi_5$, leading to a $10$-dimensional covering $\mathcal{M}_a$ affording $\chi_2+\chi_3+\chi_4$; if $p>2$ there is also a complementary $10$-dimensional submodule $Q_{a'}=Q_n\oplus Q_{n*}\oplus Q(-1)$ affording $\chi_2+\chi_3+\chi_4$ (with $Q_n\oplus Q_{n*}$ replaced by $Q_{n,n*}$ when $p\equiv\pm 2$ mod~$(5)$), leading to a $9$-dimensional covering $\mathcal{M}_{a'}$ affording $\chi_4+\chi_5$. The remaining summands $\chi_1, \chi_2, \chi_3$ and $\chi_5$ of $\pi|_G$ each extend to an irreducible character of $A$, so that $\pi|_A$ is a sum of six distinct irreducible characters, and thus $A$ has rank $6$ on $\Phi$. 
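In each case the total number of proper submodules, and hence of coverings, tallies as
\[8+8(p+1)+(8-1)=8p+23 \qquad\hbox{and}\qquad 4+4(p+1)+(4-1)=4p+11,\]
the three terms counting the submodules meeting $Q_{b,c}$ trivially, those containing exactly one $Q(\lambda)$, and those containing all of them (with $Q$ itself excluded from the last count).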
\section{The dihedron}\label{Dih} If $\mathcal{M}$ is the dihedron $\{n, 2\}$, then \[G=\Delta(2, 2, n)=\langle x, y, z\mid x^2=y^2=z^n=xyz=1\rangle.\] This can be identified with the dihedral group \[D_n=\langle a, b\mid a^n=b^2=1, a^b=a^{-1}\rangle,\] where $x=b$ and $y=ab$ are rotations through $\pi$, fixing a vertex and edge-centre respectively and transposing the two faces, and $z=a$ is a rotation through $2\pi/n$, preserving the faces, so that $H=\langle a\rangle\cong C_n$. Then $H_1(S;\mathbb{Z})$ is an infinite cyclic group, with $a$ acting on it as the identity and $b$ inverting every element. Thus \[\pi = \chi_1+\chi_2\] and hence \[\chi=\chi_2,\] where $\chi_1$ is the principal character of $G$ and $\chi_2(g)=(-1)^k$ for each $g=a^jb^k\in G$. It follows that for each integer $l\ge 1$ there is a single regular covering $\cal N$ of exponent $l$, with a cyclic covering group, corresponding to the unique subgroup of index $l$ in $H_1(S;\mathbb{Z})$. This map is the dihedron $\{ln, 2\}$, a regular map of genus $0$ with $G\cong D_{ln}$ and $A=G\times\langle c\rangle\cong D_{ln}\times C_2$, where $c$ is the reflection fixing all the vertices and edges of $\cal N$ and transposing its two faces. \section{The hosohedron}\label{Hos} If $\mathcal{M}$ is the hosohedron $\{2, n\}$, then for each prime $p$ the maps ${\cal N}\in E_p(\mathcal{M})$ are all regular, of type $\{2p,n\}$ and genus \[1+\frac{p^{c-1}}{2}\bigl(n(p-1)-2p\bigr),\] where $c=\dim({\cal N})$. As in the case of the dihedron, $G$ is isomorphic to $D_n$, but now acting naturally with degree $n$ on $\Phi$, so that the subgroup stabilising a face is now $H=\langle b\rangle\cong C_2$. This makes the analysis of the coverings much more complicated than before. \subsection {Conjugacy classes and characters of $D_n$} First suppose that $n$ is odd. 
Then apart from the conjugacy class $\{1\}$, there are $(n-1)/2$ conjugacy classes $\{a^{\pm j}\}$ for $j=1, 2, \ldots, (n-1)/2$, and a single conjugacy class $\{a^jb \mid j=0, 1, \ldots, n-1\}$ consisting of the involutions in $G$. In addition to the characters $\chi_1$ and $\chi_2$ of degree $1$ defined in the preceding section, $G$ has $(n-1)/2$ irreducible characters $\xi_k$ of degree $2$, for $k=1, 2, \ldots, (n-1)/2$, where $\xi_k(a^{\pm j})=\alpha_{jk}:=\zeta_n^{jk}+\zeta_n^{-jk}$ with $\zeta_n=\exp(2\pi i/n)$. The character table of $G$ is as in Table~\ref{chartDnodd}, where the column headed $a^{\pm j}$ and the row labelled $\xi_k$ represent $(n-1)/2$ conjugacy classes and characters respectively, as $j$ and $k$ each range over $\{1, 2, \ldots, (n-1)/2\}$. For each $k$, the kernel of the representation $\rho_k$ corresponding to $\xi_k$ is the unique subgroup $\langle a^m\rangle$ of order $\gcd(k, n)$ in $G_0:=\langle a\rangle$, where $m=n/\gcd(k, n)\ge 3$, so $\rho_k$ can be regarded as a faithful representation of $G/\langle a^m\rangle\cong D_m$. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline &$1$&$a^{\pm j}$&$a^jb$\\ \hline $\chi_1$&1&1&1\\ $\chi_2$&1&1&$-1$\\ $\xi_k$&2&$\alpha_{jk}$&0\\ \hline \end{tabular} \caption{The character table of $D_n$ for odd $n$.}\label{chartDnodd} \end{table} By computing the inner product of each irreducible character with $\pi$ we find that \[\pi=\chi_1+\sum_{k=1}^{(n-1)/2}\xi_k\] and hence \[\chi=\sum_{k=1}^{(n-1)/2}\xi_k.\] If $n$ is even the conjugacy classes of $G$ are $\{1\}$, $\{a^{n/2}\}$, $\{a^{\pm j}\}$ for $j=1, 2, \ldots, (n-2)/2$, $\{a^jb\mid j\;\hbox{is even}\}$ and $\{a^jb\mid j\;\hbox{is odd}\}$. In addition to $\chi_1$ and $\chi_2$, defined as above for $n$ odd, there are two more characters $\chi_3$ and $\chi_4$ of degree $1$, together with irreducible characters $\xi_k$ of degree $2$ for $k=1, 2, \ldots, (n-2)/2$. 
The character table is as in Table~\ref{chartDneven}, with the columns headed $a^{\pm j}$ and $a^jb$ representing $1+(n-2)/2=n/2$ and $2$ conjugacy classes respectively, and the row labelled $\xi_k$ representing $(n-2)/2$ characters. We can take $H=\langle b\rangle$, so that \[\pi=\chi_1+\chi_3+\sum_{k=1}^{(n-2)/2}\xi_k\] and hence \[\chi=\chi_3+\sum_{k=1}^{(n-2)/2}\xi_k.\] \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline &$1$&$a^{\pm j}$&$a^jb$\\ \hline $\chi_1$&1&1&1\\ $\chi_2$&1&1&$-1$\\ $\chi_3$&1&$(-1)^j$&$(-1)^j$\\ $\chi_4$&1&$(-1)^j$&$(-1)^{j+1}$\\ $\xi_k$&2&$\alpha_{jk}$&0\\ \hline \end{tabular} \caption{The character table of $D_n$ for even $n$.}\label{chartDneven} \end{table} \subsection{Submodule structure} Instead of considering the reduction mod~$(p)$ of the characters appearing in $\pi$, as in the preceding cases, it is more convenient to use the fact that the group $G=D_n$ has a subgroup $G_0=\langle a\rangle$ acting regularly on $\Phi$. This allows us to give a more explicit description of the $G$-submodules of $P$ and $Q$, and hence of the maps ${\cal N}\in E_p(\mathcal{M})$. We will assume that $p$ is coprime to $n$. In order to find the decompositions of $P=\mathbb{F}_p\Phi$, and hence of $H_1(S;\mathbb{F}_p)\cong Q=P/P_1$, as $G$-modules, we first consider them as $G_0$-modules. Since $G_0$ acts regularly on $\Phi$, one can identify $P$ with the group algebra $\mathbb{F}_pG_0$ of $G_0$ over $\mathbb{F}_p$, or equivalently with the polynomial algebra $\mathbb{F}_p[x]/(x^n-1)$, the elements $a^j\in G_0$ corresponding to the images of the powers $x^j$ of $x$. The $G_0$-submodules are then the ideals of this algebra, each generated by the image of the ideal in $\mathbb{F}_p[x]$ generated by some polynomial dividing $x^n-1$. To find these polynomials we need to determine the irreducible factors of $x^n-1$ in $\mathbb{F}_p[x]$. 
In $\mathbb{Z}[x]$ we have a factorisation \begin{equation} x^n-1=\prod_{m|n}\Phi_m(x), \end{equation} where $\Phi_m(x)$ is the $m$th cyclotomic polynomial, the minimal polynomial of the primitive $m$th roots of $1$ in $\mathbb{C}$. In fact, equation~(1), together with the equation $\Phi_1(x)=x-1$, can be regarded as a recursive definition of $\Phi_n(x)$ in terms of the polynomials $\Phi_m(x)$ for the proper divisors $m$ of $n$. Each cyclotomic polynomial is irreducible in $\mathbb{Z}[x]$ (in fact, in $\mathbb{Q}[x]$), but in almost all cases it factorises when reduced mod~$(p)$, that is, when regarded as a polynomial in $\mathbb{F}_p[x]$. We are assuming that $n$ is coprime to $p$, so the set $\Omega=\Omega_n$ of $n$th roots of $1$ in $\overline\mathbb{F}_p$ contains $n$ elements. This set is partitioned into disjoint subsets $\Pi_m$ ($m|n$) consisting of the $\varphi(m)$ primitive $m$th roots of $1$ in $\overline\mathbb{F}_p$. For each divisor $m$ of $n$, let $e=e_m$ denote the multiplicative order of $p$ as a unit mod~$(m)$, so that $m$ divides $p^e-1$ but not $p^f-1$ for any $f<e$. Then each $\omega\in\Pi_m$ generates the subfield $\mathbb{F}_{p^e}$ of $\overline\mathbb{F}_p$. The restriction to $\mathbb{F}_{p^e}$ of the Frobenius automorphism $\phi: t\mapsto t^p$ of $\overline\mathbb{F}_p$ generates the Galois group $C={\rm Gal}\,\mathbb{F}_{p^e}\cong C_e$ of this subfield, which partitions $\Pi_m$ into $\varphi(m)/e$ orbits $\Gamma=\{\omega, \omega^p, \ldots, \omega^{p^{e-1}}\}$ of length $e$. Each such orbit $\Gamma$ is the set of roots $\omega^{p^i}$ of an irreducible monic factor $f^{\Gamma}(x)$ of degree $e$ of $\Phi_m(x)$ in $\mathbb{F}_p[x]$. As $\Gamma$ ranges over all the orbits of $C$ on $\Omega$, these polynomials $f^{\Gamma}(x)$ give all the irreducible factors of $x^n-1$, and the ideals $P^{\Gamma}$ they generate give all the maximal $G_0$-submodules of $P$. 
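As a small illustration of the recursion and of reduction mod~$(p)$, take $n=4$: equation~(1) gives
\[\Phi_2(x)=\frac{x^2-1}{\Phi_1(x)}=x+1, \qquad \Phi_4(x)=\frac{x^4-1}{\Phi_1(x)\Phi_2(x)}=x^2+1.\]
If $p\equiv 1$ mod~$(4)$ then $e_4=1$ and $\Phi_4(x)$ splits into linear factors in $\mathbb{F}_p[x]$, for instance $x^2+1=(x-2)(x-3)$ in $\mathbb{F}_5[x]$; if $p\equiv 3$ mod~$(4)$ then $e_4=2$ and $\Phi_4(x)$ remains irreducible, its two roots being interchanged by the Frobenius automorphism.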
By taking the ideals generated by all products of these irreducible factors we obtain all the $G_0$-submodules of $P$ as intersections of these maximal submodules. In particular, $P$ is the direct sum of the irreducible $G_0$-submodules $P_{\Gamma}\cong P/P^{\Gamma}$ generated by the complementary factors $f_{\Gamma}(x)=(x^n-1)/f^{\Gamma}(x)$ of $x^n-1$, with the generator $a$ of $G_0$ having minimal polynomial $f^{\Gamma}(x)$ on $P_{\Gamma}$. For instance, the orbit $\Gamma=\Pi_1=\{1\}$ corresponds to the submodule $P_1$ fixed by $G_0$; this is generated by $(x^n-1)/(x-1)=x^{n-1}+\cdots+x+1$, the product of all the irreducible factors except $\Phi_1(x)=x-1$. More generally, if $m$ divides $\gcd(n, p-1)$ then $\Pi_m$ consists of $\varphi(m)$ singleton orbits $\Gamma=\{\omega\}$, each giving a linear factor $f^{\Gamma}(x)=x-\omega$ of $\Phi_n(x)$ and corresponding to a $1$-dimensional direct summand $P_{\Gamma}$ of $P$ on which $a$ has eigenvalue $\omega$. To understand the structures of $P$ and $Q$ as $G$-modules, we need to consider the action of $b$, which acts on $G_0$ by conjugation as inversion $\iota: a^j\mapsto a^{-j}$. Given any orbit $\Gamma\subseteq \Pi_m$ of the Galois group $C=\langle\phi\rangle$ on $\Omega$, the inverses of its elements also form such an orbit $\Gamma^*$, so that $\Delta=\Gamma\cup\Gamma^*$ is an orbit of the group $B=\langle\phi,\iota\rangle$. The orbit $\Gamma$ containing $\omega$ coincides with $\Gamma^*$ if and only if $\omega^{p^f}=\omega^{-1}$, or equivalently, $p^f\equiv -1$ mod~$(m)$, for some integer $f$. (This is always true if $m=1$ or $2$, but otherwise it implies that $e$ is even and $f\equiv e/2$ mod~$(e)$.) This condition depends only on $p$ and $m$, so for a given $m$, either all of the orbits $\Gamma\subseteq\Pi_m$ of $C$ are self-inverse, each forming an orbit $\Delta$ of $B$, or none of them are, so that each orbit $\Delta$ of $B$ is a union of two mutually inverse orbits $\Gamma$ and $\Gamma^*$ of $C$. 
In the first case, the $G_0$-submodules $P^{\Gamma}$ and $P_{\Gamma}$ of $P$ are also $G$-modules, with $P_{\Gamma}\cong P/P^{\Gamma}$ irreducible and $e$-dimensional, whereas in the second case $P_{\Gamma}\oplus P_{\Gamma^*}$ is an irreducible $2e$-dimensional $G$-submodule, isomorphic to $P/(P^{\Gamma}\cap P^{\Gamma^*})$, with $b$ transposing the two direct factors. For instance, the $1$-dimensional $G_0$-submodules $P_{\Gamma}$ are of the second type, apart from those with $m=1$ or $2$. In either case, we will denote this irreducible submodule by $R_{\Delta}$, and we let $d=e$ or $2e$ denote its dimension. The kernel of the representation of $G$ on $R_{\Delta}$ is the normal subgroup $\langle a^m\rangle\cong C_{n/m}$, so that $G$ acts on $R_{\Delta}$ as $G/\langle a^m\rangle\cong D_m$. These $G$-submodules $R_{\Delta}$ are the irreducible summands in the direct sum decomposition of the $G$-module $P=P_1\oplus P^1$; deleting the $1$-dimensional submodule $P_1$ gives the corresponding decomposition of $Q=P/P_1\cong P^1$. We therefore know all the proper $G$-submodules $L$ of $Q$ and hence all the maps $\cal N$ in $E_p(\mathcal{M})$. The possible values of $c=\dim({\cal N})$ range from $1$ (corresponding to $L=P^{\Delta}/P_1$ where $n$ is even and $\Delta=\Pi_2=\{-1\}$) to $n-1$ (for ${\cal N}=\mathcal{M}_0$, corresponding to $L=0$). The number $\nu$ of irreducible summands $R_{\Delta}$ in the direct sum decomposition of the $G$-module $Q$ is equal to the number of orbits $\Delta\ne\{1\}$ of $B$ on $\Omega$. Since these summands are mutually non-isomorphic, the total number of $G$-submodules of $Q$ is $2^{\nu}$; thus $Q$ has $2^{\nu}-1$ proper $G$-submodules and hence $|E_p(\mathcal{M})|=2^{\nu}-1$. \subsection{Examples} To illustrate the preceding arguments, suppose that $n=95$ and $p=7$. We consider the divisors $m=1, 5, 19$ and $95$ of $n$ in turn.
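Since $\nu$ is just the number of orbits of $B$ on $\Omega$ other than $\{1\}$, the count $|E_p(\mathcal{M})|=2^{\nu}-1$ can also be computed directly, identifying $\Omega$ with $\mathbb{Z}/n\mathbb{Z}$ via $j\mapsto\zeta^j$ for a fixed primitive $n$th root $\zeta$. A sketch, with names of our choosing:

```python
def b_orbits(n, p):
    """Orbits on Z/nZ of the group B generated by j -> p*j (Frobenius) and
    j -> -j (inversion); these index the irreducible G-summands of P."""
    seen, orbits = set(), []
    for j in range(n):
        if j in seen:
            continue
        orbit, stack = set(), [j]
        while stack:
            k = stack.pop()
            if k not in orbit:
                orbit.add(k)
                stack += [(p * k) % n, (-k) % n]
        seen |= orbit
        orbits.append(sorted(orbit))
    return orbits

def num_coverings(n, p):
    """|E_p(M)| = 2^nu - 1, where nu counts the B-orbits other than {0}."""
    return 2 ** (len(b_orbits(n, p)) - 1) - 1
```

For $n=95$, $p=7$ this yields eight orbits (of sizes $1$ and $4$, three of size $6$ and three of size $24$), so $\nu=7$ and $|E_7(\mathcal{M})|=127$, in agreement with the example that follows.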
The divisor $m=1$ leads to a single orbit $\Delta=\Pi_1=\{1\}$ of $B$, with the corresponding $1$-dimensional direct summand $R_{\Delta}=P_1$ of $P$, generated by $(x^{95}-1)/(x-1)$ and affording the principal representation of $G$. Next we consider $m=5$. The prime $p=7$ has order $e=4$ in ${\mathbb Z}^*_5$; since $\varphi(5)=4$, the Galois group $C={\rm Gal}\,\mathbb{F}_{7^4}\cong C_4$ has $\varphi(5)/e=1$ orbit $\Gamma$ on the set $\Pi_5\subset\mathbb{F}_{7^4}\subset\overline{\mathbb F}_7$ of primitive $5$th roots of $1$. The cyclotomic polynomial $\Phi_5(x)=f^{\Gamma}(x)=x^4+x^3+x^2+x+1$ is therefore irreducible in $\mathbb{F}_7[x]$, giving a $4$-dimensional irreducible $G_0$-submodule $P_{\Gamma}$ of $P$, generated by $(x^{95}-1)/f^{\Gamma}(x)$. Since $-1\in\langle 7\rangle$ in ${\mathbb Z}^*_5$, this is also a $4$-dimensional irreducible $G$-submodule $R_{\Delta}$, with $G$ acting on it as $D_5$. When we factor out $P_1$, this appears as a direct summand of $Q=H_1(S;\mathbb{Z}_7)$. Now let $m=19$. Since $7$ has order $e=3$ in ${\mathbb Z}^*_{19}$, and there are $\varphi(19)=18$ primitive $19$th roots of $1$ in $\mathbb{F}_{7^3}$, forming six orbits $\Gamma$ under the Galois group $C\cong C_3$ of this field, we obtain six $3$-dimensional irreducible $G_0$-submodules $P_{\Gamma}$ as summands of $P$, corresponding to six irreducible cubic factors $f^{\Gamma}(x)$ of $\Phi_{19}(x)$ in $\mathbb{F}_7[x]$. Since $-1\not\in\langle 7\rangle$ in ${\mathbb Z}^*_{19}$, these six orbits form three mutually inverse pairs $\Gamma$ and $\Gamma^*$, so they merge to give three orbits $\Delta=\Gamma\cup\Gamma^*$ of $B$. These correspond to three $6$-dimensional irreducible $G$-submodules $R_{\Delta}=P_{\Gamma}\oplus P_{\Gamma^*}$ of $P$, and hence also of $Q$, with $G$ acting on each as $D_{19}$. Finally let $m=95$.
Since $\mathbb{Z}_{95}\cong \mathbb{Z}_5\oplus\mathbb{Z}_{19}$ we see that $7$ has order $e={\rm lcm}\{4, 3\}=12$ in ${\mathbb Z}^*_{95}$, so $\Phi_{95}(x)$ is a product of $\varphi(95)/12=6$ irreducible polynomials $f^{\Gamma}(x)$ of degree $12$ in $\mathbb{F}_7[x]$, giving six $12$-dimensional irreducible $G_0$-submodules $P_{\Gamma}$ as summands of $P$. Since $-1\not\in\langle 7\rangle$ in ${\mathbb Z}^*_{19}$ we have $-1\not\in\langle 7\rangle$ in ${\mathbb Z}^*_{95}$, so these give three $24$-dimensional irreducible $G$-submodules $R_{\Delta}=P_{\Gamma}\oplus P_{\Gamma^*}$ of $P$, and hence of $Q$, with $G$ acting faithfully on each. Adding up the dimensions of all these irreducible submodules $R_{\Delta}$, we have \[1+4+3\times 6+3\times 24=95=\dim P,\] so they give the complete direct sum decomposition $\oplus_{\Delta}R_{\Delta}$ of $P$ as a $G$-module; deleting the $1$-dimensional submodule $P_1$ gives the corresponding decomposition of $Q$. Since $Q$ has $\nu=7$ mutually non-isomorphic irreducible summands, we obtain $2^7-1=127$ proper $G$-submodules $L<Q$. Thus $E_7(\mathcal{M})$ consists of $127$ regular maps $\cal N$, with $\dim({\cal N})=4$ (one map), $6$ (three maps), and so on, up to $94$ (one map, $\mathcal{M}_0$). Examples with even values of $n$ can be treated similarly, except that now there is always at least one $1$-dimensional summand $R_{\Delta}$, corresponding to the orbit $\Delta=\Gamma=\Pi_2=\{-1\}$ of $B$ and the polynomial $f^{\Gamma}(x)=\Phi_2(x)=x+1$. This corresponds to a cyclic covering $\cal N$ on which $a$ has eigenvalue $-1$ (see~\cite[\S 9(d)]{JSu}). For instance, if $n=4$ then $E_p(\mathcal{M})$ consists of regular maps of type $\{2p, 4\}$ and genus $g=1+p^{c-1}(p-2)$, where $c=\dim({\cal N})$. If $p\ne 2$ then $B$ has orbits $\Delta=\{1\}, \{-1\}$ and $\{\zeta_4^{\pm 1}\}$ on $\Omega_4$, the last two providing irreducible $G$-submodules of $Q$ of dimensions $1$ and $2$, affording the characters $\chi_3$ and $\xi_1$ of $G$.
Thus $E_p(\mathcal{M})$ consists of three maps ${\cal N}$ with $c=\dim({\cal N})=1, 2$ and $3$, of genus $g=p-1, (p-1)^2$ and $(p-1)(p^2-p-1)$, affording the characters $\chi_3$, $\xi_1$ and $\chi_3+\xi_1$. The maps with $c=1$ lie on the Accola-Maclachlan surfaces with $8(g+1)$ automorphisms~\cite{Acc, Macl}; for $p=3,5,7,11$ they are the duals of R2.2, R4.4, R6.4 and R10.11 in~\cite{C}. Those with $c=2$ (the maps $\{2p,4\}_4$ in~\cite[\S8.6 and Table 8]{CM}) are the duals of R4.3, R16.3, R36.4 and R100.5 in~\cite{C} for these primes. \subsection{Prime $n$} If $n$ is prime then the only divisors $m$ of $n$ are $m=1$, giving rise to the $G$-submodule $P_1\le P$, and $m=n$. If $p\ne n$ then $\Phi_n(x)$ factorises in $\mathbb{F}_p[x]$ as a product of $(n-1)/e$ irreducible polynomials $f^{\Gamma}(x)$ of degree $e$ equal to the order of $p$ in the group $\mathbb{Z}_n^*\cong C_{n-1}$. As a $G_0$-module, $Q$ is a direct sum of $(n-1)/e$ corresponding irreducible submodules $Q_{\Gamma}$ of dimension $e$. If $n=2$ or if $e$ is even then $-1\in\langle p\rangle$ in $\mathbb{Z}_n^*$, so these are the $\nu=(n-1)/e$ irreducible summands $R_{\Delta}$ of dimension $d=e$, for $\Delta=\Gamma$, in the direct sum decomposition of $Q$ as a $G$-module. If $n>2$ and $e$ is odd then $-1\not\in\langle p\rangle$ in $\mathbb{Z}_n^*$, so we obtain $\nu=(n-1)/2e$ direct summands $R_{\Delta}=P_{\Gamma}\oplus P_{\Gamma^*}$ of dimension $d=2e$ in this decomposition. In either case, there are $2^{\nu}-1$ maps ${\cal N}\in E_p(\mathcal{M})$, specifically ${\nu\choose i}$ with $c=\dim({\cal N})=id$ for each $i=1, 2, \ldots, \nu$, affording the sum of $i$ distinct irreducible characters of degree $d$. In particular, if $n>2$ and $p\equiv \pm 1$ mod~$(n)$ then $d=2$, so $\nu=(n-1)/2$, with the irreducible coverings affording the characters $\xi_k$ of $G$ for $k=1, 2, \ldots, (n-1)/2$. 
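For prime $n$ this case analysis depends only on the subgroup $\langle p\rangle\le\mathbb{Z}_n^*$, so it is easily automated; a sketch (the function name is ours) returning the dimension $d$ of each irreducible covering and the number $\nu$ of irreducible summands:

```python
def prime_n_data(p, n):
    """For prime n and p coprime to n: the pair (d, nu), where each irreducible
    covering has dimension d and |E_p(M)| = 2^nu - 1 with nu = (n - 1) / d."""
    subgroup, t = set(), p % n  # build the cyclic subgroup <p> of Z_n^*
    while t not in subgroup:
        subgroup.add(t)
        t = (t * p) % n
    e = len(subgroup)  # the order of p in Z_n^*
    d = e if (n - 1) in subgroup else 2 * e  # d = e iff -1 lies in <p>
    return d, (n - 1) // d
```

For example, with $n=13$: $p\equiv\pm 1$ gives $(d,\nu)=(2,6)$; $p\equiv 3, 9$ or $\pm 4$ gives $(6,2)$; $p\equiv\pm 5$ gives $(4,3)$; and $p\equiv\pm 2$ or $\pm 6$ gives $(12,1)$, matching the subsection on $n=13$ below.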
(If $p=n$ then $Q\cong \mathbb{F}_p[x]/((x-1)^{n-1})=\mathbb{F}_n[x]/((x-1)^{n-1})$, with proper $G$-submodules of codimensions $c=1, 2,\ldots, n-1$, each generated by the image of $(x-1)^c$. It follows that $E_p(\mathcal{M})$ consists of $n-1$ coverings, one for each of these dimensions $c$.) In the next two subsections we give some simple illustrative examples. \subsubsection{Example: $n=3$} The first value of $n$ not covered by the section on the dihedron is $n=3$. In this case the maps ${\cal N}\in E_p(\mathcal{M})$ are regular, of type $\{2p, 3\}$ and genus $1+\frac{1}{2}p^{c-1}(p-3)$ where $c=\dim({\cal N})$. If $p\ne 3$ then $p$ has order $e=1$ or $2$ in $\mathbb{Z}_3^*$ as $p\equiv 1$ or $-1$ mod~$(3)$, so as a $G_0$-module $Q$ is respectively a direct sum of two $1$-dimensional submodules, or is $2$-dimensional and irreducible. In either case $B$ has a single orbit $\Delta\ne\{1\}$ on $\Omega_3$, namely $\{\zeta_3^{\pm 1}\}$ where $\zeta_3$ is a primitive cube root of $1$, so $Q$ is an irreducible $2$-dimensional $G$-module; hence $E_p(\mathcal{M})$ consists of a single map ${\cal N}=\mathcal{M}_0$ of genus $(p-1)(p-2)/2$, affording the character $\xi_1$ of $G$ (this is the map $\{2p,3\}_6$ in~\cite[\S8.6 and Table 8]{CM}). For instance, if $p=2$ this map $\cal N$ is the cube $\{4, 3\}$, with $\mathcal{M}={\cal N}/V_4$, and for $p=5, 7, 11$ or $13$ it is the dual of the regular map R6.1, R15.2, R45.3 or R66.1 in~\cite{C}. \subsubsection{Example: $n=13$} If $n=13$ the maps ${\cal N}\in E_p(\mathcal{M})$ have type $\{2p, 13\}$ and genus $1+\frac{1}{2}p^{c-1}(11p-13)$. If $p\equiv\pm 1$ mod~$(13)$ then $e=1$ or $2$; in either case $d=2$, so there are $\nu=12/2=6$ irreducible coverings, affording the characters $\xi_1,\ldots, \xi_6$, and a total of $2^6-1=63$ coverings ${\cal N}\in E_p(\mathcal{M})$, with $\dim({\cal N})=2, 4, \ldots, 12$.
If $p\equiv 3$ or $9$ mod~$(13)$ then $e=3$; this is odd, so $d=6$, and hence there are $\nu=2$ irreducible coverings of dimension $6$; thus $E_p(\mathcal{M})$ consists of these two irreducible coverings and $\mathcal{M}_0$ of dimension $12$. The same applies if $p\equiv\pm 4$ mod~$(13)$, so that $d=e=6$. If $p\equiv \pm 5$ mod~$(13)$ then $e=4$, so $d=4$ also; thus $\nu=3$, so $E_p(\mathcal{M})$ consists of seven coverings of dimensions $4, 8$ and $12$. If $p\equiv\pm 2$ or $\pm 6$ mod~$(13)$ then $d=e=12$, giving one irreducible covering $\mathcal{M}_0$ of dimension $12$. \section{Branching over faces and vertices} It is easy now to find the elementary coverings of the Platonic maps, where the branching is over their vertices rather than the faces: one simply uses the preceding results to find the coverings of the dual map $\mathcal{M}^*$ of $\mathcal{M}$, branched over its faces, and then takes the duals of the resulting maps. One can also use similar methods to determine the elementary abelian coverings, where the branching is over both the faces and the vertices. In this case, we replace $S$ with the sphere $S'$ punctured at the set $\Phi'=\Phi\cup\Phi^*$, where $\Phi$ is the set of face-centres of $\mathcal{M}$ and $\Phi^*$ is its set of vertices (i.e.~the face-centres of $\mathcal{M}^*$). The resulting homology module $H_1(S';\mathbb{F}_p)$ for $G$ on $S'$ is the quotient $Q'=P'/P'_1$, where $P'$ is the permutation module $\mathbb{F}_p\Phi'=\mathbb{F}_p\Phi\oplus\mathbb{F}_p\Phi^*$ for $G$ on $\Phi'$, and $P_1'$ is the $1$-dimensional $G$-submodule of $P'$ spanned by $\underline\Phi'=\underline\Phi+\underline\Phi^*$. The representation and character of $G$ on $P'$ are $\rho'=\rho\oplus\rho^*$ and $\pi'=\pi+\pi^*$, where $\rho$ and $\rho^*$ are the representations on the permutation modules $P=\mathbb{F}_p\Phi$ and $P^*=\mathbb{F}_p\Phi^*$, and $\pi$ and $\pi^*$ are their associated characters. 
Since $G$ has two orbits on $\Phi'$, the principal character $\chi_1$ of $G$ has multiplicity $2$ in $\pi'$, afforded by the $1$-dimensional fixed submodules $P_1$ and $P_1^*$ of $P$ and $P^*$, spanned by $\underline\Phi$ and $\underline\Phi^*$; deleting one copy of $\chi_1$ gives the character \[\chi'=\pi+\pi^*-\chi_1=\chi_1+\chi+\chi^*\] of $G$ on $Q'$, where $\chi=\pi-\chi_1$ and $\chi^*=\pi^*-\chi_1$ are the characters on $Q=P/P_1$ and $Q^*=P^*/P^*_1$. We have already determined the $G$-module structures of $P$ and of $P^*$ (the latter in the context of $\mathcal{M}^*$ rather than $\mathcal{M}$), so we can immediately deduce the structure of $P'=P\oplus P^*$, using Lemma~2.1, and hence that of its quotient $Q'$. This gives us information about the set $E'_p(\mathcal{M})$ of coverings of $\mathcal{M}$ by elementary $p$-groups, branched over faces and vertices, since these correspond to the proper $G$-submodules of $Q'$. In all cases, $Q'$ is an extension of a fixed $G$-submodule $Q'_1=(P_1\oplus P^*_1)/P_1'\cong P_1$ ($\cong P_1^*$) by \[P'/(P_1\oplus P^*_1) =(P\oplus P^*_1)/(P_1\oplus P^*_1)\oplus(P_1\oplus P^*)/(P_1\oplus P^*_1) \cong Q\oplus Q^*.\] If the modules $P$ and $P^*$ both split over their fixed submodules $P_1$ and $P_1^*$ (equivalently, if both $f=|\Phi|$ and $f^*=|\Phi^*|$ are coprime to $p$), then $Q'$ also splits over $Q'_1$: we have $P=P_1\oplus P^1$ and $P^*=P_1^*\oplus {P^*}^1$, so \[P'=P_1\oplus P_1^*\oplus P^1\oplus {P^*}^1;\] since $P'_1$ is a submodule of $P_1\oplus P^*_1$ there is a complement \[Q'^1=(P'_1\oplus P^1\oplus {P^*}^1)/P'_1\cong P^1\oplus {P^*}^1\cong Q\oplus Q^*\] for $Q'_1$ in $Q'$, giving \[Q'=Q'_1\oplus Q'^1 \cong P_1\oplus Q\oplus Q^*.\] Instead of dealing with all the cases in detail, we will consider one simple example, and just summarise the results in the remaining cases. \subsection{The tetrahedron} If $\mathcal{M}$ is the tetrahedron then $\mathcal{M}^*\cong\mathcal{M}$, and so $P\cong P^*$.
We saw in \S3 that if $p\ne 2$ then $P=P_1\oplus P^1$, affording the representation $\rho_1\oplus\rho_4$ of $G$ with character $\chi_1+\chi_4$. It follows that \[Q'=Q'_1\oplus Q_3\oplus Q_3^*,\] affording the representation $\rho_1\oplus\rho_4\oplus\rho_4$ with character \[\chi'=\chi_1+2\chi_4;\] here $Q'_1$ affords $\rho_1$, while $Q_3$ and $Q_3^*$, the images in $Q'$ of $P^1$ and ${P^*}^1$, both afford $\rho_4$. Now $Q_3\cong Q_3^*\cong Q$, so $Q_3\oplus Q_3^*$ contains $p+1$ submodules $Q(\lambda)\cong Q$, where $\lambda\in{\mathbb P}^1(p)$ (see \S 2.5, and also \S7 for a similar phenomenon concerning the icosahedron). It follows that $Q'$ has the following proper submodules $L$: one submodule $Q_3\oplus Q_3^*$ of codimension $c=1$, $p+1$ submodules $Q'_1\oplus Q(\lambda)$ of codimension $3$, $p+1$ submodules $Q(\lambda)$ of codimension $4$, one submodule $Q'_1$ of codimension $6$, and one submodule $0$ of codimension $7$. Thus there are $|E_p'(\mathcal{M})|=2p+5$ coverings $\cal N$ of $\mathcal{M}$, of these dimensions $c$, affording the representations $\rho_1$, $\rho_4$, $\rho_1\oplus \rho_4$, $\rho_4\oplus\rho_4$, and $\rho_1\oplus\rho_4\oplus\rho_4$ respectively. As in \S3, the maps ${\cal N}\in E_p'(\mathcal{M})$ are all regular. The coverings of dimension $3$ corresponding to $L=Q'_1\oplus Q^*_3$ and $Q'_1\oplus Q_3$ are respectively branched over only the faces and only the vertices; the first of these is the covering $\mathcal{M}_0$ of type $\{3p, 3\}$ and genus $p^3-2p^2+1$ described in \S3, and the second is its dual. The remaining $2p+3$ coverings have branching of order $p^c-p^{c-1}$ at the four vertices and the four faces of $\mathcal{M}$, so they have type $\{3p, 3p\}$ and genus $3p^c-4p^{c-1}+1$. 
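The genus formulas quoted here are instances of the Riemann--Hurwitz formula; the following sketch (the helper name is ours) checks them numerically, taking total ramification $p^c-p^{c-1}$ above each of the eight branch points:

```python
def rh_genus(deg, ram_per_point, num_points):
    """Genus of a degree-deg branched covering of the sphere via Riemann-Hurwitz:
    2 - 2g = 2*deg - (total ramification over all branch points)."""
    chi = 2 * deg - num_points * ram_per_point
    return (2 - chi) // 2

# coverings of the tetrahedron branched over its 4 vertices and 4 faces:
# degree p^c, ramification p^c - p^(c-1) above each of the 8 branch points,
# giving genus 3p^c - 4p^(c-1) + 1
```

For $p=3$, $c=3$ this gives genus $46$, and for $c=1$ genus $6=3(p-1)$, matching the maps cited below.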
When $L=Q_3\oplus Q^*_3$ these are cyclic coverings of genus $3(p-1)$, obtained by assigning mutually inverse monodromy permutations in $K=C_p$ to the vertices and face-centres of $\mathcal{M}$; for instance, if $p=3, 5, 7, 11$ these are the maps R6.9, R12.8, R18.10, R30.10 in~\cite{C}. The $p-1$ coverings with $L=Q'_1\oplus Q(\lambda)$ and $\lambda\ne 0, \infty$ have genus $3p^3-4p^2+1$; when $p=3$ these are the maps R46.27 of type $\{9, 9\}_4$ and R46.28 of type $\{9, 9\}_{12}$ in~\cite{C}. \subsection{The remaining cases} Here we simply give the homology character $\chi'$ for each of the remaining Platonic maps, in terms of the irreducible characters in the tables given earlier, together with a few comments. The $G$-module structure of $Q'$ and hence the properties of the elementary abelian coverings $\cal N$ of $\mathcal{M}$ can be deduced by methods similar to those used for the tetrahedron. If $\mathcal{M}$ is the cube or octahedron then \[\chi'=\chi_1+\chi_2+\chi_3+\chi_4+2\chi_5.\] The presence of the $1$-dimensional rational characters $\chi_1$ and $\chi_2$ leads to both central and non-central cyclic coverings $\cal N$ of $\mathcal{M}$. As in the case of the tetrahedron, the coefficient $2$ of $\chi_5$ means that the number of coverings is unbounded as a function of $p$: in fact, since $\rho_5\oplus\rho_5$ has $p+1$ summands isomorphic to $\rho_5$ we have $|E_p'(\mathcal{M})|=16p+47$ for $p>3$. If $\mathcal{M}$ is the dodecahedron or icosahedron then \[\chi'=\chi_1+2\chi_2+2\chi_3+2\chi_4+2\chi_5.\] The character $\chi_1$ again leads to central cyclic coverings, but now there are no non-central cyclic coverings. The characters $\chi_4$ and $\chi_5$ are both rational, so the representations $\rho_4$ and $\rho_5$ are realised over $\mathbb{F}_p$.
The same applies to $\rho_2$ and $\rho_3$ if $p\equiv\pm 1$ mod~$(5)$, whereas if $p\equiv\pm 2$ mod~$(5)$ then we obtain a $6$-dimensional irreducible representation over $\mathbb{F}_p$ with character $\chi_2+\chi_3$. The presence of four characters with coefficient $2$ leads to the conclusion that $|E_p'(\mathcal{M})|\sim 2p^4$ as $p\to\infty$, the dominating terms arising from $G$-submodules of $Q'$ which are direct sums of four or five non-isomorphic irreducible submodules. If $\mathcal{M}$ is the dihedron $\{n, 2\}$ or the hosohedron $\{2, n\}$ then \[\chi'=\chi_1+\chi_2+\sum_{k=1}^{(n-1)/2}\xi_k\] if $n$ is odd, and \[\chi'=\chi_1+\chi_2+\chi_3+\sum_{k=1}^{(n-2)/2}\xi_k\] if $n$ is even. The $1$-dimensional rational characters $\chi_k$ give rise to cyclic coverings, central when $k=1$ and non-central when $k>1$. The fact that $\chi'$ is multiplicity-free means that $|E_p'(\mathcal{M})|$ is bounded above as a function of $p$, by $2^{(n+3)/2}-1$ or $2^{(n+4)/2}-1$ as $n$ is odd or even. These bounds are attained when $p\equiv 1$ mod~$(n)$, so that the representations corresponding to the characters $\xi_k$ all reduce mod~$(p)$ to $2$-dimensional representations over $\mathbb{F}_p$. For each $n$ there are infinitely many such primes $p$, by Dirichlet's theorem on primes in arithmetic progressions. \section{Branching over edges} Hypermaps (Grothendieck's {\em dessins d'enfants}, see~\cite{Gro, JSi2}) are generalisations of maps, which correspond to triangle groups $\Delta(p, q, r)$ with $q$ not necessarily equal to $2$. These arise if we extend the investigation of coverings of the Platonic maps to allow branching over their edges. The general principles are the same as for branching over the faces, except that $\Phi$ is now replaced with the set $\Phi^{\dagger}$ of edges of $\mathcal{M}$, the sphere is punctured at their midpoints, and $H$ is the subgroup $\langle y\rangle\cong C_2$ of $G$ leaving invariant an edge. 
Dual pairs of maps $\cal M$ can be treated together, since duality induces $G$-isomorphisms between their edge sets. The decompositions of the resulting character $\chi^{\dagger}$ of $G$ on the homology module are as follows, with the notation for irreducible characters as before. For the tetrahedron (see \S 11.1 for applications), \[\chi^{\dagger}=\chi_2+\chi_3+\chi_4.\] For the cube and the octahedron, \[\chi^{\dagger}=\chi_3+2\chi_4+\chi_5.\] For the dodecahedron and the icosahedron, \[\chi^{\dagger}=\chi_2+\chi_3+2\chi_4+3\chi_5.\] For the dihedron and the hosohedron, if $n$ is odd then \[\chi^{\dagger}=\sum_{k=1}^{(n-1)/2}\xi_k,\] the same decomposition as for branching over the faces of the hosohedron, by the duality between edges and faces of this map; if $n$ is even then taking $H=\langle ab\rangle$ as the subgroup stabilising an edge (since we took $\langle b\rangle$ in \S 9.1 as the stabiliser of a face), we have \[\chi^{\dagger}=\chi_1+\chi_4+\sum_{k=1}^{(n-2)/2} \xi_k.\] \subsection{The tetrahedron} Let $\mathcal{M}$ be the tetrahedron, so that the coverings $\cal N$ are hypermaps of type $(3, 2p, 3)$ and genus $2p^c-3p^{c-1}+1$. If $p\equiv 1$ mod~$(3)$ then $\chi^{\dagger}$ decomposes over $\mathbb{F}_p$ as above, with irreducible summands $\chi_2$, $\chi_3$ and $\chi_4$ of degrees $1$, $1$ and $3$, so we obtain seven coverings $\cal N$, with $c=1, 1, 2, 3, 4, 4$ and $5$. Since $\chi_2$ and $\chi_3$ are transposed by orientation-reversing automorphisms of $\mathcal{M}$, those with $c=1$ or $4$ occur in chiral pairs, whereas the other three are regular. For instance, if $p=7$ then the two cyclic coverings (with $c=1$) are duals of the chiral pair of hypermaps CH12.1 of type $(3,3,14)$ in~\cite{C}. If $2<p\equiv 2$ mod~$(3)$ then $\chi^{\dagger}$ decomposes as a sum of two irreducible characters $\chi_2+\chi_3$ and $\chi_4$ of degrees $2$ and $3$, so in this case there are three coverings $\cal N$ with $c=2, 3$ and $5$, all of them regular. 
For instance, if $p=5$ then the covering with $c=2$ is a dual of the hypermap RPH36.3 in~\cite{C}. \section{Branching over vertices, edges and faces} Just as we considered branching over the vertices and faces in \S 10, one can also consider coverings which are branched over any combination of vertices, edges and faces. In the most general case, where we allow branching over all three sets, $G$ has three orbits on the punctures of the sphere; the permutation module $P''$ is therefore the direct sum of $P$, $P^*$ and a third permutation module $P^{\dagger}$ with $\Phi^{\dagger}$ as its basis, and the corresponding homology character is \[\chi''=\pi+\pi^*+\pi^{\dagger}-\chi_1=2\chi_1+\chi+\chi^*+\chi^{\dagger}.\] If $\mathcal{M}$ is the tetrahedron, for example, then $\chi''=2\chi_1+\chi_2+\chi_3+3\chi_4$. The corresponding coverings of each $\cal M$ can now be found by using the techniques described earlier. For instance, in all cases the character $\chi_1$ has multiplicity $2$ in $\chi''$, yielding $p+1$ central cyclic coverings of $\mathcal{M}$ for each prime $p$ not dividing $|G|$: three are branched over just two of the three sets of vertices, edges and faces, and $p-2$ are branched over all three sets. \bigskip
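As a dimension check on $\chi''$ for the tetrahedron (a sketch; we take the character degrees $1, 1, 1, 3$ for $\chi_1,\ldots,\chi_4$, as in the tables used earlier): the multiplicities in $\chi''=2\chi_1+\chi_2+\chi_3+3\chi_4$ must account for $\dim Q''=|\Phi|+|\Phi^*|+|\Phi^{\dagger}|-1=4+4+6-1=13$.

```python
# degrees of the irreducible characters of G for the tetrahedron
degree = {"chi1": 1, "chi2": 1, "chi3": 1, "chi4": 3}
# multiplicities in chi'' = 2*chi1 + chi2 + chi3 + 3*chi4
mult = {"chi1": 2, "chi2": 1, "chi3": 1, "chi4": 3}

dim_Q = sum(mult[c] * degree[c] for c in degree)
# 4 faces + 4 vertices + 6 edges, minus 1 for the relation in homology
assert dim_Q == 4 + 4 + 6 - 1  # dim Q'' = 13
```

The same bookkeeping applies to the other Platonic maps, with the character degrees from their respective tables.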